BIT.TRIP BEAT is a downloadable game developed for the Nintendo Wii with a unique user interface. It uses 8-bit graphics in the foreground for a new take on classic Pong-style gameplay. It is a single-player game in which the user moves a paddle up and down to prevent rectangular "Pong balls" from reaching the left-hand side of the screen. Rather than using a joystick to control the paddle, the player holds the Wiimote sideways and rotates it forwards or backwards; the position of the paddle is determined by the degree of rotation of the Wiimote. What I find most interesting about the user interface is not what is done with the controls, but rather what is done with the graphics and audio. (The video below will start at 21:52 when you click play.)
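If I were to sketch that control mapping in code, it might look something like the following. This is pure guesswork on my part (the tilt range and the names are invented), not the game's actual implementation:

```csharp
using System;

class PaddleControl
{
    // Assumed tilt range: -45 degrees (paddle at bottom) to +45 (paddle at top).
    const double MinPitch = -45.0, MaxPitch = 45.0;

    // Returns the paddle's height as a fraction of the screen (0 = bottom, 1 = top).
    static double PaddleY(double pitchDegrees)
    {
        double clamped = Math.Max(MinPitch, Math.Min(MaxPitch, pitchDegrees));
        return (clamped - MinPitch) / (MaxPitch - MinPitch);
    }

    static void Main()
    {
        Console.WriteLine(PaddleY(0.0));   // 0.5 -- Wiimote level, paddle centered
        Console.WriteLine(PaddleY(60.0));  // 1   -- over-rotated input is clamped
    }
}
```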
The game starts out in "Hyper" mode, which has a basic level of graphical effects and music. As each ball is deflected, a bar at the top of the screen fills up, and for each ball that is missed, a bar at the bottom of the screen fills up. If the top bar fills, the player progresses to "Mega" mode, which has more advanced synthesized music and additional graphical effects. If the bottom bar fills, the player drops from Mega back to Hyper, or from Hyper down to "Nether" mode. In Nether mode, the music is stripped away entirely except for sounds played through the Wiimote's speaker on a hit or a miss, and the graphics become black and white in the style of the original Pong. The fidelity of the graphics and music thus gives the player both visual and aural cues about how they are performing. Additionally, the sounds played on each deflection are part of the music itself, so the only way to hear the complete song is to deflect every ball.
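The mode progression amounts to a small state machine driven by those two bars. Here is a hedged sketch of the idea; the bar capacity and all of the names are my own guesses, not anything from the game's code:

```csharp
using System;

enum Mode { Nether, Hyper, Mega }

class BeatModes
{
    Mode mode = Mode.Hyper;   // the game starts in Hyper
    int hitBar, missBar;      // the top and bottom bars
    const int BarSize = 50;   // assumed capacity; the real value is unknown to me

    public void OnHit()
    {
        if (++hitBar >= BarSize && mode < Mode.Mega)
        {
            mode++;           // Hyper -> Mega: richer music, more effects
            hitBar = 0;
        }
    }

    public void OnMiss()
    {
        if (++missBar >= BarSize && mode > Mode.Nether)
        {
            mode--;           // Mega -> Hyper, or Hyper -> Nether
            missBar = 0;
        }
    }

    static void Main()
    {
        var game = new BeatModes();
        for (int i = 0; i < 50; i++) game.OnHit();
        Console.WriteLine(game.mode);   // Mega
    }
}
```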
Another game that has an interesting graphical user interface is Audiosurf. The controls are nothing out of the ordinary: the player moves a ship left or right to run into colored blocks, attempting to make combinations of the same color as the ship progresses forward automatically along a track. What makes this game interesting is that the tracks are procedurally generated from the music selected to play in the background. The speed of the ship, which is based on the slope of the track, rises and falls with the tempo of the song, and the colors of the blocks are determined by the intensity of the music, with more blue blocks generated during a relaxed song and more red blocks (which are worth more points) generated during an intense one. A good example of a song with changing tempo and complexity is "Chop Suey!" by System of a Down, as seen in the video below.
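Conceptually, the generation step is a mapping from audio features to track geometry. Here is a toy sketch of that idea; the scaling constants and names are all invented, and Audiosurf's real analysis is far more sophisticated:

```csharp
using System;

class TrackSegment
{
    public double Slope;        // steeper downhill means a faster ship
    public string BlockColor;
}

class TrackGenerator
{
    static readonly Random Rng = new Random();

    // tempoBpm and intensity (0..1) would come from analyzing the audio;
    // the constants here are made up for the example.
    static TrackSegment Generate(double tempoBpm, double intensity) => new TrackSegment
    {
        Slope = -(tempoBpm / 200.0),
        BlockColor = Rng.NextDouble() < intensity ? "red" : "blue"
    };

    static void Main()
    {
        var calm = Generate(90, 0.2);    // relaxed song: gentle slope, mostly blue
        var heavy = Generate(180, 0.9);  // intense song: steep slope, mostly red
        Console.WriteLine($"{calm.Slope:F2} {calm.BlockColor}");
        Console.WriteLine($"{heavy.Slope:F2} {heavy.BlockColor}");
    }
}
```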
These games, as well as many others made by independent developers, are great examples of unconventional user interface design. Even if the games themselves have typical control schemes, they let you interact with the music. Normally music is something you passively listen to, but these games give you ways to engage with it beyond simply listening.
Monday, September 28, 2009
Monday, September 21, 2009
Fantasy Football
You know what really grinds my gears? ESPN's fantasy football draft. I was drafting a couple of weeks ago, and after taking Matt Forte with my first pick (even though he has been of no use so far, I'm still hopeful), I decided to look at some of the other players available further down in ESPN's depth chart. Once you select a player, some information is displayed, including past stats, outlook, and so on. When it becomes time for your next pick, the geniuses designing the user interface for the draft thought it would be a good idea to automatically scroll the list so that the currently selected player appears to be at the top, even though you can scroll back up to see who is really at the top of the list.
This happened to me while I was drafting. I had selected Greg Jennings, WR for Green Bay, so that I could look at his stats. When it became my turn, all of the players above him disappeared and he was at the top of the depth chart. Since several people missed our draft and their teams automatically pick the player at the top of the depth chart, I assumed that everyone rated higher than Greg Jennings had just been picked, and I proceeded to select him with my second pick of the draft. I could have had Tom Brady as my QB! Only time will tell whether this blunder was for better or worse, but at the end of the day, it was a poor choice in user interface design.
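The fix would have been trivial. A sketch of the difference, with hypothetical names since I obviously haven't seen ESPN's source, comes down to whether the widget moves the viewport at all:

```csharp
using System;

class DraftList
{
    public int ScrollOffset;   // index of the first visible player
    public int SelectedIndex;  // player the user clicked to inspect

    // What ESPN's widget effectively did: jump so the selection looks like #1.
    public void OnMyTurnConfusing() => ScrollOffset = SelectedIndex;

    // The honest alternative: leave the viewport alone so rank stays obvious.
    public void OnMyTurnHonest() { /* ScrollOffset deliberately unchanged */ }

    static void Main()
    {
        var list = new DraftList { SelectedIndex = 37 };
        list.OnMyTurnConfusing();
        Console.WriteLine(list.ScrollOffset); // 37: player #38 now looks like #1
    }
}
```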
Monday, September 14, 2009
Intelligence or User Interface?
Giving a user a clean interface that allows them to do what they want, and predicting what users want to do and doing it for them, are both viable options in software design. Intelligent predictions can be very useful because they speed things up, and in some cases they remove the need for the users themselves to be knowledgeable. If someone who is not computer savvy wants to use a printer, it is best if they can just plug it in and have the driver download automatically so that it "just works"; they may not be capable of finding a driver on their own. On the downside, when those predictions are not accurate, they can make for a painful experience when trying to undo the automated action and manually accomplish what you were trying to do in the first place. A well-designed user interface has the advantage of giving the user complete control over what happens, but sometimes the users themselves don't know what they want to input. In a perfect world, accurate prediction would be the most useful way to get input, if a computer could accurately and intelligently know exactly what you wanted to do every time. I would love to be able to look at a Word document and have it format exactly the way I wanted without any extra effort on my part, but unfortunately it is not easy to predict what a user wants.
I think it is most important to have a marriage between the two. Any automated system should have a good UI of its own so that the user can control and undo any automated changes. One example of a poor automated system is the specific case of the letters "ID" being automatically changed to "I'd" when I send a text message on my Blackberry Curve. Personally, I think this is a poor auto-correct choice, because I believe it is more confusing to ask for someone's "I'd" than to say "Id like to grab some coffee." At the same time, the user interface surrounding the auto-correct itself is less than ideal: there is no clear and obvious way for me to undo the change. Being used to failed auto-corrects in other software, I knew enough to spell out a longer word beginning with ID and then delete the extra letters to be left with "ID", but other users might not know what to do, or they might keep texting without even noticing. On top of that, the left and right scrolling on my wheel is inaccurate and an absolute pain to use. While this is due to a mechanical defect and not by design, it still made the process of undoing an automated action even more painful, which is why any anticipation done in software should be surrounded by an intuitive UI that makes it easy for the user to pick and choose when to let the automatic system do its job.
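To make that concrete, here is a minimal sketch, entirely my own design rather than anything RIM ships, of an auto-correct that remembers the original word so one discoverable action can revert it:

```csharp
using System;
using System.Collections.Generic;

class Autocorrect
{
    static readonly Dictionary<string, string> Rules =
        new Dictionary<string, string> { { "id", "I'd" }, { "im", "I'm" } };

    string lastOriginal;    // remembered so a single action can undo the change

    public string Correct(string word)
    {
        if (Rules.TryGetValue(word.ToLowerInvariant(), out var replacement))
        {
            lastOriginal = word;    // stash the original before replacing it
            return replacement;
        }
        lastOriginal = null;
        return word;
    }

    // Would be bound to one obvious key, e.g. backspace right after the fix.
    public string Undo(string corrected) => lastOriginal ?? corrected;

    static void Main()
    {
        var ac = new Autocorrect();
        string shown = ac.Undo(ac.Correct("ID"));  // corrected to "I'd", then reverted
        Console.WriteLine(shown);                  // ID
    }
}
```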
An example of a great anticipation system is Visual Studio's IntelliSense for the C# language. IntelliSense predicts what function you are trying to type and gives you the option to automatically fill in the rest of the function name. Also, if you have an object, it will list all of the functions available on that object, tell you what the input and output types are, and give a small description of what each function does. Last summer I got together with a group of game developers, and we did a fun little project where we split into teams and programmed AI for a tank battle game, then pitted our AIs against each other to see whose was the strongest. I showed up late the first week, and consequently ended up in a group by myself. I had to code in C# without having any experience with C#, but the IntelliSense was so good that I was able to start typing what I thought I would write if I were using C++, and it would tell me what it thought I should type. This was enough for me to get a working AI in C# within 30 minutes of hacking away, with no C# experience. IntelliSense didn't simply predict what I wanted to type; it also provided a simple user interface that let me choose when it should fill in the blank.
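The prediction half of that is conceptually simple; a toy version with made-up member names is below. The hard part, and the part IntelliSense gets right, is the interface that leaves the final choice to the typist:

```csharp
using System;
using System.Linq;

class Completion
{
    // Made-up member names standing in for whatever the object actually exposes.
    static readonly string[] Members =
        { "GetPosition", "GetRotation", "SetPosition", "SetRotation", "Update" };

    static string[] Suggest(string typedPrefix) =>
        Members.Where(m => m.StartsWith(typedPrefix, StringComparison.OrdinalIgnoreCase))
               .ToArray();

    static void Main()
    {
        // The user has typed "getp"; the UI offers a match but does not force it.
        foreach (var suggestion in Suggest("getp"))
            Console.WriteLine(suggestion);   // GetPosition
    }
}
```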
Ultimately, it comes down to a case-by-case decision that needs to be made by the developers. Some software may be better served by concentrating effort on the user interface, while other software may be best served by an accurate artificial intelligence that can predict what the user would like to do. In most cases there will not be a clean line to draw between how much effort goes into each; the two should complement each other in whatever way makes the most sense. Most importantly, wherever software anticipates what the user wants, there should always be an intuitive user interface that allows the user to confirm whether the automation is correct and to undo any automatic changes.
Monday, September 7, 2009
Splinter Cell: Conviction
Ubisoft is currently developing Splinter Cell: Conviction, the next iteration of its popular stealth action series for the Xbox 360. Splinter Cell and Metal Gear Solid are two of the most popular stealth franchises in this generation of consoles, so it is natural that when Splinter Cell: Conviction comes out, it will be compared to Metal Gear Solid 4: Guns of the Patriots. Metal Gear Solid 4 is infamous for its use of cut-scenes to progress the story, and Ubisoft is trying to create a user interface that is more interactive in order to set its game apart from MGS4.
Cut-scenes are commonly used to progress the story in videogames because they give developers full control over what is presented to the player. While that control lets developers shape the experience exactly how they would like, it is also a source of criticism because it takes away the interactivity that separates games from movies. MGS4 was praised for having some of the most impressive, cinematic cut-scenes in videogame history, but it was also criticized for forcing players to spend hours without control over the characters if they wanted to enjoy the full story. Ubisoft is seeking a new way to move the storyline forward and display objectives while keeping the experience fluid and interactive.
In Splinter Cell: Conviction, both objectives and scenes such as flashbacks are projected onto flat surfaces like walls, ceilings, and floors in real time, so they can be viewed while the player is still controlling the main character. Additionally, the developers are attempting to use only smooth transitions from one camera angle to the next, never cutting straight from one angle to another, in order to keep the experience fluid. This can be seen in the E3 demo (above) shown this summer. The approach could turn out to be too overwhelming, too difficult to see and read, or it could keep the digital world from feeling real enough; but if the design pans out, it could make for an immersive, interactive game that is stylish yet still gives players enough control to always feel like they are part of the game.
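As far as I can tell, the underlying idea is ordinary perspective projection: anchor the objective text to a point on a wall in world space and re-project it to the screen every frame, so the words stay glued to the wall as the camera moves. A bare-bones sketch with made-up numbers, certainly not Ubisoft's code:

```csharp
using System;

class WorldSpaceLabel
{
    // Perspective-divide a camera-space point into normalized screen coords.
    // (A real engine would first transform the wall point by the view matrix.)
    static (double x, double y) Project(double x, double y, double z)
    {
        const double focal = 1.0;              // assumed focal length
        return (focal * x / z, focal * y / z);
    }

    static void Main()
    {
        // The same wall anchor seen from two camera distances: its on-screen
        // position and size change, but it stays fixed to the wall itself.
        Console.WriteLine(Project(2.0, 1.5, 10.0)); // far away: (0.2, 0.15)
        Console.WriteLine(Project(2.0, 1.5, 5.0));  // closer:   (0.4, 0.3)
    }
}
```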