Microsoft has a vision for what the future of technology will be like in the year 2019, only ten years from now. They envision that the trend of touch interfaces will continue to grow and create a more natural user interface for all forms of information. They also envision that a lot of things will be made out of glass.
The thing I like most about their vision is the level of interactivity between devices. The ability to drag data seamlessly from one device to another would be very natural and easy to use if it worked as smoothly as it does in their promotional video. It is also not too big a leap from the familiar concept of dragging files from one folder or location to another on a computer, which could make the transition into such an interactive world smoother.
However, I don't know how well the ubiquitous use of glass will actually work out. While it looks very cool, if a graph or chart in an important business meeting is displayed on a window, having skyscrapers in the background could prove to be more than a little distracting.
Also, there are some instances where the extra technology seems like overkill and would be too expensive to implement. Specifically, issuing a digital boarding pass to each passenger travelling on a given day would be radically more expensive than traditional paper boarding passes. That said, the ability to look up flight information in real time from your boarding pass would be pretty useful when travelling through the airport.
While I don't know if this technology will be this pervasive in 2019, I think having a vision as bold as this one is important for the advancement of user interfaces. Microsoft is already trying to improve the connectivity and interactivity between computers, TVs, and cell phones, so it is only a matter of time before they try to make everything as interactive as it is in this video.
Monday, December 7, 2009
Monday, November 23, 2009
Rock Band Difficulty Selection
This entry is a comment on Robert Wettach's September 27th blog entry on selecting difficulty in Rock Band. In Rock Band, you don't know how the difficulty varies between the different instruments until you actually try the song, which can lead to many frustrating failed attempts and restarts. Rock Band 2 improved on this by adding a chart during song selection that shows the difficulty across the various instruments, but Rob was still frustrated because this information is not available while you are actually selecting your instrument. This was finally cured in The Beatles: Rock Band.
Guitar Hero 5 has recently made an advancement beyond what Rock Band has been able to provide. Guitar Hero 5 allows players to dynamically change the difficulty of their instrument in real time without interrupting the song. You don't have to fail a song and start over in order to change your difficulty. You don't even have to pause the song. This is especially relevant when playing at a party with a large group. If only one of the four players decides they want to change their difficulty, they don't have to interrupt the other three. They can alter the difficulty for their instrument alone.
On top of that, Guitar Hero 5 allows players to drop in and out of the game in the middle of a song. If a new player arrives mid-song or someone decides they want to stop playing, they don't have to ruin it for everyone else by forcing a restart. Instead, you can drop in without stopping the music at all. You can even change instruments and have multiple people on the same instrument. Changing difficulty, switching instruments, pausing, and dropping in and out can all happen in real time, so players can alter their settings without forcing an inconvenient restart on the rest of the band.
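Just as a thought experiment, here is a minimal C# sketch (my own invented types, nothing from the actual game) of how a seamless mid-song difficulty swap could work: each difficulty has its own pre-authored note chart, and a change simply resumes the new chart from the current position in the song.

using System;
using System.Collections.Generic;

enum Difficulty { Easy, Medium, Hard, Expert }

class PlayerTrack
{
    // One pre-authored note chart per difficulty; a chart is just a
    // sorted list of note times in seconds.
    private readonly Dictionary<Difficulty, List<double>> charts;
    private List<double> activeChart;
    private int nextNote; // index of the next note to display

    public PlayerTrack(Dictionary<Difficulty, List<double>> charts)
    {
        this.charts = charts;
        SetDifficulty(Difficulty.Medium, songTime: 0.0);
    }

    // Called when one player requests a change; the song clock the
    // other players follow never stops.
    public void SetDifficulty(Difficulty d, double songTime)
    {
        activeChart = charts[d];
        // Resume at the first note not yet reached, so the swap is
        // seamless from the current position in the song.
        nextNote = activeChart.FindIndex(t => t >= songTime);
        if (nextNote < 0) nextNote = activeChart.Count;
    }

    static void Main()
    {
        var charts = new Dictionary<Difficulty, List<double>>
        {
            [Difficulty.Medium] = new List<double> { 1.0, 2.0, 3.0 },
            [Difficulty.Expert] = new List<double> { 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 },
        };
        var player = new PlayerTrack(charts);
        player.SetDifficulty(Difficulty.Expert, songTime: 1.7); // mid-song, no pause
        Console.WriteLine("Swapped without stopping the music.");
    }
}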
While Guitar Hero 5 is not as clear as Rock Band about the individual difficulty of each instrument, the ability to change difficulty in real time gives it the advantage over Rock Band.
Monday, November 16, 2009
Security
One thing that never ceases to frustrate me is the locks on doors at Notre Dame. The locks on the doors to the computer labs in the engineering building require a passcode to be typed in before unlocking. The problem is, there is no way to undo a mistake made while typing in a code. Instead, you have to wait a few seconds before you can continue. It would be much more convenient if you could clear the entry you've made so far and start again. Of course, it is understandable that the manufacturer might want to force you to wait after an incorrect input. In the same way that online passwords sometimes force you to wait after a certain number of incorrect attempts, this pause makes guessing a passcode much more time consuming, rendering it an infeasible task. That said, even with a forced pause, it would be nice to have an indication of when you are allowed to try again. Following an incorrect input with a correct input too quickly forces you to wait yet again. This process can be frustrating, and it would be easy to fix by simply using the flashing light to indicate when it is okay to try again.
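To make the fix concrete, here is a small C# sketch (invented behavior, not the actual lock's firmware) of a keypad with both of the missing affordances: a clear key, and a lockout timer whose expiration could drive the flashing light.

using System;

class Keypad
{
    private string entry = "";
    private DateTime lockedUntil = DateTime.MinValue;
    private const string Code = "1234"; // placeholder passcode
    private static readonly TimeSpan Penalty = TimeSpan.FromSeconds(3);

    // The light should flash exactly when this flips back to false.
    public bool LockedOut => DateTime.Now < lockedUntil;

    // The missing "undo" key: wipe the partial entry and start over.
    public void Clear() => entry = "";

    public bool Press(char digit)
    {
        if (LockedOut) return false; // input silently ignored during the penalty
        entry += digit;
        if (entry.Length < Code.Length) return false;
        bool ok = entry == Code;
        entry = "";
        if (!ok) lockedUntil = DateTime.Now + Penalty; // forced wait on a miss
        return ok;
    }

    static void Main()
    {
        var pad = new Keypad();
        foreach (char c in "1235") pad.Press(c); // wrong code
        Console.WriteLine(pad.LockedOut);        // True until the light flashes
    }
}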
Some of the labs have doors that require you to punch in mechanical buttons rather than enter a code digitally. Turning the knob on these doors resets them so that you can try again, which is a much less frustrating design. Unfortunately, the codes for these doors can be much harder to remember. These locks allow a combination to include two numbers pressed at the same time in order to increase the number of possible combinations using fewer buttons. Unfortunately, this makes the combinations harder to chunk into easy-to-remember sequences such as a four-digit year. Instead, you not only have to remember the numbers, but also which numbers are pressed simultaneously. This does not work well with how the human brain remembers things.
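Out of curiosity, here is a quick C# calculation of how much those chorded presses help. I'm assuming rules like the classic five-button mechanical locks: each button can be used at most once, a press is either a single button or two pressed together, and a combination can stop after any press. Under those assumptions, allowing pairs nearly triples the count.

using System;

class LockMath
{
    // Count press sequences on a lock with `unused` fresh buttons, where a
    // press consumes one button or (optionally) an unordered pair, no
    // button is ever reused, and every prefix is itself a valid combination.
    static long Count(int unused, bool allowPairs)
    {
        if (unused == 0) return 1;
        long total = 1;                                  // stop here
        total += unused * Count(unused - 1, allowPairs); // single press
        if (allowPairs && unused >= 2)                   // chorded pair
            total += (long)unused * (unused - 1) / 2 * Count(unused - 2, allowPairs);
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Count(5, false)); // 326 with single presses only
        Console.WriteLine(Count(5, true));  // 936 once pairs are allowed
    }
}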
Another type of door lock on campus that was poorly designed from a user interface perspective is the locks on the dorm rooms of the West Quad dorms. These locks take the form of small switches located on the side of the doors that can only be seen when the doors are open. It took me a couple of weeks into my freshman year to realize they even existed without someone showing me. Furthermore, "unlocking" the door with your key does not actually unlock it, but rather just opens it. However, the door remains unlocked from the inside, so if you believe you've unlocked your door by using the key (because that's how it works on every other door you've used in your life) and choose to leave, you can easily find yourself locked out of your room.
Fortunately, there are some much more user-friendly security systems, such as fingerprint readers. Fingerprint readers are ideal from the user's perspective because they are fast, intuitive, and don't require the user to remember any numbers or passwords. Unfortunately, they are also much more complicated to set up, because you need prior knowledge of the fingerprint of the person being granted access. You cannot easily and freely distribute fingerprints as you do passwords, so in a setting like Notre Dame where the users change from year to year, or in a home where you might want to grant a friend or neighbor access, it is inconvenient to give new users the ability to unlock the system. Retinal scans tend to be popular forms of security in movies, but they tend to be even less convenient to use than fingerprint scanners. Until retinal scanning becomes as convenient as it is in Minority Report, where Tom Cruise can simply walk off the subway and have his eyes scanned without pausing even for a moment, we will just have to make do with keys and keypads.
Monday, November 9, 2009
Smart Glass
Now that mobile phones can connect to the internet, people can get information almost anywhere in the world. One useful resource on an internet-enabled phone is a search engine. If at any time someone wants to know more about something, they can take out their phone and search for the information they need. This connectivity will only get faster and more pervasive over time. Smart glass is a theoretical use of this connectivity that would let us gather information about our surroundings in real time.
Unlike a search engine in a mobile browser, smart glass does not require you to encode what you are looking at into words before searching for information about it. You simply send the image over the internet and receive the information you want in return. If this worked well, it would greatly simplify the process of searching the internet for answers to your questions. The use of glass also allows you to see the data you want overlaid on top of what you are looking at to organize it in an intuitive manner.
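As a sketch of the round trip I have in mind (the endpoint URL and response format here are pure invention for illustration): capture a frame, send the image itself as the query, and overlay whatever text comes back.

using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class SmartGlassQuery
{
    static async Task Main()
    {
        // The image is the query; no words have to be typed first.
        byte[] photo = File.ReadAllBytes("frame.jpg");
        var client = new HttpClient();
        var content = new ByteArrayContent(photo);
        content.Headers.ContentType =
            new System.Net.Http.Headers.MediaTypeHeaderValue("image/jpeg");
        // Hypothetical visual-search endpoint standing in for the real thing.
        HttpResponseMessage resp =
            await client.PostAsync("https://example.com/visual-search", content);
        string label = await resp.Content.ReadAsStringAsync();
        Console.WriteLine($"Overlay on the glass: {label}");
    }
}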
Unfortunately, this technology would require huge advances in artificial intelligence before it could become practical. It is unclear how this concept would allow you to specify what to do next if it returns irrelevant data. It is hard enough for a text-based search engine to return relevant data; using images would only make it that much more complicated. Also, image processing technology is far from good enough to identify many objects from everyday life, such as a flower or an apple. There may, however, be a few specific cases where this would not be too difficult. Processing an image of a clear night sky in order to identify constellations might be practical if the image capturing technology is precise enough.
Additionally, other technology such as a GPS system might complement the search technology well enough for specific applications to work. For example, if the smart glass knows the precise location and orientation it is being held in, it could potentially calculate how to draw an arrow along a road to direct the user in the style of Google Street View.
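The geometry for that arrow is actually straightforward. Here is a C# sketch (the coordinates and heading are made-up numbers) that computes the bearing from the device to a waypoint and how far the on-screen arrow needs to rotate.

using System;

class ArrowOverlay
{
    // Standard initial-bearing formula between two lat/lon points, in degrees.
    static double BearingDegrees(double lat1, double lon1, double lat2, double lon2)
    {
        double d = Math.PI / 180;
        double dLon = (lon2 - lon1) * d;
        double y = Math.Sin(dLon) * Math.Cos(lat2 * d);
        double x = Math.Cos(lat1 * d) * Math.Sin(lat2 * d)
                 - Math.Sin(lat1 * d) * Math.Cos(lat2 * d) * Math.Cos(dLon);
        return (Math.Atan2(y, x) / d + 360) % 360;
    }

    static void Main()
    {
        double deviceHeading = 90; // compass says the glass faces due east
        double toWaypoint = BearingDegrees(41.70, -86.24, 41.70, -86.23);
        // Rotate the arrow by the difference between where the device
        // points and where the road goes.
        Console.WriteLine($"Rotate arrow {toWaypoint - deviceHeading:F1} degrees");
    }
}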
While this technology would be difficult to implement, particularly the image recognition and the problem of returning relevant results, it would certainly make for an intuitive way to gather information about the world around us.
Monday, November 2, 2009
Racing Game Controllers
Racing video games have been around for a long time, and they have evolved with the advent of the analog controller. Originally, racing games that used digital controllers only had one level of gas control: either the gas was floored or it was completely released. Additionally, the steering was limited to left, right, or straight. This does not lend itself well to racing games that are attempting to create a realistic, immersive experience.
Fortunately, the analog stick on the Nintendo 64 controller allowed for multiple degrees of freedom when steering. Eventually Sony released its original DualShock controller, which contained two analog sticks and allowed for even finer control over the steering. Unfortunately, gas was still an all-or-nothing affair.
With the DualShock 2 controller, Sony implemented analog buttons in addition to the sticks. This allowed the user to control the flow of gas with more precision. An even greater step towards a realistic driving experience was made with the "trigger" style controls used on the top of the Dreamcast and Xbox controllers. This not only gives the player analog control, but also simulates the travel and resistance of a gas pedal.
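The difference is easy to see in code. Here is a tiny C# sketch contrasting the two input styles; the 0-255 raw range and the dead-zone value are typical assumptions, not any particular console's API.

using System;

class Throttle
{
    // Digital pad: only two throttle states, floored or nothing.
    static double DigitalThrottle(bool pressed) => pressed ? 1.0 : 0.0;

    // Analog trigger: a raw byte becomes a smooth 0.0-1.0 throttle, with a
    // small dead zone so a resting trigger reads as zero.
    static double AnalogThrottle(byte raw, double deadZone = 0.05)
    {
        double t = raw / 255.0;
        return t < deadZone ? 0.0 : (t - deadZone) / (1.0 - deadZone);
    }

    static void Main()
    {
        Console.WriteLine(DigitalThrottle(true)); // 1.0
        Console.WriteLine(AnalogThrottle(128));   // ~0.48, feathering the gas
    }
}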
Of course, to simulate a real racing experience, a quality racing wheel is needed. Some of the better racing wheels not only come with pedals, they also include a gear shifter and force feedback from the wheel. A more hardcore setup can even include multiple monitors, a fixed chair, and a sound system.
Motion controls have also made for an interesting advancement in racing controls. The Wiimote has been used as a steering wheel in games such as Mario Kart. Holding the Wiimote sideways while rotating it like a steering wheel is one way to simulate the rotation without the need for extra controllers. Project Natal is even being used to simulate the racing experience without the use of extra peripherals. One of the Project Natal demos involves playing the game Burnout: Paradise with no controller. You simply move your left and right feet to step on the gas and brake, move your hands through the air to steer, and reach down with one hand to shift. While this has the definite advantage of not requiring any extra peripherals or setup, it also lacks the feedback of being able to feel the wheel as you turn it. Reports have said that the motion controls feel natural, so it may be a good alternative for those not willing to clutter up their room with extra gear.
Friday, October 16, 2009
Windows 7
I've been running Windows 7 on my computer ever since last summer, and one of the biggest changes to the operating system is in the user interface. In my opinion, Microsoft has done a great job of transitioning the user interface from one operating system to the next. When I first used Vista, I felt that it wasn't too different from XP. It still had the start button, toolbar, and taskbar that I was familiar with, so I did not have any difficulty adjusting to the new interface. At the same time, it added the search bar that appears when you click the start button, which lets you search for programs rather than navigating through a menu system. Windows 7 is a more drastic change than Vista was, but it still has a very familiar feel, and once again it did not take much time to get used to the simple but useful changes made to the interface.
The start button is still there, and the search functionality is still in place, but the emphasis is now on tabs located in the taskbar. Mousing over a tab even gives you a preview of the window, providing an easy way to see what each of your minimized windows is doing and to navigate quickly between them. Interestingly enough, taskbar tabs actually appeared in Windows 1.0 and were updated and brought back for the latest operating system.
It is only when I go back to using Windows XP that I realize how good a job Microsoft actually did. The changes from operating system to operating system have been subtle enough to make for a smooth transition, but now that I have gotten used to Windows 7, XP feels primitive. Going back to XP from Vista caused no such feeling, so it is clear to me that Microsoft has drastically changed its user interface in just two generations, yet has done so smoothly without jarring old users with an unfamiliar interface. Of course, those who transition straight from XP to Windows 7 may feel differently, but I imagine the transition up will be a lot smoother than my experience moving back down to XP.
Monday, October 12, 2009
SIXAXIS
The controller for the Sony PlayStation consoles has not changed much in shape or button layout since it debuted with the original PlayStation. The first major change was the addition of dual analog sticks. A rumble effect was later added, and the sticks became clickable buttons while still functioning as joysticks. The rumble was a good way to immerse players through the user interface by letting them actually feel what was going on. It was even used in Hideo Kojima's original Metal Gear Solid as a way to interact with the player more directly: at one point, a boss named Psycho Mantis, trying to prove his psychic abilities, instructed the player to place the controller on the floor. Once the controller was on the floor, Psycho Mantis "moved" it with his psychic powers (via the rumble).
The PS3 introduced a wireless controller, replaced the L2 and R2 buttons with triggers, and most notably added SIXAXIS to the controller. SIXAXIS measures the controller's motion in six different directions: lying flat, the controller can be moved up and down, rotated left and right, and tilted forward or back. In order to fit the components required for SIXAXIS into the controller without altering its shape, the engineers at Sony decided they had to scrap the rumble feature. Little did they know, this would cause a lot of complaints. Gamers really wanted to keep the rumble effect, and not many developers have used the SIXAXIS controls in a fun, intuitive way.
A lot of games simply use the SIXAXIS controls by making the player shake the controller up and down at certain parts of the game. I find this tedious and unenjoyable, but there are two games I've played that really make use of the SIXAXIS. In the first, when turning a wheel (think of the wheels used to open doors and hatches on ships), the player has to hold down the L and R buttons to place their hands on the wheel and then rotate the controller to turn it. To reposition their hands for another crank, they let go of the L and R buttons, rotate the controller back the other way, and repeat. While it is very simple and a small part of the game, it is actually quite intuitive and makes you feel like you are actually turning the wheel. They didn't go overboard; they found a place in the game where SIXAXIS could be used intuitively and took advantage of it.
The other game that I've played which uses the SIXAXIS well is Flower. Instead of being a small component, Flower uses the SIXAXIS throughout the entirety of the game. The basic premise of the game is that you control the wind, and as you pass through flowers you open them up and collect a petal that rides with the wind. Only one button is needed to propel the wind forward, and the direction is dictated entirely by the orientation of the controller. It is a good example of a natural user interface.
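Here is a rough C# sketch of Flower-style steering as I experienced it (my own approximation, not the game's actual code): the controller's tilt alone sets the wind direction, and a single held button supplies thrust.

using System;

class WindControl
{
    // Map the controller's pitch and roll (radians, as a motion sensor
    // might report them) plus one button into a 3D wind vector.
    static (double x, double y, double z) WindVector(
        double pitch, double roll, bool boostHeld)
    {
        double speed = boostHeld ? 1.0 : 0.3; // the one button: thrust
        double x = Math.Sin(roll) * speed;    // bank the controller to turn
        double y = -Math.Sin(pitch) * speed;  // tilt forward to dive
        double z = Math.Cos(pitch) * Math.Cos(roll) * speed;
        return (x, y, z);
    }

    static void Main()
    {
        var wind = WindVector(pitch: 0.2, roll: -0.1, boostHeld: true);
        Console.WriteLine($"wind = ({wind.x:F2}, {wind.y:F2}, {wind.z:F2})");
    }
}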
While the SIXAXIS was meant to add another dimension to the user interface of the PS3, not many developers have taken advantage of it. It presents an interesting way to interact with the player, but finding a way to naturally incorporate it into a traditional game has not been done well by many.
Monday, October 5, 2009
Natural User Interface
I was out in Redmond, Washington last summer, and one of the phrases used a lot at Microsoft was "Natural User Interface". They are striving for a natural user interface in all of their applications, operating systems, mobile products, and of course Project Natal. While I was out there, I had the privilege to try Natal and play their demo called "Ricochet". Ricochet is a breakout/brick-breaker style game that takes place in three dimensions. You use your body as the paddle to reflect balls towards the square panels you are trying to break at the end of a long corridor. The ball starts out floating in the air above your player, and you have to swat it to get it moving. Unlike breakout, where the paddle only moves along one dimension, your body must move in three: not only do you have to block the ball in the x and y directions to prevent it from going past you, but you must also swing forward to give the ball momentum as it travels down the corridor.
While I was watching someone else play, I noticed there was a lot of lag between their movements and the movements of their character. It looked like it would be difficult to play anything with complex motions, but when I got to try it myself, the one word that best described my experience was "natural". I've played a lot of video games in my life, but I've never played a game as intuitive as Ricochet. It just felt "right". I could not tell whether the latency was caused by the technology or intentionally programmed into the game. Since I was standing a few feet away from the TV, my avatar's actions were timed to happen at about the moment the ball would have reached me had it continued through the screen towards my actual body. I'm not sure if this will be present in other Natal games, but in Ricochet it felt perfect.
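A toy calculation shows why a delay like that can feel right rather than laggy: if the avatar lags by the time the ball would need to travel from the screen to your body, the swat lands exactly when your reflexes expect contact. The numbers here are pure guesses on my part.

using System;

class LatencyAsDesign
{
    static void Main()
    {
        double distanceToTv = 1.5; // meters from player to screen (assumed)
        double ballSpeed = 5.0;    // meters per second (assumed)
        // Delay the avatar by the ball's screen-to-body travel time.
        double delay = distanceToTv / ballSpeed;
        Console.WriteLine($"Intentional avatar delay: {delay * 1000:F0} ms");
    }
}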
Now I can only hope that Natal will have enough pinpoint accuracy to make the first fun beer pong video game.
Monday, September 28, 2009
BIT.TRIP Beat and Audiosurf
BIT.TRIP BEAT is a downloadable game developed for the Nintendo Wii with a unique user interface. It uses 8-bit graphics in the foreground for a new take on classic Pong-style gameplay. It is a single-player game where the user moves a paddle up and down to prevent rectangular "Pong balls" from reaching the left-hand side of the screen. Rather than use a joystick to control the paddle, the player holds the Wiimote sideways and rotates it forwards or backwards to move the paddle up and down; the position of the paddle is determined by the degree of rotation of the Wiimote. What I find most interesting about the user interface is not what is done with the controls, but rather what is done with the graphics and audio.
The game starts out in "Hyper" mode, which has a basic level of graphical effects and music. As each ball is deflected, a bar at the top of the screen fills up, and for each ball that is missed, a bar at the bottom of the screen fills up. If the top bar fills up, the player progresses to "Mega" mode, which has more advanced synthesized music and additional graphical effects. If the bottom bar fills up, the player moves down from Mega to Hyper or from Hyper to Nether mode. In Nether mode, the music is completely stripped away except for sounds made by the Wiimote's speaker on a hit or miss, and the graphics become black and white in the style of the original Pong. This gives the player both a visual and an auditory cue about how they are performing, conveyed through the graphical and musical fidelity itself. Additionally, the sounds that are played are part of the music, so the only way to hear the complete song is to deflect every ball.
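The mode ladder is essentially a tiny state machine. Here is a C# sketch of my reading of it; the bar thresholds are invented, and the real game's tuning is surely different.

using System;

enum Mode { Nether, Hyper, Mega }

class ModeLadder
{
    public Mode Current { get; private set; } = Mode.Hyper; // the game starts here
    private int hitBar, missBar;
    private const int BarSize = 20; // placeholder threshold

    public void OnDeflect()
    {
        if (++hitBar < BarSize) return;
        hitBar = 0; // full top bar: promote, adding music layers and effects
        if (Current < Mode.Mega) Current = (Mode)((int)Current + 1);
    }

    public void OnMiss()
    {
        if (++missBar < BarSize) return;
        missBar = 0; // full bottom bar: demote toward black-and-white Pong
        if (Current > Mode.Nether) Current = (Mode)((int)Current - 1);
    }

    static void Main()
    {
        var ladder = new ModeLadder();
        for (int i = 0; i < 20; i++) ladder.OnDeflect();
        Console.WriteLine(ladder.Current); // Mega
    }
}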
Another game with an interesting graphical user interface is Audiosurf. The controls are nothing out of the ordinary: the player moves a ship left or right to run into colored blocks, attempting to make combinations of the same color as the ship automatically progresses forward along a track. What makes this game interesting is that the tracks are procedurally generated from the music selected to play in the background. The speed of the ship, which is based on the slope of the track, increases and decreases as the tempo of the song changes, and the colors of the blocks are determined by the intensity of the music, with more blue blocks generated during a relaxed song and more red blocks (which are worth more points) generated during an intense one. A good example of a song with changing tempo and complexity is Chop Suey! by System of a Down.
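Here is a toy C# version of that mapping (my guess at the general idea, not Audiosurf's actual analysis): tempo drives the track's slope, and intensity picks the block color.

using System;

class TrackGenerator
{
    // One track segment per analysis window of the song.
    static (double slope, string block) Segment(double tempoBpm, double intensity)
    {
        // Faster tempo -> steeper downhill -> faster ship.
        double slope = Math.Max(0.0, Math.Min(1.0, (tempoBpm - 60) / 120.0));
        string block = intensity > 0.6 ? "red" : "blue"; // red is worth more
        return (slope, block);
    }

    static void Main()
    {
        Console.WriteLine(Segment(170, 0.9)); // a heavy chorus: steep and red
        Console.WriteLine(Segment(80, 0.2));  // a quiet bridge: gentle and blue
    }
}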
These games, as well as many others made by independent developers, are great examples of unconventional user interface design. Even if the games themselves have typical control schemes, they let you interact with the music. Normally music is something you passively listen to, but these games give you ways to engage with it beyond simply listening.
Monday, September 21, 2009
Fantasy Football
You know what really grinds my gears? ESPN's fantasy football draft. I was drafting a couple of weeks ago, and after taking Matt Forte with my first pick (even though he has been of no use so far, I'm still hopeful), I decided to look at some of the other players available further down ESPN's depth chart. Once you select a player, some information is displayed, including past stats, outlook, etc. When it becomes time for your next pick, the geniuses designing the draft's user interface thought it would be a good idea to automatically scroll the list so that the currently selected player appears to be at the top, even though scrolling back up reveals who is really at the top of the list.
This happened to me while I was drafting. I had selected Greg Jennings, WR for Green Bay, so that I could look at his stats. When it became my turn, all of the players above him disappeared and he was at the top of the depth chart. Since several people missed our draft and their teams automatically pick the player at the top of the depth chart, I assumed that everyone rated higher than Greg Jennings had just been picked, and I proceeded to select him with my second pick of the draft. I could have had Tom Brady as my QB! Only time will tell if this blunder was for better or worse, but at the end of the day, it was a poor choice in user interface design.
Monday, September 14, 2009
Intelligence or User Interface?
Giving a user a clean interface that lets them do what they want, and predicting what users want and doing it for them, are both viable options in software design. Intelligent predictions can be very useful because they speed things up, and in some cases they spare users from needing to be savvy themselves. If someone who is not computer savvy wants to use a printer, it is best if they can just plug it in and have the driver download automatically so that it "just works"; they may not be capable of finding a driver on their own. On the downside, when those predictions are not accurate, they can make for a painful experience when trying to undo the automated action and manually accomplish what you were trying to do in the first place. A well-designed user interface has the advantage of giving the user complete control over what happens, but sometimes users themselves don't know what they want to input. In a perfect world, accurate prediction would be the most useful form of input, if a computer could know exactly what you wanted to do every time. I would love to be able to look at a Word document and have it format exactly the way I want without any extra effort on my part, but unfortunately it is not easy to predict what a user wants.
I think it is most important to have a marriage between the two: any automated system should have a good UI itself so that the user can control and undo automated changes. One example of a poor automated system is the letters "ID" being automatically changed to "I'd" when I send a text message on my Blackberry Curve. Personally, I think this is a poor auto-correct choice, because I believe it is more confusing to ask for someone's "I'd" than to say "Id like to grab some coffee". At the same time, the user interface surrounding the auto-correct itself is less than ideal: there is no clear and obvious way to undo the change. Being used to failed auto-corrects in other software, I knew enough to spell out a word longer than ID with ID at the front and then delete the extra letters, but other users might not know what to do, or might keep texting without even noticing. On top of that, the left and right scrolling on my wheel is inaccurate and an absolute pain to use. While that is a mechanical defect and not a design choice, it still made undoing the automated action even more painful, which is why any anticipation done in software should be surrounded by an intuitive UI that makes it easy for users to pick and choose when to let the automatic system do its job.
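For the record, the fix I'd want is tiny. Here is a C# sketch (an invented API, obviously not the phone's actual code): remember the original word whenever a rule fires, and expose a single obvious undo.

using System;
using System.Collections.Generic;

class AutoCorrect
{
    private static readonly Dictionary<string, string> Rules =
        new Dictionary<string, string> { { "ID", "I'd" } }; // the offending rule

    private string lastOriginal; // remembered so the change is reversible

    public string Apply(string word)
    {
        if (Rules.TryGetValue(word, out string corrected))
        {
            lastOriginal = word; // the undo state my Curve never kept
            return corrected;
        }
        lastOriginal = null;
        return word;
    }

    // One obvious action to reject the correction; no trick spelling needed.
    public string Undo() =>
        lastOriginal ?? throw new InvalidOperationException("Nothing to undo");

    static void Main()
    {
        var ac = new AutoCorrect();
        Console.WriteLine(ac.Apply("ID")); // I'd
        Console.WriteLine(ac.Undo());      // ID
    }
}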
An example of a great anticipation system is Visual Studio's IntelliSense for the C# language. IntelliSense predicts what function you are trying to type and gives you the option to automatically fill in the rest of the name. Also, if you have an object, it will list all of the functions available on that object, along with their input and output types and a small description of what each function does. Last summer I got together with a group of game developers for a fun little project: we split into teams, programmed AI for a tank battle game, and then pitted our AIs against each other to see whose was the strongest. I showed up late the first week, and consequently ended up in a group by myself. I had to code in C# without any C# experience, but IntelliSense was so good that I could start typing what I thought I would do in C++, and it would tell me what it thought I should type. This was enough to get a working AI in C# within 30 minutes of hacking away, with no prior C# experience. IntelliSense didn't simply predict what I wanted to type; it also provided a simple user interface that let me choose when it should fill in the blank.
Ultimately, it comes down to a case-by-case decision made by the developers. Some software may be better served by concentrating effort on the user interface, while other software may be best served by an accurate artificial intelligence that can predict what the user would like to do. In most cases, there will not be a clean line to draw between how much effort goes into each; the two should complement each other in whatever way makes the most sense for the interface. Most importantly, wherever software anticipates what the user wants, there should always be an intuitive user interface that allows the user to confirm whether the automation is correct and to undo any automatic changes.
Monday, September 7, 2009
Splinter Cell - Conviction
Ubisoft is currently developing Splinter Cell: Conviction, the next iteration of their popular stealth action series for the Xbox 360. Splinter Cell and Metal Gear Solid are two of the most popular stealth action series in this generation of consoles, so it is natural that when Splinter Cell: Conviction comes out, it will be compared to Metal Gear Solid 4: Guns of the Patriots. Metal Gear Solid 4 is infamous for its use of cut-scenes to progress the story, and Ubisoft is trying to create a user interface that is more interactive in order to set themselves apart from MGS4.
Cut-scenes are commonly used to advance the story in videogames because they give the developers full control over what is presented to the player. While this lets developers shape the experience exactly how they would like, it is also a source of criticism because it takes away the interactivity that separates games from movies. MGS4 was praised for having some of the most impressive, cinematic cut-scenes in videogame history, but it was also criticized for forcing players to spend hours without control over the characters if they wanted to enjoy the full story. Ubisoft is seeking to design a new way to move the storyline forward and display objectives while keeping the experience fluid and interactive.
In Splinter Cell: Conviction, both objectives and scenes such as flashbacks are projected onto flat surfaces like walls, ceilings, and floors in real time, so they can be viewed while the player is still controlling the main character. Additionally, the developers are attempting to use only smooth transitions from one camera angle to the next, never cutting straight between angles, in order to keep the experience fluid. This can be seen in the E3 demo shown this summer. While it could turn out to be too overwhelming, too difficult to read, or damaging to the believability of the digital world, if the design pans out it could make for an immersive, interactive game that is stylish yet still gives players enough control to always feel like part of the game.
Monday, August 31, 2009
This is my blog for my System Interface Design class at the University of Notre Dame. Last summer I got to try out the Microsoft Surface and Project Natal, so I'm pretty excited for this class. And of course, I have a Wii so programming with the Wiimote will be exciting too. I'm looking to get into videogame development, so I think I will learn about a lot of relevant topics in this class. I can't wait to get started on some projects.