Thoughts from CHI
Thursday, April 27, 2006
Enhancement of text readability using ClearType:
This was a very interesting paper. The main premise was that a majority of people read better with ClearType fonts than without. To test this hypothesis they gave users two tasks -
1. Scanning task - Users were asked to count the occurrences of a particular word (e.g. red) in a given text frame laid out in tabular form. According to their studies, ClearType led to an increase of about 8% in users' speed on the scanning task.
2. Reading task - Users were given a series of passages to read and their times were measured. The speedup here was about 5-6% with ClearType, averaged over all users.
By themselves the results looked quite ordinary, but the really interesting part was that about 30% of users did better with non-ClearType fonts! This also means that the remaining 70% saw a higher speedup than the above numbers indicate. Unsurprisingly, a lot of questions followed this paper -
1. Does a person know about his preference? Can he find out in any way?
2. What kind of monitors did they use? (laptop computers)
3. Could there be a possible effect of age on user speeds?
4. Why was this difference seen? Any physiological factors causing this?
5. They confirmed that they used a variety of fonts, like Arial, Times New Roman, and Verdana.
6. One audience member felt strongly that during the scanning task users were doing two tasks simultaneously, and that this memory-load effect possibly skewed the results.
7. There was also a suggestion on trying the same study with different letter spacing.
The presenters' surmise was that some users can notice the subpixel coloring that ClearType uses, which possibly leads to slightly longer processing times for those letters.
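To make the subpixel idea concrete, here is a toy sketch (my own illustration, not Microsoft's actual ClearType algorithm) of the core trick: because an LCD pixel's R, G, B stripes sit side by side, horizontal glyph coverage can be sampled at 3x resolution and packed into the color channels instead of being averaged into a single gray level.

```python
# Toy sketch of subpixel rendering vs. classic grayscale anti-aliasing.
# "coverage" is horizontal glyph coverage sampled at 3x pixel resolution
# (1.0 = fully inside the glyph, 0.0 = background). Values are made up.

def grayscale_pixels(coverage):
    """Classic anti-aliasing: average every 3 subsamples into one gray level."""
    return [sum(coverage[i:i + 3]) / 3 for i in range(0, len(coverage), 3)]

def subpixel_pixels(coverage):
    """Subpixel rendering: map each subsample to its own R/G/B channel."""
    return [tuple(coverage[i:i + 3]) for i in range(0, len(coverage), 3)]

# A hypothetical vertical stem edge falling between two pixels.
edge = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]

print(grayscale_pixels(edge))  # edge position blurred into one gray value
print(subpixel_pixels(edge))   # edge position kept, at the cost of color fringes
```

The color fringes visible in the second output are exactly the "subpixel coloring" the presenters suspected some users perceive.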
Tuesday, April 25, 2006

Day 2 - Xbox: a design critique
Well, after a long first day, I came into this session halfway through. But here is what I salvaged.
The discussion was about how and why Microsoft has burned $3.8 billion since 2000 on the Xbox project. Their answer was that the Xbox designers were given a little bit of freedom to explore the design space. Well, that is a lot of freedom, I would say.
The main motivation, as many of us are aware, was to achieve market penetration by keeping the cost of switching low for users. The Xbox also follows what the designers call the razor-and-blade model, where blade revenue subsidizes the upfront cost of the razor. So all game players can look forward to some high-priced games once Microsoft has a good hold on the game market, which is not unlikely given the rumors and delays surrounding the PS3.
The talk then progressed to game play, with questions from one of the panelists, Nicole from Xeo Design. She talked about the most important game-playing mechanics, like interacting with other people and socializing. A positive point that came up in a show of hands was that almost all of the audience felt that the setup time for an Xbox was somewhere between 5 and 20 minutes. It was humorous, then, that Nicole had taken more than 90 minutes to set up her Xbox. I could feel many fervent gamers in the audience thinking that, after all, gaming is serious business. But guess what: Nicole was not talking only about the setup, which she felt was a great out-of-box experience; it took her more than 90 minutes to finish reading her EULAs!
An interesting question came from Maxime, a game designer from Ubisoft: why did they not use either a QWERTY keyboard or some other input mechanism for text input, as opposed to the current a, b, c, d... keyboard they have right now? Their answer was that they did not have sufficient time to fully test how a broad audience would react to newer forms of text input, whereas for QWERTY their tests concluded that speed and accuracy were lower than with the layout they went with. Very surprising.
Some more papers - Day 1
mSpace Mobile Project
This paper focused on exploratory search, where users can learn and investigate, as opposed to a Google-like non-exploratory search. They likened it to faceted browsing (like those PC-shopping websites that let you narrow your search by specifying minimum and maximum price, config parameters, etc.), the difference being that mSpace allows the user to create browsing paths dynamically.
Their selling point? "Information is always 0 pages away." The demo was quite compelling, and they even have a version on the CHI 2006 website. They demoed both a desktop version and a handheld version of the mSpace explorer. The handheld version did smart things like resizing the different panes to focus or zoom on one of them, to better utilize the limited screen real estate. All in all, a very good use of the Google Maps API to build something really new.
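The faceted-browsing analogy above can be sketched as filtering a collection by per-field range constraints; the item fields and data here are my own illustration, not from the mSpace paper.

```python
# A minimal sketch of faceted filtering, per the PC-shopping analogy.
# Items, field names, and ranges below are illustrative, not from mSpace.

def facet_filter(items, **constraints):
    """Keep items whose fields satisfy every (min, max) range constraint."""
    def ok(item):
        return all(lo <= item[field] <= hi
                   for field, (lo, hi) in constraints.items())
    return [item for item in items if ok(item)]

pcs = [
    {"name": "budget box", "price": 499, "ram_gb": 4},
    {"name": "mid tower", "price": 899, "ram_gb": 8},
    {"name": "workstation", "price": 1999, "ram_gb": 32},
]

# Narrow the collection by price and RAM facets, as on a configurator site.
print(facet_filter(pcs, price=(400, 1000), ram_gb=(8, 64)))
```

mSpace's twist on this, as I understood it, is that the user picks and reorders the facets themselves while browsing, rather than working through a fixed form.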
Perspective Cursor
This work was mainly about handling the cursor across multiple displays. They focused on changing the perspective view of the cursor, so that in a fluid environment with multiple displays at varied alignments to each other, the user could drag and drop objects across them while the cursor always appeared to be the same size and shape.
Evidently, since the cursor's perspective is corrected for one user, anyone else observing the cursor would see it distorted.
Limitations of this approach? Well, you need some kind of head-tracking device.
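The core geometric idea can be sketched simply: keep the cursor's angular size constant from the tracked head position, so its drawn size on each display scales with that display's distance from the user. The head position and flat-display geometry below are my simplifications, not the paper's implementation.

```python
import math

# Simplified sketch of the perspective-cursor idea: draw the cursor so it
# subtends a fixed visual angle from the (head-tracked) user's viewpoint.
# Positions are in meters; all the numbers here are made up.

def drawn_cursor_size(angular_size_rad, head_pos, point_on_display):
    """Physical size to draw the cursor so it subtends a fixed visual angle."""
    distance = math.dist(head_pos, point_on_display)
    return 2 * distance * math.tan(angular_size_rad / 2)

head = (0.0, 0.0, 0.0)
near_display = (0.0, 0.0, 0.5)   # e.g. a laptop 0.5 m away
far_display = (0.0, 0.0, 2.0)    # e.g. a wall display 2 m away

near = drawn_cursor_size(math.radians(1.0), head, near_display)
far = drawn_cursor_size(math.radians(1.0), head, far_display)
print(near, far)  # the far display draws the cursor proportionally larger
```

This also makes the observer-distortion point obvious: the sizes are only consistent from the tracked head position, so a second viewer sees the far display's cursor as oversized.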
Monday, April 24, 2006
Day 1 - Haptics in Mobile Interaction - Paper
This was one of the more anticipated papers at CHI on day 1. The presenter started off with a summary of reasons why haptics is an important tool in mobile interaction, chief among them -
1. Visual and auditory senses are often impaired in a mobile scenario, e.g. driving.
2. On a small screen there is only so much information you can put.
3. Some mobile devices, like PDAs or cellphones, stay on your body all the time.
The most powerful features of a haptic interface that they presented were
1. It could use unique tactile feedback for different menu items on a mobile device, so that users would not have to use their auditory/visual senses to interact. The potential applications of such an interaction are mind-boggling: imagine using this as a non-intrusive way of using your cell phone in meetings, receiving more specific alerts while you are talking on a call, and so on.
2. The device could be made location-aware using GPS. Nowadays everyone wants to jump on the GPS bandwagon just because it is so easy to show an application, but in this case I felt there was a genuine use case, where tactile feedback could be used in conjunction with GPS input to direct the user towards a destination. Cool, very cool.
The presentation was good enough for me at this point already, but guess what the best was yet to come! To cement their work, they had done considerable research on different types of tactile feedback and what they meant to users.
They used piezoelectric actuators - basically an array of thin sheets, all of which moved as controlled by a wave. These created a skin-stretch effect on the user's finger, which the user then identified based on factors like the direction of the wave, its amplitude, its speed, etc. Broadly, the paper made the following classification -
1. Primary distinguishing factors - the direction of the wave and the waveform being used.
2. Secondary distinguishing factors - the amplitude of the wave and its speed.
Apparently the secondary factors were difficult to distinguish, especially as waveform speeds increased.
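The parameter space described above can be sketched as generating drive signals for the actuator array: each tactile "icon" is a traveling wave distinguished by waveform, direction, amplitude, and speed. This is my own illustration of the concept, not the paper's implementation, and every number in it is made up.

```python
import math

# Illustrative sketch of a tactile icon as a traveling wave across an
# array of piezo actuators. Parameters mirror the factors named above:
# waveform, direction (+1/-1), amplitude, and speed. Values are made up.

def actuator_frame(t, n_actuators, waveform, direction, amplitude, speed):
    """Drive level for each actuator at time t for a traveling wave."""
    levels = []
    for i in range(n_actuators):
        # Phase advances along the array; direction flips the travel.
        phase = 2 * math.pi * (speed * t - direction * i / n_actuators)
        levels.append(amplitude * waveform(phase))
    return levels

sine = math.sin
square = lambda p: 1.0 if math.sin(p) >= 0 else -1.0

# Two icons differing in "primary" factors (waveform and direction) -
# per the paper, these are the easiest pairs for users to tell apart.
icon_a = actuator_frame(0.1, 8, sine, +1, amplitude=1.0, speed=5.0)
icon_b = actuator_frame(0.1, 8, square, -1, amplitude=1.0, speed=5.0)
print(icon_a)
print(icon_b)
```

Changing only the amplitude or speed arguments produces a much subtler difference between frames, which is consistent with those being the harder factors to distinguish.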
Based on a number of experiments, the conclusions the paper drew seemed pretty sound.
A good question was raised by an IBM researcher who had done previous work in this field. He pointed out that their experiments had concluded that user performance in distinguishing different haptic feedback deteriorates rapidly as the user's stress level increases, e.g. while driving or crossing the street. As this prototype was still tethered to a PC, the tests were all done under optimal conditions. It will be interesting to see whether the authors run into the same results as the IBM group and, if so, how they alleviate them.
And yes, I forgot to mention: this paper won the Best of CHI award this year :).
Day 1 - Opening plenary
Scott Cook from Intuit gave the opening plenary for CHI 2006. I had heard a lot about the user-centered process that Intuit has been famous for. So I was hoping to get an inside glimpse to their ideas and design process.
I cannot say I was entirely disappointed; Scott did talk about it, in fact he talked about a lot of things, sometimes in more detail than I would have liked. But overall I would say the opening talk served its purpose - as per the CHI 2006 logo, it talked about inventions at Intuit and elsewhere, it informed us about a host of ideas, and above all it did a great job of inspiring many of those present to take design, usability, innovation, yada yada, more seriously. Seriously enough that we may survive the rest of the conference :)
Moving on to the specific examples he gave, I could not help realizing that storytelling is often the most powerful way of capturing an audience's attention. I guess all famous and charismatic leaders know this: the audience, at the bottom of their hearts, is still like a curious child, wanting to hear a story they can relate to and tell others about. It is always easier to tell someone a story you heard about good design than it is to just talk about good design principles. And Scott knows this, very well.
The stories he told covered a wide gamut of fields, beginning with the invention of Scotch tape in the 1920s and extending to the story of the humble trucker who changed the way the world did business, and from which Maersk as we know it today was born.
As a good inspirational speech should, Scott's speech did not fail to touch upon the importance of failures, and he even cited failures within Intuit. All in all, I think he gave a fitting start to the conference. Let's hope the conference lives up to it as well!