The following is a hyperlinked index to the main areas of my research, primarily represented by scientific papers and the intellectual property arising from the research.
My scientific research is centered in Human Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW). In the past I have made significant pioneering contributions to the larger research domain.
My research is characterized by a strong anchoring in the cognitive and social reality of people who might use the technology. In my approach, 50% of the inspiration and innovation comes from observing people and technology; the other 50% comes from working on and building the technology.
In recent years I have increasingly focused on two hard questions: how to make research have more impact, and how to make better decisions about where to direct limited research resources. I co-developed a blueprint for an innovation process that embraces both technology prototyping and ethnographic exploration of usage domains.
User Activity Monitoring (2005-6)
Usage informed innovation (2002-4)
Paper tagging w/ RFID (1999)
Paper tagging w/ barcodes (1998)
Augmented reality (1999)
Adding social awareness to calling (2000-1)
Peripheral awareness (1996-7)
Domestic media spaces (1995-6)
Gesture interaction (1993-4)
Pen-based interaction (1991-2)
Walkup interface to shared electronic whiteboard (1990)
UITTL (pronounced like "whittle") is a general process for usage informed innovation. It was created in 2002 as a blueprint of current best practices in user-involved and usage-informed innovation, condensed by Jeanette Blomberg and Elin R Pedersen from our more than 20 combined years of R&D experience.
The UITTL process defines several phases in the innovation process, including the survey, the design & deployment cycle(s), and the impact assessment. A UITTL project should be kept short, at most 6 months from start to finish, though an unfinished theme from one project will often be taken up as the primary focus of another.
The UITTL process runs two parallel tracks: ethnographic-style studies of people, and rapid prototyping. Ethnographers discover issues and opportunities in the usage domain. The entire team studies these in a series of design sessions and eventually prioritizes them. A running prototype addressing the highest-priority issues is then developed and inserted into a real usage situation. Careful data gathering before and after the technology intervention allows the team to assess the true impact, positive and negative, of the technology.
As of February 2005, four UITTL projects have been successfully completed while several others are still ongoing at client sites. The goal in each of these projects has been to provide a value proposition for a product concept, so the results look more like a business prospect than a product specification.
The Firebird incubation project sought to identify and address major challenges in information management that we expect future business users will be facing. Firebird led to new ways of making relationships true first-class citizens in information work: we designed a relation-centric interaction space, and we created a new method for automatically creating and maintaining relationships based on usage, along with a targeted implementation of this as a software tool.
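As a rough illustration of the usage-based relationship idea, consider a sketch in which items touched within the same work session are linked, each co-access strengthens the link, and unused links decay. All class and method names below are invented for illustration; this is not Firebird's actual code, only a minimal sketch of the technique.

```python
from collections import defaultdict
from itertools import combinations

class UsageRelationGraph:
    """Toy model of usage-derived relationships between information items.

    Items accessed within the same work session are linked; each co-access
    strengthens the link, and stale links decay over time.
    """

    def __init__(self, decay=0.9):
        self.decay = decay
        self.weights = defaultdict(float)   # (item_a, item_b) -> strength

    def record_session(self, accessed_items):
        """Strengthen relationships among all items touched in one session."""
        for a, b in combinations(sorted(set(accessed_items)), 2):
            self.weights[(a, b)] += 1.0

    def age(self):
        """Decay all relationships so unused links eventually fade away."""
        for pair in list(self.weights):
            self.weights[pair] *= self.decay
            if self.weights[pair] < 0.05:
                del self.weights[pair]

    def related_to(self, item, top_n=5):
        """Return the items most strongly related to `item`."""
        scored = [(b if a == item else a, w)
                  for (a, b), w in self.weights.items() if item in (a, b)]
        return sorted(scored, key=lambda x: -x[1])[:top_n]

graph = UsageRelationGraph()
graph.record_session(["budget.xls", "q3-report.doc", "forecast.ppt"])
graph.record_session(["budget.xls", "q3-report.doc"])
print(graph.related_to("budget.xls"))
# -> [('q3-report.doc', 2.0), ('forecast.ppt', 1.0)]
```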
Most computer interactions require high degrees of explicitness and even premeditation from the user. However, media spaces as they evolved in the early 90s were examples of systems that did not require the same focused attention and intention as traditional computer applications.
Taking the lessons from media spaces, how could we best arrange technology so it would support people communicating or just staying in touch? It soon turned out that many subtle, implicit mechanisms and extensive ambiguity are at the core of successful human-to-human interaction.
The overall design focus in a series of projects was to explore human computer interactions with little or no explicit interaction required, using variations of media spaces as the technology basis. As part of the exploration we managed to tease out a framework for understanding the relationship between attention and intention (see Tacit Interaction framework, talk at Stanford). Of these projects, AROMA required the least interaction; TactGuide required intention but "occupied" only a narrow range of haptic perception; Casablanca and Calls.calm were the systems closest to a traditional intentional load.
The Casablanca project explored how media space concepts could be incorporated into households and family life. This effort included prototypes built for the researchers' own home use, field studies of households, and consumer testing of design concepts.
Peripheral awareness is a powerful human capability. The AROMA project pioneered technological support for it.
AROMA is an attempt to mediate mutual awareness among people who are geographically dispersed but want to be in touch. AROMA captures activity in a remote location and creates an abstract representation of this activity in locations that are subscribing to it. The abstract representations serve to save bandwidth, protect the producer's privacy, and lower the level of attention required of the consumer.
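A minimal sketch of this capture-abstract-publish flow, with all names invented for illustration; the raw activity here is a stand-in for whatever sensors capture activity at the remote site.

```python
def capture_activity(samples):
    """Stand-in for a sensor feed: returns a raw activity level in [0, 1]."""
    return sum(samples) / len(samples) if samples else 0.0

def abstract(level):
    """Map raw activity to a coarse symbolic representation.

    The abstraction is what saves bandwidth, protects the producer's
    privacy, and lowers the attention demanded of the consumer.
    """
    if level < 0.2:
        return "quiet"
    if level < 0.6:
        return "some activity"
    return "lively"

class AwarenessChannel:
    """One remote location publishing abstracted activity to subscribers."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, render):
        # `render` is whatever ambient display the subscriber uses.
        self.subscribers.append(render)

    def publish(self, raw_samples):
        token = abstract(capture_activity(raw_samples))
        for render in self.subscribers:
            render(token)

channel = AwarenessChannel()
channel.subscribe(lambda token: print(f"remote office feels: {token}"))
channel.publish([0.7, 0.8, 0.9])   # -> "remote office feels: lively"
```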
Calls.calm helps people make better use of their communication channels, at times and in ways that fit both parties. When somebody ("caller") wants to get in touch with another person ("callee"), she browses to the callee's "interaction page". The interaction page is created dynamically, based on the specific relationship between caller and callee. It provides the caller with key information about the callee's situation, allowing her to make an educated choice of time and means for communication.
Calls.calm may be best understood if you take the instant event of somebody placing a call to a friend or colleague, and then stretch it in time. This manipulation allows the caller to make a more gradual approach to the callee, first learning something about the current situation of the callee and then deciding if and how to progress. An interaction space is provided to the caller, accessible through his or her communication device of choice. This interaction space can include visibility information that tells the caller about the status of the callee, accessibility information that offers the caller a list of available communication channels, and continuity information that reflects the ongoing interaction between caller and callee.
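A minimal sketch of how such a relationship-specific interaction page could be assembled. The data model and field names below are assumptions for illustration, not Calls.calm's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    """What the callee is willing to reveal to this particular caller."""
    show_location: bool = False
    channels: list = field(default_factory=list)   # channels offered to this caller

@dataclass
class Callee:
    name: str
    location: str
    status: str
    relationships: dict = field(default_factory=dict)  # caller id -> Relationship

def interaction_page(callee, caller_id):
    """Build the caller-specific interaction page on demand.

    The page combines visibility (the callee's situation) with
    accessibility (channels this caller may use); a full system would
    also carry continuity data about the ongoing interaction.
    """
    rel = callee.relationships.get(caller_id, Relationship())
    page = {"callee": callee.name, "status": callee.status}
    if rel.show_location:
        page["location"] = callee.location
    page["channels"] = rel.channels or ["leave a message"]
    return page

alice = Callee("Alice", "home office", "in a meeting until 3pm",
               {"bob": Relationship(show_location=True,
                                    channels=["phone", "email", "sms"])})
print(interaction_page(alice, "bob"))      # close colleague sees a rich page
print(interaction_page(alice, "stranger")) # unknown caller sees a minimal page
```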
Most navigation tools take over control and require you to rely exclusively on their guidance. TactGuide doesn't monopolize your senses, and its operation blends in with the overall way-finding task.
Two general questions were explored in the context of paper interfaces to computational systems:
Do paper interfaces -- and other tangible interfaces -- help the user to off-load important intellectual tasks (like operating a computer) and replace them with motor tasks (like flipping through a stack of cards)? Palette and PaperButtons explored the opportunities of enhancing paper with embedded computer-readable tags.
If that is indeed the case, it becomes important to learn what can and should be represented on paper, and how to facilitate a smooth transition back and forth. Several UITTL projects have looked at ways to use image representations of information that do not need to be converted to electronic form. [Publications on both the UITTL method and the specific paper interfaces explored in some of the UITTL projects can be expected starting mid-2005; the delay is due to client restrictions.]
Palette takes some of the stress out of giving a presentation. Using Palette, the presenter controls her presentation by directly manipulating a pile of index cards. In preparation for a presentation, the presenter produces index cards printed with slide content and a barcode, so each card is easily identified by both humans and computers.
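A toy sketch of the card-to-slide dispatch. The barcode payloads and the display callback are invented for illustration; the real Palette drove an actual presentation system.

```python
class PalettePresenter:
    """Toy dispatcher: scanning the barcode on an index card selects a slide."""

    def __init__(self, deck, show):
        self.deck = deck          # barcode id -> slide content
        self.show = show          # callback that displays a slide

    def on_scan(self, barcode_id):
        slide = self.deck.get(barcode_id)
        if slide is None:
            print(f"unknown card: {barcode_id}")
        else:
            self.show(slide)

deck = {"CARD-001": "Title slide", "CARD-002": "Results", "CARD-003": "Q&A"}
presenter = PalettePresenter(deck, show=lambda s: print(f"showing: {s}"))
presenter.on_scan("CARD-002")   # presenter flips to the "Results" card
```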
Palette helps the user during the demanding time of giving a presentation, but the solution was not perfect; at least two nagging problems remained: (1) techniques were lacking for bringing changes made on the paper cards back into the computer; and (2) people tend to deploy increasingly complex functionality, and the simple "a card is a slide" model was hard to scale accordingly. The latter problem was partly addressed in the subsequent PaperButtons project.
The LiveBoard and Tivoli projects were both part of the ubiquitous computing vision in which computation "blends into the woodwork". A main challenge in the LiveBoard project was how to best put computational power and magic behind the well-known whiteboard, without sacrificing its immediacy and ease of use.
Tivoli focused on providing computer support for freehand drawing while also providing a large interaction surface and support for remote collaboration. Tivoli allowed users to treat the LiveBoard as a simple multi-page whiteboard that they could scribble on with electronic pens.
The LiveBoard prototyped the concept of interactive walls. New interaction paradigms were needed for work surfaces this large; they would have to support collaborative work and free-form interaction. The UbiComp software team ported several applications to the LiveBoard, adding support for multiple pen input and features to compensate for the lack of overview one has when working up close at the board. A special "walk-up" user interface was also developed, allowing users to ignore the underlying Unix machine.
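A toy sketch of the per-pen event routing that multiple-pen support requires. The names and data model are illustrative assumptions, not the LiveBoard's actual API.

```python
from dataclasses import dataclass

@dataclass
class PenEvent:
    pen_id: int   # each physical pen carries its own id
    x: float
    y: float
    down: bool    # True while the pen touches the board

class MultiPenBoard:
    """Toy event router keeping an independent ink trail per pen,
    so two people can draw on the board at the same time."""

    def __init__(self):
        self.active = {}     # pen_id -> points of the stroke in progress
        self.strokes = []    # completed strokes

    def handle(self, ev):
        if ev.down:
            self.active.setdefault(ev.pen_id, []).append((ev.x, ev.y))
        else:
            stroke = self.active.pop(ev.pen_id, None)
            if stroke:
                self.strokes.append(stroke)

board = MultiPenBoard()
board.handle(PenEvent(1, 0.1, 0.2, True))
board.handle(PenEvent(2, 0.8, 0.9, True))   # a second user draws simultaneously
board.handle(PenEvent(1, 0.2, 0.3, True))
board.handle(PenEvent(1, 0.0, 0.0, False))  # pen 1 lifts: its stroke completes
print(board.strokes)                        # -> [[(0.1, 0.2), (0.2, 0.3)]]
```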
A prevalent approach in pen computing in the early 90s was to quickly transform the user's scribblings into vectorized objects like letters, digits, boxes and lines. The Tivoli team objected to this approach as disruptive to the natural flow of idea generation. The assumption behind Tivoli was that computing could be utilized in many other and better ways, and the approach was to iteratively develop a tool while carefully assessing what would be helpful to the users and what would get in the way of their primary tasks. Pen-based interaction in this context opens up new user interface techniques, such as gesturing and wiping.
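As an illustration of the gesture-style pen techniques mentioned above, here is a minimal sketch that keeps most strokes as raw ink but treats a rapid back-and-forth scribble as a "wipe". It does not reproduce Tivoli's actual gesture set; the threshold and heuristics are invented to convey the flavor of classifying strokes.

```python
def direction_reversals(points):
    """Count horizontal direction reversals along a pen stroke."""
    reversals, prev_dx = 0, 0
    for (x0, _), (x1, _) in zip(points, points[1:]):
        dx = x1 - x0
        if dx and prev_dx and (dx > 0) != (prev_dx > 0):
            reversals += 1
        if dx:
            prev_dx = dx
    return reversals

def classify_stroke(points):
    """Keep most strokes as raw ink; treat a zig-zag scribble as a wipe.

    The reversal threshold is arbitrary; real systems were considerably
    more nuanced than this illustration.
    """
    return "wipe" if direction_reversals(points) >= 4 else "ink"

scribble = [(x, 0) for x in (0, 10, 2, 12, 4, 14, 6)]   # back-and-forth motion
line = [(x, x) for x in range(7)]                        # a plain diagonal line
print(classify_stroke(scribble))  # -> "wipe"
print(classify_stroke(line))      # -> "ink"
```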
Because the whiteboard is electronic, its activity can be shared dynamically and simultaneously with connected Tivolis.
Later research looked into using gestures in computer interaction. Using our hands comes so naturally to us; how can we make the computer "understand" our gestures without having to strap ourselves into unlovely technology? Prototypes using video-based capture of hand gestures were combined with the definition of a minimal gesture language.
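A minimal gesture language can be thought of as a small vocabulary of recognized poses mapped to commands. The sketch below assumes an upstream video-based hand tracker (not shown) that emits pose labels; the vocabulary and actions are invented for illustration.

```python
# Toy command dispatcher for a minimal gesture language. Pose labels are
# assumed to arrive from a hypothetical video-based hand tracker.
GESTURE_COMMANDS = {
    "point": "select object under fingertip",
    "grab": "pick up selected object",
    "release": "drop held object",
    "wave": "dismiss current dialog",
}

def interpret(pose_sequence):
    """Translate a stream of recognized poses into commands, ignoring
    repeats so a held pose fires its command only once."""
    commands, last = [], None
    for pose in pose_sequence:
        if pose != last and pose in GESTURE_COMMANDS:
            commands.append(GESTURE_COMMANDS[pose])
        last = pose
    return commands

print(interpret(["point", "point", "grab", "grab", "release"]))
# -> ['select object under fingertip', 'pick up selected object',
#     'drop held object']
```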