Publications

By Maribeth Back

2015
Publication Details
  • CHI 2015 (Extended Abstracts)
  • Apr 18, 2015

Abstract

We present our ongoing research on automatic segmentation of motion gestures tracked by IMUs. We postulate that by recognizing gesture execution phases from motion data we may be able to auto-delimit user gesture entries. We demonstrate that machine learning classifiers can be trained to recognize three distinct phases of gesture entry: the start, middle, and end of a gesture motion. We further demonstrate that this type of classification can be done at the level of individual gestures. Furthermore, we describe how we captured a new data set for data exploration and discuss a tool we developed to allow manual annotation of gesture phase information. Initial results we obtained using the new data set annotated with our tool show a precision of 0.95 for recognition of the gesture phase and a precision of 0.93 for simultaneous recognition of the gesture phase and the gesture type.
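The phase-classification idea above can be pictured with a toy nearest-centroid classifier over simple per-window motion features. The features, labels, and classifier choice here are illustrative assumptions, not the paper's actual pipeline:

```python
# Toy sketch of gesture-phase classification (illustrative, not the
# paper's method): classify fixed-size windows of IMU acceleration
# magnitude into the phases {start, middle, end}.

def window_features(window):
    """Mean level, peak level, and net trend of a window of
    non-negative acceleration magnitudes."""
    return (sum(window) / len(window), max(window), window[-1] - window[0])

def train_centroids(labelled_windows):
    """labelled_windows: list of (phase_label, window) pairs.
    Returns one mean feature vector (centroid) per phase label."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        feats = window_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, value in enumerate(feats):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(window, centroids):
    """Assign the phase whose centroid is nearest in feature space."""
    feats = window_features(window)
    def sqdist(centroid):
        return sum((a - b) ** 2 for a, b in zip(feats, centroid))
    return min(centroids, key=lambda label: sqdist(centroids[label]))
```

A real system would use richer IMU features (per-axis accelerometer and gyroscope statistics) and a stronger classifier, but the structure is the same: label windows by phase, train, then classify incoming windows to delimit gestures.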
2013
Publication Details
  • IEEE ISM 2013
  • Dec 9, 2013

Abstract

Real-time tele-immersion requires low latency, synchronized multi-camera capture. Prior high definition (HD) capture systems were bulky. We investigate the suitability of using flocks of smartphone cameras for tele-immersion. Smartphones can potentially integrate HD capture and streaming into a single portable package. However, they are designed for archiving the captured video into a movie. Hence, we create a sequence of H.264 movies and stream them. We lower the capture delay by reducing the number of frames in each movie segment. Increasing the number of movie segments adds compression overhead. Smartphone video encoders do not sacrifice video quality to lower the compression latency or the stream size. On an iPhone 4S, our application that uses published APIs streams 1920x1080 videos at 16.5 fps with a delay of 712 msec between a real-life event and displaying an uncompressed bitmap of this event on a local laptop. For comparison, the bulky Cisco Tandberg required a 300 msec delay. Stereoscopic video from two unsynchronized smartphones showed minimal visual artifacts in an indoor teleconference setting.
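The segment-length trade-off described above can be illustrated with a back-of-envelope latency model (all constants below are assumptions for illustration, not measurements from the paper): a frame cannot be streamed until its movie segment is complete, so shorter segments reduce capture delay while each extra segment adds fixed compression overhead.

```python
def capture_delay_ms(frames_per_segment, fps=16.5,
                     per_segment_overhead_ms=150, network_ms=100):
    """Rough capture-to-display delay for segmented movie streaming:
    wait for the segment to fill with frames, pay a fixed per-segment
    encoding overhead, then ship it over the network. All constants
    are illustrative assumptions."""
    segment_fill_ms = frames_per_segment / fps * 1000
    return segment_fill_ms + per_segment_overhead_ms + network_ms
```

Under this model, 8-frame segments come in around 735 ms while 30-frame segments exceed 2 s, which is the motivation for shrinking segments despite the added per-segment overhead.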
Publication Details
  • Interactive Tabletops and Surfaces (ITS) 2013
  • Oct 6, 2013

Abstract

The expressiveness of touch input can be increased by detecting additional finger pose information at the point of touch such as finger rotation and tilt. PointPose is a prototype that performs finger pose estimation at the location of touch using a short-range depth sensor viewing the touch screen of a mobile device. We present an algorithm that extracts finger rotation and tilt from a point cloud generated by a depth sensor oriented towards the device's touchscreen. The results of two user studies we conducted show that finger pose information can be extracted reliably using our proposed method. We show this for controlling rotation and tilt axes separately and also for combined input tasks using both axes. With the exception of the depth sensor, which is mounted directly on the mobile device, our approach does not require complex external tracking hardware, and, furthermore, external computation is unnecessary as the finger pose extraction algorithm can run directly on the mobile device. This makes PointPose ideal for prototyping and developing novel mobile user interfaces that use finger pose estimation.
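As a rough sketch of the idea (not PointPose's actual algorithm), finger rotation and tilt can be read off a dominant axis fitted through the point cloud; here the axis simply runs from the lowest point, taken as the touch point, to the cloud centroid:

```python
import math

def finger_pose(points):
    """points: (x, y, z) samples on the finger surface, with z the
    height above the touchscreen. Returns (rotation, tilt) in degrees:
    rotation is the yaw of the finger axis in the screen plane, and
    tilt is its elevation out of that plane."""
    tip = min(points, key=lambda p: p[2])      # touch point: lowest z
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    dx, dy, dz = cx - tip[0], cy - tip[1], cz - tip[2]
    rotation = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return rotation, tilt
```

A more robust variant would fit the axis by principal component analysis rather than tip-to-centroid, but the mapping from a 3D finger axis to the two pose angles is the same.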
Publication Details
  • The International Symposium on Pervasive Displays
  • Jun 4, 2013

Abstract

Existing user interfaces for the configuration of large shared displays with multiple inputs and outputs usually do not allow users easy and direct configuration of the display's properties such as window arrangement or scaling. To address this problem, we are exploring a gesture-based technique for manipulating display windows on shared display systems. To aid target selection under noisy tracking conditions, we propose VoroPoint, a modified Voronoi tessellation approach that increases the selectable target area of the display windows. By maximizing the available target area, users can select and interact with display windows with greater ease and precision.
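The core of the Voronoi approach is that assigning every screen position to the nearest window seed partitions the display into cells, so each window's effective target area extends well beyond its border. A minimal sketch (the names and the one-seed-per-window representation are assumptions, not VoroPoint's implementation):

```python
def voronoi_select(cursor, window_seeds):
    """window_seeds: dict mapping window id -> (x, y) seed point
    (e.g. the window's center). The cursor selects the window whose
    seed is nearest, i.e. the Voronoi cell containing the cursor."""
    def sqdist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(window_seeds, key=lambda w: sqdist(cursor, window_seeds[w]))
```

Because the cells tile the whole display, a noisy gesture that lands slightly outside a window still selects it.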
2012
Publication Details
  • CIKM 2012 Books Online Workshop Keynote Address
  • Oct 29, 2012

Abstract

Reading is part of how we understand the world, how we share knowledge, how we play, and even how we think. Although reading text is the dominant form of reading, most of the text we read— letters, numbers, words, and sentences—is surrounded by illustrations, photographs, and other kinds of symbols that we include as we read. As dynamic displays migrate into the real world at many scales, whether personal devices, handhelds, or large screens in both interior and exterior spaces, opportunities for reading migrate as well. As has happened continually throughout the history of reading, new technologies, physical forms and social patterns create new genres, which themselves may then combine or collide to morph into something new. At PARC, the RED (Research in Experimental Design) group examined emerging technologies for impact on media and the human relationship to information, especially reading. We explored new ways of experiencing text: new genres, new styles of interaction, and unusual media. Among the questions we considered: how might “the book” change? More particularly, how does the experience of reading change with the introduction of new technologies…and how does it remain the same? In this talk, we'll discuss the ideas behind the design and research process that led to creating eleven different experiences of new forms of reading. We’ll also consider how our technological context for reading has changed in recent years, and what influence the lessons from XFR may have on our ever-developing online reading experiences.

Through the Looking-Glass: Mirror Worlds for Augmented Awareness & Capability

Publication Details
  • ACM MM 2012
  • Oct 29, 2012

Abstract

We describe a system for supporting mirror worlds - 3D virtual models of physical spaces that reflect the structure and activities of those spaces to help support context awareness and tasks such as planning and recollection of events. Through views on web pages, portable devices, or on 'magic window' displays located in the physical space, remote people may 'look in' to the space, while people within the space are provided information not apparent through unaided perception. For example, by looking at a mirror display, people can learn how long others have been present, or where they have been. People in one part of a building can get a sense of activities in the rest of the building, know who is present in their office, and look in to presentations in other rooms. The system can be used to bridge across sites and help provide different parts of an organization with a shared awareness of each other's space and activities. We describe deployments of our mirror world system at several locations.
Publication Details
  • Mobile HCI 2012 demo track
  • Sep 21, 2012

Abstract

In this demonstration we will show a mobile remote control and monitoring application for a recipe development laboratory at a local chocolate production company. In collaboration with TCHO, a chocolate maker in San Francisco, we built a mobile Web app designed to allow chocolate makers to control their laboratory's machines. Sensor data is imported into the app from each machine in the lab. The mobile Web app is used for control, monitoring, and collaboration. We have tested and deployed this system at the real-world factory and it is now in daily use. This project is designed as part of a research exploration into enhanced collaboration in industrial settings between physically remote people and places, e.g. factories in China with clients in the US.
Publication Details
  • USENIX/ACM/IFIP Middleware
  • Sep 19, 2012

Abstract

Faunus addresses the challenge of specifying and managing complex collaboration sessions. Many entities from various administrative domains orchestrate such sessions. Faunus decouples the entities that specify the session from entities that activate and manage them. It restricts the operations to specific agents using capabilities. It unifies the specification and management operations through its naming system. Each Faunus name is persistent and globally unique. A collection of attributes are attached to each name. Together, they represent a collection of services that form a collaboration session. Anyone can create a name; the creator has full read and write privileges that can be delegated to others. With the proper capability, anyone can modify session attributes between an active and inactive state. Though the system is designed for Internet scale deployments, the security model for providing and revoking capabilities currently assumes an Intranet style deployment. We have incorporated Faunus into a DisplayCast system that originally used Zeroconf. We are incorporating Faunus into another project that will fully exercise the power of Faunus.
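A toy model of the naming scheme described above (the class and method names are illustrative, not the actual Faunus API): each name carries a collection of attributes, and operations are gated by unforgeable capability tokens whose rights the holder can delegate.

```python
import secrets

class CollabName:
    """Minimal sketch of a capability-guarded session name:
    attributes describe the services in a collaboration session,
    and every mutation requires a capability with the right."""
    def __init__(self):
        self.attributes = {}
        self._caps = {}                       # token -> set of rights
        self.owner_cap = self._grant({"read", "write"})

    def _grant(self, rights):
        token = secrets.token_hex(8)          # unguessable capability
        self._caps[token] = set(rights)
        return token

    def delegate(self, cap, rights):
        """A holder may hand out a capability carrying a subset of
        its own rights; delegation can only narrow them."""
        if not set(rights) <= self._caps.get(cap, set()):
            raise PermissionError("cannot delegate rights you lack")
        return self._grant(rights)

    def set_attribute(self, cap, key, value):
        if "write" not in self._caps.get(cap, set()):
            raise PermissionError("write capability required")
        self.attributes[key] = value
```

Here delegation can only narrow rights, mirroring the model in which the creator's full read and write privileges are handed down selectively.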
Publication Details
  • DIS (Designing Interactive Systems) 2012 Demos track
  • Jun 11, 2012

Abstract

We will demonstrate successive and final stages in the iterative design of a complex mixed reality system in a real-world factory setting. In collaboration with TCHO, a chocolate maker in San Francisco, we built a virtual “mirror” world of a real-world chocolate factory and its processes. Sensor data is imported into the multi-user 3D environment from hundreds of sensors and a number of cameras on the factory floor. The resulting virtual factory is used for simulation, visualization, and collaboration, using a set of interlinked, real-time layers of information. It can be a stand-alone or a web-based application, and also works on iOS and Android cell phones and tablet computers. A unique aspect of our system is that it is designed to enable the incorporation of lightweight social media-style interactions with co-workers along with factory data. Through this mixture of mobile, social, mixed and virtual technologies, we hope to create systems for enhanced collaboration in industrial settings between physically remote people and places, such as factories in China with managers in the US.
2010

The Virtual Chocolate Factory: Mixed Reality Industrial Collaboration and Control

Publication Details
  • ACM Multimedia 2010 - Industrial Exhibits
  • Oct 25, 2010

Abstract

We will exhibit several aspects of a complex mixed reality system that we have built and deployed in a real-world factory setting. In our system, virtual worlds, augmented realities, and mobile applications are all fed from the same infrastructure. In collaboration with TCHO, a chocolate maker in San Francisco, we built a virtual “mirror” world of a real-world chocolate factory and its processes. Sensor data is imported into the multi-user 3D environment from hundreds of sensors on the factory floor. The resulting virtual factory is used for simulation, visualization, and collaboration, using a set of interlinked, real-time layers of information. Another part of our infrastructure is designed to support appropriate industrial uses for mobile devices such as cell phones and tablet computers. We deployed this system at the real-world factory in 2009, and it is now in daily use there. By simultaneously developing mobile, virtual, and web-based display and collaboration environments, we aimed to create an infrastructure that did not skew toward one type of application but that could serve many at once, interchangeably. Through this mixture of mobile, social, mixed and virtual technologies, we hope to create systems for enhanced collaboration in industrial settings between physically remote people and places, such as factories in China with managers in the US.
Publication Details
  • ICME 2010, Singapore, July 19-23 2010
  • Jul 19, 2010

Abstract

Virtual, mobile, and mixed reality systems have diverse uses for data visualization and remote collaboration in industrial settings, especially factories. We report our experiences in designing complex mixed-reality collaboration, control, and display systems for a real-world factory, for delivering real-time factory information to multiple users. In collaboration with TCHO, a chocolate maker in San Francisco, our research group is building a virtual “mirror” world of a real-world chocolate factory and its processes. Real-world sensor data (such as temperature and machine state) is imported into the 3D environment from hundreds of sensors on the factory floor. Multi-camera imagery from the factory is also available in the multi-user 3D factory environment. The resulting "virtual factory" is designed for simulation, visualization, and collaboration, using a set of interlinked, real-time 3D and 2D layers of information about the factory and its processes. We are also looking at appropriate industrial uses for mobile devices such as cell phones and tablet computers, and how they intersect with virtual worlds and mixed realities. For example, an experimental iPhone web app provides mobile laboratory monitoring and control. The app allows a real-time view into the lab via a steerable camera and remote control of lab machines. The mobile system is integrated with the database underlying the virtual factory world. These systems were deployed at the real-world factory and lab in 2009, and are now in beta development. Through this mashup of mobile, social, mixed and virtual technologies, we hope to create industrial systems for enhanced collaboration between physically remote people and places – for example, factories in China with managers in Japan or the US.
Publication Details
  • In Proc. CHI 2010
  • Apr 10, 2010

Abstract

The modern workplace is inherently collaborative, and this collaboration relies on effective communication among coworkers. Many communication tools – email, blogs, wikis, Twitter, etc. – have become increasingly available and accepted in workplace communications. In this paper, we report on a study of communications technologies used over a one-year period in a small US corporation. We found that participants used a large number of communication tools for different purposes, and that the introduction of new tools did not significantly impact the use of previously adopted technologies. Further, we identified distinct classes of users based on patterns of tool use. This work has implications for the design of technology in the evolving ecology of communication tools.
Publication Details
  • IEEE Virtual Reality 2010 conference
  • Mar 19, 2010

Abstract

This project investigates practical uses of virtual, mobile, and mixed reality systems in industrial settings, in particular control and collaboration applications for factories. In collaboration with TCHO, a chocolate maker start-up in San Francisco, we have built virtual mirror-world representations of a real-world chocolate factory and are importing its data and modeling its processes. The system integrates mobile devices such as cell phones and tablet computers. The resulting "virtual factory" is a cross-reality environment designed for simulation, visualization, and collaboration, using a set of interlinked, real-time 3D and 2D layers of information about the factory and its processes.
2009
Publication Details
• Book chapter in "Designing User Friendly Augmented Work Environments". Series: Computer Supported Cooperative Work, Lahlou, Saadi (Ed.), 2009. Approx. 340 p., 117 illus., Hardcover
  • Sep 30, 2009

Abstract

The Usable Smart Environment project (USE) aims at designing easy-to-use, highly functional next-generation conference rooms. Our first design prototype focuses on creating a "no wizards" room for an American executive; that is, a room the executive could walk into and use by himself, without help from a technologist. A key idea in the USE framework is that customization is one of the best ways to create a smooth user experience. Since the system needs to fit both the personal leadership style of the executive and the corporation's meeting culture, we began the design process by exploring the work flow in and around meetings attended by the executive. Based on our work flow analysis and the scenarios we developed from it, USE developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The first USE room was designed for FXPAL's executive "Ian" and installed in Niji, a small executive conference room at FXPAL. Niji currently contains two large interactive whiteboards for projection of presentation material, for annotations using a digital whiteboard, or for teleconferencing; a Tandberg teleconferencing system; an RFID authentication plus biometric identification system; printing via network; a simple PDA-based controller; and a tabletop touch-screen console. The console is used for the USE room control interface, which controls and switches between all of the equipment mentioned above.
Publication Details
  • Book chapter in "Understanding the New Generation Office: Collective Intelligence of 100 Specialists" (book project in Japan, by New Era Office Research Center, Tokyo)
  • Aug 18, 2009

Abstract


A personal interface for information mash-up: exploring worlds both physical and virtual


This is a Big Idea piece for a collective intelligence book project by the New Era Office Research Center, Tokyo. It is written at the invitation of FX colleague Koushi Kawamoto. The project asks the same four questions of 100 specialists about an idea for a next-generation workplace:
  1. Want: what do I want to be able to do?
  2. Should: what should a system to support this "want" be able to do?
  3. Create: imagine what an instance of this idea might be.
  4. Can: how could this instance be realized in reality?

WANT: In my ideal work environment, the data I need on everything and everyone should be available at my fingertips, all the time, in many configurations that I can mix-and-match to suit the needs of any task. This includes things like:
  • documents of all types
  • people's status, tasks, and availability
  • audio, video, mobile, and virtual world communication channels
  • links to the physical world as appropriate, for example sensors delivering factory data, or the state of the machines I use daily in the workplace (printers, my PC, conference room systems), or awareness data about my colleagues.

CAN: How can we approach this problem? Let's consider the creation of a personal interface or instrument for information mashup, capable of interacting with complex data structures, for tuning smart environments, and for exploring worlds both physical and virtual, in business, social and personal realms. Like any interactive system this idea has two parts: human-facing and system-facing. These can be called Interstitia I (extending human interactivity) and Interstitia II (enabling smart environments).
Publication Details
  • Presentation at SIGGRAPH 2009, New Orleans, LA. ACM.
  • Aug 3, 2009

Abstract

FXPAL, a research lab in Silicon Valley, and TCHO, a chocolate manufacturer in San Francisco, have been collaborating on exploring emerging technologies for industry. The two companies seek ways to bring people closer to the products they consume, clarifying end-to-end production processes with technologies like sensor networks for fine-grained monitoring and control, mobile process control, and real/virtual mashups using virtual and augmented realities. This work lies within and extends the area of research called mixed- or cross-reality.

Mirror World Chocolate Factory

Publication Details
  • IEEE Pervasive Computing July-August 2009 (Journal, Works in Progress section)
  • Jul 18, 2009

Abstract

FXPAL, a research lab in Silicon Valley, and TCHO, a chocolate manufacturer in San Francisco, have been collaborating on exploring emerging technologies for industry. The two companies seek ways to bring people closer to the products they consume, clarifying end-to-end production processes with technologies like sensor networks for fine-grained monitoring and control, mobile process control, and real/virtual mashups using virtual and augmented realities.
Publication Details
  • Journal article in Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), 23, 263-274. Printed in the USA. 2009 Cambridge University Press.
  • Jun 17, 2009

Abstract

Modern design embraces digital augmentation, especially in the interplay of digital media content and the physical dispersion and handling of information. Based on the observation that small paper memos with sticky backs (such as Post-Its ™) are a powerful and frequently used design tool, we have created Post-Bits, a new interface device with a physical embodiment that can be handled as naturally as paper sticky notes by designers, yet add digital information affordances as well. A Post-Bit is a design prototype of a small electronic paper device for handling multimedia content, with interaction control and display in one thin flexible sheet. Tangible properties of paper such as flipping, flexing, scattering, and rubbing are mapped to controlling aspects of the multimedia content such as scrubbing, sorting, or up- or downloading dynamic media (images, video, text). In this paper we discuss both the design process involved in building a prototype of a tangible interface using new technologies, and how the use of Post-Bits as a tangible design tool can impact two common design tasks: design ideation or brainstorming, and storyboarding for interactive systems or devices.
2008
Publication Details
  • Fuji Xerox Technical Report
  • Dec 15, 2008

Abstract

We have developed an interactive video search system that allows the searcher to rapidly assess query results and easily pivot off those results to form new queries. The system is intended to maximize the use of the discriminative power of the human searcher. The typical video search scenario we consider has a single searcher with the ability to search with text and content-based queries. In this paper, we evaluate a new collaborative modification of our search system. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. In our evaluations, algorithmic mediation improved the collaborative performance of both retrieval (allowing a team of searchers to find relevant information more efficiently and effectively), and exploration (allowing the searchers to find relevant information that cannot be found while working individually). We present analysis and conclusions from comparative evaluations of the search system.

Rethinking the Podium

Publication Details
  • Chapter in "Interactive Artifacts and Furniture Supporting Collaborative Work and Learning", ed. P. Dillenbourg, J. Huang, and M. Cherubini. Published Nov. 28, 2008, Springer. Computer Supported Collaborative learning Series Vol 10.
  • Nov 28, 2008

Abstract

As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as controllers and interfaces. Many new interfaces simply tack electronic systems onto existing forms. However, an original physical design for a smart artefact, that integrates new systems as part of the form of the device, can enhance the end-use experience. The Convertible Podium is an experiment in the design of a smart artefact with complex integrated systems for the use of rich media in meeting rooms. It combines the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation. The interface emphasizes tangibility and ease of use in controlling multiple screens, multiple media sources (including mobile devices) and multiple distribution channels, and managing both data and personal representation in remote telepresence.

Cerchiamo: a collaborative exploratory search tool

Publication Details
  • CSCW 2008 (Demo), San Diego, CA, ACM Press.
  • Nov 10, 2008

Abstract

We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search results and suggested query terms based on the team's actions. The combination of synchronous influence with independent interaction allows team members to be more effective and efficient in performing search tasks.
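One way to picture algorithmic mediation of this kind (a deliberately simplified sketch, not Cerchiamo's actual ranking function): reorder one searcher's pending results to favor documents that share terms with documents a teammate has already judged relevant.

```python
def mediate(results, teammate_relevant_docs):
    """Reorder `results` (document strings) so that documents sharing
    more terms with a teammate's relevance-judged documents rank
    higher. Illustrative term-overlap scoring only."""
    liked_terms = set()
    for doc in teammate_relevant_docs:
        liked_terms.update(doc.lower().split())
    def overlap(doc):
        return len(liked_terms & set(doc.lower().split()))
    # Stable sort: ties keep their original ranking.
    return sorted(results, key=overlap, reverse=True)
```

The key property, as in the abstract, is that one member's judgments influence what the others see next without requiring them to share a single interface.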

Remix rooms: Redefining the smart conference room

Publication Details
  • CSCW 2008 (Workshop)
  • Nov 8, 2008

Abstract

In this workshop we will explore how the experience of smart conference rooms can be broadened to include different contexts and media such as context-aware mobile systems, personal and professional videoconferencing, virtual worlds, and social software. How should the technologies behind conference room systems reflect the rapidly changing expectations around personal devices and social online spaces like Facebook, Twitter, and Second Life? What kinds of systems are needed to support meetings in technologically complex environments? How can a mashup of conference room spaces and technologies account for differing social and cultural practices around meetings? What requirements are imposed by security and privacy issues in public and semi-public spaces?

mTable: Browsing Photos and Videos on a Tabletop System

Publication Details
  • ACM Multimedia 2008 (Video)
  • Oct 27, 2008

Abstract

In this video demo, we present mTable, a multimedia tabletop system for browsing photo and video collections. We have developed a set of applications for visualizing and exploring photos, a board game for labeling photos, and a 3D cityscape metaphor for browsing videos. The system is suitable for use in a living room or office lounge, and can support multiple displays by visualizing the collections on the tabletop and showing full-size images and videos on another flat panel display in the room.

UbiMEET: Design and Evaluation of Smart Environments in the Workplace

Publication Details
  • Ubicomp 2008 (Workshop)
  • Sep 21, 2008

Abstract

This workshop is the fourth in a series of UbiComp workshops on smart environment technologies and applications for the workplace. It offers a unique window into the state of the art through the participation of a range of researchers, designers and builders who exchange both basic research and real-world case experiences; and invites participants to share ideas about them. This year we focus on understanding appropriate design processes and creating valid evaluation metrics for smart environments (a recurrent request from previous workshop participants). What design processes allow integration of new ubicomp-style systems with existing technologies in a room that is in daily use? What evaluation methods and metrics give us an accurate picture, and how can that information best be applied in an iterative design process?
Publication Details
  • SIGIR 2008. (Singapore, Singapore, July 20 - 24, 2008). ACM, New York, NY, 315-322. Best Paper Award.
  • Jul 22, 2008

Abstract

We describe a new approach to information retrieval: algorithmic mediation for intentional, synchronous collaborative exploratory search. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. Collaborative search outperformed post hoc merging of similarly instrumented single user runs. Algorithmic mediation improved both collaborative search (allowing a team of searchers to find relevant information more efficiently and effectively), and exploratory search (allowing the searchers to find relevant information that cannot be found while working individually).
Publication Details
  • 1st International Workshop on Collaborative Information Retrieval. JCDL 2008.
  • Jun 20, 2008

Abstract

People can help other people find information in networked information seeking environments. Recently, many such systems and algorithms have proliferated in industry and in academia. Unfortunately, it is difficult to compare the systems in meaningful ways because they often define collaboration in different ways. In this paper, we propose a model of possible kinds of collaboration, and illustrate it with examples from literature. The model contains four dimensions: intent, concurrency, depth and location. This model can be used to classify existing systems and to suggest possible opportunities for design in this space.
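The four-dimensional model lends itself to a simple structured encoding. The value vocabularies below are illustrative guesses, not the paper's exact taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CollaborationStyle:
    """Classify a collaborative information-seeking system along the
    four proposed dimensions. Example values are illustrative."""
    intent: str       # e.g. "explicit" vs. "implicit"
    concurrency: str  # e.g. "synchronous" vs. "asynchronous"
    depth: str        # e.g. "interface-level" vs. "algorithmic"
    location: str     # e.g. "co-located" vs. "distributed"

# A synchronous, explicitly collaborative, algorithmically mediated
# system would be classified like this:
example = CollaborationStyle(intent="explicit", concurrency="synchronous",
                             depth="algorithmic", location="co-located")
```

Classifying existing systems as such tuples makes gaps visible: any unoccupied combination of values is a candidate design opportunity, which is the use the paper proposes for the model.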
2007
Publication Details
  • Workshop at Ubicomp 2007
  • Sep 16, 2007

Abstract

The past two years at UbiComp, our workshops on design and usability in next generation conference rooms engendered lively conversations in the community of people working in smart environments. The community is clearly vital and growing. This year we would like to build on the energy from previous workshops while taking on a more interactive and exploratory format. The theme for this workshop is "embodied meeting support" and includes three tracks: mobile interaction, tangible interaction, and sensing in smart environments. We encourage participants to present work that focuses on one track or that attempts to bridge multiple tracks.
Publication Details
  • Book chapter in: A Document (Re)turn. Contributions from a Research Field in Transition (Taschenbuch), Roswitha Skare, Niels Windfeld Lund, Andreas Vårheim (eds.), Peter Lang Publishing, Incorporated, 2007.
  • Feb 19, 2007

Abstract

When people are checking in to flights, making reports to their company manager, composing music, delivering papers for exams in schools, or examining patients in hospitals, they all deal with documents and processes of documentation. In earlier times, documentation took place primarily in libraries and archives. While the latter are still important document institutions, documents today play a far more essential role in social life in many different domains and cultures. In this book, which celebrates the ten year anniversary of documentation studies in Tromsø, experts from many different disciplines, professional domains as well as cultures around the world present their way of dealing with documents, demonstrating many potential directions for the emerging broad field of documentation studies.
2006
Publication Details
  • UbiComp 2006 Workshop position paper
  • Sep 20, 2006

Abstract

We describe our work-in-progress: a "wizard-free" conference room designed for ease of use, yet retaining next-generation functionality. Called USE (Usable Smart Environments), our system uses multi-display systems, immersive conferencing, and secure authentication. It is based on cross-cultural ethnographic studies of the way people use conference rooms. The USE project has developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The system consists of a database of devices with attributes, rooms, and meetings that implements a prototype-instance inheritance mechanism through which contextual information (e.g. IP addresses, application settings, phone numbers for teleconferencing systems, etc.) can be associated with particular devices, rooms, and meetings.
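The prototype-instance inheritance mechanism mentioned above can be sketched in a few lines (a minimal illustration, not the USE implementation): an instance such as a meeting falls back to its prototype, such as the room it is held in, for any setting it does not override.

```python
class ConfigNode:
    """Prototype-instance lookup for contextual settings: an instance
    (e.g. a specific meeting) inherits from its prototype (e.g. the
    room) any attribute it does not set itself."""
    def __init__(self, prototype=None, **attrs):
        self.prototype = prototype
        self.attrs = attrs

    def get(self, key):
        node = self
        while node is not None:          # walk the prototype chain
            if key in node.attrs:
                return node.attrs[key]
            node = node.prototype
        raise KeyError(key)

# Hypothetical example: a room's defaults flow down to a meeting,
# which overrides only the teleconferencing number.
room = ConfigNode(ip_address="10.0.0.5", phone="x1234")
meeting = ConfigNode(prototype=room, phone="x5678")
```

Lookup walks the prototype chain, so room-level defaults such as the IP address apply to every meeting in that room unless a meeting overrides them.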

Usable ubiquitous computing in next generation conference rooms: design, architecture and evaluation

Publication Details
  • International workshop at UbiComp 2006.
  • Sep 17, 2006

Abstract

Close
In the UbiComp 2005 workshop "Ubiquitous computing in next generation conference rooms" we learned that usability is one of the primary challenges in these spaces. Nearly all "smart" rooms, though they often have interesting and effective functionality, are very difficult to simply walk in and use. Most such rooms have resident experts who keep the room's systems functioning, and who often must be available on an everyday basis to enable the meeting technologies. The systems in these rooms are designed for and assume the presence of these human "wizards"; they are seldom designed with usability in mind. In addition, people don't know what to expect in these rooms; as yet there is no technology standard for next-generation conference rooms. The challenge here is to strike an effective balance between usability and new kinds of functionality (such as multiple displays, new interfaces, rich media systems, new uploading/access/security systems, and robust mobile integration, to name just a few of the functions we saw in last year's workshop). So, this year, we propose a workshop to focus more specifically on how the design of next-generation conference rooms can support usability: the tasks facing the real people who use these rooms daily. Usability in ubiquitous computing has been the topic of several papers and workshops. Focusing on usability in next-generation conference rooms lets us bring to bear some of the insights from this prior work in a delineated application space. In addition, the workshop will be informed by the most recent usability research in ubiquitous computing, rich media, context-aware mobile systems, multiple display environments, and interactive physical environments. We are also vitally concerned with how usability in smart environments tracks (or doesn't) across cultures. Conference room research has been and remains a focal point for some of the most interesting and applied work in ubiquitous computing.
It is also an area where there are many real-world applications and daily opportunities for user feedback: in short, a rich area for exploring usable ubiquitous computing. We see a rich opportunity to draw together researchers not only from conference room research but also from areas such as interactive furniture/smart environments, rich media, social computing, remote conferencing, and mobile devices for a lively exchange of ideas on usability in applied ubicomp systems for conference rooms.
Publication Details
  • Proceedings of AVI '06 (Short Paper), ACM Press, pp. 258-261.
  • May 23, 2006

Abstract

Close
During grouping tasks for data exploration and sense-making, the criteria are normally not well defined. As users bring together data objects thought to be similar in some way, implicit brushing continually detects groups on the freeform workspace, analyzes the groups' text content or metadata, and draws attention to related data by displaying visual hints and animation. This provides helpful tips for further grouping, group-meaning refinement, and structure discovery. The sense-making process is further enhanced by retrieving relevant information from a database or network during the brushing. Closely related to implicit brushing, target snapping provides a useful means to move a data object to one of its related groups on a large display. Natural dynamics and smooth animations also help to prevent distractions and allow users to concentrate on the grouping and thinking tasks. Two different prototype applications, note grouping for brainstorming and photo browsing, demonstrate the general applicability of the technique.
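The core loop of implicit brushing, comparing a placed object's text against each existing group and flagging related groups for the UI to highlight, can be sketched as follows. The Jaccard word-overlap metric, the threshold, and all function names are illustrative stand-ins, not the paper's actual analysis.

```python
# Hedged sketch of implicit brushing: score a note against each group of
# notes by word overlap; groups scoring above a threshold would receive
# visual hints, and the best match is the target-snapping candidate.

def words(text):
    return set(text.lower().split())

def similarity(note, group):
    """Jaccard overlap between a note's words and a group's pooled words."""
    a = words(note)
    b = set().union(*(words(n) for n in group))
    return len(a & b) / len(a | b) if a | b else 0.0

def related_groups(note, groups, threshold=0.1):
    """Return indices of related groups, best match first (for snapping)."""
    scored = [(i, similarity(note, g)) for i, g in enumerate(groups)]
    hits = [(i, s) for i, s in scored if s >= threshold]
    return [i for i, _ in sorted(hits, key=lambda t: -t[1])]

groups = [
    ["budget meeting notes", "quarterly budget review"],
    ["photo of the beach", "vacation photo album"],
]
print(related_groups("annual budget planning", groups))  # [0]
```

In a real workspace this scoring would run continuously as objects move, which is why the paper emphasizes smooth animation: hints must appear without interrupting the grouping task.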
Publication Details
  • Proceedings of ACM DIS (Designing Interactive Systems) 2006, Penn State, Penn.
  • Apr 5, 2006

Abstract

Close
What does a student need to know to be a designer? Beyond a list of separate skills, what mindset does a student need to develop for designerly action now and into the future? In the excitement of the cognitive revolution, Simon proposed a way of thinking about design that promised to make it more manageable and cognitive: to think of design as a planning problem. Yet, as Suchman argued long ago, planning accounts may be applied to problems that are not at base accomplished by planning, to the detriment of design vision. This paper reports on a pedagogy that takes Suchman's criticism to heart and avoids dressing up design methods as more systematic and predictive than they in fact are. The idea is to teach design through exposure to not just one, but rather, many methods: that is, sets of rules or behaviors that produce artifacts for further reflection and development. By introducing a large number of design methods, decoupled from theories, models or frameworks, we teach (a) important cross-methodological regularities in competence as a designer, (b) that the practice of design can itself be designed, and (c) that method choice affects design outcomes. This provides a rich and productive notion of design particularly necessary for the world of pervasive and ubiquitous computing.
2005
Publication Details
  • Video track, ACM Multimedia 2005.
  • Nov 13, 2005

Abstract

Close
A Post-Bit is a prototype of a small ePaper device for handling multimedia content, combining interaction control and display into one package. Post-Bits are modeled after paper Post-Its™; the functions of each Post-Bit combine the affordances of physical tiny sticky memos and digital handling of information. Post-Bits enable us to arrange multimedia contents in our embodied physical spaces. Tangible properties of paper such as flipping, flexing, scattering and rubbing are mapped to controlling aspects of the content. In this paper, we introduce the integrated design and functionality of the Post-Bit system, including four main components: the ePaper sticky memo/player, with integrated sensors and connectors; a small container/binder that a few Post-Bits can fit into, for ordering and multiple connections; the data and power port that allows communication with the host computer; and finally the software and GUI interface that reside on the host PC and manage multimedia transfer.
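The gesture-to-control mapping described above can be pictured as a small dispatch table. The specific gesture names come from the abstract ("flipping, flexing, scattering and rubbing"), but the actions and the dispatch function are assumptions for illustration, not the Post-Bit firmware's actual API.

```python
# Illustrative mapping of sensed Post-Bit paper gestures to content
# controls. Action names are hypothetical.

GESTURE_ACTIONS = {
    "flip":    "next_item",       # flip the memo -> advance to next content
    "flex":    "play_pause",      # bend the sheet -> toggle playback
    "scatter": "spread_items",    # scatter memos -> lay contents out spatially
    "rub":     "scrub_position",  # rub the surface -> scrub within media
}

def handle_gesture(gesture):
    """Dispatch a sensed paper gesture to a content-control action."""
    return GESTURE_ACTIONS.get(gesture, "ignore")

print(handle_gesture("flip"))  # next_item
```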
Publication Details
  • We organized and ran a full-day workshop at the UbiComp 2005 Conference in Tokyo, Japan, September 11, 2005.
  • Sep 29, 2005

Abstract

Close
Designing the technologies, applications, and physical spaces for next-generation conference rooms (a day-long workshop in Tokyo).
Next-generation conference rooms are often designed to anticipate the onslaught of new rich media presentation and ideation systems. Throughout the past couple of decades, many researchers have attempted to reinvent the conference room, aiming at shared online or visual/virtual spaces, smart tables or walls, media support and tele-conferencing systems of varying complexity. Current research in high-end room systems often features a multiplicity of thin, bright display screens (both large and small), along with interactive whiteboards, robotic cameras, and smart remote conferencing systems. Added into the mix, one can find a variety of meeting capture and metadata management systems, automatic or not, focused on capturing different aspects of meetings in different media: to the Web, to one's PDA or phone, or to a company database. Smart spaces and interactive furniture design projects have shown systems embedded in tables, podiums, walls, chairs and even floors and lighting. Exploiting the capabilities of all these technologies in one room, however, is a daunting task. For example, faced with three or more display screens, all but a few presenters are likely to opt for simply replicating the same image on all of them. Even more daunting is the design challenge: how to choose which capabilities are vital to particular tasks, or for a particular room, or are well suited to a particular culture. In this workshop we'll explore how the design of next-generation conference rooms can be informed by the most recent research in rich media, context-aware mobile systems, ubiquitous displays, and interactive physical environments. How should conference room systems reflect the rapidly changing expectations around personal devices and smart spaces? What kinds of systems are needed to support meetings in technologically complex environments?
How can design of conference room spaces and technologies account for differing social and cultural practices around meetings? What requirements are imposed by security and privacy issues in public spaces? What aspects of meeting capture and access technologies have proven to be useful, and how should a smart environment enable them? What intersections exist with other research areas such as digital libraries? Conference room research has been and remains a focal point for some of the most interesting and applied work in ubiquitous computing. What lessons can we take from the research to date as we move forward? We are confident that a lively and useful discussion will be engendered by bringing directions from recent ubicomp research in games, multimedia applications, and social software to ongoing research in conference rooms systems: integrating architecture and tangible media, information design and display, and mobile and computer-mediated communications.
Publication Details
  • Paper presented at SIGGRAPH 2005, Los Angeles.
  • Sep 29, 2005

Abstract

Close
The Convertible Podium is a central control station for rich media in next-generation classrooms. It integrates flexible control systems for multimedia software and hardware, and is designed for use in classrooms with multiple screens, multiple media sources and multiple distribution channels. The built-in custom electronics and unique convertible podium frame allow intuitive conversion between use modes (either manual or automatic). The at-a-touch sound and light control system gives control over the classroom environment. Presentations can be pre-authored for effective performance, and quickly altered on the fly. The counter-weighted and motorized conversion system allows one person to change modes simply by lifting the top of the Podium to the correct position for each mode. The Podium is lightweight, mobile, and wireless, and features an onboard 21" LCD display, document cameras and other capture devices, tangible controls for hardware and software, and embedded RFID sensing for automatic data retrieval and file management. It is designed to ease the tasks involved in authoring and presenting in a rich media classroom, as well as supporting remote telepresence and integration with other mobile devices.
Publication Details
  • Short presentation in UbiComp 2005 workshop in Tokyo, Japan.
  • Sep 11, 2005

Abstract

Close
As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as containers or controllers. Rather than simply tacking electronics onto existing furniture or other objects, the design of a smart object can enhance existing applications in unexpected ways. The Convertible Podium is an experiment in the design of a smart object with complex integrated systems, combining the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation in next-generation conference rooms. It enables easy control of multiple independent screens, multiple media sources (including mobile devices) and multiple distribution channels. The Podium is designed to ease the tasks involved in authoring and presenting in a rich media meeting room, as well as supporting remote telepresence and integration with mobile devices.
Publication Details
  • Demo and presentation in UbiComp 2005 workshop in Tokyo, Japan.
  • Sep 11, 2005

Abstract

Close
A Post-Bit is a prototype of a small ePaper device for handling multimedia content, combining interaction control and display into one package. Post-Bits are modeled after paper Post-Its™; the functions of each Post-Bit combine the affordances of physical tiny sticky memos and digital handling of information. Post-Bits enable us to arrange multimedia contents in our embodied physical spaces. Tangible properties of paper such as flipping, flexing, scattering and rubbing are mapped to controlling aspects of the content. In this paper, we introduce the integrated design and functionality of the Post-Bit system, including four main components: the ePaper sticky memo/player, with integrated sensors and connectors; a small container/binder that a few Post-Bits can fit into, for ordering and multiple connections; the data and power port that allows communication with the host computer; and finally the software and GUI interface that reside on the host PC and manage multimedia transfer.