Publications

FXPAL publishes in top scientific conferences and journals.

2008

Cerchiamo: a collaborative exploratory search tool

Publication Details
  • CSCW 2008 (Demo), San Diego, CA, ACM Press.
  • Nov 10, 2008

Abstract

We describe Cerchiamo, a collaborative exploratory search system that allows teams of searchers to explore document collections synchronously. Working with Cerchiamo, team members use independent interfaces to run queries, browse results, and make relevance judgments. The system mediates the team members' search activity by passing and reordering search results and suggested query terms based on the team's actions. The combination of synchronous influence with independent interaction allows team members to be more effective and efficient in performing search tasks.
Publication Details
  • Workshop held in conjunction with CSCW 2008
  • Nov 8, 2008

Abstract

It is increasingly common to find Multiple Display Environments (MDEs) in a variety of settings, including the workplace, the classroom, and perhaps soon, the home. While some technical challenges exist even in single-user MDEs, collaborative use of MDEs offers a rich set of opportunities for research and development. In this workshop, we will bring together experts in designing, developing, building and evaluating MDEs to improve our collective understanding of design guidelines, relevant real-world activities, evaluation methods and metrics, and opportunities for remote as well as collocated collaboration. We intend to create not only a broader understanding of this growing field, but also to foster a community of researchers interested in bringing these environments from the laboratory to the real world. In this workshop, we intend to explore the following research themes:
  • Elicitation and process of distilling design guidelines for MDE systems and interfaces.
  • Investigation and classification of activities suited for MDEs.
  • Exploration and assessment of how existing groupware theories apply to collaboration in MDEs.
  • Evaluation techniques and metrics for assessing effectiveness of prototype MDE systems and interfaces.
  • Exploration of MDE use beyond strictly collocated collaboration.

Remix rooms: Redefining the smart conference room

Publication Details
  • CSCW 2008 (Workshop)
  • Nov 8, 2008

Abstract

In this workshop we will explore how the experience of smart conference rooms can be broadened to include different contexts and media such as context-aware mobile systems, personal and professional videoconferencing, virtual worlds, and social software. How should the technologies behind conference room systems reflect the rapidly changing expectations around personal devices and social online spaces like Facebook, Twitter, and Second Life? What kinds of systems are needed to support meetings in technologically complex environments? How can a mashup of conference room spaces and technologies account for differing social and cultural practices around meetings? What requirements are imposed by security and privacy issues in public and semi-public spaces?

Reading in the Office

Publication Details
  • BooksOnline'08, October 30, 2008
  • Oct 30, 2008

Abstract

Reading online poses a number of technological challenges. Advances in technology such as touch screens, light-weight high-power computers, and bi-stable displays have periodically renewed interest in online reading over the last twenty years, only to see that interest decline to a small early-adopter community. The recent release of the Kindle by Amazon is another attempt to create an online reading device. Has publicity surrounding Kindle and other such devices reached critical mass to allow them to penetrate the consumer market successfully, or will we see a decline in interest over the next couple of years echoing the lifecycle of Softbook™ and Rocket eBook™ devices that preceded them? I argue that the true value of online reading lies in supporting activities beyond reading per se: activities such as annotation, reading and comparing multiple documents, transitions between reading, writing and retrieval, etc. Whether the current hardware will be successful in the long term may depend on its abilities to address the reading needs of knowledge workers, not just leisure readers.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

Audio monitoring has many applications but also raises privacy concerns. In an attempt to help alleviate these concerns, we have developed a method for reducing the intelligibility of speech while preserving intonation and the ability to recognize most environmental sounds. The method is based on identifying vocalic regions and replacing the vocal tract transfer function of these regions with the transfer function from prerecorded vowels, where the identity of the replacement vowel is independent of the identity of the spoken syllable. The audio signal is then re-synthesized using the original pitch and energy, but with the modified vocal tract transfer function. We performed an intelligibility study which showed that environmental sounds remained recognizable but speech intelligibility can be dramatically reduced to a 7% word recognition rate.
Publication Details
  • Proceedings of ACM Multimedia '08, pp. 817-820 (Short Paper).
  • Oct 27, 2008

Abstract

We present an automatic zooming technique that leverages content analysis for viewing a document page on a small display such as a mobile phone or PDA. The page can come from a scanned document (bitmap image) or an electronic document (text and graphics data plus metadata). The page with text and graphics is segmented into regions. For each region, a scale-distortion function is constructed based on image analysis of the signal distortion that occurs at different scales. During interactive viewing of the document, as the user navigates by moving the viewport around the page, the zoom factor is automatically adjusted by optimizing the scale-distortion functions of the regions visible in the viewport.
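The abstract leaves the optimization itself to the paper; as a rough sketch of how per-region scale-distortion curves might drive zoom selection, the Python fragment below picks the smallest zoom whose summed distortion for the regions visible in the viewport stays under a budget. The example curves, threshold, and selection rule are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: choose a zoom factor from per-region scale-distortion curves.
def choose_zoom(visible_regions, candidate_scales, max_distortion=1.0):
    """visible_regions: list of functions mapping scale -> distortion for regions in the viewport."""
    for s in sorted(candidate_scales):
        # The smallest acceptable scale shows the most context around the viewport.
        if sum(d(s) for d in visible_regions) <= max_distortion:
            return s
    return max(candidate_scales)  # nothing acceptable: fall back to the largest zoom

# Example: a text region degrades quickly below 60% scale, a photo region hardly at all.
text_region = lambda s: max(0.0, 0.6 - s) * 10.0
photo_region = lambda s: max(0.0, 0.3 - s) * 2.0
print(choose_zoom([text_region, photo_region], [i / 10.0 for i in range(1, 11)]))  # -> 0.5
```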

mTable: Browsing Photos and Videos on a Tabletop System

Publication Details
  • ACM Multimedia 2008 (Video)
  • Oct 27, 2008

Abstract

In this video demo, we present mTable, a multimedia tabletop system for browsing photo and video collections. We have developed a set of applications for visualizing and exploring photos, a board game for labeling photos, and a 3D cityscape metaphor for browsing videos. The system is suitable for use in a living room or office lounge, and can support multiple displays by visualizing the collections on the tabletop and showing full-size images and videos on another flat panel display in the room.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

PicNTell is a new technique for generating compelling screencasts where users can quickly record desktop activities and generate videos that are embeddable on popular video sharing distributions such as YouTube®. While standard video editing and screen capture tools are useful for some editing tasks, they have two main drawbacks: (1) they require users to import and organize media in a separate interface, and (2) they do not support natural (or camcorder-like) screen recording, and instead usually require the user to define a specific region or window to record. In this paper we review current screen recording use, and present the PicNTell system, pilot studies, and a new six degree-of-freedom tracker we are developing in response to our findings.
Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

This demo introduces a tool for accessing an e-document by capturing one or more images of a real object or document hardcopy. This tool is useful when a file name or location of the file is unknown or unclear. It can save field workers and office workers from remembering/exploring numerous directories and file names. Frequently, it can convert tedious keyboard typing in a search box to a simple camera click. Additionally, when a remote collaborator cannot clearly see an object or a document hardcopy through remote collaboration cameras, this tool can be used to automatically retrieve and send the original e-document to a remote screen or printer.

Ranked Feature Fusion Models for Ad Hoc Retrieval

Publication Details
  • CIKM (Conference on Information and Knowledge Management) 2008, October, Napa, CA
  • Oct 27, 2008

Abstract

We introduce the Ranked Feature Fusion framework for information retrieval system design. Typical information retrieval formalisms such as the vector space model, the best-match model and the language model first combine features (such as term frequency and document length) into a unified representation, and then use the representation to rank documents. We take the opposite approach: Documents are first ranked by the relevance of a single feature value and are assigned scores based on their relative ordering within the collection. A separate ranked list is created for every feature value and these lists are then fused to produce a final document scoring. This new "rank then combine" approach is extensively evaluated and is shown to be as effective as traditional "combine then rank" approaches. The model is easy to understand and contains fewer parameters than other approaches. Finally, the model is easy to extend (integration of new features is trivial) and modify. This advantage includes but is not limited to relevance feedback and distribution flattening.
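To make the "rank then combine" idea concrete, here is a small, hypothetical Python sketch: each feature ranks the collection on its own, per-feature scores depend only on rank position, and the per-feature lists are fused by summation. The reciprocal-rank scoring and the toy features are my assumptions, not the paper's model.

```python
def rank_then_combine(docs, features, weights=None):
    """docs: list of documents; features: functions doc -> value (higher = more relevant)."""
    weights = weights or [1.0] * len(features)
    n = len(docs)
    fused = [0.0] * n
    for feat, w in zip(features, weights):
        # Rank all documents by this single feature value.
        order = sorted(range(n), key=lambda i: feat(docs[i]), reverse=True)
        for rank, i in enumerate(order):
            fused[i] += w / (rank + 1)  # score depends only on relative ordering (reciprocal rank here)
    return sorted(range(n), key=lambda i: fused[i], reverse=True)

docs = [{"tf": 3, "len": 120}, {"tf": 1, "len": 40}, {"tf": 5, "len": 900}]
features = [lambda d: d["tf"],    # term frequency: higher is better
            lambda d: -d["len"]]  # document length: shorter is better
print(rank_then_combine(docs, features))  # fused ordering of document indices
```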
Publication Details
  • ACM Multimedia
  • Oct 27, 2008

Abstract

Retail establishments want to know about traffic flow and patterns of activity in order to better arrange and staff their business. A large number of fixed video cameras are commonly installed at these locations. While they can be used to observe activity in the retail environment, assigning personnel to this is too time consuming to be valuable for retail analysis. We have developed video processing and visualization techniques that generate presentations appropriate for examining traffic flow and changes in activity at different times of the day. Taking the results of video tracking software as input, our system aggregates activity in different regions of the area being analyzed, determines the average speed of moving objects in the region, and segments time based on significant changes in the quantity and/or location of activity. Visualizations present the results as heat maps to show activity and object counts and average velocities overlaid on the map of the space.
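As a minimal sketch of the aggregation step only (the input layout, grid size, and formulas are assumptions; the paper works from the output of its own tracking software), the code below bins trajectories into a grid and accumulates counts and average speeds suitable for a heat-map overlay.

```python
import numpy as np

def activity_heatmap(tracks, floor_w, floor_h, cell=1.0):
    """tracks: list of trajectories, each a list of (t, x, y) samples in floor coordinates."""
    nx, ny = int(floor_w / cell), int(floor_h / cell)
    counts = np.zeros((ny, nx))
    speed_sum = np.zeros((ny, nx))
    for track in tracks:
        for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
            i = min(int(y1 / cell), ny - 1)
            j = min(int(x1 / cell), nx - 1)
            counts[i, j] += 1
            dt = max(t1 - t0, 1e-6)
            speed_sum[i, j] += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
    # Average speed per cell; cells with no observations stay zero.
    avg_speed = np.divide(speed_sum, counts, out=np.zeros_like(speed_sum), where=counts > 0)
    return counts, avg_speed

tracks = [[(0.0, 1.0, 1.0), (1.0, 1.5, 1.2), (2.0, 2.5, 1.4)]]
counts, speeds = activity_heatmap(tracks, floor_w=10.0, floor_h=5.0)
```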

Virtual Physics Circus (video)

Publication Details
  • ACM Multimedia 2008
  • Oct 27, 2008

Abstract

This video shows the Virtual Physics Circus, a kind of playground for experimenting with simple physical models. The system makes it easy to create worlds with common physical objects such as swings, vehicles, ramps, and walls, and interactively play with those worlds. The system can be used as a creative art medium as well as to gain understanding and intuition about physical systems. The system can be controlled by a number of UI devices such as mouse, keyboard, joystick, and tags which are tracked in 6 degrees of freedom.
Publication Details
  • ACM Multimedia 2008 Workshop: TrecVid Summarization 2008 (TVS'08)
  • Oct 26, 2008

Abstract

In this paper we describe methods for video summarization in the context of the TRECVID 2008 BBC Rushes Summarization task. Color, motion, and audio features are used to segment, filter, and cluster the video. We experiment with varying the segment similarity measure to improve the joint clustering of segments with and without camera motion. Compared to our previous effort for TRECVID 2007 we have reduced the complexity of the summarization process as well as the visual complexity of the summaries themselves. We find our objective (inclusion) performance to be competitive with systems exhibiting similar subjective performance.
Publication Details
  • Demonstration at UIST 2008
  • Oct 20, 2008

Abstract

The iPhone takes a fresh approach to defining the user interface for mobile devices, which invites further innovation for new generations of touch enabled mobile devices. At the same time, some of its interaction designs provide challenges. For example, swiping gestures can be used anywhere on the screen of an iPhone for navigation; no scroll bars are used. This makes navigation remarkably seamless and easy, at the expense of selection tasks that would also be supported naturally by the same gestures. In this demo, we show techniques that enable both activities simultaneously with minimal interference. We also demonstrate other user interface designs that are driven by the features and a desire to overcome the limits of small displays for iPhone-type devices. This includes diagonal scrolling as a means to maximize line width and font size for mobile reading, and a graphical authentication method.

UbiMEET: Design and Evaluation of Smart Environments in the Workplace

Publication Details
  • Ubicomp 2008 (Workshop)
  • Sep 21, 2008

Abstract

This workshop is the fourth in a series of UbiComp workshops on smart environment technologies and applications for the workplace. It offers a unique window into the state of the art through the participation of a range of researchers, designers and builders who exchange both basic research and real-world case experiences; and invites participants to share ideas about them. This year we focus on understanding appropriate design processes and creating valid evaluation metrics for smart environments (a recurrent request from previous workshop participants). What design processes allow integration of new ubicomp-style systems with existing technologies in a room that is in daily use? What evaluation methods and metrics give us an accurate picture, and how can that information best be applied in an iterative design process?

General Certificateless Encryption and Timed-Release Encryption

Publication Details
  • SCN 2008
  • Sep 10, 2008

Abstract

While recent timed-release encryption (TRE) schemes are implicitly supported by a certificateless encryption (CLE) mechanism, the security models of CLE and TRE differ and there is no generic transformation from a CLE to a TRE. This paper gives a generalized model for CLE that fulfills the requirements of TRE. This model is secure against adversaries with adaptive trapdoor extraction capabilities for arbitrary identifiers, decryption capabilities for arbitrary public keys, and partial decryption capabilities. It also supports hierarchical identifiers. We propose a concrete scheme under our generalized model and prove it secure without random oracles, yielding the first strongly-secure SMCLE and the first TRE in the standard model. In addition, our technique of partial decryption is different from the previous approach.
Publication Details
  • Social Mobile Media Workshop
  • Aug 1, 2008

Abstract

Mobile media applications need to balance user and group goals, attentional constraints, and limited screen real estate. In this paper, we describe the development and testing of two application sketches designed to explore these tradeoffs. The first is retrospective and time-based and the second is prospective and space-based. We found that attentional demands dominate and mobile media applications should therefore be lightweight and hands-free as much as possible.
Publication Details
  • IADIS e-Learning 2008
  • Jul 22, 2008

Abstract

While researchers have been exploring automatic presentation capture since the 1990's, real world adoption has been limited. Our research focuses on simplifying presentation capture and retrieval to reduce adoption barriers. ProjectorBox is our attempt to create a smart appliance that seamlessly captures, indexes, and archives presentation media, with streamlined user interfaces for searching, skimming, and sharing content. In this paper we describe the design of ProjectorBox and compare its use across corporate and educational settings. While our evaluation confirms the usability and utility of our approach across settings, it also highlights differences in usage and user needs, suggesting enhancements for both markets. We describe new features we have implemented to address corporate needs for enhanced privacy and security, and new user interfaces for content discovery.

Algorithmic Mediation for Collaborative Exploratory Search.

Publication Details
  • SIGIR 2008. (Singapore, Singapore, July 20 - 24, 2008). ACM, New York, NY, 315-322. Best Paper Award.
  • Jul 22, 2008

Abstract

We describe a new approach to information retrieval: algorithmic mediation for intentional, synchronous collaborative exploratory search. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities. Collaborative search outperformed post hoc merging of similarly instrumented single user runs. Algorithmic mediation improved both collaborative search (allowing a team of searchers to find relevant information more efficiently and effectively), and exploratory search (allowing the searchers to find relevant information that cannot be found while working individually).

Experiments in Interactive Video Search by Addition and Subtraction

Publication Details
  • ACM Conf. on Image and Video Retrieval (CIVR) 2008
  • Jul 7, 2008

Abstract

We have developed an interactive video search system that allows the searcher to rapidly assess query results and easily pivot on those results to form new queries. The system is intended to maximize the use of the discriminative power of the human searcher. This is accomplished by providing a hierarchical segmentation, streamlined interface, and redundant visual cues throughout. The typical search scenario includes a single searcher with the ability to search with text and content-based queries. In this paper, we evaluate new variations on our basic search system. In particular we test the system using only visual content-based search capabilities, and using paired searchers in a realtime collaboration. We present analysis and conclusions from these experiments.

FXPAL Collaborative Exploratory Video Search System

Publication Details
  • CIVR 2008 VideOlympics (Demo)
  • Jul 7, 2008

Abstract

This paper describes FXPAL's collaborative, exploratory interactive video search application. We introduce a new approach to information retrieval: algorithmic mediation in support of intentional, synchronous collaborative exploratory search. Using our system, two or more users with a common information need search together, simultaneously. The collaborative system provides tools, user interfaces and, most importantly, algorithmically-mediated retrieval to focus, enhance and augment the team's search and communication activities.

Collaborative Information Seeking in Electronic Environments

Publication Details
  • Information Seeking Support Systems Workshop. An Invitational Workshop Sponsored by the National Science Foundation. Available online at http://www.ils.unc.edu/ISSS/
  • Jun 26, 2008

Abstract

Collaboration in information seeking, while common in practice, is just being recognized as an important research area. Several studies have documented various collaboration strategies that people have adopted (and adapted), and some initial systems have been built. This field is in its infancy, however. We need to understand which real-world tasks are best suited for collaborative work. We need to extend models of information seeking to accommodate explicit and implicit collaboration. We need to invent a suite of algorithms to mediate search activities. We need to devise evaluation metrics that take into account multiple people's contributions to search.
Publication Details
  • IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2008
  • Jun 24, 2008

Abstract

Current approaches to pose estimation and tracking can be classified into two categories: generative and discriminative. While generative approaches can accurately determine human pose from image observations, they are computationally intractable due to search in the high dimensional human pose space. On the other hand, discriminative approaches do not generalize well, but are computationally efficient. We present a hybrid model that combines the strengths of the two in an integrated learning and inference framework. We extend the Gaussian process latent variable model (GPLVM) to include an embedding from observation space (the space of image features) to the latent space. GPLVM is a generative model, but the inclusion of this mapping provides a discriminative component, making the model observation driven. Observation Driven GPLVM (OD-GPLVM) not only provides a faster inference approach, but also more accurate estimates (compared to GPLVM) in cases where dynamics are not sufficient for the initialization of search in the latent space. We also extend OD-GPLVM to learn and estimate poses from parameterized actions/gestures. Parameterized gestures are actions which exhibit large systematic variation in joint angle space for different instances due to differences in contextual variables. For example, the joint angles in a forehand tennis shot are a function of the height of the ball (Figure 2). We learn these systematic variations as a function of the contextual variables. We then present an approach to use information from scene/object to provide context for human pose estimation for such parameterized actions.

Vital Sign Estimation from Passive Thermal Video

Publication Details
  • IEEE Computer Society Conference on Computer Vision and Pattern Recognition
  • Jun 24, 2008

Abstract

Conventional wired detection of vital signs limits the use of these important physiological parameters by many applications, such as airport health screening, elder care, and workplace preventive care. In this paper, we explore contact-free heart rate and respiratory rate detection through measuring infrared light modulation emitted near superficial blood vessels or a nasal area respectively. To deal with complications caused by subjects' movements, facial expressions, and partial occlusions of the skin, we propose a novel algorithm based on contour segmentation and tracking, clustering of informative pixels, and dominant frequency component estimation. The proposed method achieves robust subject regions-of-interest alignment and motion compensation in infrared video with low SNR. It relaxes some strong assumptions used in previous work and substantially improves on previously reported performance. Preliminary experiments on heart rate estimation for 20 subjects and respiratory rate estimation for 8 subjects exhibit promising results.
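The paper's contribution is the segmentation, tracking, and pixel clustering that yield a clean temporal signal from thermal video; the final dominant-frequency step can be sketched roughly as below (the synthetic signal and band limits are my assumptions).

```python
import numpy as np

def dominant_frequency(signal, fps, band=(0.7, 3.0)):
    """Strongest frequency (Hz) of a 1-D intensity signal within a plausible physiological band."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Synthetic stand-in for the mean intensity of informative pixels over 20 s of 30 fps video.
fps, seconds = 30, 20
t = np.arange(fps * seconds) / fps
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(len(t))  # ~72 beats/min plus noise
print(round(dominant_frequency(signal, fps) * 60), "estimated beats per minute")
```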

1st International Workshop on Collaborative Information Retrieval

Publication Details
  • JCDL 2008
  • Jun 20, 2008

Abstract

Explicit support for collaboration is becoming increasingly important for certain kinds of collection-building activities in digital libraries. In the last few years, several research groups have also pursued various issues related to collaboration during search [4][5][6]. We can represent collaboration in search on two dimensions - synchrony and intent. Asynchronous collaboration means that people are not working on the same problem simultaneously; implicit collaboration occurs when the system uses information from others' use of the system to inform new searches, but does not guarantee consistency of search goals. In this workshop, we are concerned with the top-left quadrant of Figure 1 that represents small groups of people working toward a common goal at the same time. These synchronous, explicit collaborations could occur amongst remotely situated users, each with their own computers, or amongst a co-located group sharing devices; these spatial configurations add yet another dimension to be considered when designing collaborative search systems.

A Taxonomy of Collaboration in Online Information Seeking

Publication Details
  • 1st International Workshop on Collaborative Information Retrieval. JCDL 2008.
  • Jun 20, 2008

Abstract

People can help other people find information in networked information seeking environments. Recently, many such systems and algorithms have proliferated in industry and in academia. Unfortunately, it is difficult to compare the systems in meaningful ways because they often define collaboration in different ways. In this paper, we propose a model of possible kinds of collaboration, and illustrate it with examples from literature. The model contains four dimensions: intent, concurrency, depth and location. This model can be used to classify existing systems and to suggest possible opportunities for design in this space.

Simple and Effective Defense Against Evil Twin Access Points

Publication Details
  • Proceedings ACM WiSec, pp. 220-235, 2008
  • Mar 31, 2008

Abstract

Wireless networking is becoming widespread in many public places such as cafes. Unsuspecting users may become victims of attacks based on "evil twin" access points. These rogue access points are operated by criminals in an attempt to launch man-in-the-middle attacks. We present a simple protection mechanism against binding to an evil twin. The mechanism leverages short authentication string protocols for the exchange of cryptographic keys. The short string verification is performed by encoding the short strings as a sequence of colors, rendered sequentially by the user's device and by the designated access point of the cafe. The access point must have a light capable of showing two colors and must be mounted prominently in a position where users can have confidence in its authenticity. We conducted a usability study with patrons in several cafes and participants found our protection mechanism very usable.
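A toy illustration of the color-encoding idea (the hash function, string length, and the two colors are assumptions for this sketch, not the protocol in the paper): both the user's device and the cafe's access point derive the same short authentication string and render it as a color sequence that the user visually compares.

```python
import hashlib

def sas_color_sequence(transcript: bytes, n_bits: int = 15):
    """Derive a short authentication string from the key-exchange transcript and encode it as colors."""
    digest = hashlib.sha256(transcript).digest()
    bits = "".join(f"{byte:08b}" for byte in digest)[:n_bits]
    return ["RED" if b == "1" else "GREEN" for b in bits]

# The device and the cafe's designated access point show the same sequence;
# a mismatch suggests an evil twin in the middle.
print(sas_color_sequence(b"example key-exchange transcript"))
```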

FXPAL Interactive Search Experiments for TRECVID 2007

Publication Details
  • TRECVid 2007
  • Mar 1, 2008

Abstract

In 2007 FXPAL submitted results for two tasks: rushes summarization and interactive search. The rushes summarization task has been described at the ACM Multimedia workshop. Interested readers are referred to that publication for details. We describe our interactive search experiments in this notebook paper.

Exiting the Cleanroom: On Ecological Validity and Ubiquitous Computing

Publication Details
  • Human-Computer Interaction Journal
  • Feb 15, 2008

Abstract

Over the past decade and a half, corporations and academies have invested considerable time and money in the realization of ubiquitous computing. Yet design approaches that yield ecologically valid understandings of ubiquitous computing systems, which can help designers make design decisions based on how systems perform in the context of actual experience, remain rare. The central question underlying this paper is: what barriers stand in the way of real-world, ecologically valid design for ubicomp? Using a literature survey and interviews with 28 developers, we illustrate how issues of sensing and scale cause ubicomp systems to resist iteration, prototype creation, and ecologically valid evaluation. In particular, we found that developers have difficulty creating prototypes that are both robust enough for realistic use and able to handle ambiguity and error, and that they struggle to gather useful data from evaluations either because critical events occur infrequently, because the level of use necessary to evaluate the system is difficult to maintain, or because the evaluation itself interferes with use of the system. We outline pitfalls for developers to avoid as well as practical solutions, and we draw on our results to outline research challenges for the future. Crucially, we do not argue for particular processes, sets of metrics, or intended outcomes but rather focus on prototyping tools and evaluation methods that support realistic use in realistic settings that can be selected according to the needs and goals of a particular developer or researcher.
2007
Publication Details
  • The 3rd International Conference on Collaborative Computing: Networking, Applications and Worksharing
  • Nov 12, 2007

Abstract

This paper summarizes our environment-image/video-supported collaboration technologies developed in the past several years. These technologies use environment images and videos as active interfaces and use visual cues in these images and videos to orient device controls, annotations and other information access. By using visual cues in various interfaces, we expect to make the control interface more intuitive than button-based control interfaces and command-based interfaces. These technologies can be used to facilitate high-quality audio/video capture with limited cameras and microphones. They can also facilitate multi-screen presentation authoring and playback, teleinteraction, environment manipulation with cell phones, and environment manipulation with digital pens.

Collaborative Exploratory Search

Publication Details
  • HCIR 2007, Boston, Massachusetts (HCIR = Human Computer Interaction and Information Retrieval)
  • Nov 2, 2007

Abstract

We propose to mitigate the deficiencies of correlated search with collaborative search, that is, search in which a small group of people shares a common information need and actively (and synchronously) collaborates to achieve it. Furthermore, we propose a system architecture that mediates search activity of multiple people by combining their inputs and by specializing results delivered to them to take advantage of their skills and knowledge.

DOTS: Support for Effective Video Surveillance

Publication Details
  • Fuji Xerox Technical Report No. 17, pp. 83-100
  • Nov 1, 2007

Abstract

DOTS (Dynamic Object Tracking System) is an indoor, real-time, multi-camera surveillance system, deployed in a real office setting. DOTS combines video analysis and user interface components to enable security personnel to effectively monitor views of interest and to perform tasks such as tracking a person. The video analysis component performs feature-level foreground segmentation with reliable results even under complex conditions. It incorporates an efficient greedy-search approach for tracking multiple people through occlusion and combines results from individual cameras into multi-camera trajectories. The user interface draws the users' attention to important events that are indexed for easy reference. Different views within the user interface provide spatial information for easier navigation. DOTS, with over twenty video cameras installed in hallways and other public spaces in our office building, has been in constant use for a year. Our experiences led to many changes that improved performance in all system components.
Publication Details
  • UIST 2007 Poster & Demo
  • Oct 7, 2007

Abstract

We are exploring the use of collaborative games to generate meaningful textual tags for photos. We have designed PhotoPlay to take advantage of the social engagement typical of board games and provide a collocated ludic environment conducive to the creation of text tags. We evaluated PhotoPlay and found that it was fun and socially engaging for players. The milieu of the game also facilitated playing with personal photos, which resulted in more specific tags such as named entities than when playing with randomly selected online photos. Players also had a preference for playing with personal photos.
Publication Details
  • TRECVID Video Summarization Workshop at ACM Multimedia 2007
  • Sep 28, 2007

Abstract

This paper describes a system for selecting excerpts from unedited video and presenting the excerpts in a short summary video for efficiently understanding the video contents. Color and motion features are used to divide the video into segments where the color distribution and camera motion are similar. Segments with and without camera motion are clustered separately to identify redundant video. Audio features are used to identify clapboard appearances for exclusion. Representative segments from each cluster are selected for presentation. To increase the original material contained within the summary and reduce the time required to view the summary, selected segments are played back at a higher rate based on the amount of detected camera motion in the segment. Pitch-preserving audio processing is used to better capture the sense of the original audio. Metadata about each segment is overlayed on the summary to help the viewer understand the context of the summary segments in the original video.
Publication Details
  • ICDSC 2007, pp. 132-139
  • Sep 25, 2007

Abstract

Our analysis and visualization tools use 3D building geometry to support surveillance tasks. These tools are part of DOTS, our multicamera surveillance system; a system with over 20 cameras spread throughout the public spaces of our building. The geometric input to DOTS is a floor plan and information such as cubicle wall heights. From this input we construct a 3D model and an enhanced 2D floor plan that are the bases for more specific visualization and analysis tools. Foreground objects of interest can be placed within these models and dynamically updated in real time across camera views. Alternatively, a virtual first-person view suggests what a tracked person can see as she moves about. Interactive visualization tools support complex camera-placement tasks. Extrinsic camera calibration is supported both by visualizations of parameter adjustment results and by methods for establishing correspondences between image features and the 3D model.

DOTS: Support for Effective Video Surveillance

Publication Details
  • ACM Multimedia 2007, pp. 423-432
  • Sep 24, 2007

Abstract

DOTS (Dynamic Object Tracking System) is an indoor, real-time, multi-camera surveillance system, deployed in a real office setting. DOTS combines video analysis and user interface components to enable security personnel to effectively monitor views of interest and to perform tasks such as tracking a person. The video analysis component performs feature-level foreground segmentation with reliable results even under complex conditions. It incorporates an efficient greedy-search approach for tracking multiple people through occlusion and combines results from individual cameras into multi-camera trajectories. The user interface draws the users' attention to important events that are indexed for easy reference. Different views within the user interface provide spatial information for easier navigation. DOTS, with over twenty video cameras installed in hallways and other public spaces in our office building, has been in constant use for a year. Our experiences led to many changes that improved performance in all system components.
Publication Details
  • IEEE Intl. Conf. on Semantic Computing
  • Sep 17, 2007

Abstract

We present methods for semantic annotation of multimedia data. The goal is to detect semantic attributes (also referred to as concepts) in clips of video via analysis of a single keyframe or set of frames. The proposed methods integrate high performance discriminative single concept detectors in a random field model for collective multiple concept detection. Furthermore, we describe a generic framework for semantic media classification capable of capturing arbitrary complex dependencies between the semantic concepts. Finally, we present initial experimental results comparing the proposed approach to existing methods.

Embodied Meeting Support: Mobile, Tangible, Senseable Interaction in Smart Environments

Publication Details
  • Workshop at Ubicomp 2007
  • Sep 16, 2007

Abstract

The past two years at UbiComp, our workshops on design and usability in next generation conference rooms engendered lively conversations in the community of people working in smart environments. The community is clearly vital and growing. This year we would like to build on the energy from previous workshops while taking on a more interactive and exploratory format. The theme for this workshop is "embodied meeting support" and includes three tracks: mobile interaction, tangible interaction, and sensing in smart environments. We encourage participants to present work that focuses on one track or that attempts to bridge multiple tracks.

FXPAL MediaMagic Video Search System

Publication Details
  • ACM Conf. on Image and Video Retrieval 2007
  • Jul 29, 2007

Abstract

This paper describes FXPAL's interactive video search application, "MediaMagic". FXPAL has participated in the TRECVID interactive search task since 2004. In our search application we employ a rich set of redundant visual cues to help the searcher quickly sift through the video collection. A central element of the interface and underlying search engine is a segmentation of the video into stories, which allows the user to quickly navigate and evaluate the relevance of moderately-sized, semantically-related chunks.
Publication Details
  • ICME 2007
  • Jul 2, 2007

Abstract

The recent emergence of multi-core processors enables a new trend in the usage of computers. Computer vision applications, which require heavy computation and lots of bandwidth, usually cannot run in real-time. Recent multi-core processors can potentially serve the needs of such workloads. In addition, more advanced algorithms can be developed utilizing the new computation paradigm. In this paper, we study the performance of an articulated body tracker on multi-core processors. The articulated body tracking workload encapsulates most of the important aspects of a computer vision workload. It takes multiple camera inputs of a scene with a single human object, extracts useful features, and performs statistical inference to find the body pose. We show the importance of properly parallelizing the workload in order to achieve great performance: speedups of 26 on 32 cores. We conclude that: (1) data-domain parallelization is better than function-domain parallelization for computer vision applications; (2) data-domain parallelism by image regions and particles is very effective; (3) reducing serial code in edge detection brings significant performance improvements; (4) domain knowledge about low/mid/high level of vision computation is helpful in parallelizing the workload.
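As a toy illustration of the data-domain partitioning style the paper favors (the per-region operation, tile counts, and process pool here are placeholders, not the authors' body tracker), one frame is split into image regions that are processed in parallel.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def process_region(tile):
    # Placeholder per-region vision work: count strong horizontal gradients ("edges").
    grad = np.abs(np.diff(tile.astype(float), axis=1))
    return int((grad > 30).sum())

def process_frame(frame, rows=4, cols=8):
    # Data-domain parallelism: partition the frame into tiles and process them concurrently.
    tiles = [t for band in np.array_split(frame, rows, axis=0)
             for t in np.array_split(band, cols, axis=1)]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(process_region, tiles))

if __name__ == "__main__":
    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
    print(process_frame(frame))
```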

Featured Wand for 3D Interaction

Publication Details
  • ICME 2007
  • Jul 2, 2007

Abstract

Our featured wand, automatically tracked by video cameras, provides an inexpensive and natural way for users to interact with devices such as large displays. The wand supports six degrees of freedom for manipulation of 3D applications like Google Earth. Our system uses a 'line scan' approach for wand pose tracking, which simplifies processing. Several applications are demonstrated.
Publication Details
  • ICME 2007, pp. 1015-1018
  • Jul 2, 2007

Abstract

We describe a new interaction technique that allows users to control nonlinear video playback by directly manipulating objects seen in the video. This interaction technique is similar to video "scrubbing" where the user adjusts the playback time by moving the mouse along a slider. Our approach is superior to variable-scale scrubbing in that the user can concentrate on interesting objects and does not have to guess how long the objects will stay in view. Our method relies on a video tracking system that tracks objects in fixed cameras, maps them into 3D space, and handles hand-offs between cameras. In addition to dragging objects visible in video windows, users may also drag iconic object representations on a floor plan. In that case, the best video views are selected for the dragged objects.
Publication Details
  • ICME 2007, pp. 675-678
  • Jul 2, 2007

Abstract

In this paper we describe the analysis component of an indoor, real-time, multi-camera surveillance system. The analysis includes: (1) a novel feature-level foreground segmentation method which achieves efficient and reliable segmentation results even under complex conditions, (2) an efficient greedy search based approach for tracking multiple people through occlusion, and (3) a method for multi-camera handoff that associates individual trajectories in adjacent cameras. The analysis is used for an 18 camera surveillance system that has been running continuously in an indoor business over the past several months. Our experiments demonstrate that the processing method for people detection and tracking across multiple cameras is fast and robust.
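A much-simplified sketch of one greedy association step (the distance gate and point features are invented; the paper's tracker works with richer appearance and occlusion handling): the closest track/detection pairs are linked first until the gate is exceeded.

```python
import math

def greedy_associate(tracks, detections, gate=75.0):
    """tracks, detections: dicts of id -> (x, y) position. Returns {track_id: detection_id}."""
    pairs = sorted((math.dist(tp, dp), t, d)
                   for t, tp in tracks.items() for d, dp in detections.items())
    assigned, used_tracks, used_dets = {}, set(), set()
    for dist, t, d in pairs:
        if dist > gate:
            break  # remaining pairs are even farther apart
        if t in used_tracks or d in used_dets:
            continue
        assigned[t] = d
        used_tracks.add(t)
        used_dets.add(d)
    return assigned

print(greedy_associate({1: (10, 10), 2: (200, 50)}, {"a": (12, 14), "b": (210, 48)}))
```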

POEMS: A Paper Based Meeting Service Management Tool

Publication Details
  • ICME 2007
  • Jul 2, 2007

Abstract

As more and more tools are developed for meeting support tasks, properly using these tools to get expected results becomes too complicated for many meeting participants. To address this problem, we propose POEMS (Paper Offered Environment Management Service) that allows meeting participants to control services in a meeting environment through a digital pen and an environment photo on digital paper. Unlike state-of-the-art device control interfaces that require interaction with text commands, buttons, or other artificial symbols, our photo enabled service access is more intuitive. Compared with PC and PDA supported control, this new approach is more flexible and cheap. With this system, a meeting participant can initiate a whiteboard on a selected public display by tapping the display image in the photo, or print out a display by drawing a line from the display image to a printer image in the photo. The user can also control video or other active applications on a display by drawing a link between a printed controller and the image of the display. This paper presents the system architecture, implementation tradeoffs, and various meeting control scenarios.
Publication Details
  • ICME 2007
  • Jul 2, 2007

Abstract

As more and more tools are developed for meeting support tasks, properly using these tools to get expected results becomes very complicated for many meeting participants. To address this problem, we propose POEMS (Paper Offered Environment Management Service) that can facilitate the activation of various services with a pen and paper based interface. With this tool, meeting participants can control meeting support devices on the same paper that they take notes. Additionally, a meeting participant can also share his/her paper drawings on a selected public display or initiate a collaborative discussion on a selected public display with a page of paper. Compared with traditional interfaces, such as tablet PC or PDA based interfaces, the interface of this tool has much higher resolution and is much cheaper and easier to deploy. The paper interface is also natural to use for ordinary people.
Publication Details
  • IEEE Pervasive Computing Magazine, Vol. 6, No. 3, Jul-Sep 2007.
  • Jul 1, 2007

Abstract

AnySpot is a web service-based platform for seamlessly connecting people to their personal and shared documents wherever they go. We describe the principles behind AnySpot's design and report our experience deploying it in a large, multi-national organization.
Publication Details
  • Pervasive 2007 Invited Demo
  • May 13, 2007

Abstract

We present an investigation of interaction models for slideshow applications in a multi-display environment. Three models are examined: Direct Manipulation, Billiard Ball, and Flow. These concepts can be demonstrated by the ModSlideShow prototype, which is designed as a configurable modular display system where each display unit communicates with its neighbors and fundamental operations that act locally can be composed to support the higher level interaction models. We also describe the gesture input scheme, animation feedback, and other enhancements.
Publication Details
  • CHI 2007, pp. 1167-1176
  • Apr 28, 2007

Abstract

A common video surveillance task is to keep track of people moving around the space being monitored. It is often difficult to track activity between cameras because locations such as hallways in office buildings can look quite similar and do not indicate the spatial proximity of the cameras. We describe a spatial video player that orients nearby video feeds with the field of view of the main playing video to aid in tracking between cameras. This is compared with the traditional bank of cameras with and without interactive maps for identifying and selecting cameras. We additionally explore the value of static and rotating maps for tracking activity between cameras. The study results show that both the spatial video player and the map improve user performance when compared to the camera-bank interface. Also, subjects change cameras more often with the spatial player than either the camera bank or the map, when available.
Publication Details
  • CHI 2007
  • Apr 28, 2007

Abstract

We present the iterative design of Momento, a tool that provides integrated support for situated evaluation of ubiquitous computing applications. We derived requirements for Momento from a user-centered design process that included interviews, observations and field studies of early versions of the tool. Motivated by our findings, Momento supports remote testing of ubicomp applications, helps with participant adoption and retention by minimizing the need for new hardware, and supports mid-to-long term studies to address infrequently occurring data. Also, Momento can gather log data, experience sampling, diary, and other qualitative data.

Video Segmentation via Temporal Pattern Classification

Publication Details
  • IEEE Transactions on Multimedia
  • Apr 1, 2007

Abstract

We present a general approach to temporal media segmentation using supervised classification. Given standard low-level features representing each time sample, we build intermediate features via pairwise similarity. The intermediate features comprehensively characterize local temporal structure, and are input to an efficient supervised classifier to identify shot boundaries. We integrate discriminative feature selection based on mutual information to enhance performance and reduce processing requirements. Experimental results using large-scale test sets provided by the TRECVID evaluations for abrupt and gradual shot boundary detection are presented, demonstrating excellent performance.
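A rough sketch of the "intermediate features via pairwise similarity" idea (the window size, cosine similarity, synthetic data, and logistic-regression classifier are my stand-ins for the paper's features and classifier): each time sample is described by its similarities to temporal neighbors, and a supervised classifier is trained to flag boundaries.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pairwise_similarity_features(frames, window=2):
    """frames: (T, D) array of per-frame low-level features (e.g. color histograms)."""
    T = len(frames)
    feats = []
    for t in range(T):
        row = []
        for off in range(-window, window + 1):
            if off == 0:
                continue
            u = frames[min(max(t + off, 0), T - 1)]
            v = frames[t]
            # Cosine similarity between the sample and a temporal neighbor.
            row.append(float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)))
        feats.append(row)
    return np.array(feats)

# Tiny synthetic demo: two "shots" with very different frame features, boundary at index 50.
rng = np.random.default_rng(0)
frames = np.vstack([rng.normal(0, 1, (50, 16)) + 5, rng.normal(0, 1, (50, 16)) - 5])
labels = np.zeros(100, dtype=int)
labels[50] = 1
X = pairwise_similarity_features(frames)
clf = LogisticRegression(class_weight="balanced").fit(X, labels)
print(clf.predict(X).nonzero()[0])  # indices flagged as boundaries (ideally just 50)
```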

Abstract

3D renderings can often look cold and impersonal or even cartoonish. They can also appear too crisply detailed. This can cause viewers to concentrate on specific details when they should be focusing on a more general idea or concept. With the techniques covered in this tutorial you will be able to turn your 3D renderings into "hand drawn" looking illustrations.

Context-Aware Telecommunication Services

Publication Details
  • UNESCO Encyclopedia of Life Support Systems
  • Apr 1, 2007

Abstract

This chapter describes how the changing information about an individual's location, environment, and social situation can be used to initiate and facilitate people's interactions with one another, individually and in groups. Context-aware communication is contrasted with other forms of context-aware computing and we characterize applications in terms of design decisions along two dimensions: the extent of autonomy in context sensing and the extent of autonomy in communication action. A number of context-aware communication applications from the research literature are presented in five application categories. Finally, a number of issues related to the design of context-aware communication applications are presented.
Publication Details
  • Proceedings of the AAAI Spring Symposium 2007 on Quantum Interaction, organized by Keith van Rijsbergen, Peter Bruza, Bill Lawless, and Don Sofge
  • Mar 26, 2007

Abstract

This survey, aimed at information processing researchers, highlights intriguing but lesser known results, corrects misconceptions, and suggests research areas. Themes include: certainty in quantum algorithms; the "fewer worlds" theory of quantum mechanics; quantum learning; probability theory versus quantum mechanics.
Publication Details
  • Book chapter in: A Document (Re)turn. Contributions from a Research Field in Transition (Taschenbuch), Roswitha Skare, Niels Windfeld Lund, Andreas Vårheim (eds.), Peter Lang Publishing, Incorporated, 2007.
  • Feb 19, 2007

Abstract

When people are checking in to flights, making reports to their company manager, composing music, delivering papers for exams in schools, or examining patients in hospitals, they all deal with documents and processes of documentation. In earlier times, documentation took place primarily in libraries and archives. While the latter are still important document institutions, documents today play a far more essential role in social life in many different domains and cultures. In this book, which celebrates the ten year anniversary of documentation studies in Tromsø, experts from many different disciplines, professional domains as well as cultures around the world present their way of dealing with documents, demonstrating many potential directions for the emerging broad field of documentation studies.

Adaptive News Access

Publication Details
  • Book chapter in "The Adaptive Web: Methods and Strategies of Web Personalization" (Springer, LNCS #4321)
  • Feb 1, 2007

Abstract

This chapter describes how the adaptive web technologies discussed in this book have been applied to news access. First, we provide an overview of different types of adaptivity in the context of news access and identify corresponding algorithms. For each adaptivity type, we briefly discuss representative systems that use the described techniques. Next, we discuss an in-depth case study of a personalized news system. As part of this study, we outline a user modeling approach specifically designed for news personalization, and present results from an evaluation that attempts to quantify the effect of adaptive news access from a user perspective. We conclude by discussing recent trends and novel systems in the adaptive news space.

Content-based Recommendation Systems

Publication Details
  • Book chapter in "The Adaptive Web: Methods and Strategies of Web Personalization" (Springer, LNCS #4321)
  • Feb 1, 2007

Abstract

This chapter discusses content-based recommendation systems, i.e., systems that recommend an item to a user based upon a description of the item and a profile of the user's interests. Content-based recommendation systems may be used in a variety of domains ranging from recommending web pages, news articles, restaurants, television programs, and items for sale. Although the details of various systems differ, content-based recommendation systems share in common a means for describing the items that may be recommended, a means for creating a profile of the user that describes the types of items the user likes, and a means of comparing items to the user profile to determine what to recommend. The user profile is often created and updated automatically in response to feedback on the desirability of items that have been presented to the user.
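A generic sketch of the three ingredients the chapter names (item descriptions, a user profile, and a comparison), using TF-IDF and cosine similarity as one common, assumed instantiation rather than any specific system from the chapter:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

liked = ["thai noodle restaurant spicy inexpensive",
         "vietnamese pho noodle restaurant"]
candidates = ["new sushi bar downtown",
              "spicy szechuan noodle house",
              "jazz concert tickets tonight"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(liked + candidates)          # describe every item the same way
profile = np.asarray(matrix[:len(liked)].mean(axis=0))  # user profile = average of liked items
scores = cosine_similarity(profile, matrix[len(liked):])[0]
for item, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {item}")  # the noodle house should rank first
```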
Publication Details
  • PSD Magazine 2/2007 - Photoshop Art & Special Effects
  • Feb 1, 2007

Abstract

With the techniques covered in this tutorial you will be able to produce two classic visual effects. First, I'll show you how to make animated titles by importing Photoshop files into After Effects. Next we'll add new scenic elements to some video footage, again using Photoshop. This technique will allow you to add or remove elements like trees or buildings from a shot. These techniques, especially the one we will use to alter the scene, are common to most visual effects. Watch the classic old 1933 version of King Kong. Willis O'Brien, the stop motion genius who animated Kong, pioneered the art of extending, or completely fabricating, scenery. Layering several elements painted on glass in front of his puppets and rear-projected footage allowed O'Brien and RKO's visual effects artist Linwood Dunn to create King Kong's fantastic jungle scenes. It is said that these set-ups could be many feet deep.
2006
Publication Details
  • Henry Hexmoor, Marcin Paprzycki, Niranjan Suri (eds) Scalable Computing: Practice and Experience Volume 7, No. 4, December 2006
  • Dec 23, 2006

Abstract

Current search engines crawl the Web, download content, and digest this content locally. For multimedia content, this involves considerable volumes of data. Furthermore, this process covers only publicly available content because content providers are concerned that they otherwise lose control over the distribution of their intellectual property. We present the prototype of our secure and distributed search engine, which dynamically pushes content-based feature extraction to image providers. Thereby, the volume of data that is transported over the network is significantly reduced, and the concerns mentioned above are alleviated. The distribution of feature extraction and matching algorithms is done by mobile software agents. Subsequent search requests performed upon the resulting feature indices by means of remote feature comparison can either be realized through mobile software agents, or by the use of implicitly created Web services which wrap the remote comparison functionality, and thereby improve the interoperability of the search engine. We give a description of the search engine's architecture and implementation, depict our concepts to integrate agent and Web service technology, and present quantitative evaluation results. Furthermore, we discuss related security mechanisms for content protection and server security.

Security Risks in Java-based Mobile Code Systems

Publication Details
  • Henry Hexmoor, Marcin Paprzycki, Niranjan Suri (eds) Scalable Computing: Practice and Experience Volume 7, No. 4, December 2006
  • Dec 23, 2006

Abstract

Java is the predominant language for mobile agent systems, both for implementing mobile agent execution environments and for writing mobile agent applications. This is due to inherent support for code mobility by means of dynamic class loading and separable class name spaces, as well as a number of security properties, such as language safety and access control by means of stack introspection. However, serious questions must be raised whether Java is actually up to the task of providing a secure execution environment for mobile agents. At the time of writing, it has neither resource control nor proper application separation. In this article we take an in-depth look at Java as a foundation for secure mobile agent systems.
Publication Details
  • MobCops 2006 Workshop in conjunction with IEEE/ACM CollaborateCom 2006, Atlanta, Georgia, USA.
  • Nov 17, 2006

Abstract

Load balancing has been an increasingly important issue for handling computational intensive tasks in a distributed system such as in Grid and cluster computing. In such systems, multiple server instances are installed for handling requests from client applications, and each request (or task) typically needs to stay in a queue before an available server is assigned to process it. In this paper, we propose a high-performance queueing method for implementing a shared queue for collaborative clusters of servers. Each cluster of servers maintains a local queue and queues of different clusters are networked to form a unified (or shared) queue that may dispatch tasks to all available servers. We propose a new randomized algorithm for forwarding requests in an overcrowded local queue to a networked queue based on load information of the local and neighboring clusters. The algorithm achieves both load balancing and locality awareness.
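The forwarding rule itself is defined in the paper; as a toy, assumed variant of locality-aware randomized forwarding, the sketch below forwards a task from an overcrowded local queue to a randomly chosen lighter-loaded neighbor, weighting neighbors by how much lighter they are.

```python
import random

def maybe_forward(local_len, neighbor_lens, threshold=10):
    """Return the index of a neighbor queue to forward to, or None to keep the task locally."""
    if local_len <= threshold:
        return None  # local queue is not overcrowded
    lighter = [i for i, n in enumerate(neighbor_lens) if n < local_len]
    if not lighter:
        return None  # no neighbor is better off; keep the task
    # Randomized choice biased toward less-loaded neighbors (locality could add further weighting).
    weights = [local_len - neighbor_lens[i] for i in lighter]
    return random.choices(lighter, weights=weights, k=1)[0]

print(maybe_forward(25, [30, 12, 8]))  # usually forwards to neighbor 2, sometimes 1
```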

Term Context Models for Information Retrieval

Publication Details
  • CIKM (Conference on information and Knowledge Management) 2006, Arlington, VA
  • Nov 7, 2006

Abstract

At their heart, most if not all information retrieval models utilize some form of term frequency. The notion is that the more often a query term occurs in a document, the more likely it is that document meets an information need. We examine an alternative. We propose a model which assesses the presence of a term in a document not by looking at the actual occurrence of that term, but by a set of nonindependent supporting terms, i.e. context. This yields a weighting for terms in documents which is different from and complementary to tf-based methods, and is beneficial for retrieval.
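As a hypothetical, much-simplified illustration of scoring a term through supporting context terms instead of its own frequency (the supporting terms and weights below are invented; the paper defines its own model and weighting):

```python
def context_score(doc_tokens, support_weights):
    """support_weights: {supporting_term: weight} learned or chosen for one query term."""
    present = set(doc_tokens)
    return sum(w for term, w in support_weights.items() if term in present)

# A document that never mentions "jaguar" can still score for it via its context terms.
support_for_jaguar = {"rainforest": 0.6, "predator": 0.5, "habitat": 0.3, "cat": 0.4}
doc = "the large cat stalked the rainforest floor at night".split()
print(context_score(doc, support_for_jaguar))  # 0.6 + 0.4 = 1.0
```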
Publication Details
  • In Proceedings of the fourth ACM International Workshop on Video Surveillance & Sensor Networks VSSN '06, Santa Barbara, CA, pp. 19-26
  • Oct 27, 2006

Abstract

Close
Video surveillance systems have become common across a wide range of environments. While these installations have included more video streams, they have also been placed in contexts with limited personnel for monitoring the video feeds. In such settings, limited human attention, combined with the quantity of video, makes it difficult for security personnel to identify activities of interest and determine interrelationships between activities in different video streams. We have developed applications to support security personnel both in analyzing previously recorded video and in monitoring live video streams. For recorded video, we created storyboard visualizations that emphasize the most important activity as heuristically determined by the system. We also developed an interactive multi-channel video player application that connects camera views to map locations, alerts users to unusual and suspicious video, and visualizes unusual events along a timeline for later replay. We use different analysis techniques to determine unusual events and to highlight them in video images. These tools aid security personnel by directing their attention to the most important activity within recorded video or among several live video streams.
Publication Details
  • UIST 2006 Companion
  • Oct 16, 2006

Abstract

Close
Video surveillance requires keeping the human in the loop. Software can aid security personnel in monitoring and using video. We have developed a set of interface components designed to locate and follow important activity within security video. By recognizing and visualizing localized activity, presenting overviews of activity over time, and temporally and geographically contextualizing video playback, we aim to support security personnel in making use of the growing quantity of security video.
Publication Details
  • UIST 2006 Companion
  • Oct 16, 2006

Abstract

Close
With the growing quantity of security video, it becomes vital that video surveillance software be able to support security personnel in monitoring and tracking activities. We have developed a multi-stream video player that plays recorded and live videos while drawing the users' attention to activity in the video. We will demonstrate the features of the video player and in particular, how it focuses on keeping the human in the loop and drawing their attention to activities in the video.
Publication Details
  • Proceedings of IEEE Multimedia Signal Processing 2006
  • Oct 3, 2006

Abstract

Close
This paper presents a method for facilitating document redirection in a physical environment via a mobile camera. With this method, a user is able to move documents among electronic devices, post a paper document to a selected public display, or make a printout of a whiteboard with simple point-and-capture operations. More specifically, the user can move a document from its source to a destination by capturing a source image and a destination image in consecutive order. The system uses SIFT (Scale Invariant Feature Transform) features of the captured images to identify the devices a user is pointing to, and issues the corresponding commands associated with the identified devices. Unlike RF/IR-based remote controls, this method uses an object's visual features as an always-available 'transmitter' for many tasks, and is therefore easy to deploy. We present experiments on identifying three public displays and a document scanner in a conference room for evaluation.
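The device-identification step can be sketched with off-the-shelf SIFT matching, assuming OpenCV (opencv-python with SIFT support) is available: match the captured frame against one registration photo per device and pick the device with the most ratio-test matches. The registration images, thresholds, and helper names are illustrative assumptions rather than the paper's implementation.

    import cv2

    # Hedged sketch of identifying a pointed-at device by SIFT matching.
    # Assumes each image yields at least two descriptors.

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher()

    def count_good_matches(desc_query, desc_device, ratio=0.75):
        """Lowe's ratio test: keep matches whose best distance is clearly
        smaller than the second-best."""
        good = 0
        for m, n in matcher.knnMatch(desc_query, desc_device, k=2):
            if m.distance < ratio * n.distance:
                good += 1
        return good

    def identify_device(captured_path, registered):
        """registered maps device name -> path of its registration photo."""
        img = cv2.imread(captured_path, cv2.IMREAD_GRAYSCALE)
        _, desc_q = sift.detectAndCompute(img, None)
        best_name, best_score = None, 0
        for name, path in registered.items():
            ref = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc_d = sift.detectAndCompute(ref, None)
            score = count_good_matches(desc_q, desc_d)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # identify_device("capture.jpg", {"display-1": "d1.jpg", "scanner": "scan.jpg"})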

The USE Project: Designing Smart Spaces for Real People

Publication Details
  • UbiComp 2006 Workshop position paper
  • Sep 20, 2006

Abstract

Close
We describe our work-in-progress: a "wizard-free" conference room designed for ease of use, yet retaining next-generation functionality. Called USE (Usable Smart Environments), our system uses multi-display systems, immersive conferencing, and secure authentication. It is based on cross-cultural ethnographic studies of the way people use conference rooms. The USE project has developed a flexible, extensible architecture specifically designed to enhance ease of use in smart environment technologies. The architecture allows customization and personalization of smart environments for particular people and groups, types of work, and specific physical spaces. The system consists of a database of devices with attributes, rooms, and meetings that implements a prototype-instance inheritance mechanism through which contextual information (e.g. IP addresses, application settings, phone numbers for teleconferencing systems, etc.) can be associated with specific devices, rooms, and meetings.
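A minimal sketch of a prototype-instance attribute lookup of the kind described above: an entity resolves a setting locally and otherwise falls back along its prototype chain, so room- or organization-wide defaults can be overridden per meeting. The class and attribute names are illustrative assumptions, not the USE schema.

    # Hedged sketch of prototype-instance inheritance for contextual settings.

    class Entity:
        def __init__(self, name, prototype=None, **attrs):
            self.name, self.prototype, self.attrs = name, prototype, attrs

        def get(self, key):
            node = self
            while node is not None:          # walk the prototype chain
                if key in node.attrs:
                    return node.attrs[key]
                node = node.prototype
            raise KeyError(key)

    room_defaults = Entity("conference-room", teleconf_number="555-0100",
                           projector_ip="10.0.0.20")
    weekly_sync = Entity("weekly-sync", prototype=room_defaults,
                         teleconf_number="555-0199")   # per-meeting override

    print(weekly_sync.get("teleconf_number"))  # "555-0199" (instance value)
    print(weekly_sync.get("projector_ip"))     # "10.0.0.20" (inherited default)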

Usable ubiquitous computing in next generation conference rooms: design, architecture and evaluation

Publication Details
  • International workshop at UbiComp 2006.
  • Sep 17, 2006

Abstract

Close
In the UbiComp 2005 workshop "Ubiquitous computing in next generation conference rooms" we learned that usability is one of the primary challenges in these spaces. Nearly all "smart" rooms, though they often have interesting and effective functionality, are very difficult to simply walk in and use. Most such rooms have resident experts who keep the room's systems functioning, and who often must be available on an everyday basis to enable the meeting technologies. The systems in these rooms are designed for and assume the presence of these human "wizards"; they are seldom designed with usability in mind. In addition, people don't know what to expect in these rooms; as yet there is no technology standard for next-generation conference rooms. The challenge here is to strike an effective balance between usability and new kinds of functionality (such as multiple displays, new interfaces, rich media systems, new uploading/access/security systems, robust mobile integration, to name just a few of the functions we saw in last year's workshop). So, this year, we propose a workshop to focus more specifically on how the design of next-generation conference rooms can support usability: the tasks facing the real people who use these rooms daily. Usability in ubiquitous computing has been the topic of several papers and workshops. Focusing on usability in next-generation conference rooms lets us bring to bear some of the insights from this prior work in a delineated application space. In addition, the workshop will be informed by the most recent usability research in ubiquitous computing, rich media, context-aware mobile systems, multiple display environments, and interactive physical environments. We also are vitally concerned with how usability in smart environments tracks (or doesn't) across cultures. Conference room research has been and remains a focal point for some of the most interesting and applied work in ubiquitous computing. It is also an area where there are many real-world applications and daily opportunities for user feedback: in short, a rich area for exploring usable ubiquitous computing. We see a rich opportunity to draw together researchers not only from conference room research but also from areas such as interactive furniture/smart environments, rich media, social computing, remote conferencing, and mobile devices for a lively exchange of ideas on usability in applied ubicomp systems for conference rooms.
Publication Details
  • International Conference on Pattern Recognition
  • Aug 20, 2006

Abstract

Close
This paper describes a framework for detecting unusual events in surveillance videos. Most surveillance systems consist of multiple video streams, but traditional event detection systems treat individual video streams independently or combine them at the feature extraction level through geometric reconstruction. Our framework combines multiple video streams at the inference level, with a coupled hidden Markov model (CHMM). We use two-stage training to bootstrap a set of usual events, and train a CHMM over the set. By thresholding the likelihood that a test segment was generated by the model, we build an unusual event detector. We evaluate the performance of our detector through qualitative and quantitative experiments on two sets of real-world videos.
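The detection step can be sketched as follows, using a standard Gaussian HMM from the third-party hmmlearn package as a stand-in for the paper's coupled HMM: train on features of usual segments, then flag any test segment whose average per-frame log-likelihood falls below a threshold. The features, threshold value, and model class are illustrative assumptions.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # stand-in: single-chain HMM, not a coupled HMM

    # Hedged sketch of likelihood-threshold unusual-event detection.

    rng = np.random.default_rng(0)
    usual_features = rng.normal(size=(500, 4))      # e.g. motion features per frame
    model = GaussianHMM(n_components=3, n_iter=20).fit(usual_features)

    def is_unusual(segment, threshold=-8.0):
        """segment: (n_frames, n_features) array for one test segment."""
        avg_loglik = model.score(segment) / len(segment)
        return avg_loglik < threshold

    # A segment whose features are far from the training data looks unusual.
    print(is_unusual(rng.normal(loc=5.0, size=(60, 4))))
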
Publication Details
  • Interactive Video: Algorithms and Technologies, Hammoud, Riad (Ed.), 2006, XVI, 250 p., 109 illus., Hardcover.
  • Jun 7, 2006

Abstract

Close
This chapter describes tools for browsing and searching through video to enable users to quickly locate video passages of interest. Digital video databases containing large numbers of video programs ranging from several minutes to several hours in length are becoming increasingly common. In many cases, it is not sufficient to search for relevant videos, but rather to identify relevant clips, typically less than one minute in length, within the videos. We offer two approaches for finding information in videos. The first approach provides an automatically generated interactive multi-level summary in the form of a hypervideo. When viewing a sequence of short video clips, the user can obtain more detail on the clip being watched. For situations where browsing is impractical, we present a video search system with a flexible user interface that incorporates dynamic visualizations of the underlying multimedia objects. The system employs automatic story segmentation, and displays the results of text and image-based queries in ranked sets of story summaries. Both approaches help users to quickly drill down to potentially relevant video clips and to determine the relevance by visually inspecting the material.

Visualization in Audio-Based Music Information Retrieval

Publication Details
  • Computer Music Journal Vol. 30, Issue 2, pp. 42-62, 2006.
  • Jun 6, 2006

Abstract

Close
Music Information Retrieval (MIR) is an emerging research area that explores how digitally stored music can be effectively organized, searched, retrieved, and browsed. The explosive growth of online music distribution, the spread of portable music players, and the falling cost of recording indicate that in the near future most of the recorded music in human history will be available digitally. MIR is steadily growing as a research area, as evidenced by the International Conference on Music Information Retrieval (ISMIR) series, soon in its sixth year, and the increasing number of MIR-related publications in the Computer Music Journal as well as other journals and conferences.
Publication Details
  • Complexity, Vol 11, No 5.
  • Jun 3, 2006

Abstract

Close
Technology, the collection of devices and methods available to human society, evolves by constructing new devices and methods from ones that previously exist, and in turn offering these as possible components, or building blocks, for the construction of further new devices and elements. The collective of technology in this way forms a network of elements where novel elements are created from existing ones and where more complicated elements evolve from simpler ones. We model this evolution within a simple artificial system on the computer. The elements in our system are logic circuits. New elements are formed by combination from simpler existing elements (circuits), and if a novel combination satisfies one of a set of needs, it is retained as a building block for further combination. We study the properties of the resulting buildout. We find that our artificial system can create complicated technologies (circuits), but only by first creating simpler ones as building blocks. Our results mirror those of Lenski et al.: complex features can be created in biological evolution only if simpler functions are first favored and act as stepping stones. We also find evidence that the resulting collection of technologies exists at self-organized criticality.
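A toy version of this build-out model, under assumptions of my own choosing (two Boolean inputs, NAND as the only primitive combinator, a small hand-picked needs list), is sketched below; it shows the stepping-stone effect, since AND can only be retained after the simpler NAND need has already been met.

    import random

    # Hedged toy build-out model: elements are Boolean functions of two
    # inputs (stored as 4-entry truth tables), new elements are formed by
    # NAND-combining two existing elements, and a candidate is retained
    # only if it satisfies one of a fixed list of "needs". The needs list
    # and run length are illustrative assumptions, not the paper's setup.

    rng = random.Random(0)
    ROWS = 4  # input combinations (x0, x1) in {0,1}^2

    def table(fn):
        return tuple(fn(i & 1, (i >> 1) & 1) for i in range(ROWS))

    def nand(a, b):
        return tuple(1 - (x & y) for x, y in zip(a, b))

    elements = {table(lambda x0, x1: x0), table(lambda x0, x1: x1)}   # primitives
    needs = {
        "not x0":     table(lambda x0, x1: 1 - x0),
        "not x1":     table(lambda x0, x1: 1 - x1),
        "x0 nand x1": table(lambda x0, x1: 1 - (x0 & x1)),
        "x0 or x1":   table(lambda x0, x1: x0 | x1),
        "x0 and x1":  table(lambda x0, x1: x0 & x1),
    }

    satisfied = {}
    for step in range(5000):
        a, b = rng.choice(list(elements)), rng.choice(list(elements))
        candidate = nand(a, b)
        for name, target in needs.items():
            if candidate == target and name not in satisfied:
                satisfied[name] = step
                elements.add(candidate)          # retained as a building block

    for name, step in sorted(satisfied.items(), key=lambda kv: kv[1]):
        print(f"step {step:4d}: satisfied need '{name}'")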

Implicit Brushing and Target Snapping: Data Exploration and Sense-making on Large Displays

Publication Details
  • Proceedings of AVI '06 (Short Paper), ACM Press, pp. 258-261.
  • May 23, 2006

Abstract

Close
During grouping tasks for data exploration and sense-making, the grouping criteria are normally not well defined. As users bring together data objects thought to be similar in some way, implicit brushing continually detects groups on the freeform workspace, analyzes the groups' text content or metadata, and draws attention to related data by displaying visual hints and animation. This provides helpful tips for further grouping, group meaning refinement, and structure discovery. The sense-making process is further enhanced by retrieving relevant information from a database or network during the brushing. Closely related to implicit brushing, target snapping provides a useful means to move a data object to one of its related groups on a large display. Natural dynamics and smooth animations also help to prevent distractions and allow users to concentrate on the grouping and thinking tasks. Two different prototype applications, note grouping for brainstorming and photo browsing, demonstrate the general applicability of the technique.
Publication Details
  • The 15th International World Wide Web Conference (WWW2006)
  • May 23, 2006

Abstract

Close
In a landmark article over a half century ago, Vannevar Bush envisioned a "Memory Extender" device he dubbed the "memex". Bush's ideas anticipated and inspired numerous breakthroughs, including hypertext, the Internet, the World Wide Web, and Wikipedia. However, despite these triumphs, the memex has still not lived up to its potential in corporate settings. One reason is that corporate users often don't have sufficient time or incentives to contribute to a corporate memory or to explore others' contributions. At FXPAL, we are investigating ways to automatically create and retrieve useful corporate memories without any added burden on anyone. In this paper we discuss how ProjectorBox, a smart appliance for automatic presentation capture, and PAL Bar, a system for proactively retrieving contextually relevant corporate memories, have enabled us to integrate content from a variety of sources to create a cohesive multimedia corporate memory for our organization.

Tunnel Vector: A New Routing Algorithm with Scalability

Publication Details
  • The 9th IEEE Global Internet Symposium in conjunction with the 25th IEEE INFOCOM Conference, Barcelona, Catalunya, Spain, April 28 - 29, 2006
  • Apr 28, 2006

Abstract

Close
Routing algorithms such as Distance Vector and Link State have routing tables of size O(n), where n is the number of destination identifiers, and thus provide only limited scalability for large networks when n is large. As distributed hash table (DHT) techniques are extraordinarily scalable in n, our work aims at adapting a DHT approach to the design of a network-layer routing algorithm so that the average routing table size can be significantly reduced to O(log n) without losing much routing efficiency. Nonetheless, this scheme requires a major breakthrough to address some fundamental challenges. Specifically, unlike a DHT, a network-layer routing algorithm must (1) exchange its control messages without an underlying network, (2) handle link insertion/deletion and link-cost updates, and (3) provide routing efficiency. Thus, we are motivated to propose a new network-layer routing algorithm, Tunnel Vector (TV), using DHT-like multilevel routing without an underlying network. TV exchanges its control messages only via physical links and is self-configurable in response to linkage updates. In TV, the routing path of a packet is near optimal while the routing table size is O(log n) per node, with high probability. Thus, TV is suitable for routing in very large networks.
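The table-size argument can be illustrated with a generic prefix-based (DHT-style) routing sketch: each node keeps at most one entry per (shared-prefix length, next digit), i.e. O(log n) entries, and forwards toward a node sharing a longer prefix with the destination. This illustrates multilevel DHT routing in general, not the Tunnel Vector algorithm itself; all names and constants are assumptions.

    # Hedged sketch of prefix-based multilevel routing with O(log n) state.

    BASE = 16       # hex digits
    ID_DIGITS = 8   # 32-bit identifier space

    def shared_prefix_len(a, b):
        for i, (x, y) in enumerate(zip(a, b)):
            if x != y:
                return i
        return len(a)

    class Node:
        def __init__(self, node_id):
            self.node_id = node_id            # e.g. "a3f09b12"
            # routing_table[row][digit] -> next-hop node id; at most
            # ID_DIGITS * BASE entries regardless of network size.
            self.routing_table = [dict() for _ in range(ID_DIGITS)]

        def next_hop(self, dest_id):
            row = shared_prefix_len(self.node_id, dest_id)
            if row == ID_DIGITS:
                return self.node_id           # we are the destination
            return self.routing_table[row].get(dest_id[row])

    n = Node("a3f09b12")
    n.routing_table[0]["7"] = "7c001234"      # learned from a neighbor
    print(n.next_hop("7abcde00"))             # -> "7c001234"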

"It's Just A Method!" A Pedagogical Experiment in Interdisciplinary Design

Publication Details
  • Proceedings of ACM DIS (Designing Interactive Systems) 2006, Penn State, Penn.
  • Apr 5, 2006

Abstract

Close
What does a student need to know to be a designer? Beyond a list of separate skills, what mindset does a student need to develop for designerly action now and into the future? In the excitement of the cognitive revolution, Simon proposed a way of thinking about design that promised to make it more manageable and cognitive: to think of design as a planning problem. Yet, as Suchman argued long ago, planning accounts may be applied to problems that are not at base accomplished by planning, to the detriment of design vision. This paper reports on a pedagogy that takes Suchman's criticism to heart and avoids dressing up design methods as more systematic and predictive than they in fact are. The idea is to teach design through exposure to not just one, but rather, many methods, that is, sets of rules or behaviors that produce artifacts for further reflection and development. By introducing a large number of design methods, decoupled from theories, models or frameworks, we teach (a) important cross-methodological regularities in competence as a designer, (b) that the practice of design can itself be designed and (c) that method choice affects design outcomes. This provides a rich and productive notion of design particularly necessary for the world of pervasive and ubiquitous computing.
Publication Details
  • EACL (11th Conference of the European Chapter of the Association for Computational Linguistics)
  • Apr 3, 2006

Abstract

Close
Probabilistic Latent Semantic Analysis (PLSA) models have been shown to provide a better model for capturing polysemy and synonymy than Latent Semantic Analysis (LSA). However, the parameters of a PLSA model are trained using the Expectation Maximization (EM) algorithm and, as a result, the trained model depends on the initialization values, so performance can be highly variable. In this paper we present a method for using LSA analysis to initialize a PLSA model. We also investigate the performance of our method on the tasks of text segmentation and retrieval on personal-size corpora, and present results demonstrating the efficacy of our proposed approach.
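One plausible way to turn an LSA decomposition into PLSA starting values is sketched below under assumptions of my own (clip negative SVD entries, renormalize into probability tables): map the left factor into P(w|z) and the right factor into P(z|d) before running EM. The paper's exact mapping may differ.

    import numpy as np

    # Hedged sketch: derive a PLSA initialization from a truncated SVD (LSA)
    # of the term-document matrix. Clipping/normalization are assumptions.

    def lsa_init(term_doc, k):
        U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
        U_k, S_k, V_k = U[:, :k], s[:k], Vt[:k, :].T          # terms x k, k, docs x k
        p_w_given_z = np.clip(U_k * S_k, 1e-12, None)
        p_w_given_z /= p_w_given_z.sum(axis=0, keepdims=True) # columns sum to 1
        p_z_given_d = np.clip(V_k * S_k, 1e-12, None).T       # k x docs
        p_z_given_d /= p_z_given_d.sum(axis=0, keepdims=True)
        return p_w_given_z, p_z_given_d                       # feed these to PLSA's EM

    term_doc = np.random.default_rng(1).poisson(1.0, size=(200, 50)).astype(float)
    p_wz, p_zd = lsa_init(term_doc, k=10)
    print(p_wz.shape, p_zd.shape)  # (200, 10) (10, 50)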

FXPAL at TRECVID 2005

Publication Details
  • Proceedings of TRECVID 2005
  • Mar 14, 2006

Abstract

Close
In 2005 FXPAL submitted results for 3 tasks at TRECVID: shot boundary detection, high-level feature extraction, and interactive search.
Publication Details
  • International Journal of Web Services Practices
  • Jan 17, 2006

Abstract

Close
Mobile users often require access to their documents while away from the office. While pre-loading documents in a repository can make those documents available remotely, people need to know in advance which documents they might need. Furthermore, it may be difficult to view, print, or share a document through a portable device such as a cell phone. We describe DoKumobility, a network of web services that lets mobile users manage, print, and share documents. In this paper, we describe the infrastructure and illustrate its use with several applications. We conclude with a discussion of lessons learned and future work.
2005

On-Demand Overlay Networking of Collaborative Applications

Publication Details
  • IEEE CollaborateCom 2005 - The First IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing
  • Dec 19, 2005

Abstract

Close
We propose a new overlay network, called Generic Identifier Network (GIN), for collaborative nodes to share objects with transactions across affiliated organizations by merging the organizational local namespaces upon mutual agreement. Using local namespaces instead of a global namespace can avoid excessive dissemination of organizational information, reduce maintenance costs, and improve robustness against external security attacks. GIN can forward a query with an O(1) latency stretch with high probability and achieve high performance. In the absence of a complete distance map, its heuristic algorithms for self configuration are scalable and efficient. Routing tables are maintained using soft-state mechanisms for fault tolerance and adapting to performance updates of network distances. Thus, GIN has significant new advantages for building an efficient and scalable Distributed Hash Table for modern collaborative applications across organizations.
Publication Details
  • Proceedings of SPIE International Symposium ITCom 2005 on Multimedia Systems and Applications VIII, Boston, Massachusetts, USA, October 2005.
  • Dec 7, 2005

Abstract

Close
Meeting environments, such as conference rooms, executive briefing centers, and exhibition spaces, are now commonly equipped with multiple displays, and will become increasingly display-rich in the future. Existing authoring/presentation tools such as PowerPoint, however, provide little support for effective utilization of multiple displays. Even using advanced multi-display enabled multimedia presentation tools, the task of assigning material to displays is tedious and distracts presenters from focusing on content. This paper describes a framework for automatically assigning presentation material to displays, based on a model of the quality of views of audience members. The framework is based on a model of visual fidelity which takes into account presentation content, audience members' locations, the limited resolution of human eyes, and display location, orientation, size, resolution, and frame rate. The model can be used to determine presentation material placement based on average or worst-case audience member view quality, and to warn about material that would be illegible. By integrating this framework with a previous system for multi-display presentation [PreAuthor, others], we created a tool that accepts PowerPoint and/or other media input files, and automatically generates a layout of material onto displays for each state of the presentation. The tool also provides an interface allowing the presenter to modify the automatically generated layout before or during the actual presentation. This paper discusses the framework, possible application scenarios, examples of the system behavior, and our experience with system use.
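A toy version of the placement idea is sketched below, with a deliberately simplified quality function (display size over viewing distance, penalized off-axis) standing in for the paper's visual-fidelity model: enumerate assignments and keep the one with the best worst-case audience view quality. All names and parameters are illustrative assumptions.

    import math
    from itertools import permutations

    # Hedged sketch: assign presentation items to displays by maximizing
    # the worst-case audience view quality (toy quality model).

    def view_quality(display, viewer):
        dx, dy = display["x"] - viewer["x"], display["y"] - viewer["y"]
        dist = math.hypot(dx, dy) or 0.1
        off_axis = abs(math.atan2(dy, dx) - display["facing"])
        return (display["diag_m"] / dist) * max(math.cos(off_axis), 0.05)

    def assign(items, displays, viewers):
        """Brute-force over assignments (fine for a handful of displays);
        returns the item->display mapping with the best worst-case quality."""
        best, best_q = None, -1.0
        for perm in permutations(displays, len(items)):
            q = min(view_quality(d, v) for d in perm for v in viewers)
            if q > best_q:
                best, best_q = dict(zip(items, [d["name"] for d in perm])), q
        return best

    displays = [
        {"name": "front", "x": 0, "y": 5, "facing": -math.pi / 2, "diag_m": 2.0},
        {"name": "side",  "x": 4, "y": 2, "facing": math.pi,      "diag_m": 1.2},
    ]
    viewers = [{"x": 0, "y": 0}, {"x": 2, "y": 1}]
    print(assign(["slide", "video"], displays, viewers))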

Post-Bit: Multimedia E-paper Stickies

Publication Details
  • Video track, ACM Multimedia 2005.
  • Nov 13, 2005

Abstract

Close
A Post-Bit is a prototype of a small ePaper device for handling multimedia content, combining interaction control and display into one package. Post-Bits are modeled after paper Post-Its™; the functions of each Post-Bit combine the affordances of physical tiny sticky memos and digital handling of information. Post-Bits enable us to arrange multimedia contents in our embodied physical spaces. Tangible properties of paper such as flipping, flexing, scattering and rubbing are mapped to controlling aspects of the content. In this paper, we introduce the integrated design and functionality of the Post-Bit system, including four main components: the ePaper sticky memo/player, with integrated sensors and connectors; a small container/binder that a few Post-Bits can fit into, for ordering and multiple connections; the data and power port that allows communication with the host computer; and finally the software and GUI interface that reside on the host PC and manage multimedia transfer.
Publication Details
  • ACM Multimedia 2005, Technical Demonstrations.
  • Nov 5, 2005

Abstract

Close
The MediaMetro application provides an interactive 3D visualization of multimedia document collections using a city metaphor. The directories are mapped to city layouts using algorithms similar to treemaps. Each multimedia document is represented by a building and visual summaries of the different constituent media types are rendered onto the sides of the building. From videos, Manga storyboards with keyframe images are created and shown on the façade; from slides and text, thumbnail images are produced and subsampled for display on the building sides. The images resemble windows on a building and can be selected for media playback. To support more facile navigation between high overviews and low detail views, a novel swooping technique was developed that combines altitude and tilt changes with zeroing in on a target.
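The paper describes the layout only as "similar to treemaps", so the sketch below shows a generic slice-and-dice treemap that maps directories to rectangular "city blocks" sized by document count; it is illustrative, not MediaMetro's actual algorithm.

    # Hedged sketch of a slice-and-dice treemap layout for directory blocks.

    def slice_and_dice(items, x, y, w, h, horizontal=True):
        """items: list of (name, size). Returns [(name, x, y, w, h)]."""
        total = sum(size for _, size in items)
        rects, offset = [], 0.0
        for name, size in items:
            frac = size / total
            if horizontal:
                rects.append((name, x + offset, y, w * frac, h))
                offset += w * frac
            else:
                rects.append((name, x, y + offset, w, h * frac))
                offset += h * frac
        return rects

    directories = [("videos", 12), ("slides", 6), ("notes", 2)]
    for block in slice_and_dice(directories, 0, 0, 100, 60):
        print(block)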

Seamless presentation capture, indexing, and management

Publication Details
  • Internet Multimedia Management Systems VI (SPIE Optics East 2005)
  • Oct 26, 2005

Abstract

Close
Technology abounds for capturing presentations. However, no simple solution exists that is completely automatic. ProjectorBox is a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as a presenter's laptop, to display devices, such as a projector. It seamlessly captures high-resolution slide images, text and audio. It requires no operator, specialized software, or changes to current presentation practice. Automatic media analysis is used to detect presentation content and segment presentations. The analysis substantially enhances the web-based user interface for browsing, searching, and exporting captured presentations. ProjectorBox has been in use for over a year in our corporate conference room, and has been deployed in two universities. Our goal is to develop automatic capture services that address both corporate and educational needs.

ProjectorBox: Seamless presentation capture for classrooms

Publication Details
  • World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education (E-Learn 2005)
  • Oct 24, 2005

Abstract

Close
Automatic lecture capture can help students, instructors, and educational institutions. Students can focus less on note-taking and more on what the instructor is saying. Instructors can provide access to lecture archives to help students study for exams and make-up missed classes. And online lecture recordings can be used to support distance learning. For these and other reasons, there has been great interest in automatically capturing classroom presentations. However, there is no simple solution that is completely automatic. ProjectorBox is our attempt to create a "zero user interaction" appliance that automatically captures, indexes, and manages presentation multimedia. It operates continuously to record the RGB information sent from presentation devices, such as an instructor's laptop, to display devices such as a projector. It seamlessly captures high-resolution slide images, text, and audio. A web-based user interface allows students to browse, search, replay, and export captured presentations.
Publication Details
  • In Proceedings of the International Conference on Computer Vision, 2005, pages 1026-1033
  • Oct 17, 2005

Abstract

Close
Recent years have witnessed the rise of many effective text information retrieval systems. By treating local visual features as terms, training images as documents, and input images as queries, we formulate the problem of object recognition as one of text retrieval. Our formulation opens up the opportunity to integrate powerful text retrieval tools with computer vision techniques. In this paper, we propose to improve the efficiency of articulated object recognition with an Okapi-Chamfer matching algorithm. The algorithm is based on the inverted index technique. The inverted index is a widely used way to effectively organize a collection of text documents. With the inverted index, only documents that contain query terms are accessed and used for matching. To enable inverted indexing in an image database, we build a lexicon of local visual features by clustering the features extracted from the training images. Given a query image, we extract visual features and quantize them based on the lexicon, and then look up the inverted index to identify the subset of training images with non-zero matching scores. To evaluate the matching scores in the subset, we combine a modified Okapi weighting formula with the Chamfer distance. The performance of the Okapi-Chamfer matching algorithm is evaluated on a hand posture recognition system. We test the system with both synthesized and real-world images. Quantitative results demonstrate the accuracy and efficiency of our system.
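The inverted-index lookup over quantized local features can be sketched as below, with a BM25/Okapi-style weight standing in for the paper's modified Okapi formula and the Chamfer-distance refinement omitted; the class and parameter names are illustrative assumptions.

    import math
    from collections import defaultdict, Counter

    # Hedged sketch: inverted index over quantized local features ("visual
    # words"); only training images sharing a word with the query are scored.

    class VisualWordIndex:
        def __init__(self):
            self.postings = defaultdict(list)   # word id -> [(image id, term freq)]
            self.doc_len = {}                   # image id -> number of features

        def add_image(self, image_id, words):
            counts = Counter(words)
            self.doc_len[image_id] = len(words)
            for w, tf in counts.items():
                self.postings[w].append((image_id, tf))

        def query(self, words, k1=1.2, b=0.75):
            n_docs = len(self.doc_len)
            avg_len = sum(self.doc_len.values()) / n_docs
            scores = defaultdict(float)
            for w in set(words):
                plist = self.postings.get(w)
                if not plist:
                    continue
                idf = math.log(1 + (n_docs - len(plist) + 0.5) / (len(plist) + 0.5))
                for image_id, tf in plist:
                    norm = k1 * (1 - b + b * self.doc_len[image_id] / avg_len)
                    scores[image_id] += idf * tf * (k1 + 1) / (tf + norm)
            return sorted(scores.items(), key=lambda kv: -kv[1])

    index = VisualWordIndex()
    index.add_image("hand_open", [3, 3, 17, 42])
    index.add_image("hand_fist", [5, 17, 17, 99])
    print(index.query([17, 42, 42]))
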
Publication Details
  • IEEE Trans. Multimedia, Vol. 7 No. 5, pp. 981-990
  • Oct 11, 2005

Abstract

Close
We present a system for automatically extracting the region of interest and controlling virtual cameras based on panoramic video. It targets applications such as classroom lectures and video conferencing. For capturing panoramic video, we use the FlyCam system, which produces high-resolution, wide-angle video by stitching video images from multiple stationary cameras. To generate conventional video, a region of interest (ROI) can be cropped from the panoramic video. We propose methods for ROI detection, tracking, and virtual camera control that work in both the uncompressed and compressed domains. The ROI is located from motion and color information in the uncompressed domain and macroblock information in the compressed domain, and is tracked using a Kalman filter. This results in virtual camera control that simulates human-controlled video recording. The system has no physical camera motion, and the virtual camera parameters are readily available for video indexing.
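The ROI tracking step can be illustrated with a textbook constant-velocity Kalman filter over the ROI center, as sketched below; the state layout and the noise covariances are illustrative assumptions, not the paper's tuning.

    import numpy as np

    # Hedged sketch: constant-velocity Kalman filter tracking the ROI
    # center (x, y). State is [x, y, vx, vy]; noise values are assumptions.

    dt = 1.0 / 30.0                                   # frame period
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)        # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)         # we observe position only
    Q = np.eye(4) * 1e-2                              # process noise
    R = np.eye(2) * 4.0                               # measurement noise (pixels^2)

    x = np.zeros(4)                                   # state estimate
    P = np.eye(4) * 100.0                             # estimate covariance

    def track(measurement):
        """measurement: detected ROI center (x, y) for the current frame."""
        global x, P
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        z = np.asarray(measurement, dtype=float)
        S = H @ P @ H.T + R                           # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x[:2]                                  # smoothed ROI center

    for z in [(120, 80), (124, 82), (129, 85)]:
        print(track(z))
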
Publication Details
  • http://www.strata.com/gallery_detail.asp?id=1480&page=1&category=48
  • Oct 1, 2005

Abstract

Close
I produced these illustrations for two multimedia applications that were developed by FX Palo Alto Laboratory and California State University at Sacramento's Department of Psychology. The applications were part of a study to see how primary-school-age children learn with certain multimedia tools. Each illustration was viewed as part of a fairly complex screen of information as well as on its own.
Publication Details
  • We organized and ran a full-day workshop at the UbiComp 2005 Conference in Tokyo, Japan, September 11, 2005.
  • Sep 29, 2005

Abstract

Close
Designing the technologies, applications, and physical spaces for next-generation conference rooms (This is a day-long workshop in Tokyo.) Next-generation conference rooms are often designed to anticipate the onslaught of new rich media presentation and ideation systems. Throughout the past couple of decades, many researchers have attempted to reinvent the conference room, aiming at shared online or visual/virtual spaces, smart tables or walls, media support and teleconferencing systems of varying complexity. Current research in high-end room systems often features a multiplicity of thin, bright display screens (both large and small), along with interactive whiteboards, robotic cameras, and smart remote conferencing systems. Added into the mix one can find a variety of meeting capture and metadata management systems, automatic or not, focused on capturing different aspects of meetings in different media: to the Web, to one's PDA or phone, or to a company database. Smart spaces and interactive furniture design projects have shown systems embedded in tables, podiums, walls, chairs and even floors and lighting. Exploiting the capabilities of all these technologies in one room, however, is a daunting task. For example, faced with three or more display screens, all but a few presenters are likely to opt for simply replicating the same image on all of them. Even more daunting is the design challenge: how to choose which capabilities are vital to particular tasks, or for a particular room, or are well suited to a particular culture. In this workshop we'll explore how the design of next-generation conference rooms can be informed by the most recent research in rich media, context-aware mobile systems, ubiquitous displays, and interactive physical environments. How should conference room systems reflect the rapidly changing expectations around personal devices and smart spaces? What kinds of systems are needed to support meetings in technologically complex environments? How can the design of conference room spaces and technologies account for differing social and cultural practices around meetings? What requirements are imposed by security and privacy issues in public spaces? What aspects of meeting capture and access technologies have proven to be useful, and how should a smart environment enable them? What intersections exist with other research areas such as digital libraries? Conference room research has been and remains a focal point for some of the most interesting and applied work in ubiquitous computing. What lessons can we take from the research to date as we move forward? We are confident that a lively and useful discussion will be engendered by bringing directions from recent ubicomp research in games, multimedia applications, and social software to ongoing research in conference room systems: integrating architecture and tangible media, information design and display, and mobile and computer-mediated communications.

The Convertible Podium: A rich media teaching tool for next-generation classrooms

Publication Details
  • Paper presented at SIGGRAPH 2005, Los Angeles.
  • Sep 29, 2005

Abstract

Close
The Convertible Podium is a central control station for rich media in next-generation classrooms. It integrates flexible control systems for multimedia software and hardware, and is designed for use in classrooms with multiple screens, multiple media sources and multiple distribution channels. The built-in custom electronics and unique convertible podium frame allow intuitive conversion between use modes (either manual or automatic). The at-a-touch sound and light control system gives control over the classroom environment. Presentations can be pre-authored for effective performance, and quickly altered on the fly. The counter-weighted and motorized conversion system allows one person to change modes simply by lifting the top of the Podium to the correct position for each mode. The Podium is lightweight, mobile, and wireless, and features an onboard 21" LCD display, document cameras and other capture devices, tangible controls for hardware and software, and also possesses embedded RFID sensing for automatic data retrieval and file management. It is designed to ease the tasks involved in authoring and presenting in a rich media classroom, as well as supporting remote telepresence and integration with other mobile devices.
Publication Details
  • INTERACT '05 short paper
  • Sep 12, 2005

Abstract

Close
Indexes such as bookmarks and recommendations are helpful for accessing multimedia documents. This paper describes the 3D Syllabus system, which is designed to visualize indexes to multimedia training content along with the information structures. A double-sided landscape with balloons and cubes represents the personal and group indexes, respectively. The 2D ground plane organizes the indexes as a table, and the third dimension of height indicates their importance scores. Additional visual properties of the balloons and cubes provide other information about the indexes and their content. Paths are represented by pipes connecting the balloons. A preliminary evaluation of the 3D Syllabus prototype suggests that it is more efficient than a typical training CD-ROM and is more enjoyable to use.
Publication Details
  • INTERACT 2005, LNCS 3585, pp. 781-794
  • Sep 12, 2005

Abstract

Close
A video database can contain a large number of videos ranging from several minutes to several hours in length. Typically, it is not sufficient to search just for relevant videos, because the task still remains to find the relevant clip, typically less than one minute in length, within the video. This makes it important to direct the user's attention to the most promising material and to indicate what material they have already investigated. Based on this premise, we created a video search system with a powerful and flexible user interface that incorporates dynamic visualizations of the underlying multimedia objects. The system employs automatic story segmentation, combines text and visual search, and displays search results in ranked sets of story keyframe collages. By adapting the keyframe collages based on query relevance and indicating which portions of the video have already been explored, we enable users to quickly find relevant sections. We tested our system as part of the NIST TRECVID interactive search evaluation, and found that our user interface enabled users to find more relevant results within the allotted time than other systems employing more sophisticated analysis techniques but less helpful user interfaces.
Publication Details
  • M.F. Costabile and F. Paternò (Eds.): INTERACT 2005, LNCS 3585
  • Sep 12, 2005

Abstract

Close
We developed and studied an experimental system, RealTourist, which lets a user plan a conference trip with the help of a remote tourist consultant who could view the tourist's eye-gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with a literature review on speech and eye-gaze patterns. This inspective, exploratory research identified various functions of gaze-overlay on shared spatial material, including: accurate and direct display of the partner's eye-gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase of assurance, confidence, and understanding. This study serves two purposes. The first is to identify patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel. The second is to investigate how computer-mediated communication can be supported by the display of the partner's eye-gaze.

The Convertible Podium: a rich media control station

Publication Details
  • Short presentation in UbiComp 2005 workshop in Tokyo, Japan.
  • Sep 11, 2005

Abstract

Close
As the use of rich media in mobile devices and smart environments becomes more sophisticated, so must the design of the everyday objects used as containers or controllers. Rather than simply tacking electronics onto existing furniture or other objects, the design of a smart object can enhance existing applications in unexpected ways. The Convertible Podium is an experiment in the design of a smart object with complex integrated systems, combining the highly designed look and feel of a modern lectern with systems that allow it to serve as a central control station for rich media manipulation in next-generation conference rooms. It enables easy control of multiple independent screens, multiple media sources (including mobile devices) and multiple distribution channels. The Podium is designed to ease the tasks involved in authoring and presenting in a rich media meeting room, as well as supporting remote telepresence and integration with mobile devices.

Post-Bits: an e-paper sticky memo system

Publication Details
  • Demo and presentation in UbiComp 2005 workshop in Tokyo, Japan.
  • Sep 11, 2005

Abstract

Close
A Post-Bit is a prototype of a small ePaper device for handling multimedia content, combining interaction control and display into one package. Post-Bits are modeled after paper Post-Its™; the functions of each Post-Bit combine the affordances of physical tiny sticky memos and digital handling of information. Post-Bits enable us to arrange multimedia contents in our embodied physical spaces. Tangible properties of paper such as flipping, flexing, scattering and rubbing are mapped to controlling aspects of the content. In this paper, we introduce the integrated design and functionality of the Post-Bit system, including four main components: the ePaper sticky memo/player, with integrated sensors and connectors; a small container/binder that a few Post-Bits can fit into, for ordering and multiple connections; the data and power port that allows communication with the host computer; and finally the software and GUI interface that reside on the host PC and manage multimedia transfer.
Publication Details
  • Sixteenth ACM Conference on Hypertext and Hypermedia
  • Sep 6, 2005

Abstract

Close
Hyper-Hitchcock is a hypervideo editor enabling the direct manipulation authoring of a particular form of hypervideo called "detail-on-demand video." This form of hypervideo allows a single link out of the currently playing video to provide more details on the content currently being presented. The editor includes a workspace to select, group, and arrange video clips into several linear sequences. Navigational links placed between the video elements are assigned labels and return behaviors appropriate to the goals of the hypervideo and the role of the destination video. Hyper-Hitchcock was used by students in a Computers and New Media class to author hypervideos on a variety of topics. The produced hypervideos provide examples of hypervideo structures and the link properties and behaviors needed to support them. Feedback from students identified additional link behaviors and features required to support new hypervideo genres. This feedback is valuable for the redesign of Hyper-Hitchcock and the design of hypervideo editors in general.

DoKumobility: Web services for the mobile worker

Publication Details
  • IEEE International Conference on Next Generation Web Services Practices (NWeSP'05), Seoul, Korea
  • Aug 22, 2005

Abstract

Close
Mobile users often require access to their documents while away from the office. While pre-loading documents in a repository can make those documents available remotely, people need to know in advance which documents they might need. Furthermore, it may be difficult to view, print, or share a document through a portable device such as a cell phone. We implemented DoKumobility, a network of web services that lets mobile users manage, print, and share documents. In this paper, we describe the infrastructure and illustrate its use with several applications.