Publications

2019

Abstract
As the landscape of wearable technologies proliferates, we find more devices situated on our heads. However, many challenges hinder their widespread adoption: the awkward, bulky form factor of today's AR and VR goggles, socially stigmatized designs such as Google Glass, and the lack of a well-developed head-based interaction design language. In this paper, we explore a socially acceptable, large, head-worn interactive wearable: a hat. We report results from a gesture elicitation study with 17 participants, extract a taxonomy of gestures, and define a set of design concerns for interactive hats. Through this lens, we detail the design and fabrication of three hat prototypes capable of sensing touch, head movements, and gestures, and incorporating several types of ambient display. Finally, we report an evaluation of our hat prototype and insights to inform the design of future hat technologies.
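
As a rough illustration of the head-movement sensing mentioned above, the sketch below (hypothetical, not the paper's implementation) flags a head nod from a stream of gyroscope pitch-rate samples of the kind a hat-mounted IMU could provide; the threshold and window size are assumed values.

```python
# Hypothetical sketch: detect a simple head-nod gesture from gyroscope
# pitch-rate samples (deg/s, positive = head tilting down).
from collections import deque

def detect_nod(pitch_rates, rate_threshold=60.0, window=10):
    """Return True if a sliding window contains a down-then-up pitch swing."""
    recent = deque(maxlen=window)
    for rate in pitch_rates:
        recent.append(rate)
        has_down = any(r > rate_threshold for r in recent)
        has_up = any(r < -rate_threshold for r in recent)
        if has_down and has_up:
            return True
    return False

# Example: a burst of downward rotation followed by upward rotation.
samples = [0, 5, 80, 90, 20, -70, -85, -10, 0]
print(detect_nod(samples))  # True
```
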
Publication Details
  • ICWSM 19
  • Jun 12, 2019

Abstract
Millions of images are shared through social media every day. Yet, we know little about how the activities and preferences of users depend on the content of these images. In this paper, we seek to understand viewers' engagement with photos. We design a quantitative study that expands previous research on in-app visual effects (also known as filters) through the examination of visual content identified through computer vision. The study is based on an analysis of 4.9M Flickr images and is organized around three important engagement factors: likes, comments, and favorites. We find that filtered photos are not equally engaging across different categories of content. Photos of food and people attract more engagement when filters are used, while photos of natural scenes and photos taken at night are more engaging when left unfiltered. In addition to contributing to research on social media engagement and photography practices, our findings offer several design implications for mobile photo-sharing platforms.
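
The core comparison is engagement of filtered versus unfiltered photos within each content category. A minimal sketch of that kind of analysis follows (the column names and toy data are placeholders, not the paper's actual schema):

```python
# Compare mean engagement of filtered vs. unfiltered photos per
# content category detected by computer vision.
import pandas as pd

photos = pd.DataFrame({
    "category": ["food", "food", "people", "nature", "nature", "night"],
    "filtered": [True, False, True, True, False, False],
    "likes":    [25, 10, 40, 8, 15, 22],
})

# Mean likes per (category, filtered) cell, pivoted for side-by-side reading.
engagement = (photos
              .groupby(["category", "filtered"])["likes"]
              .mean()
              .unstack("filtered"))
print(engagement)
```
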
Publication Details
  • ACM TVX 2019
  • Jun 5, 2019

Abstract
Advancements in 360° cameras have increased the prevalence of 360° livestreams. In video conferencing, a 360° camera gives a remote viewer almost unrestricted visibility into a conference room without the need for an articulating camera. However, local participants are left wondering whether anyone is connected and where remote participants might be looking. To address this, we fabricated a prototype device that shows the gaze and presence of remote 360° viewers using a ring of LEDs that matches the remote viewports. We discuss the long-term use of one of the prototypes in a lecture hall and present future directions for visualizing gaze presence in 360° video streams.
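
The central mapping is from a remote viewer's viewport heading in the 360° stream to a position on the LED ring. A minimal sketch of that mapping follows (assumed ring size and rounding scheme, not the prototype's firmware):

```python
# Map a remote viewport yaw (0-360 degrees) to the nearest LED in a ring.
NUM_LEDS = 24  # hypothetical ring size

def led_for_yaw(yaw_degrees: float) -> int:
    """Return the index of the LED closest to the given viewport heading."""
    yaw = yaw_degrees % 360.0
    return round(yaw / (360.0 / NUM_LEDS)) % NUM_LEDS

print(led_for_yaw(0))    # 0: LED facing the camera's forward direction
print(led_for_yaw(185))  # 12: LED on the opposite side of the ring
```
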
Publication Details
  • ACM TVX 2019
  • Jun 5, 2019

Abstract
Livestreaming and video calls have grown in popularity due to increased connectivity and advancements in mobile devices. Our interactions with these cameras are limited, as the cameras are either fixed or manually remote-controlled. Here we present a Wizard-of-Oz elicitation study to inform the design of interactions with smart 360° cameras and robotic mobile desk cameras for use in video-conferencing and livestreaming situations. There was an overall preference for devices that minimize distraction, as well as for devices that demonstrate an understanding of the video-meeting context. We find that the complexity of participants' interactions grows dynamically, which illustrates the need for deeper event semantics within the camera AI. Finally, we detail interaction techniques and design insights to inform the next generation of personal video cameras for streaming and collaboration.
Publication Details
  • Personal and Ubiquitous Computing
  • May 7, 2019

Abstract
Reliable location estimation has been a key enabler of many applications in the UbiComp space. Much progress has been made on the development of accurate indoor location systems, which form the foundation of many interesting applications, particularly in consumer scenarios. However, many location-based applications in enterprise settings also require addressing another facet of reliability: assurance. Without strong guarantees of a location estimate's legitimacy, stakeholders must explicitly balance the advantages offered against the risks of falsification. In this space, there are two key threats: replay attacks, where signal and sensor information is collected in one location and replayed in another to falsify a location estimate later in time; and wormhole attacks, where signal and sensor information is forwarded to a remote location by a colluding device to falsify a location estimate in real time. In this work, we improve upon the state of the art in wormhole-resistant location estimation techniques. Specifically, we present the Location Anchor, which leverages a combination of technical solutions and social contracts to provide high-assurance proofs of device location that are resistant to wormhole attacks. Unlike existing work, the Location Anchor has minimal hardware costs, supports a rich tapestry of applications, and is compatible with commodity smartphone and tablet platforms. We show that the Location Anchor can extend existing replay-resistant location systems into wormhole-resistant location systems, even in the face of very aggressive attacker assumptions. We describe the protocols underlying the Location Anchor and report on the efficacy of a prototype implementation.
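
To make the wormhole threat concrete: the abstract's actual Location Anchor protocol is not reproduced here, but a generic challenge-response sketch shows why tight response deadlines make real-time forwarding detectable; the key, deadline, and message format below are all illustrative assumptions.

```python
# Generic sketch of wormhole resistance via a timed challenge-response:
# an anchor issues a fresh nonce and only accepts responses arriving
# within a deadline too tight for the extra round trip a colluding
# forwarder would add.
import os
import time
import hmac
import hashlib

SHARED_KEY = b"demo-key"      # hypothetical provisioning
RESPONSE_DEADLINE_S = 0.005   # illustrative bound, not a measured value

def issue_challenge() -> bytes:
    return os.urandom(16)

def respond(nonce: bytes) -> bytes:
    # The device proves knowledge of the key, bound to this fresh nonce.
    return hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()

def verify(nonce: bytes, response: bytes, elapsed_s: float) -> bool:
    expected = hmac.new(SHARED_KEY, nonce, hashlib.sha256).digest()
    fresh = hmac.compare_digest(expected, response)
    in_time = elapsed_s <= RESPONSE_DEADLINE_S
    return fresh and in_time

nonce = issue_challenge()
start = time.perf_counter()
resp = respond(nonce)                # a wormhole adds forwarding delay here
elapsed = time.perf_counter() - start
print(verify(nonce, resp, elapsed))  # True only if fresh AND fast enough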

Augmenting Knowledge Tracing by Considering Forgetting Behavior

Publication Details
  • The Web Conference 2019 (formerly WWW)
  • Apr 29, 2019

Abstract
We describe a corpus analysis method to extract terminology from a collection of technical specification books in the field of construction. Using statistics and word n-gram analyses, we extract the terminology of the domain and then perform pruning steps with linguistic patterns and Internet queries to improve the quality of the final terminology. In this paper we focus specifically on the improvements obtained by applying Internet queries and patterns. These improvements are evaluated through a manual evaluation carried out by six experts in the field on the technical specification books.
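
A minimal sketch of the n-gram candidate-extraction step described above follows; the toy corpus and frequency cutoff are simplified stand-ins, and the paper's linguistic-pattern and web-query pruning is only noted in a comment:

```python
# Count word n-grams across a corpus and keep frequent term candidates.
from collections import Counter

corpus = [
    "the load bearing wall must meet the load bearing wall standard",
    "install the load bearing wall before the partition wall",
]

def ngrams(tokens, n):
    return zip(*(tokens[i:] for i in range(n)))

counts = Counter()
for doc in corpus:
    tokens = doc.split()
    for n in (2, 3):
        counts.update(" ".join(g) for g in ngrams(tokens, n))

# Keep frequent candidates; real pruning would then apply linguistic
# patterns and Internet-query checks, as the paper describes.
candidates = [term for term, c in counts.items() if c >= 3]
print(candidates)  # includes 'load bearing wall'
```
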
Publication Details
  • CHI 2019
  • Apr 27, 2019

Abstract
Work breaks, both physical and digital, play an important role in productivity and workplace wellbeing. Yet the growing availability of digital distractions from online content can turn breaks into prolonged "cyberloafing". In this paper, we present UpTime, a system that aims to support workers' transitions from breaks back to work, moments that are especially susceptible to digital distractions. UpTime combines a browser extension and a chatbot, and users interact with it through proactive and reactive chat prompts. By sensing transitions from inactivity, UpTime helps workers avoid distractions by temporarily blocking distracting websites automatically, while still giving workers control to take necessary digital breaks. We report findings from a 3-week comparative field study with 15 workers. Our results show that automatic, temporary blocking at transition points can significantly reduce digital distraction and stress without sacrificing workers' sense of control. Our findings, however, also emphasize that overloading users' existing communication channels for chatbot interaction should be done thoughtfully.
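
The transition-sensing idea can be sketched as a small state machine (hypothetical logic and thresholds, not UpTime's actual code): a long stretch of inactivity is read as a break, and the first activity that follows starts a temporary block of distracting sites.

```python
# Block distracting sites for a window after a worker returns from a break.
BREAK_AFTER_S = 5 * 60   # inactivity long enough to count as a break
BLOCK_FOR_S = 15 * 60    # temporary block window after returning

class TransitionBlocker:
    def __init__(self):
        self.last_activity = None
        self.block_until = 0.0

    def on_activity(self, now: float):
        if self.last_activity is not None:
            idle = now - self.last_activity
            if idle >= BREAK_AFTER_S:
                # Returning from a break: the vulnerable moment.
                self.block_until = now + BLOCK_FOR_S
        self.last_activity = now

    def is_blocked(self, now: float) -> bool:
        return now < self.block_until

blocker = TransitionBlocker()
blocker.on_activity(0)          # working
blocker.on_activity(600)        # back after a 10-minute break
print(blocker.is_blocked(700))  # True: inside the temporary block
print(blocker.is_blocked(600 + BLOCK_FOR_S + 1))  # False: block expired
```
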
Publication Details
  • Internet of Things: Engineering Cyber Physical Human Systems
  • Mar 15, 2019

Abstract
Recent advances in the Internet of Things (IoT) have led to an explosion of physical objects being connected to the Internet. These objects sense, compute, and interpret what is occurring within themselves and the world, and preferably interact with users. In this work, we present a visible-light-enabled finger tracking technique that allows users to perform freestyle multi-touch gestures on everyday objects' surfaces. By projecting encoded patterns onto an object's surface (e.g., paper, display, or table) with a projector and localizing the user's fingers with light sensors, the proposed system offers users a richer interactive space than the device's existing interfaces. More importantly, results from our experiments indicate that the system can localize ten fingers simultaneously with an accuracy of 1.7 millimeters and a refresh rate of 84 Hz, with only 31 milliseconds of delay over WiFi or 23 milliseconds over serial communication, easily supporting multi-finger gesture interaction on everyday objects. We also develop two example applications to demonstrate possible scenarios. Finally, we conduct a preliminary exploration of 3D depth inference using the same setup and achieve 2.43 cm depth estimation accuracy.
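
One plausible way such projected-pattern localization can work (the paper's actual encoding is not reproduced here) is to flash a sequence of binary Gray-code frames per axis: a sensor records one brightness bit per frame, and decoding the bit sequence yields its column (and, symmetrically, its row). A minimal sketch:

```python
# Decode a Gray-code bit sequence read by a light sensor into a position.
def gray_to_binary(bits):
    """Decode a Gray-code bit list (MSB first) to an integer position."""
    value = bits[0]
    out = value
    for bit in bits[1:]:
        value ^= bit          # each binary bit toggles against the previous
        out = (out << 1) | value
    return out

# Example: a sensor that read the 4-frame sequence 1,1,0,1 sits at
# column 9 of a 16-column pattern.
print(gray_to_binary([1, 1, 0, 1]))  # 9
```
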
Publication Details
  • IEEE 2nd International Conference on Multimedia Information Processing and Retrieval
  • Mar 14, 2019

Abstract
We present an approach to detect speech impairments from video of people with aphasia, a neurological condition that affects the ability to comprehend and produce speech. To counter inherent privacy issues, we propose a cross-media approach that uses only visual facial features to detect speech properties, without listening to the audio content of speech. Our method uses facial landmark detections to measure facial motion over time. We show how to detect speech and pause instances based on temporal mouth-shape analysis and identify repeating mouth patterns using a dynamic warping mechanism. We relate our developed features for pause frequency, mouth pattern repetitions, and pattern variety to actual symptoms of people with aphasia in the AphasiaBank dataset. Our evaluation shows that these features reliably differentiate the dysfluent speech production of people with aphasia from that of people without aphasia, with an accuracy of 0.86. A combination of these handcrafted features and further statistical measures of talking and repetition improves classification performance to an accuracy of 0.88.
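
A simplified sketch of the mouth-shape analysis follows (illustrative, not the paper's feature pipeline): measure mouth opening per frame from lip landmarks, then mark stretches of near-closed frames as pauses; the thresholds are assumed values.

```python
# Detect pause spans from per-frame mouth-opening measurements.
import math

def mouth_opening(upper_lip, lower_lip):
    """Euclidean distance between inner upper- and lower-lip landmarks."""
    return math.dist(upper_lip, lower_lip)

def pause_spans(openings, threshold=2.0, min_frames=5):
    """Return (start, end) frame spans where the mouth stays nearly closed."""
    spans, start = [], None
    for i, o in enumerate(openings):
        if o < threshold and start is None:
            start = i
        elif o >= threshold and start is not None:
            if i - start >= min_frames:
                spans.append((start, i))
            start = None
    if start is not None and len(openings) - start >= min_frames:
        spans.append((start, len(openings)))
    return spans

# Ten talking frames, then a stretch of near-closed frames (a pause).
openings = [6, 7, 5, 8, 6, 7, 5, 6, 7, 6, 1, 1, 0, 1, 1, 1, 0]
print(pause_spans(openings))  # [(10, 17)]
```
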
Publication Details
  • ACM Transactions on Interactive Intelligent Systems
  • Jan 31, 2019

Abstract
Activity recognition is a core component of many intelligent and context-aware systems. We present a solution for discreetly and unobtrusively recognizing common work activities above a work surface without using cameras. We demonstrate our approach, which utilizes an RF-radar sensor mounted under the work surface, in three domains: recognizing work activities at a convenience-store counter, recognizing common office deskwork activities, and estimating the position of customers in a showroom environment. Our examples illustrate potential benefits for both post-hoc business analytics and real-time applications. Our solution classified seven clerk activities with 94.9% accuracy using data collected in a lab environment and recognized six common deskwork activities collected in real offices with 95.3% accuracy. Using two sensors simultaneously, we demonstrate coarse position estimation around a large surface with 95.4% accuracy. We show that using multiple projections of the RF signal leads to improved recognition accuracy. Finally, we show how smartwatches worn by users can be used to attribute an activity, recognized with the RF sensor, to a particular user in multi-user scenarios. We believe our solution can mitigate some of the privacy concerns associated with cameras and is useful for a wide range of intelligent systems.
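
Schematically, the recognition step can be framed as windowed signal features feeding a standard classifier. The sketch below uses synthetic placeholder data and deliberately simple summary statistics, not the paper's radar pipeline:

```python
# Classify activity windows from summary features of a radar signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(signal):
    # Simple per-window summaries; a real system would use richer
    # projections of the RF signal, as the abstract notes.
    return [signal.mean(), signal.std(), signal.max(), signal.min()]

# Synthetic training windows for two pretend activities.
X = [window_features(rng.normal(loc=m, scale=1.0, size=256))
     for m in (0.0, 3.0) for _ in range(50)]
y = [activity for activity in ("typing", "writing") for _ in range(50)]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([window_features(rng.normal(3.0, 1.0, 256))]))  # ['writing']
```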