Patrick Chiu, Ph.D.

Principal Research Scientist


Patrick’s current research interests include multimedia applications and content analysis, human-computer interaction, and ubiquitous computing. Before joining FXPAL, he worked at Xerox PARC and LiveWorks, Inc. He graduated summa cum laude with a BA from UC San Diego, and received a PhD in mathematics from Stanford University.


Publications

2019

Abstract

We present a remote assistance system that enables a remotely located expert to provide guidance, using hand gestures, to a customer who performs a physical task in a different location. The system is built on a web-based real-time media communication framework that lets the customer use a commodity smartphone to send a live video feed to the expert; the expert sees the customer's workspace in the video and can show their hand gestures over it in real time. The expert's hand is captured with a hand tracking device and visualized as a rigged 3D hand model on the live video feed. The system is accessed via a web browser and requires no app to be installed on the customer's device. It supports a variety of devices, including smartphones, tablets, desktop PCs, and smart glasses. To improve the collaboration experience, the system provides a novel gravity-aware hand visualization technique.
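The abstract does not detail how the gravity-aware visualization is computed. As one hedged illustration, the sketch below (with a hypothetical align_to_gravity helper) orients a hand model's canonical up axis against a gravity vector such as one reported by the device's IMU:

```python
import numpy as np

def align_to_gravity(model_up, gravity):
    """Rotation matrix turning the hand model's canonical 'up' axis to point
    opposite the measured gravity vector (Rodrigues' rotation formula)."""
    a = model_up / np.linalg.norm(model_up)
    b = -gravity / np.linalg.norm(gravity)      # world 'up' opposes gravity
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):
        # Antiparallel case: a 180-degree flip; adequate when model_up is the y axis.
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) / (1.0 + c)

# Example: device held upright, gravity along -y of the camera frame -> identity.
print(align_to_gravity(np.array([0.0, 1.0, 0.0]), np.array([0.0, -9.81, 0.0])))
```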
Publication Details
  • ACM ISS 2019
  • Nov 9, 2019

Abstract

In a telepresence scenario with remote users discussing a document, it can be difficult to follow which parts are being discussed. One way to address this is to show the user's hand position on the document, which also enables expressive gestural communication. An important practical problem is how to capture and transmit the hand movements efficiently along with high-resolution document images. We propose a tabletop system with two channels that integrates document capture with a 4K video camera and hand tracking with a webcam, in which the document image and hand skeleton data are transmitted at different rates and handled by a lightweight Web browser client at remote sites. To enhance the rendering, we employ velocity-based smoothing and ephemeral motion traces. We tested our prototype over long distances from the USA to Japan and to Italy, and report on latency and jitter performance. Our system achieves relatively low latency over a long distance in comparison with a tele-immersive system that transmits mesh data over much shorter distances.
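The paper's exact filter is not specified here; the following sketch shows one common form of velocity-based smoothing, in the spirit of the 1-euro filter, where faster motion receives less smoothing. Parameter values are illustrative:

```python
import math

class VelocitySmoother:
    """Velocity-adaptive exponential smoothing for one joint coordinate."""
    def __init__(self, min_cutoff=1.0, beta=0.02, rate_hz=30.0):
        self.min_cutoff, self.beta, self.dt = min_cutoff, beta, 1.0 / rate_hz
        self.prev_x, self.prev_dx = None, 0.0

    def _alpha(self, cutoff):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / self.dt)

    def filter(self, x):
        if self.prev_x is None:
            self.prev_x = x
            return x
        dx = (x - self.prev_x) / self.dt                      # estimated velocity
        dx_hat = self.prev_dx + self._alpha(1.0) * (dx - self.prev_dx)
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)    # fast motion -> less lag
        x_hat = self.prev_x + self._alpha(cutoff) * (x - self.prev_x)
        self.prev_x, self.prev_dx = x_hat, dx_hat
        return x_hat

# One smoother per skeleton-joint coordinate:
s = VelocitySmoother()
print([round(s.filter(v), 3) for v in [0.0, 0.1, 0.5, 0.52, 0.51]])
```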
Publication Details
  • IEEE VIS 2019
  • Oct 20, 2019

Abstract

The analysis of bipartite networks is critical in a variety of application domains, such as exploring entity co-occurrences in intelligence analysis and investigating gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of unseen links based on currently observed ones. In this paper, we propose MissBiN, which involves analysts in the loop for making sense of link prediction results. MissBiN combines a novel method for link prediction and an interactive visualization for examining and understanding the algorithm outputs. Further, we conducted quantitative experiments to assess the performance of the proposed link prediction algorithm, and a case study to evaluate the overall effectiveness of MissBiN.
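The abstract leaves the link prediction method unspecified. As a hedged, minimal stand-in, the sketch below scores unobserved bipartite links by counting length-3 paths through the biadjacency matrix, a common shared-neighbor baseline rather than MissBiN's actual algorithm:

```python
import numpy as np

def predict_missing_links(B, top_k=5):
    """B: binary |U| x |V| biadjacency matrix. Return the top_k zero entries
    ranked by a shared-neighbor (length-3 path) score."""
    score = (B @ B.T) @ B          # number of u -> v' -> u' -> v paths
    score[B > 0] = -1              # ignore links we already observe
    flat = np.argsort(score, axis=None)[::-1][:top_k]
    return [(int(i // B.shape[1]), int(i % B.shape[1]), int(score.flat[i]))
            for i in flat]

B = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]])
print(predict_missing_links(B, top_k=2))   # candidate (row, col, score) triples
```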
Publication Details
  • Designing Interactive Systems (DIS) 2019
  • Jun 23, 2019

Abstract

As the landscape of wearable technologies proliferates, more devices are situated on our heads. However, many challenges hinder their widespread adoption: awkward, bulky form factors (today's AR and VR goggles), socially stigmatized designs (Google Glass), and the lack of a well-developed head-based interaction design language. In this paper, we explore a socially acceptable, large, head-worn interactive wearable: a hat. We report results from a gesture elicitation study with 17 participants, extract a taxonomy of gestures, and define a set of design concerns for interactive hats. Through this lens, we detail the design and fabrication of three hat prototypes capable of sensing touch, head movements, and gestures, and incorporating several types of ambient display. Finally, we report an evaluation of our hat prototype and insights to inform the design of future hat technologies.
2018

Abstract

The analysis of bipartite networks is critical in many application domains, such as studying gene expression in bio-informatics. One important task is missing link prediction, which infers the existence of new links based on currently observed ones. However, in practice, analysts need to utilize their domain knowledge based on the algorithm outputs in order to make sense of the results. We propose a novel visual analysis framework, MissBi, which allows for examining and understanding missing links in bipartite networks. Some initial feedback from a management school professor has demonstrated the effectiveness of the tool.
Publication Details
  • ISS 2018
  • Nov 25, 2018

Abstract

Projector-camera systems can turn any surface such as tabletops and walls into an interactive display. A basic problem is to recognize the gesture actions on the projected UI widgets. Previous approaches using finger template matching or occlusion patterns have issues with environmental lighting conditions, artifacts and noise in the video images of a projection, and inaccuracies of depth cameras. In this work, we propose a new recognizer that employs a deep neural net with an RGB-D camera; specifically, we use a CNN (Convolutional Neural Network) with optical flow computed from the color and depth channels. We evaluated our method on a new dataset of RGB-D videos of 12 users interacting with buttons projected on a tabletop surface.
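As a rough illustration of the described input pipeline, the sketch below computes dense optical flow separately on the color and depth channels with OpenCV and stacks the results as a 4-channel tensor for a downstream CNN. The flow parameters and depth normalization are assumptions, not the paper's settings:

```python
import cv2
import numpy as np

def to_u8(img):
    """Normalize a (possibly 16-bit) depth frame to 8-bit for the flow routine."""
    img = img.astype(np.float32)
    return cv2.convertScaleAbs(img, alpha=255.0 / max(float(img.max()), 1.0))

def flow_features(prev_rgb, next_rgb, prev_depth, next_depth):
    g0 = cv2.cvtColor(prev_rgb, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(next_rgb, cv2.COLOR_BGR2GRAY)
    # Dense Farneback flow on the color channel...
    flow_color = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # ...and, separately, on the depth channel.
    flow_depth = cv2.calcOpticalFlowFarneback(to_u8(prev_depth), to_u8(next_depth),
                                              None, 0.5, 3, 15, 3, 5, 1.2, 0)
    # Stack (dx, dy) of both modalities into a 4-channel CNN input tensor.
    return np.dstack([flow_color, flow_depth]).astype(np.float32)
```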
2017
Publication Details
  • ICDAR 2017
  • Nov 10, 2017

Abstract

We present a system for capturing ink strokes written with ordinary pen and paper using a fast camera with a frame rate comparable to a stylus digitizer. From the video frames, ink strokes are extracted and used as input to an online handwriting recognition engine. A key component in our system is a pen up/down detection model for detecting the contact of the pen-tip with the paper in the video frames. The proposed model consists of feature representation with convolutional neural networks and classification with a recurrent neural network. We also use a high speed tracker with kernelized correlation filters to track the pen-tip. For training and evaluation, we collected labeled video data of users writing English and Japanese phrases from public datasets, and we report on character accuracy scores for different frame rates in the two languages.
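The tracking half of this pipeline can be sketched with OpenCV's KCF implementation; the crops it produces would feed the CNN/RNN up/down classifier, which is stubbed out here:

```python
import cv2

def pen_tip_crops(video_path, init_box):
    """Follow the pen tip with a KCF tracker and return one crop per frame.
    Requires opencv-contrib-python; older builds expose the tracker as
    cv2.legacy.TrackerKCF_create."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    assert ok, "could not read the first frame"
    tracker = cv2.TrackerKCF_create()
    tracker.init(frame, init_box)            # init_box = (x, y, w, h) around the tip
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if found:
            x, y, w, h = (int(v) for v in box)
            crops.append(frame[y:y + h, x:x + w])
    cap.release()
    return crops  # each crop -> CNN features -> RNN up/down classification (stubbed)
```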
Publication Details
  • IEEE Transactions on Visualization and Computer Graphics (Proceedings of VAST 2017)
  • Oct 1, 2017

Abstract

Discovering and analyzing biclusters, i.e., two sets of related entities with close relationships, is a critical task in many real-world applications, such as exploring entity co-occurrences in intelligence analysis, and studying gene expression in bio-informatics. While the output of biclustering techniques can offer some initial low-level insights, visual approaches are required on top of that due to the algorithmic output complexity. This paper proposes a visualization technique, called BiDots, that allows analysts to interactively explore biclusters over multiple domains. BiDots overcomes several limitations of existing bicluster visualizations by encoding biclusters in a more compact and cluster-driven manner. A set of handy interactions is incorporated to support flexible analysis of biclustering results. More importantly, BiDots addresses the case of weighted biclusters, which has been underexploited in the literature. The design of BiDots is grounded in a set of analytical tasks derived from previous work. We demonstrate its usefulness and effectiveness for exploring computed biclusters with an investigative document analysis task, in which suspicious people and activities are identified from a text corpus.
2016
Publication Details
  • UIST 2016 (Demo)
  • Oct 16, 2016

Abstract

We propose a robust pointing detection method with a virtual shadow representation for interacting with a public display. Using a depth camera, the system generates the user's shadow from a model with an angled virtual sun light and detects the shadow's nearest point as a pointer. The shadow's position rises as the user walks closer, which conveys the correct distance for controlling the pointer and offers access to the higher areas of the display.
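Geometrically, the idea can be sketched as projecting the user's point cloud along an angled light direction onto the display plane. The coordinate conventions and light direction below are assumptions for illustration:

```python
import numpy as np

def shadow_pointer(points, light_dir=(0.3, -1.0, -1.0)):
    """points: (N, 3) user point cloud; z = distance in front of the display plane z = 0.
    Returns the (x, y) of the virtual shadow's highest point, used as the pointer."""
    d = np.asarray(light_dir, dtype=float)
    d /= np.linalg.norm(d)
    t = -points[:, 2] / d[2]                    # parameter where each ray hits z = 0
    shadow = points[:, :2] + t[:, None] * d[:2]
    return shadow[np.argmax(shadow[:, 1])]      # highest shadow point -> pointer

# With a downward-angled light, a user standing closer (smaller z) casts a
# shadow that reaches higher up the display, matching the behavior above.
```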
Publication Details
  • IUI 2016
  • Mar 7, 2016

Abstract

We describe methods for analyzing and visualizing document metadata to provide insights about collaborations over time. We investigate the use of Latent Dirichlet Allocation (LDA) based topic modeling to compute areas of interest on which people collaborate. The topics are represented in a force-directed node-link graph by persistent fixed nodes laid out with multidimensional scaling (MDS), and the people by transient movable nodes. The topics are also analyzed to detect bursts that highlight "hot" topics during a time interval. As the user manipulates a time interval slider, the people nodes and links are dynamically updated. We evaluated the results of LDA topic modeling for the visualization by comparing topic keywords against the submitted keywords from the InfoVis 2004 Contest, and found that the additional terms provided by LDA-based keyword sets improve the similarity between a topic keyword set and the documents in a corpus. We extended the InfoVis dataset from 8 to 20 years, collected publication metadata from our lab over a period of 21 years, and created interactive visualizations for exploring these larger datasets.
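A minimal version of the described pipeline can be put together with scikit-learn: LDA topics from a toy corpus, then fixed topic positions from an MDS layout of inter-topic distances. The corpus and parameters are illustrative, not the paper's:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

docs = ["tabletop gesture interaction surface",
        "topic model document visualization",
        "camera capture document image",
        "graph layout collaboration network",
        "gesture recognition depth camera",
        "topic keywords publication metadata"]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
topic_word = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
# Fixed topic-node positions come from an MDS layout of inter-topic distances;
# people nodes would then be placed by the force-directed simulation.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0) \
    .fit_transform(pairwise_distances(topic_word, metric="cosine"))
print(coords)  # one (x, y) position per topic
```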
2015
Publication Details
  • ACM Multimedia Conference 2015
  • Oct 26, 2015

Abstract

New technology comes about in a number of different ways. It may come from advances in scientific research, through new combinations of existing technology, or simply from imagining what might be possible in the future. This video describes the evolution of Tabletop Telepresence, a system for remote collaboration through desktop videoconferencing combined with a digital desk. Tabletop Telepresence provides a means to share paper documents between remote desktops, interact with documents and request services (such as translation), and communicate with a remote person through a teleconference. It was made possible by combining advances in camera/projector technology that enable a fully functional digital desk, embodied telepresence in video conferencing, and concept art that imagines future workstyles.
Publication Details
  • DocEng 2015
  • Sep 8, 2015

Abstract

We present a novel system for detecting and capturing paper documents on a tabletop using a 4K video camera mounted overhead on pan-tilt servos. Our automated system first finds paper documents on a cluttered tabletop based on a text probability map, and then takes a sequence of high-resolution frames of the located document to reconstruct a high quality and fronto-parallel document page image. The quality of the resulting images enables OCR processing on the whole page. We performed a preliminary evaluation on a small set of 10 document pages and our proposed system achieved 98% accuracy with the open source Tesseract OCR engine.
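The final OCR stage can be sketched with the open-source Tesseract engine named in the abstract, via the pytesseract wrapper. The capture, pan-tilt control, and text-probability detection steps are outside this snippet:

```python
import cv2
import pytesseract

def ocr_page(page_image_path):
    """OCR a reconstructed fronto-parallel page image with Tesseract."""
    img = cv2.imread(page_image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Light binarization usually helps Tesseract on camera captures.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)
```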
Publication Details
  • CSCW 2015
  • Mar 14, 2015

Abstract

Collaboration Map (CoMap) is an interactive visualization tool showing temporal changes of small group collaborations. As dynamic entities, collaboration groups have flexible features such as people involved, areas of work, and timings. CoMap shows a graph of collaborations during user-adjustable periods, providing overviews of collaborations' dynamic features. We demonstrate CoMap with a co-authorship dataset extracted from DBLP to visualize 587 publications by 29 researchers at a research organization.
2014
Publication Details
  • ICME 2014, Best Demo Award
  • Jul 14, 2014

Abstract

In this paper, we describe Gesture Viewport, a projector-camera system that enables finger gesture interactions with media content on any surface. We propose a novel and computationally very efficient finger localization method based on the detection of occlusion patterns inside a virtual sensor grid rendered in a layer on top of a viewport widget. We develop several robust interaction techniques to prevent unintentional gestures from occurring, to provide visual feedback to the user, and to minimize the interference of the sensor grid with the media content. We show the effectiveness of the system through three scenarios: viewing photos, navigating Google Maps, and controlling Google Street View.
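One hedged reading of occlusion-pattern-based localization: compare each virtual grid cell's brightness against an unoccluded reference frame and take the centroid of the occluded cells. The grid size and threshold below are illustrative:

```python
import numpy as np

def locate_finger(reference, current, grid=(12, 16), drop=0.35):
    """reference/current: grayscale camera frames (H, W) as floats in [0, 1].
    Returns the occluded-cell centroid in grid coordinates, or None."""
    H, W = reference.shape
    gh, gw = H // grid[0], W // grid[1]
    hits = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            ref = reference[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean()
            cur = current[r*gh:(r+1)*gh, c*gw:(c+1)*gw].mean()
            if ref - cur > drop * ref:       # cell brightness dropped: occluded
                hits.append((r, c))
    if not hits:
        return None
    rows, cols = zip(*hits)
    return (sum(rows) / len(rows), sum(cols) / len(cols))
```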
Publication Details
  • HotMobile 2014
  • Feb 26, 2014

Abstract

In this paper, we propose HiFi, a system that enables users to interact with surrounding physical objects. It uses coded light to encode position in an environment. By attaching a tiny light sensor to a mobile device, a user can attach digital information to arbitrary static physical objects, or retrieve and modify the information anchored to those objects. With this system, a family member may attach a digital maintenance schedule to a fish tank or indoor plants; a store manager may attach price tags, discount information, and multimedia content to any product, which customers can retrieve by moving their phone close to the product; similarly, a museum can provide extra information about displayed items to visitors. Unlike computer vision based systems, HiFi places no requirements on texture, bright illumination, etc. Unlike regular barcode approaches, HiFi requires no extra physical attachments that would change an object's native appearance. HiFi has much higher spatial resolution for distinguishing close objects or attached parts of the same object. Because HiFi can track a mobile device at 80 positions per second, it also responds much faster than any of the systems listed above.
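The abstract does not say which code the light uses. Assuming a Gray-code sequence, a common choice for coded light, decoding a photodiode's bit sequence into a position index would look like this (frame synchronization is assumed solved):

```python
def gray_to_binary(bits):
    """Decode a Gray-code bit list (MSB first) to an integer position index."""
    value = bits[0]
    out = value
    for b in bits[1:]:
        value ^= b            # each binary bit is the XOR prefix of the Gray bits
        out = (out << 1) | value
    return out

# Ten Gray-coded frames distinguish 1024 positions along one axis:
print(gray_to_binary([1, 1, 0, 1, 0, 1, 0, 0, 1, 1]))
```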
2013
Publication Details
  • IEEE ISM 2013
  • Dec 9, 2013

Abstract

Real-time tele-immersion requires low latency, synchronized multi-camera capture. Prior high definition (HD) capture systems were bulky. We investigate the suitability of using flocks of smartphone cameras for tele-immersion. Smartphones can potentially integrate HD capture and streaming into a single portable package. However, they are designed for archiving the captured video into a movie. Hence, we create a sequence of H.264 movies and stream them. We lower the capture delay by reducing the number of frames in each movie segment. Increasing the number of movie segments adds compression overhead. Smartphone video encoders do not sacrifice video quality to lower the compression latency or the stream size. On an iPhone 4S, our application that uses published APIs streams 1920x1080 video at 16.5 fps with a delay of 712 msec between a real-life event and the display of an uncompressed bitmap of that event on a local laptop. For comparison, the bulky Cisco Tandberg had a delay of 300 msec. Stereoscopic video from two unsynchronized smartphones showed minimal visual artifacts in an indoor teleconference setting.
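The segment-length trade-off can be made concrete with back-of-envelope arithmetic: buffering delay grows with the number of frames per segment, while per-segment overhead grows as segments shrink. All constants below are illustrative, not measurements from the paper:

```python
def capture_delay_ms(frames_per_segment, fps=16.5, per_segment_overhead_ms=40.0):
    """Rough capture delay: wait to fill a segment, plus fixed per-segment cost."""
    buffering = frames_per_segment / fps * 1000.0
    return buffering + per_segment_overhead_ms

for n in (4, 8, 16):
    print(n, round(capture_delay_ms(n)), "ms")
```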
Publication Details
  • Interactive Tabletops and Surfaces (ITS) 2013
  • Oct 6, 2013

Abstract

The expressiveness of touch input can be increased by detecting additional finger pose information at the point of touch such as finger rotation and tilt. PointPose is a prototype that performs finger pose estimation at the location of touch using a short-range depth sensor viewing the touch screen of a mobile device. We present an algorithm that extracts finger rotation and tilt from a point cloud generated by a depth sensor oriented towards the device's touchscreen. The results of two user studies we conducted show that finger pose information can be extracted reliably using our proposed method. We show this for controlling rotation and tilt axes separately and also for combined input tasks using both axes. With the exception of the depth sensor, which is mounted directly on the mobile device, our approach does not require complex external tracking hardware, and, furthermore, external computation is unnecessary as the finger pose extraction algorithm can run directly on the mobile device. This makes PointPose ideal for prototyping and developing novel mobile user interfaces that use finger pose estimation.
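One hedged way to estimate rotation and tilt from a finger point cloud, not necessarily PointPose's exact estimator, is to take the cloud's principal axis via SVD and read the two angles off it:

```python
import numpy as np

def finger_pose(points):
    """points: (N, 3) finger point cloud; the touchscreen is the plane z = 0.
    Returns (rotation, tilt) in degrees."""
    centered = points - points.mean(axis=0)
    # Principal axis = right singular vector with the largest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    if axis[2] < 0:                       # orient the axis away from the screen
        axis = -axis
    rotation = np.degrees(np.arctan2(axis[1], axis[0]))            # in-plane rotation
    tilt = np.degrees(np.arcsin(axis[2] / np.linalg.norm(axis)))   # elevation angle
    return rotation, tilt
```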
Publication Details
  • CBDAR 2013
  • Aug 23, 2013

Abstract

Capturing book images is more convenient with a mobile phone camera than with more specialized flat-bed scanners or 3D capture devices. We built an application for the iPhone 4S that captures a sequence of high-resolution (8 MP) images of a page spread as the user sweeps the device across the book. To do the 3D dewarping, we implemented two algorithms: optical flow (OF) and structure from motion (SfM). Making further use of the image sequence, we examined the potential of multi-frame OCR. Preliminary evaluation on a small set of data shows that OF and SfM had comparable OCR performance for both single-frame and multi-frame techniques, and that multi-frame was substantially better than single-frame. The computation time was much less for OF than for SfM.
2012
Publication Details
  • DAS 2012
  • Mar 27, 2012

Abstract

This paper describes a system for capturing images of a book with a 3D stereo camera which performs dewarping to produce output images that are flattened. A Fujifilm consumer grade 3D camera (FinePix W3) provides a highly mobile and low cost 3D capture device. Applying standard computer vision algorithms, the camera is calibrated and the captured images are stereo rectified. Due to technical limitations, the resulting point cloud has defects such as splotches and noise, which make it hard to recover the precise 3D locations of the points on the book pages. We address this problem by computing curve profiles of the depth map and using them to build a cylinder model of the pages. We then generate a mesh M1 on the source image and project this into a mesh M2 on the cylinder model in virtual space. Finally, the mesh M2 is flattened and the pixels in M1 are interpolated and rendered via M2 onto the output image. We have implemented a prototype of the system and report on some preliminary evaluation results.
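The flattening step can be illustrated in one dimension: given a depth profile along a row of the curled page, cumulative arc length yields the flattened x-coordinate of each source column, which is what the mesh projection effectively computes. The parabolic profile below is illustrative, standing in for the smoothed stereo depth map:

```python
import numpy as np

def flatten_profile(z):
    """z: depth values sampled at unit-spaced columns along one page row.
    Returns the flattened x coordinate of each column (cumulative arc length)."""
    dz = np.diff(z)
    seg = np.sqrt(1.0 + dz * dz)          # arc length of each unit step
    return np.concatenate([[0.0], np.cumsum(seg)])

z = 0.002 * (np.arange(100) - 50) ** 2    # parabolic page curl (illustrative)
x_flat = flatten_profile(z)
print(x_flat[-1])                          # flattened width exceeds 99 pixel widths
```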
2011
Publication Details
  • CHI 2011
  • May 7, 2011

Abstract

For document visualization, folding techniques provide a focus-plus-context approach with fairly high legibility on flat sections. To enable richer interaction, we explore the design space of multi-touch document folding. We discuss several design considerations for simple modeless gesturing and compatibility with standard Drag and Pinch gestures, and categorize gesture models along the characteristics of Symmetric/Asymmetric and Sequential/Parallel, which yields three gesture models. We built a prototype document workspace application that integrates folding and standard gestures, and a prototype for experimenting with the gesture models. A user study was conducted to compare the three models and to analyze the factors of fold direction, target symmetry, and target tolerance in user performance of folding a document to a specific shape. Our results indicate that all three factors were significant for task times, and parallelism was greater for symmetric targets.
2010
Publication Details
  • ACM Multimedia
  • Oct 25, 2010

Abstract

FACT is an interactive paper system for fine-grained interaction with documents across the boundary between paper and computers. It consists of a small camera-projector unit, a laptop, and ordinary paper documents. With the camera-projector unit pointing to a paper document, the system allows a user to issue pen gestures on the paper document for selecting fine-grained content and applying various digital functions. For example, the user can choose individual words, symbols, figures, and arbitrary regions for keyword search, copy and paste, web search, and remote sharing. FACT thus enables a computer-like user experience on paper. This paper interaction can be integrated with laptop interaction for cross-media manipulations on multiple documents and views. We present the infrastructure, supporting techniques and interaction design, and demonstrate the feasibility via a quantitative experiment. We also propose applications such as document manipulation, map navigation and remote collaboration.
Publication Details
  • ACM DocEng 2010
  • Sep 21, 2010

Abstract

We present a method for picture detection in document page images, which can come from scanned or camera images, or rendered from electronic file formats. Our method uses OCR to separate out the text and applies the Normalized Cuts algorithm to cluster the non-text pixels into picture regions. A refinement step uses the captions found in the OCR text to deduce how many pictures are in a picture region, thereby correcting for under- and over-segmentation. A performance evaluation scheme is applied which takes into account the detection quality and fragmentation quality. We benchmark our method against the ABBYY application on page images from conference papers.
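scikit-learn's SpectralClustering offers a normalized-cuts-style stand-in for sketching the clustering step on non-text pixels; the real system builds its graph and refinement differently:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_picture_pixels(mask, n_pictures=2, sample=500):
    """mask: boolean (H, W) array of non-text, non-background pixels.
    Returns sampled pixel coordinates and a cluster label per pixel."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    if len(pts) > sample:                      # spectral clustering scales poorly
        idx = np.random.default_rng(0).choice(len(pts), sample, replace=False)
        pts = pts[idx]
    labels = SpectralClustering(n_clusters=n_pictures,
                                affinity="nearest_neighbors",
                                n_neighbors=10,
                                random_state=0).fit_predict(pts)
    return pts, labels
```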

Seamless Document Handling

Publication Details
  • Fuji Xerox Technical Report, No.19, 2010, pp. 57-65.
  • Jan 12, 2010

Abstract

The current trend toward high-performance mobile networks and increasingly sophisticated mobile devices has fostered the growth of mobile workers. In mobile environments, an urgent need exists for handling documents using a mobile phone, especially for browsing documents and viewing Rich Contents created on computers. This paper describes Seamless Document Handling, which is a technology for viewing electronic documents and Rich Contents on the small screen of a mobile phone. To enhance operability and readability, we devised a method of scrolling documents efficiently by applying document image processing technology, and designed a novel user interface with a pan-and-zoom technique. We conducted on-site observations to test usability of the prototype, and gained insights difficult to acquire in a lab that led to improved functions in the prototype.
2009
Publication Details
  • Pervasive 2009
  • May 11, 2009

Abstract

Recorded presentations are difficult to watch on a mobile phone because of the small screen, and even more challenging when the user is traveling or commuting. This demo shows an application designed for viewing presentations in a mobile situation, and describes the design process that involved on-site observation and informal user testing at our lab. The system generates a user-controllable movie by capturing a slide presentation, extracting active regions of interest using cues from the presenter, and creating pan-and-zoom effects to direct the active regions within a small screen. During playback, the user can simply watch the movie in automatic mode using a minimal amount of effort to operate the application. When more flexible control is needed, the user can switch into manual mode to temporarily focus on specific regions of interest.