Full and Short Papers

The following full and short papers were selected through a double-blind peer-review process. At the TPC meeting, held on March 22–23 in Chicago, USA, 29% of all submitted papers were accepted, each decision based on at least three reviews and one meta-review by an Associate Chair.

Full papers are allotted a 20-minute presentation slot (plus Q&A) at the conference; short papers are allotted a 10-minute slot (plus Q&A). The final program with the exact timing of the presentations will be made available in the program details.



Rivulet: Exploring Participation in Live Events through Multi-Stream Experiences

William A. Hamilton – Interface Ecology Lab @ Texas A&M University, College Station, TX, United States
John Tang – Microsoft Research, Redmond, Washington, United States
Gina Venolia – Microsoft Research, Redmond, Washington, United States
Kori Inkpen – Microsoft Research, Redmond, Washington, United States
Jakob Zillner – VRVis Research Center, Vienna, Austria
Derek Huang – Microsoft Research, Redmond, Washington, United States

Abstract
Live streaming has recently emerged as a growing form of participatory social media. While current live streaming practice focuses on single-stream experiences, there are increasing instances of events covered by multiple live streams. We present the design and evaluation of Rivulet, an end-to-end mobile live streaming system designed to support participatory multi-stream experiences. Rivulet affords watching multiple live streams simultaneously and incorporates the existing feedback mechanisms of text chat and hearts alongside a novel push-to-talk audio modality. By recruiting viewers through Mechanical Turk, we were able to conduct a study of Rivulet at scale. We found that Rivulet afforded new, engaging experiences for participants and led to an impromptu sense of community.

 

Understanding Video Rewatching Experiences

Frank Bentley – Yahoo, Sunnyvale, California, United States
Janet Murray – Graduate Program in Digital Media, Georgia Tech, Atlanta, GA, USA

Abstract
New video platforms have enabled a wide variety of opportunities for rewatching video content. From streaming sites such as Netflix, Hulu, and HBO Now, to the proliferation of syndicated content on cable and satellite television, to new streaming devices for the home such as Roku and Apple TV, there are countless ways that people can rewatch movies and television shows. But what are people doing? We set out to understand current rewatching practices across a variety of devices and services. Through an online, open-ended survey of 150 diverse people and in-depth, in-person interviews with 10 participants, we explore current rewatching behaviors. We quantify the types of content that are being rewatched as well as qualitatively explore the reasons and contexts behind rewatching. We conclude with key implications for the design of new video systems to promote rewatching behaviors.

 

Uncovering the Underlying Factors of Smart TV UX over Time: A Multi-study, Mixed-method Approach

Jincheul Jang – Knowledge Service Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Dapeng Zhao – Knowledge Service Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Woneui Hong – Knowledge Service Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Youkyoung Park – Knowledge Service Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Mun Yi – Knowledge Service Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Abstract
The objective of this research is to explore and identify Smart TV user experience (UX) factors over different time periods, employing multiple methods to overcome the weaknesses of a single-study approach. To identify the effect of contextual dimensions on the Smart TV UX, we conducted empirical studies using think-aloud and diary methods under two usage conditions: in the laboratory and in real life in the participants’ residences. The factors identified through each study were integrated into a single set and further refined through peer review, resulting in a final set of 19 UX factors. Metrics for these 19 UX factors were generated and used in an online survey in which over 300 Smart TV users participated. The empirical evidence from each study suggests that the UX factors vary with respect to product temporality. The findings indicate practical implications for Smart TV manufacturers, marketing managers, application developers, and service providers.

 

Mining Subtitles for Real-Time Content Generation for Second-Screen Applications

Tilman Dingler – VIS, University of Stuttgart, Stuttgart, Germany
Johannes Knittel – University of Stuttgart, Stuttgart, Germany
Albrecht Schmidt – VIS, University of Stuttgart, Stuttgart, Germany

Abstract
Using mobile devices while watching TV is becoming increasingly common. Most of the so-called second-screen apps provide additional information and services for a specific TV program. App content is mostly manually curated by the program or app publishers. In this paper we present an approach for automatically extracting keywords from subtitles in order to retrieve and provide highly relevant additional program content. Over the course of 4 months we recorded more than 45,000 hours of TV shows, on which we based an entity linking algorithm to extract relevant keywords and automatically trigger Wikipedia look-ups. Our system includes a second-screen app which proactively displays this content according to the time and position in the current TV show. We then conducted a user study with 30 people investigating the relationship between app usage while watching documentaries and effects on comprehension, recall, and subjective experience. We confirmed user distraction while using the app, but noticed an increase in subjectively reported comprehension compared to when users triggered web searches via a smartphone browser. The content extracted, linked, and proactively presented by our system turned out to be highly relevant.
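As a rough, illustrative sketch only (not taken from the paper), subtitle-driven content look-up of this kind might be prototyped as below; the capitalized-phrase heuristic and the use of the public MediaWiki opensearch endpoint are assumptions standing in for the authors' entity linking pipeline.

```python
import re
import requests

# Public MediaWiki search endpoint (assumed stand-in for the paper's Wikipedia look-ups).
WIKI_API = "https://en.wikipedia.org/w/api.php"

def candidate_entities(subtitle_text):
    """Very naive stand-in for entity linking: harvest capitalized phrases as candidates."""
    # Real entity linking disambiguates candidates against a knowledge base;
    # here we only collect runs of capitalized words from the subtitle line.
    return set(re.findall(r"[A-Z][\w'-]+(?:\s+[A-Z][\w'-]+)*", subtitle_text))

def wikipedia_lookup(term, limit=1):
    """Query Wikipedia's opensearch API for a candidate term, returning (title, url) pairs."""
    params = {"action": "opensearch", "search": term, "limit": limit, "format": "json"}
    response = requests.get(WIKI_API, params=params).json()
    titles, _descriptions, urls = response[1], response[2], response[3]
    return list(zip(titles, urls))

if __name__ == "__main__":
    line = "Churchill met Roosevelt aboard a ship near Newfoundland."
    for term in candidate_entities(line):
        print(term, wikipedia_lookup(term))
```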

 

I Kind Of Had An Avatar Switch: The Role Of The Self In Engagement With An Interactive TV Drama

Allie Johns – Psychology, University of Salford, Salford, United Kingdom
Adam Galpin – Psychology, University of Salford, Salford, United Kingdom
Joanne Meredith – Psychology, University of Salford, Salford, United Kingdom
Maxine Glancy – Research & Development, BBC, Salford, Greater Manchester, United Kingdom

Abstract
This paper reports results from a study which examined viewers’ cognitive and affective responses to an interactive TV drama. Ten participants were videoed interacting with ‘Our World War’ [BBC 2014], and then interviewed about their experience using the video playback as a retrospective prompt. An interpretative framework was designed to reveal themes of engagement and to guide questions and analysis. We report findings relating to five themes of engagement: cognitive, affective, perspective taking, competence and autonomy, and transportation. Our data add to the existing literature on interactive stories by highlighting the pivotal role of the self in engaging with interactive drama, with self-reflection emerging within each theme. We conclude that two experiential states drive engagement: a transported experience, and one in which self-reflection limits transportation.

 

GameBridge: Converging Toward a Transmedia Storytelling Experience through Gameplay

Rachel Miles – Digital Media, Georgia Institute of Technology, Atlanta, GA, United States
Arielle Cason – Digital Media / eTV, Georgia Institute of Technology, Atlanta, GA, United States
Larry Chan – Human-Computer Interaction, Georgia Tech, Atlanta, Georgia, United States
Jing Li – Technology Production Center, China Central Television, Beijing, Beijing, China
Ryan McDonnell – Experimental Television Lab, Georgia Institute of Technology, Atlanta, Georgia, United States
Janet Murray – Experimental Television Lab, Georgia Institute of Technology, Atlanta, Georgia, United States
Zixuan Wang – ETV lab, Georgia Institute of Technology, Atlanta, Georgia, United States

Abstract
Transmedia storytelling enables a narrative to traverse various media platforms in order to create a richer storyworld. To achieve this goal, our group envisioned a product called GameBridge, which builds upon the concept of transmedia storytelling by implementing a cross-platform narrative. A primary focus of GameBridge is to explore the potential of interactive narrative to provide continuous additive rewards throughout a television season. For our prototype, we decided to take the television show Game of Thrones and the corresponding book series, A Song of Ice and Fire, and create a game using content from both media to form our own storyline. By using both the television show and the book series, GameBridge creates a convergence point between the two media and allows the interactor to have agency over the story through gameplay. In the future, this model could be recreated with any storyworld that is told through various media, including movies.

 

Enabling Frame-Accurate Synchronised Companion Screen Experiences

Vinoba Vinayagamoorthy – British Broadcasting Corporation, London, Greater London, United Kingdom
Rajiv Ramdhany – British Broadcasting Corporation, London, Greater London, United Kingdom
Matt Hammond – British Broadcasting Corporation, London, Greater London, United Kingdom

Abstract
This paper describes the development and implementation of a new open communication standard for use between Internet-connected TVs and companion screens over the home network. Content providers know that improving Internet connectivity and the prevalence of personal mobile devices are encouraging our audiences to seek more interactive experiences across multiple screens. In order to deliver a coherent, integrated user experience, with content on all screens presented according to a common timeline, the application on the companion device needs to discover ‘what is being shown on the TV’ and ‘what the timeline position is’. DVB-CSS provides a standardised way to enable this synchronisation between the TV and any personal device on the home network. We describe its development, use cases and early prototype implementation.
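DVB-CSS defines its own message formats, including a wall-clock synchronisation exchange, which are not reproduced here; purely as an illustrative sketch, the snippet below shows the generic NTP-style round-trip calculation a companion device could use to estimate its offset from a TV's clock. The send_request hook is a hypothetical placeholder for the actual network exchange and is not part of the standard.

```python
import time

def estimate_clock_offset(send_request, now=time.monotonic):
    """NTP-style estimate of the offset between a companion device and a TV clock.

    send_request is a placeholder for the network exchange: it must return
    (t2, t3), the TV's receive and reply timestamps. The real DVB-CSS wall-clock
    protocol carries these values in its own message format, not modelled here.
    """
    t1 = now()               # companion: request sent
    t2, t3 = send_request()  # TV: request received (t2), response sent (t3)
    t4 = now()               # companion: response received
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # estimated TV clock minus companion clock
    round_trip = (t4 - t1) - (t3 - t2)      # network delay, bounding the estimate's error
    return offset, round_trip
```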

 

Design Guidelines for Notifications on Smart TVs

Dominik Weber – Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart, Germany
Sven Mayer – Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart, Germany
Alexandra Voit – Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart, Germany
Rodrigo Ventura Fierro – Institute for Visualization and Interactive Systems, University of Stuttgart, Stuttgart, Germany
Niels Henze – VIS, University of Stuttgart, Stuttgart, Germany

Abstract
Notifications are among the core mechanisms of most smart devices. Smartphones, smartwatches, tablets and smart glasses all provide similar means to notify the user. For smart TVs, however, no standard notification mechanism has been established. Smart TVs are unlike other smart devices because they are used by multiple people – often at the same time. It is unclear how notifications on smart TVs should be designed and which information users need. From a set of focus groups we derive a design space for notifications on smart TVs. By further studying selected design alternatives in an online survey and a lab study, we show, for example, that users demand different information when they are watching TV with others and that privacy is a major concern. We derive corresponding design guidelines for notifications on smart TVs that can be used by developers to gain the user’s attention in a meaningful way.

 

Connecting Living Rooms: An Experiment In Orchestrated Video Communication

Manolis Falelakis – Goldsmiths, University of London, London, United Kingdom
Marian Ursu – Department of Theatre, Film and Television, University of York, York, United Kingdom
Rene Kaiser – Institute for Information and Communication Technologies, JOANNEUM RESEARCH, Graz, Austria
Erik Geelhoed – Falmouth University, Cornwall, United Kingdom
Michael Frantzis – Goldsmiths, University of London, London, United Kingdom

Abstract
Consumer live video communication is becoming commonplace in our everyday lives, but current systems are still rather limited in their ability to support natural communication in more complex interaction contexts. What new features might the next generation of live video communication systems provide to support more complex contexts and more natural communication? This paper suggests one: “orchestration”, i.e. the ability to automatically and in real time (re)configure the communication system to the needs of the interaction context. The inspiration for communication orchestration is television production – mixing views from different cameras and camera reframing. This paper reports a specific study of orchestration carried out in the social setting of a group of friends communicating from three separate living rooms through television screens and multiple cameras. The views mixed by orchestration consisted of midshots of the participants and wide shots of the rooms. The orchestrated experience was evaluated against a static, split-screen connection via a questionnaire, analysis of automatic logs, and interviews. In this case study, orchestration was identified as providing for more intimate conversations, but, somewhat surprisingly, the static solution emerged as better for conveying group awareness. In addition to this specific result, the paper also provides a model for the experimental investigation of automatically and dynamically configured video communication systems.

 

Confessions of A “Guilty” Couch Potato: Understanding and Using Context to Optimize Binge-watching Behavior

Dimph de Feijter – Academy for Digital Entertainment, NHTV Breda University of Applied Sciences, Breda, Noord-Brabant, Netherlands
Vassilis-Javed Khan – Industrial Design Department, Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands
Marnix van Gisbergen – Academy for Digital Entertainment, NHTV Breda University of Applied Sciences, Breda, Noord-Brabant, Netherlands

Abstract
Viewers increasingly watch television content whenever they want, using devices they prefer, which has stimulated ‘binge-watching’ (consecutive viewing of television programs). Although binge-watching and health concerns have been studied before, the context in which binge-watching takes place, and the possibilities of using that context to optimize binge-watching behavior, have not. An in-situ, smartphone monitoring survey among Dutch binge-watchers was used to reveal context factors related to binge-watching. Results indicate that binge-watching is a solitary activity that occurs in a digitally social, active, and positive context. Time spent (number of episodes watched) correlates with the amount of free time and plays an important role in the effect of binge-watching on emotional well-being. Considering the difficulty viewers have in creating an optimal viewing experience, these context factors are used as a framework for designing and promoting a recommendation tool for TV streaming services to create a more optimal binge-watching experience.

 

Analysis of User Behavior with a Multicamera HbbTV App in a Live Sports Event

Marc Aguilar – Living Labs Unit, i2CAT Foundation, Barcelona, Spain
Sergi Fernández – Media Internet Unit, i2CAT Foundation, Barcelona, Spain
David Cassany – Media Internet Unit, i2CAT Foundation, Barcelona, Spain

Abstract
This paper describes the results of a large-scale live pilot test of an HbbTV multicamera application. In this pilot test, carried out during an association football match, the interactions of 6203 user devices with the application were logged. An exploratory statistical analysis was performed on the dataset to better understand how users behaved within the application. The analysis yielded conclusions that can be useful to those seeking to build a successful multicamera service, with insights on the suitability of program genres, multicamera content selection, audience segmentation, and the structure of data stream traffic.

 

Who Has the Force? Solving Conflicts for Multi User Mid-Air Gestures for TVs

Katrin Plaumann – Institute of Media Informatics, Ulm University, Ulm, Baden-Württemberg, Germany
David Lehr – Media Informatics, Ulm University, Ulm, Baden-Württemberg, Germany
Enrico Rukzio – Institute of Media Informatics, Ulm, Germany

Abstract
In recent years, mid-air gestures have become a feasible input modality for controlling and manipulating digital content. In the case of controlling TVs, mid-air gestures eliminate the need to hold remote controls, which quite often are not at hand, need to be searched for before use, or are dirty. Thus, mid-air gestures quicken interactions. However, the absence of a single controller and the nature of mid-air gesture detection also pose a disadvantage: gestures performed by multiple viewers may result in conflicts. In this paper, we propose an interaction technique that resolves the conflicts arising in such multi-viewer scenarios. We conducted a survey with 64 participants, asking them about their TV viewing habits, the conflicts they had experienced, and their opinions on conflict-resolution strategies. Based on the survey’s results, we present a prototype for multi-viewer gestural control of TVs that resolves possible conflicts.