A Mixed Reality Space for Tangible User Interaction

Martin Fischbach*, Christian Treffs*, David Cyborra†, Alexander Strehler†, Thomas Wedler†, Gerd Bruder†, Andreas Pusch†, Marc E. Latoschik*, Frank Steinicke†

* Human-Computer Interaction, Universität Würzburg, Am Hubland, 97074 Würzburg, Tel.: +49 931 31 86314, martin.fischbach@uni-wuerzburg.de
† Immersive Media Group, Universität Würzburg, Oswald-Külpe-Weg 82, 97074 Würzburg, Tel.: +49 931 31 85868, gerd.bruder@uni-wuerzburg.de

Abstract: Recent developments in the field of semi-immersive display technologies provide new possibilities for engaging users in interactive three-dimensional virtual environments (VEs). For instance, combining low-cost tracking systems (such as the Microsoft Kinect) and multi-touch interfaces enables inexpensive and easily maintainable interactive setups. The goal of this work is to bring together virtual as well as real objects on a stereoscopic multi-touch enabled tabletop surface. Therefore, we present a prototypical implementation of such a mixed reality (MR) space for tangible interaction by extending the smARTbox [FLBS12]. The smARTbox is a responsive touch-enabled stereoscopic out-of-the-box system that is able to track users and objects above as well as on the surface. We describe the prototypical hard- and software setup which extends this system to an MR space, and highlight design challenges for several application examples.

Keywords: Multi-touch, tangible interaction, stereoscopic display

1 Introduction

Research on multi-touch display interaction has mainly focused on monoscopically rendered 2D or 3D visualizations, whereas the challenges that occur when the virtual content is displayed stereoscopically in 3D space, but interaction remains constrained to a 2D surface, have rarely been considered so far. Over the last couple of years, stereoscopic feedback has received much attention due to the rise of 3D motion pictures and 3D consumer technology (e.g., TVs, video game consoles, notebooks). The first commercial hardware systems supporting stereoscopic display and (multi-)touch interaction have recently been launched (iliGHT 3D Touch, http://ilight-3i.com/en; Nintendo 3DS, http://www.nintendo.com/3ds; etc.), and interdisciplinary research projects (iMUTS, http://imuts.uni-muenster.de; InSTInCT, http://anr-instinct.cap-sciences.net/?q=node/34; etc.) provide first insights into how interaction with stereoscopic content displayed on a two-dimensional touch surface has to be designed.

More and more hardware solutions for the entertainment market allow the sensing of human gestures and postures performed both on touch-sensitive surfaces and in 3D space (e.g., Leap Motion, http://www.leapmotion.com; Microsoft Kinect, http://www.microsoft.com/en-us/kinectforwindows). Although these technologies provide enormous potential for a variety of novel applications and interaction concepts, merely first steps have been taken in the area of multi-touch enabled stereoscopic surfaces. Such setups are still rarely used because certain aspects hinder a wide acceptance [SBK+12]. For instance, the fact that stereoscopic content can be displayed behind or in front of the projection screen, whereas a touch can only be detected on (or very close to) the surface, induces a visuo-haptic conflict [SSK+09].
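To make this conflict concrete, the following minimal Python sketch (an illustration, not part of the original smARTbox implementation; the eye positions, interpupillary distance, and object coordinates are assumed values) computes the two on-surface projections of a single stereoscopically displayed point, one per eye:

import numpy as np

def project_to_surface(eye, point):
    # Intersect the ray from the eye through the 3D point with the table
    # plane z = 0 (z pointing up; z > 0 means "above the surface").
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    t = eye[2] / (eye[2] - point[2])
    return (eye + t * (point - eye))[:2]

ipd = 0.065                               # assumed interpupillary distance (m)
head = np.array([0.0, -0.3, 0.6])         # assumed head position above the table
left_eye = head + np.array([-ipd / 2, 0.0, 0.0])
right_eye = head + np.array([+ipd / 2, 0.0, 0.0])
obj = np.array([0.0, 0.0, 0.10])          # virtual point 10 cm above the table

print(project_to_surface(left_eye, obj))  # left-eye projection on the surface
print(project_to_surface(right_eye, obj)) # right-eye projection on the surface

The two projections coincide only for points lying exactly on the surface; for any point above or below it, a single touch cannot match both, which is the visuo-haptic conflict described above.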
Another problem comes from accommodation-vergence conflicts, which are present in most stereoscopic display environments and become even more important when users have to switch their current visual focus between their hands and the stereoscopic objects [VSBH11].

For a long time, human beings have shaped their physical environment to materialize concepts of form and function. Despite the diversity of existing interaction devices, the human hand remains by far the most natural and fundamental interface for interacting with real-world objects as well as with virtual objects. Seamlessly merging the real world with the virtual worlds created within a computer is a challenging topic in current research. The result of superimposing computer-generated images onto the real world is called augmented reality (AR), whereas the enhancement of virtual worlds using real-world data is called augmented virtuality (AV). The term mixed reality (MR) [MK94] encompasses both AR and AV. As pointed out by [MDG+95], the main issues of MR environments are consistency of geometry, time, and illumination. Therefore, one of the most important tasks is to match a superimposed object's location with that of the corresponding real-world object. Likewise, reflections, light sources, and shadows must match in both worlds to obtain consistent global illumination; hence, movements in the two worlds must be synchronized.

MR systems have been the subject of growing interest among designers due to the dual need of users to both benefit from computing power and interact with the real world. A first attempt to satisfy this requirement consisted in augmenting the real world with digital information: the central rationale for AR. Another approach is to make interaction with computer-generated content more natural by means of a transfer of objects and actions. Examples include input modalities [NC95] based on real objects, such as Fitzmaurice et al.'s bricks [FIB95] or Ishii & Ullmer's phicons [IU97]. Ishii has described this interaction paradigm as the Tangible User Interface (TUI). When a tangible is placed on a table that also acts as a display, various interactions can be performed. For instance, animated symbols appear, such as waveforms, circles, circular grids, or sweeping lines. While some symbols merely show what the particular tangible is doing, others can be used with the fingertip to control the respective module.

In this paper, we present how the smARTbox [FLBS12] can be used for tangible interaction as a mixed reality (MR) design space. The smARTbox is a responsive touch-enabled stereoscopic tabletop system. For this work, we augmented this setup to support tangible interaction with physical objects based on fiducial tracking. These physical objects serve as tangible proxies for corresponding virtual objects, which have been transferred to the virtual world by a simple 3D reconstruction process (a sketch of such a step is given at the end of this section).

The overall goal is thus to bring together virtual and real objects using a stereoscopic multi-touch-enabled tabletop device to create a novel tangible mixed reality design space. The contributions of this paper can be summarized as:

• the description and implementation of a first stereoscopic multi-touch-enabled tabletop setup supporting fiducial tracking,
• the proposition of a tangible mixed reality design space, and
• the development of interaction concepts for 3D design and modeling tasks.
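As a rough illustration of what such a simple 3D reconstruction step might look like, the sketch below back-projects a depth image into a 3D point cloud using a pinhole camera model. The camera intrinsics and the synthetic depth frame are placeholder assumptions, not the parameters of the actual smARTbox setup:

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth image (in metres) into an N x 3 point cloud.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]             # drop invalid (zero-depth) pixels

# Toy depth frame: a flat table at 1.0 m with a tangible block at 0.9 m.
depth = np.full((480, 640), 1.0)
depth[200:280, 280:360] = 0.9
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                        # one 3D point per valid pixel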
2 Related Work

Many approaches for extending multi-touch interaction techniques to 3D applications with monoscopic rendering have been proposed. For instance, Hancock et al. [HCC07] have presented the concept of shallow-depth 3D, i.e., 3D interaction with limited depth. They have developed interaction techniques for 3D object selection and manipulation. Extending the interaction space beyond the touch surface has been investigated by Hilliges et al. [HIW+09]. They have tested two depth-sensing approaches to enrich multi-touch interaction on a tabletop setup with monoscopic projection. Grossman and Wigdor [GW07] provided an extensive review of the existing work on interactive surfaces and developed a taxonomy for classification of this research. This framework takes into account the perceived and the actual display space, the input space, and the physical properties of an interactive surface.

Schöning et al. [SSK+09] have discussed the challenges of direct touch interaction with stereoscopically rendered scenes, and first prototypes have been introduced since then, as described in Section 1. Interaction with objects with negative parallax on a multi-touch tabletop setup has further been addressed by Benko et al. [BF07] and Strothoff et al. [SVH11]. As mentioned above, problems arise in such setups because stereoscopic content can be displayed behind or in front of the projection screen, whereas a touch can only be detected on (or very close to) the surface [SSK+09]. The mapping between an on-surface touch point and the corresponding point on a virtual object is straightforward in the monoscopic case, but with stereoscopic projection this mapping introduces problems [TS11]. In particular, since there is a different projection for each eye, the question is where users would touch the surface when they try to touch a stereoscopic object. One might expect touches anywhere around the two projections of a stereoscopically displayed object. What Valkov et al. found is that users actually touch an intermediate point located between both projections, with a significant offset toward the user's dominant eye [VSBH11]. However, as shown by previous work, users are insensitive to discrepancies between visual penetration and touch feedback when they try to touch stereoscopic objects displayed close to the surface [VSB+10]. In particular, it has been found that sensitivity to these visuo-tactile conflicts is lower if objects are displayed with negative parallax, e.g., on top of or above a tabletop surface.
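A simple heuristic suggested by this observation, sketched below, predicts the expected on-surface touch location for a stereoscopic object as a blend of its two per-eye projections, biased toward the dominant eye. The weighting factor is an assumed illustrative value, not a number reported in the cited studies:

import numpy as np

def expected_touch(p_dominant_eye, p_other_eye, w=0.7):
    # Blend the two on-surface projections of one stereoscopic object,
    # shifted toward the projection of the user's dominant eye.
    p_dominant_eye = np.asarray(p_dominant_eye, float)
    p_other_eye = np.asarray(p_other_eye, float)
    return w * p_dominant_eye + (1.0 - w) * p_other_eye

# Per-eye projections of one object (e.g., computed as in the earlier sketch).
p_right = np.array([-0.0065, 0.0])        # dominant (right) eye projection
p_left = np.array([0.0065, 0.0])
print(expected_touch(p_right, p_left))    # between both, closer to p_right

Such a prediction could, for instance, serve as the pick point when deciding which stereoscopic object a given surface touch is intended for.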
Research efforts as well as commercialization of tangible and multi-touch user interaction have recently seen exponential growth. For instance, Microsoft distributes the Windows-based platform Microsoft Surface (http://www.microsoft.com/surface). Besides multi-touch tracking of fingers, the platform supports the recognition of physical objects by their footprints, such as cell phones or digital cameras. The Tangible Media Lab led by Hiroshi Ishii has designed several interfaces that employ physical objects, surfaces, and spaces as tangible embodiments of digital information and processes (http://tangible.media.mit.edu/). AR-Jig, for example, is a handheld TUI for 3D digital modeling in an AR space [AI07]. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. Recompose has been proposed as a system for manipulating an actuated surface [LLDB11]. This actuated surface enables a number of interaction techniques exploiting the shared space of direct and gestural input. Another example of a prototype system for interactive construction and modification of 3D physical models is based on building blocks [MWC+12]. Similar to our system, theirs uses a depth-sensing camera for acquiring and tracking the physical models.

The combination of multi-touch technology and TUIs has been employed in various applications, two of which are highly relevant to the application scenario described in this paper. Urp (Urban Planning Workbench) [UI99] is one example of a TUI which uses miniature physical models of architectural buildings to configure and control an underlying urban simulation of shadow, light reflections, or wind flow. Another notable interactive installation is instant city (http://www.instantcity.ch/), which combines gaming, music, and architecture. The user can build 3D structures and set up a city with rectangular building blocks, which simultaneously results in the interactive assembly of musical fragments by different composers. However, as already mentioned above, none of the setups presented so far is capable of supporting multi-touch and tangible user input combined with stereoscopic display.

3 Tangible Mixed Reality Tabletop Setup

This section describes the system setup used for the implementation of the developed tangible user interface concepts. It is an extension of the smARTbox setup [FLBS12], combining five key features in the frame of a portable box: (1) stereoscopic visualization, (2) multi-touch detection, (3) fiducial tracking, (4) full-body user tracking, and (5) simple 3D reconstruction. In the following, a detailed walkthrough of the realization of these key features is presented; a brief sketch of the fiducial tracking step is shown below.
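As an illustration of the fiducial tracking feature (3), the following sketch maps a tracked marker's 2D pose on the surface to a 4x4 transform for its virtual proxy object. The pose values and table dimensions are assumptions; a real pipeline would receive such poses from a fiducial tracker (e.g., a reacTIVision/TUIO-style system) rather than from hard-coded values:

import numpy as np

def fiducial_to_transform(x_norm, y_norm, angle, table_w=1.0, table_h=0.75):
    # Map a fiducial pose (normalised surface coordinates, rotation in
    # radians) to a 4x4 transform placing its proxy on the table plane z = 0.
    x = (x_norm - 0.5) * table_w          # surface coords -> metres,
    y = (y_norm - 0.5) * table_h          # origin at the table centre
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0, x],
                     [s,  c, 0.0, y],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Example: a marker detected slightly right of centre, rotated by 30 degrees.
print(fiducial_to_transform(0.6, 0.5, np.radians(30.0)))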