About the Teleimmersion Project

Teleimmersion technology allows geographically distributed users to communicate and interact in real time through a shared virtual environment. Users are captured by a set of stereo (depth) cameras that digitize them into a point cloud or a 3D mesh, which can then be compressed and shared with remote locations. Because the capture is truly three-dimensional, the geometry of the real scene is preserved and mapped into the virtual environment. Research in teleimmersion thus combines 3D computer vision, collaborative virtual reality, and networking.
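
To make this pipeline concrete, below is a minimal sketch of the per-frame processing, from a depth map to a compressed point cloud sent over the network. The function names, camera parameters, and host address are hypothetical illustrations, not the project's actual code:

```python
# Minimal sketch of a teleimmersion capture pipeline (hypothetical API,
# not the project's actual code). Each stereo camera yields a depth map,
# which is back-projected into a 3D point cloud, compressed, and streamed.

import socket
import zlib

import numpy as np


def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # pinhole camera model
    y = (v - cy) * z / fy
    points = np.dstack([x, y, z]).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth


def send_frame(sock, points):
    """Compress a point cloud and send it with a length prefix."""
    payload = zlib.compress(points.astype(np.float32).tobytes())
    sock.sendall(len(payload).to_bytes(4, "big") + payload)


# Example: one synthetic 640x480 depth frame streamed to a remote site
# (the host name is a placeholder).
if __name__ == "__main__":
    depth = np.full((480, 640), 2.0)  # flat surface 2 m from the camera
    cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
    sock = socket.create_connection(("remote.example.org", 9000))
    send_frame(sock, cloud)
```

In the actual system, multiple camera clusters arranged around the user produce partial point clouds that are merged into the 360-degree view; the sketch shows a single camera for brevity.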

For the past eight years, with support from NSF, CITRIS, HP Labs, EADS, and the Mellon Foundation, we have been developing an end-to-end multimedia system that captures people in 3D with a 360-degree view and allows them to interact over the network in real time. Between 2005 and 2011 we demonstrated our teleimmersion technology in several different areas. Initially we experimented with geographically distributed dancers [1][2]. Our dance collaborators challenged us to integrate prerecorded dancers with the live performance of remote dancers. We also performed several remote experiments focused on long-distance network transmission, jointly with Prof. Nahrstedt’s teleimmersion group at the University of Illinois at Urbana-Champaign.

Next, we investigated the value of this technology for teaching and learning physical exercises, as opposed to simple two-dimensional video [3]. The study, conducted in collaboration with Prof. Bailenson of the Virtual Human Interaction Lab (VHIL) at Stanford University, showed that people were better able to recognize and learn movements through 3D immersive feedback than through regular video.

In collaboration with the Institute for Data Analysis and Visualization (IDAV) at the University of California, Davis, we have been developing software tools that incorporate our 3D teleimmersion technology for real-time interaction in collaborative virtual environments. In 2007 we performed a set of experiments with geographically distributed geoscientists in Berkeley, CA and Davis, CA, who were able to interact with volumetric data in real time while seeing each other in 3D space using a CAVE. The visualization was implemented with the open-source development toolkit Vrui [4]. Within this collaboration we also explored visualization of medical data for remote training and diagnosis. The project received the Innovations in Networking Award presented by the Corporation for Education Network Initiatives in California (CENIC).

Since 2010 we have been working jointly with Prof. Forte of the School of Social Sciences, Humanities and Arts at the University of California, Merced on a collaborative virtual environment for real-time interaction with 3D objects in archaeology, which incorporates the teleimmersion technology. The research project, supported by the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California, Berkeley, received the Best Paper Award at the 16th International Conference on Virtual Systems and Multimedia (VSMM 2010, Korea) [5].

Since 2009 we have collaborated with the European Aeronautic Defence and Space Company N.V. (EADS) to explore the use of teleimmersive technology in distributed design for manufacturing. The teleimmersive system has been set up at EADS Innovation Works in Newport, UK.

Other industry partners include HP Labs, where we have been assisting with calibration of multi-view and autostereoscopic display systems, applying the software developed for our teleimmersion system [6][7].
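
For background, the basic building block of such multi-camera calibration is pairwise stereo calibration from checkerboard observations; hierarchical approaches such as [6] chain pairwise estimates across the camera network. The sketch below uses OpenCV and hypothetical image file names, and is an illustration rather than the actual system code:

```python
# Minimal sketch of pairwise stereo calibration with OpenCV -- the basic
# building block chained across camera pairs in hierarchical multi-camera
# calibration. Checkerboard corners detected in both views recover the
# rotation R and translation T between the two cameras.

import cv2
import numpy as np

PATTERN = (9, 6)   # inner corners of the checkerboard
SQUARE = 0.025     # square size in meters

# 3D coordinates of the checkerboard corners in the board's own frame.
board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for i in range(20):  # image pairs of the board (placeholder file names)
    left = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, PATTERN)
    ok_r, corners_r = cv2.findChessboardCorners(right, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(board)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

# Estimate intrinsics per camera, then the extrinsics between the pair.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, left.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, right.shape[::-1], None, None)
rms, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, left.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print(f"stereo reprojection error: {rms:.3f} px")
```

Chaining the recovered (R, T) transforms through a common reference camera yields a consistent global coordinate frame for all cameras in the setup.
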
We are also collaborating with Dr. Gerald Friedland of the International Computer Science Institute (ICSI) in Berkeley in the area of audio and multimedia. In this collaboration we are exploring how sound can assist stereo reconstruction.

We are currently also beginning to apply the teleimmersion technology to the medical field, in the areas of remote training and rehabilitation. In an ongoing collaboration with Dr. Jay Han of the UC Davis Medical Center in Sacramento, we are setting up a teleimmersion facility with several stereo cameras. Our focus is accurate and reliable assessment of the upper extremity, which is critical because upper-extremity function affects essentially all activities of daily living, independence, and overall quality of life. In this application, a remote health coach instructs the patient during his or her exercises through the teleimmersion technology.

In addition to the above-mentioned collaborators, we have installed, or are in the process of installing, our teleimmersion technology at several other locations, including the UC Berkeley Dance Department, the University of Illinois at Urbana-Champaign (UIUC), Florida International University, the University of California Davis Medical Center (UCDMC), and the University of Texas at Dallas.

References
[1] K. Nahrstedt, R. Bajcsy, L. Wymore, G. Kurillo, K. Mezur, R. Sheppard, Z. Yang, W. Wu, “Symbiosis of Tele-immersive Environments with Creative Choreography”, Proceedings of the ACM Workshop on Supporting Creative Arts Beyond Dissemination, June 13-15, 2007, Washington, DC.
[2] Z. Yang, W. Wu, K. Nahrstedt, G. Kurillo, R. Bajcsy, “Enabling Multi-party 3D Teleimmersive Environments with ViewCast”, ACM Transactions on Multimedia Computing, Communications and Applications (TOMCCAP), vol. 6, 2010.
[3] J.N. Bailenson, K. Patel, A. Nielsen, R. Bajcsy, S. Sung, G. Kurillo, “The Effect of Interactivity on Learning Physical Actions in Virtual Reality”, Media Psychology, vol. 11, pp. 354-376, 2008.
[4] O. Kreylos, “Environment-Independent VR Development”, in: G. Bebis et al. (eds.), Advances in Visual Computing, ISVC 2008, Part I, LNCS 5358, pp. 901-912, 2008.
[5] M. Forte, G. Kurillo, “Cyberarcheology: Experimenting with Teleimmersive Archeology”, 16th International Conference on Virtual Systems and Multimedia (VSMM 2010), Oct. 20-23, 2010, Seoul, South Korea.
[6] G. Kurillo, Z. Li, R. Bajcsy, “Framework for Hierarchical Calibration of Multicamera Systems for Tele-immersion”, Proceedings of IMMERSCOM ’09, Berkeley, CA, 2009.
[7] Z. Li, H. Baker, G. Kurillo, R. Bajcsy, “Projective Epipolar Rectification for a Linear Multi-imager Array”, Proceedings of the 3DPVT’10 Conference, May 17-20, 2010, Paris.

Note: Visit the ‘Publications’ section for a more extensive list of references.