Introduction

For more information on projects related to human modeling and robotics, visit the new UC Berkeley HART Lab website.



Our research group is interested in computer vision, robotics, tele-immersive communications, and modeling of cyber-physical systems. Our ongoing research activities include human-robot cooperation, human activity recognition from multi-modal data, development of individualized musculoskeletal models, quantification of human performance, remote monitoring in health care and its privacy and security considerations, and modeling of driver interaction in semi-autonomous vehicles. The Tele-immersion group is led by Prof. Ruzena Bajcsy. The lab is located in Sutardja Dai Hall, Room #133.

Software

In this section we feature some of the software that has been developed in the lab and is publicly available.


Berkeley Telemonitoring Project

The Berkeley Telemonitoring Project is an open-source framework for building smartphone applications for remote health monitoring on the Android operating system. Please refer to the link below for more information.
Berkeley Telemonitoring: Privacy Aware Health-Monitoring

Camera Calibration Code

Two packages for the calibration of multiple cameras were developed in the past as part of the Tele-immersion project. The source code was released as open source under a BSD license. Note that the source code is provided as is, and minimal or no support can be expected. Please refer to the 'readme.txt' file inside the repositories for more information.

  • Intrinsic Camera Calibration: Package for calibrating a single camera, a pair, or multiple cameras that share a similar viewing space, using a checkerboard.
  • Extrinsic Camera Calibration: Package for obtaining the extrinsic calibration of multiple cameras that share the same workspace volume, using an LED-based approach*.

*G. Kurillo, Z. Li, R. Bajcsy, "Wide-Area External Multi-Camera Calibration Using Vision Graphs and Virtual Calibration Object," In Proceedings of the 2nd ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC '08), Stanford, CA, 2008.
Download OpenCV 2.2 with Visual Studio 2010 binaries here.
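The released packages predate current library versions, but the underlying camera model is standard: intrinsic calibration recovers the focal lengths, principal point, and distortion coefficients of a pinhole camera with radial distortion from checkerboard images. The sketch below illustrates that model in Python; all numeric values (board size, focal length, principal point) are illustrative and not taken from the packages.

```python
import numpy as np

def project_points(points_3d, K, dist, R=np.eye(3), t=np.zeros(3)):
    """Project 3D points through a pinhole camera with radial distortion.

    K    : 3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    dist : (k1, k2) radial distortion coefficients
    R, t : extrinsic rotation and translation (world -> camera)
    """
    pc = points_3d @ R.T + t                # transform into the camera frame
    xn = pc[:, :2] / pc[:, 2:3]             # normalized image coordinates
    r2 = np.sum(xn**2, axis=1, keepdims=True)
    k1, k2 = dist
    xd = xn * (1 + k1 * r2 + k2 * r2**2)    # apply radial distortion
    f = np.array([K[0, 0], K[1, 1]])
    c = np.array([K[0, 2], K[1, 2]])
    return xd * f + c                       # pixel coordinates

# A 9x6 checkerboard with 25 mm squares, 1 m in front of the camera
grid = np.array([[i * 0.025, j * 0.025, 1.0]
                 for j in range(6) for i in range(9)])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
uv = project_points(grid, K, dist=(0.0, 0.0))
```

Intrinsic calibration inverts this mapping: given detected checkerboard corners in several images, it solves for K and the distortion coefficients that best explain the observed pixel positions.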

Safety Guarantees for Interaction Between a Driver and an Autonomous Vehicle



Duration: 2012-now
Collaborators: Prof. Francesco Borrelli, Mechanical Engineering, UC Berkeley
Funding: NSF (#1239323), Hyundai
Researchers: Katherine Driggs Campbell





Goals:

  • To develop smarter active safety systems that rely on driver monitoring to predict how humans will behave
  • To collect data in a safe environment using a 4-axis motion simulator set up for human-in-the-loop driving experiments
  • To implement an autonomous framework in a heterogeneous environment, given the current infrastructure, vehicle sensors, and V2V communication technology



    Methods:
    High-level decision making draws on work in control theory and hybrid systems, communication, and artificial intelligence. Data-driven, probabilistic models are used to account for uncertainties and variations in driving styles.
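As a toy illustration of the data-driven, probabilistic modeling idea (not the lab's actual models or data), the snippet below infers driver intent from two hypothetical features, time headway and lateral offset, by fitting Gaussian class-conditional likelihoods and applying Bayes' rule. All training values are made up for illustration.

```python
import numpy as np

# Hypothetical training features per intent class: (time headway [s], lateral offset [m])
keep_lane   = np.array([[2.1, 0.05], [1.9, -0.10], [2.3, 0.00], [2.0, 0.08]])
lane_change = np.array([[0.9, 0.45], [1.1, 0.60], [0.8, 0.50], [1.0, 0.40]])

def fit_gaussian(X):
    """Fit a multivariate Gaussian (mean, regularized covariance) to the data."""
    return X.mean(axis=0), np.cov(X.T) + 1e-6 * np.eye(X.shape[1])

def log_likelihood(x, mu, cov):
    d = x - mu
    return -0.5 * (d @ np.linalg.solve(cov, d)
                   + np.log(np.linalg.det(cov))
                   + len(x) * np.log(2 * np.pi))

def p_lane_change(x, prior=0.5):
    """Posterior probability of lane-change intent given a feature vector."""
    ll_lc = log_likelihood(x, *fit_gaussian(lane_change)) + np.log(prior)
    ll_kl = log_likelihood(x, *fit_gaussian(keep_lane)) + np.log(1 - prior)
    m = max(ll_lc, ll_kl)                        # stabilize the exponentials
    e_lc, e_kl = np.exp(ll_lc - m), np.exp(ll_kl - m)
    return e_lc / (e_lc + e_kl)
```

A short headway combined with a growing lateral offset pushes the posterior toward lane change; the same machinery extends to richer feature sets and more intent classes.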



    Applications:
    Driver assistance systems (e.g. lane change assistance, lane keeping assistance), autonomous driving (learn driver’s preferences)



    Publications:

  • K. Driggs-Campbell, R. Bajcsy, Identifying Modes of Intent from Driver Behaviors in Dynamic Environments. In IEEE International Conference on Intelligent Transportation Systems (ITSC), September 2015.
  • V. Shia, Y. Gao, R. Vasudevan, K. Driggs-Campbell, T. Lin, F. Borrelli, R. Bajcsy, "Semi-Autonomous Vehicular Control Using Driver Modeling," IEEE Transactions on Intelligent Transportation Systems, To Appear.
  • D. Sadigh, K. Driggs-Campbell, A. Puggelli, W. Li, V. Shia, R. Bajcsy, A. Sangiovanni-Vincentelli, S.S. Sastry, S. A. Seshia. “Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior,” AAAI Spring Symposium on Formal Verification & Modeling in Human-Machine Systems 2014.
  • K. Driggs-Campbell, V. Shia, R. Vasudevan, R. Bajcsy, “Probabilistic Driver Models for Semiautonomous Vehicles,” in Digital Signal Processing for In-Vehicle Systems, 2013.
Human-Centered Modeling and Control of Cooperative Manipulation with Bimanual Robots



    Duration: 2014-2017
    Funding: NSF NRI (#1427260)
    Collaborators: Oussama Khatib (AI Lab, Stanford University)
    Researchers: Aaron Bestick, Robert Matthew, Ruzena Bajcsy, Gregorij Kurillo, Oussama Khatib (SU), Samir Menon (SU)





    Goals:

  • Enable improved control of robots providing direct physical assistance to humans
  • Create unified model of the human-robot coupled mechanical system
  • Predict intent of human operator based on physical cues



    Abstract:
    This proposal addresses the modeling and control aspects of human-robot interaction by considering constraints imposed by an individual's physiology. The project is motivated by the increasing demand for automation in unstructured environments that require high-level cognitive processing and complex decision-making, which cannot yet be fully automated. By taking a human-centric approach, data-driven musculoskeletal models are incorporated into the robot interaction model to account for differences between individuals.

    Each cooperative activity is divided into action primitives requiring different control strategies, while human intent is estimated from various sensors. The framework is based on the theory of hybrid systems, which provides provable safety and stability criteria. The outcome of this research will provide a methodology for safer and more reliable human-robot interaction and advance the state of the art in human movement analysis and control theory. The broader impacts of this research will be realized through new insights into the understanding of human intent and haptic cooperation, applicable to general human-machine interaction. With increasing interest in service robotics, safe and reliable interaction will be the key to the successful introduction of robots in human-occupied environments. The potential economic impact of robots engaged in services and manufacturing alongside humans is significant due to increased productivity and reduced costs. Another emerging area is rehabilitation and assistive robotics. The developed data-driven musculoskeletal models will also be applicable to the quantification of physical impairments and the estimation of muscular stress in healthcare and ergonomics. This interdisciplinary research provides excellent opportunities for undergraduate and graduate students to engage in analytical challenges, laboratory demonstrations of theoretical results, and experimental evaluations.



    Applications:
    Industrial robots in close contact with humans, robotic assistance in construction/other physical tasks, assistive devices for elderly/people with disabilities.



    Publications:

Individualized Musculoskeletal Modeling for Diagnosis, Rehabilitation, and Real-Time Feedback



    Duration: 2013-now
    Funding: NSF EAGER (#1354321)
    Collaborators: Dr. Jay Han, UC Davis Medical Center
    Researchers: Robert Matthew




    Goals:

  • Development of musculoskeletal models based on multimodal sensor measurements
  • Individualized musculoskeletal models that improve on generalized population-based models
  • Analysis of these models to detect muscle weakness and joint injuries



    Methods:
    Models of an individual can be created using non-invasive sensors such as motion capture, inertial measurement units, and electromyography.
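As a minimal sketch of the individualization idea, subject-specific limb segment lengths can be estimated directly from marker trajectories. The data below is synthetic and the segment lengths (0.30 m upper arm, 0.25 m forearm) are illustrative, not from any study.

```python
import numpy as np

# Synthetic stand-in for motion-capture marker trajectories; segment
# lengths and noise levels are illustrative only.
rng = np.random.default_rng(0)
n = 200
shoulder = rng.normal(0.0, 0.002, (n, 3))              # near-static shoulder marker
a1 = rng.uniform(0, np.pi / 2, n)                      # shoulder elevation angle
a2 = rng.uniform(0, np.pi / 2, n)                      # elbow flexion angle
elbow = (shoulder
         + 0.30 * np.column_stack([np.cos(a1), np.sin(a1), np.zeros(n)])
         + rng.normal(0.0, 0.003, (n, 3)))             # measurement noise
wrist = (elbow
         + 0.25 * np.column_stack([np.cos(a1 + a2), np.sin(a1 + a2), np.zeros(n)])
         + rng.normal(0.0, 0.003, (n, 3)))

def segment_length(proximal, distal):
    """Estimate a subject-specific segment length as the mean inter-marker distance."""
    return np.linalg.norm(distal - proximal, axis=1).mean()

upper_arm = segment_length(shoulder, elbow)   # close to 0.30 for this subject
forearm = segment_length(elbow, wrist)        # close to 0.25
```

The same principle extends to joint centers, ranges of motion, and muscle parameters, each fitted to the individual rather than taken from population tables.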



    Applications:
    Medical diagnostics of injuries, more effective physical therapy, assistive robotics (e.g. exoskeletons), human-machine interaction.



    Publications:

  • R.P. Matthew, V. Shia, M. Tomizuka, R. Bajcsy, "Optimal design for individualised passive assistance", In Proceedings of the 6th Augmented Human International Conference, Singapore, Mar 9-11, 2015, pp. 69-76.
  • R. Matthew, G. Kurillo, J. Han, R. Bajcsy, "Calculating an Individuals' Reachable Volume for use in Quantitative Medicine", Proceedings of the 2nd Workshop on Assistive Computer Vision and Robotics (ACVR), Zürich, Sept. 12, 2014.
Activity Monitoring Using Smartphones



    Duration: 2012-2014
    Funding: SHARPS
    Collaborators: Dr. David Liebovitz, Northwestern University, Chicago, IL
    Researchers: Daniel Aranki




    Goals:

  • Identify heart-failure patients at risk of clinical deterioration using a patient-acceptable tele-monitoring system
  • Reduce readmission rates
  • Provide low-cost, easy-to-use tele-monitoring



    Methods:
    Data-driven monitoring (activity, location, vital signs), daily self-reported surveys, medical intervention.



    Applications:
    Low-cost continuous monitoring of patients with heart failure, reduction in readmission rates, clinical deterioration risk assessment for patients with heart failure, monitoring-based medical intervention.



    Publications:

  • D. Aranki, G. Kurillo, P. Yan, D.M. Liebovitz, R. Bajcsy, "Continuous, Real-Time, Tele-monitoring of Patients with Chronic Heart-Failure: Lessons Learned From a Pilot Study", Proceedings of 9th International Conference on Body Area Networks (Bodynets), London, Great Britain, Sept. 29–Oct.1, 2014.
Development of Novel Upper Extremity Outcome Measures Using 3D-Vision Technology



    Duration: 2011-now
    Funding: NIH U01, PPMD
    Collaborators: Dr. Jay Han, UC Davis Medical Center
    Researchers: Gregorij Kurillo, Robert Matthew




    Goals:

  • Development and validation of a new upper-extremity outcome measure for functional evaluation of patients with musculoskeletal impairments
  • Data acquisition based on the Microsoft Kinect camera
  • Validation of the reachable workspace outcome measure against standardized clinical tests


    Methods:
    The reachable workspace, obtained from kinematic measurements with a 3D vision camera, is used as a proxy for upper-limb function. We are currently clinically validating the proposed outcome measure in a group of patients with various neuromuscular disorders.
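A simplified stand-in for this measure (not the published algorithm) can be sketched as the fraction of directions around the shoulder that the wrist reaches, estimated by binning arm-length-normalized wrist positions on a sphere:

```python
import numpy as np

def relative_surface_area(wrist, shoulder, arm_length, n_az=36, n_el=18):
    """Fraction of directions around the shoulder reached by the wrist,
    estimated by binning arm-length-normalized wrist positions on a sphere
    (a simplified stand-in for the envelope-based workspace measure)."""
    v = (wrist - shoulder) / arm_length
    az = np.arctan2(v[:, 1], v[:, 0])                              # azimuth
    el = np.arcsin(np.clip(v[:, 2] / np.linalg.norm(v, axis=1), -1.0, 1.0))
    ia = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    ie = ((el + np.pi / 2) / np.pi * n_el).astype(int).clip(0, n_el - 1)
    reached = np.zeros((n_az, n_el), dtype=bool)
    reached[ia, ie] = True
    # weight elevation bins by their spherical area (cosine at bin center)
    centers = (np.arange(n_el) + 0.5) / n_el * np.pi - np.pi / 2
    w = np.cos(centers)
    return (reached * w).sum() / (n_az * w.sum())
```

Wider arm sweeps cover more direction bins and therefore yield a larger relative surface area, so restricted movement produces a proportionally smaller score.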


    Applications:
    Physical therapy, testing of drug effectiveness, remote health care, ergonomics.



    Publications:

  • J.J. Han, E. de Bie, A. Nicorici, R.T. Abresch , R. Bajcsy, G. Kurillo, "Reachable workspace reflects dynamometer-measured upper extremity strength in FSHD", Muscle and Nerve, 2015. Accepted.
  • J. Han, G. Kurillo, T. Abresch, E. de Bie, A. Nicroci Lewis, R. Bajcsy, "Upper extremity 3D reachable workspace analysis in dystrophinopathy using Kinect", Muscle and Nerve, 2015. Accepted.
  • J.J. Han, G. Kurillo, R.T. Abresch, A. Nicorici, R. Bajcsy, "Validity, Reliability, and Sensitivity of a 3D Vision Sensor-based Upper Extremity Reachable Workspace Evaluation in Neuromuscular Diseases", PLOS Currents in Muscular Dystrophy, Dec 12. Edition 1.
  • G. Kurillo, A. Chen, R. Bajcsy, J.J. Han, "Evaluation of upper extremity reachable workspace using Kinect camera," Technology and Health Care, vol. 21, no. 6, pp. 641-656, 2013.
  • G. Kurillo, J.J. Han, Š. Obdržálek, P. Yan, R.T. Abresh, A. Nicorici, R. Bajcsy, "Upper Extremity Reachable Workspace Evaluation with Kinect", Stud Health Technol Inform. 2013;184:247-53 (Proceedings of MMVR 2013).
  • G. Kurillo, J.J. Han, R.T. Abresch, A. Nicorici, P. Yan and R. Bajcsy, "Development and Application of Stereo Camera-Based Upper Extremity Workspace Evaluation in Patients with Neuromuscular Diseases", PLOS ONE journal, September 2012.
Presentations

    Below is a list of presentations related to the research work of our lab:
