Speaker
Yuichi Kurita
Associate Professor, Institute of Engineering, Hiroshima University
Title
Computational Modeling of Subjective Force Sensation for Assisting Human Activity
Abstract
Models of human motion and perception characteristics are helpful for evaluating the subjective effort associated with intuitive, safe, and easy-to-use products. Traditionally, the muscle activity and/or joint torque estimated using a musculoskeletal model have been evaluated, whereas few studies consider human perception characteristics. The perception of force changes depends on the physical capacity of the human body. Human skeletal muscles have sensory organs, such as muscle spindles and Golgi tendon organs, which are believed to be involved in the proprioceptive sense of body posture and neuromuscular activation. Considering that there is a sense of force associated with voluntary muscle activity, estimating muscle activity with a musculoskeletal model is helpful for developing a computational model of force perception. Our research group has taken on the challenge of modeling the perception characteristics of force and has explored applications in assisting and supporting human activity. In this talk, I will introduce three research topics: (1) investigation of subjective force perception based on the estimation of muscle activities during a steering operation, (2) a muscle-assistive device that unloads the weight of the upper limb, and (3) interface design that considers the muscle effort associated with subjective effort when using a product.
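The link between estimated muscle activity and subjective effort can be sketched as follows. This is a minimal illustration, not the talk's model: it assumes (hypothetically) that perceived effort grows with the ratio of each muscle's required force to its maximum capacity, so the same joint torque can feel harder in one posture than another.

```python
# Hypothetical sketch: a subjective-effort estimate from muscle activations.
# Assumption (not from the talk): perceived effort grows with the squared
# ratio of required muscle force to each muscle's maximum capacity.

def perceived_effort(required_forces, max_forces):
    """Sum of squared activation ratios over all muscles.

    required_forces: muscle forces needed for the task [N]
    max_forces:      maximum voluntary forces [N]
    """
    return sum((f / m) ** 2 for f, m in zip(required_forces, max_forces))

# Two steering postures producing the same joint torque can require
# different activation patterns and hence different subjective effort.
posture_a = perceived_effort([50.0, 20.0], [400.0, 300.0])
posture_b = perceived_effort([10.0, 80.0], [400.0, 300.0])
```

In this toy comparison, posture B loads a muscle closer to its capacity, so its effort score is higher even though the raw force totals are similar.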
Biography
See official website.
Speaker
Katsu Yamane and Peter Whitney
Disney Research, Pittsburgh
Title
The Role of Human Modeling in Human-Robot Interaction
Abstract
Robots are expected to assist humans both physically and mentally. For example, they can help humans physically by reducing loads or promoting exercise. They can also be used to support mental development or the treatment of mental disorders.
It is often beneficial for such robots to exhibit human-like motions and responses while interacting with humans, not only for aesthetic reasons but also for smooth nonverbal communication, because humans can easily infer a robot's intention if its motions are similar to what they would expect from another human. There are two possible approaches to this goal.
One is to learn and model motions and/or interactions from human motion data. This approach seems to be promising because we know that humans are capable of accomplishing various tasks robustly while interacting with other humans. However, this approach is not always straightforward due to differences between human and robot bodies.
Another approach is to start from the functionality required for the task. In this case, a designer will have to create motions and interactions by hand, or determine the cost function used by a numerical optimization algorithm. However, the quality of the resulting interactions depends on the designer’s skill.
In this talk, I will introduce our research projects in physical human-robot interaction that use one of these two approaches, and discuss their advantages and disadvantages. The projects also cover different distances between the robot and the human, leading to different safety requirements. The first project concerns remote human-robot interaction through an object, where we studied whether reactions to external events invoke more engaging interactions. The second project uses human motion data to learn an interaction that involves implicit communication between the robot and the human. Finally, the third project addresses the safety issue in intimate human-robot interactions by applying new actuation mechanisms.
Speaker
Ashish D. Deshpande
Assistant Professor, The University of Texas at Austin
Title
A Novel Framework for Virtual Prototyping of Rehabilitation Exoskeletons
Abstract
Robotic (human-worn) exoskeleton systems provide a promising avenue for helping stroke patients recover motor function and for easing the burden of labor-intensive, highly repetitive, and therefore costly conventional physical therapy.
Clinical trials have shown that robot-aided therapy improves limb motor function after chronic stroke, with increased sensorimotor cortex activity for practiced tasks. Designing robotic exoskeletons is challenging due to limits on size and weight and the need to address technical challenges in areas including biomechanics, rehabilitation, actuation, sensing, physical human-robot interaction, and control based on user intent. The process involves designing the robot hardware, including selection of the robot architecture, the method of attachment to the wearer, and the choice of design parameters, as well as designing the control system. Since exoskeletons are meant to be in close physical contact with their users, a synergistic approach that accounts for the coupled human-robot system may be necessary.
To generate effective exoskeleton designs, our goal is to develop design and analysis tools that allow simultaneous modeling of the robot hardware and the subject's musculoskeletal biomechanics. For engineering design applications, simulation-based design methodologies have been developed that achieve iterative refinement of product models early in the design stage, yielding more cost- and time-effective designs with superior performance and quality. Recently, tools such as Software for Interactive Musculoskeletal Modeling (SIMM), OpenSim, the AnyBody Modeling System, LifeModeler, and the Virtual Interactive Musculoskeletal System (VIMS) have been developed for musculoskeletal analysis. However, these biomechanics computational tools have not been effectively combined with traditional engineering design techniques. In the past, a few attempts have been made to simulate a combined human-exoskeleton system using musculoskeletal analysis. However, a systematic and principled framework for the quantitative evaluation of competing alternatives, and a mechanism for parametric design refinement via studying the effects of exoskeletons on the human musculoskeletal system (and vice versa) and for control algorithm development, are currently missing for this new class of human-worn robotic devices.
In this work, we propose a human-model-in-the-loop framework that merges computational musculoskeletal analysis with simulation-based design methodologies. The objective of the framework is to enable virtual prototyping of rehabilitation exoskeletons. The framework allows for iterative optimization of exoskeletons using biomechanical, morphological, and controller performance measures. Furthermore, it allows virtual experimentation to test specific “what-if” scenarios that quantify device performance and recovery progress. To illustrate the development and application of the framework, we present a case study in which virtual prototyping of an index-finger exoskeleton is carried out based on the relevant performance measures. A “what-if” scenario is also simulated for virtual experimentation and is shown to help quantify recovery progress.
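The iterative-optimization idea can be sketched as a simple design loop: evaluate each candidate design with a coupled human-exoskeleton simulation and keep the one with the best composite performance measure. Everything below is an illustrative stand-in (the function, parameters, and cost terms are assumptions, not the authors' framework):

```python
# Minimal sketch of a human-model-in-the-loop design loop. The simulation
# stand-in and its cost terms are illustrative assumptions only.

def evaluate_design(link_length, stiffness):
    """Stand-in for a coupled human-exoskeleton simulation returning a
    composite cost (lower is better): joint misalignment plus actuator
    effort, both in arbitrary illustrative units."""
    target_length = 0.08  # assumed finger-segment length to match [m]
    misalignment = abs(link_length - target_length)
    effort = stiffness * 1e-4
    return misalignment + effort

def virtual_prototype(candidates):
    """Pick the candidate (link_length, stiffness) pair that minimizes
    the composite cost over all simulated designs."""
    return min(candidates, key=lambda d: evaluate_design(*d))

designs = [(0.06, 200.0), (0.08, 400.0), (0.10, 100.0)]
best = virtual_prototype(designs)
```

In a real framework the inner evaluation would run a musculoskeletal simulation (e.g. in a tool such as OpenSim) rather than a closed-form cost, but the outer loop has this shape.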
Speaker
Masashi Konyo
Associate Professor, Tohoku University
Title
Creating Haptic Feedback by Motion-oriented Skin Vibration for Assistive Technologies
Abstract
Our haptic cues for creating motion are based on kinesthetic and cutaneous sensations. It is natural to think that kinesthetic feedback is dominant, because its sensory receptors are located in the muscles that create body movement. In this presentation, however, I focus on the contribution of cutaneous (tactile) sensation to the perception of kinesthetic information.
The basic function of cutaneous receptors is to detect information associated with skin deformation, which has mainly been studied in the context of tactile perception. On the other hand, skin deformation is also associated with body movement, so it can encode information about the body's interaction with the environment. For example, we confirmed that high-frequency vibration on the finger pad during hand exploration can reflect the normal force and velocity of the finger. Based on these physical observations, we have proposed a substitutive approach that represents friction force sensations using vibrotactile stimuli without any physical forces. This approach can also be applied to represent other force-like sensations, such as inertia, viscosity, and elasticity, because skin deformation reflects such properties during body movement.
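The substitutive idea above can be sketched in a few lines: drive a high-frequency vibrotactile actuator with an amplitude that follows the product of normal force and sliding speed, so that faster or harder sliding produces a stronger friction-like percept. The mapping, gain, and carrier frequency here are illustrative assumptions, not the published model:

```python
import math

# Sketch of substitutive friction rendering: vibration amplitude follows
# normal force times sliding speed. Constants are illustrative assumptions.

def vibration_command(normal_force, speed, t, freq_hz=250.0, gain=0.5):
    """Instantaneous actuator command at time t [s]: a high-frequency
    carrier whose amplitude scales with contact force [N] times speed [m/s].
    """
    amplitude = gain * normal_force * speed
    return amplitude * math.sin(2.0 * math.pi * freq_hz * t)

# Faster sliding under the same contact force yields a stronger vibration,
# which users tend to perceive as greater friction-like resistance.
slow = vibration_command(normal_force=1.0, speed=0.05, t=0.001)
fast = vibration_command(normal_force=1.0, speed=0.20, t=0.001)
```

No net force is ever applied; only the vibration envelope changes with the motion state, which is what makes the approach attractive for mobile devices without force-feedback hardware.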
In my talk, I will present the concept of vibration stimulation in association with body movement to create force-like sensations, along with several applications that create haptic feedback on mobile information devices and gesture interfaces using vibration alone. The proposed approach will contribute to assistive technologies, especially in mobile and ubiquitous environments.
Biography
See Website.
Speaker
Seokhee Jeon, Kyung Hee University, Yongin-si, South Korea
Matthias Harders, Universität Innsbruck, Technikerstrasse 21, A-6020 Innsbruck, Austria
Seungmoon Choi, POSTECH, Pohang-si, South Korea
Title
Augmenting Haptic Channel: Building Blocks for Haptic Augmented Reality
Abstract
Haptic augmented reality is a new paradigm focusing on the augmentation of the haptic channel in human-computer interaction. Analogous to conventional visual augmented reality, the paradigm combines real and virtual haptic sensory stimuli to alter or augment touch perception during object manipulation. Such combined rendering can provide a user with a modulated percept of an object's haptic properties, including stiffness, friction, surface texture, and shape. This talk first outlines the general concept of integrating haptics into augmented reality. Thereafter, we will introduce three heuristic algorithms for haptic augmentation, covering stiffness modulation at either one or two contact points and the modulation of friction and weight. This will address topics of parameter estimation, contact detection, and augmentation computation. Finally, an application example will be given in the context of tissue palpation: a method for augmenting physical soft-tissue samples with virtual stiffer inclusions.
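The core of single-contact stiffness modulation can be written down compactly: the device renders an extra virtual force on top of the real contact force so the combined percept matches a target stiffness. This is a schematic sketch under a linear-spring assumption; the talk's estimation and contact-detection steps are not reproduced here.

```python
# Sketch of one-contact-point stiffness augmentation under a linear-spring
# assumption: combined percept = real object response + rendered delta.

def augmentation_force(penetration, k_real, k_target):
    """Extra force the haptic device must render at the contact point.

    penetration: tool penetration into the real object [m]
    k_real:      estimated stiffness of the real object [N/m]
    k_target:    stiffness the user should perceive [N/m]
    """
    return (k_target - k_real) * penetration

# Making a soft sample feel stiffer, as in the palpation example where a
# virtual stiff inclusion is added to real soft tissue:
f_extra = augmentation_force(penetration=0.002, k_real=300.0, k_target=800.0)
```

Note the sign: with `k_target < k_real` the rendered force becomes negative, i.e. the device would need to actively reduce the felt resistance, which is the harder direction in practice.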
Speaker
Mitsunori Tada
National Institute of Advanced Industrial Science and Technology
Title
Musculoskeletal Motion Analysis of Cadaver Hands
Abstract
Speaker
Yui Endo
National Institute of Advanced Industrial Science and Technology
Title
DHAIBA: A System for Aiding Human-Centered Product Design with Human Models
Abstract
Speaker
Masahiko Inami
Professor, Graduate School of Media Design, Keio University (KMD)
Title
Towards Augmented Human
Abstract
What are the challenges in creating interfaces that allow a user to intuitively express his/her intentions? Today's HCI systems are limited, and exploit only visual and auditory sensations. However, in daily life, we exploit a variety of input and output modalities, and modalities that involve contact with our bodies can dramatically affect our ability to experience and express ourselves in physical and virtual worlds. Using modern biological understanding of sensation, emerging electronic devices, and agile computational methods, we now have an opportunity to design a new generation of 'near-field interaction' technologies.
This talk will present several approaches that use multi/cross-modal interfaces for enhancing human I/O. They include the Transparent Cockpit, the Stop-Motion Goggle, Galvanic Vestibular Stimulation, and the Superhuman Olympics.
(See http://www.youtube.com/user/InamiLaboratory for more examples.)
Biography
Masahiko Inami is a professor in the Graduate School of Media Design at Keio University (KMD), Japan. His research interest is in human I/O enhancement technologies, spanning bioengineering, HCI, and robotics. He received his BE and MS degrees in bioengineering from the Tokyo Institute of Technology and his PhD in 1999 from the University of Tokyo. His scientific achievements include Retro-reflective Projection Technology (RPT), known as "Optical Camouflage," which was chosen as one of the coolest inventions of 2003 by TIME magazine. His research has appeared at SIGGRAPH Emerging Technologies in 35 installations from 1997 through 2014, and his installations have also appeared at the Ars Electronica Center.
Speaker
P. Fraisse, C. Azevedo, and M. Hayashibe
Université Montpellier 2 / LIRMM
Title
Human modeling and control towards a functional assistance in posture of paraplegics using FES
Abstract
Speaker
Jun Ueda, Assistant Professor, Georgia Institute of Technology
William Gallagher, NASA Goddard Space Flight Center
Title
Human-robot physical interaction for adaptive robot co-workers
Abstract
Human power-assist systems, e.g., powered lifting devices that aid human operators in manipulating heavy or bulky loads, require physical contact between the operator and the machine, creating a coupled dynamic system. This coupling has been shown to introduce inherent instabilities and performance degradation due to changes in human stiffness: when instability is encountered, a human operator often attempts to control the oscillation by stiffening the arm, which leads to a stiffer, and therefore more unstable, system. The level of endpoint stiffness is not directly measurable in typical haptic control situations; previous estimation methods have either relied on offline techniques or have shown significant error and variance online. On the operator side, arm stiffness is changed through modulation of co-activity across the muscles crossing the joints. Electromyogram (EMG) signals have frequently been used to record muscle activity, and previous studies have shown a strong association between muscle co-contraction and the corresponding joint stiffness/impedance. A feasibility study with a one-degree-of-freedom robotic device, using a gain-scheduling approach based on classifying operator arm stiffness from EMG signals, showed improved stability and performance. Achieving further gains in stability and performance requires treating the operator's dynamics as a stochastic system with a randomly varying stiffness parameter. Also, given the observed change in the operator's reaction in response to a change in the stiffness-compensation state, a "proactive", rather than "reactive", gain-scheduling approach is preferred. This talk will introduce our work in progress supported by the US National Science Foundation National Robotics Initiative program.
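The gain-scheduling idea can be sketched as a small classifier-plus-lookup: compute a co-contraction index from antagonist EMG levels, bin it into discrete stiffness classes, and switch controller damping accordingly. The index definition, thresholds, and gains below are illustrative assumptions, not the study's identified values:

```python
# Sketch of EMG-based gain scheduling for a power-assist controller.
# Thresholds and gain values are illustrative assumptions only.

def cocontraction_index(emg_flexor, emg_extensor):
    """Simple co-contraction measure: the smaller of the two normalized
    antagonist activation levels (both assumed in [0, 1])."""
    return min(emg_flexor, emg_extensor)

def scheduled_damping(emg_flexor, emg_extensor):
    """Higher operator stiffness -> higher controller damping, to keep
    the coupled human-machine system away from oscillation."""
    cci = cocontraction_index(emg_flexor, emg_extensor)
    if cci < 0.2:
        return 10.0   # relaxed arm: low damping, responsive assistance
    elif cci < 0.5:
        return 30.0   # moderate co-contraction
    return 60.0       # stiff arm: heavy damping to suppress oscillation

relaxed = scheduled_damping(0.15, 0.10)
stiff = scheduled_damping(0.70, 0.65)
```

A "proactive" scheduler, as argued for in the abstract, would switch gains based on a prediction of the operator's next stiffness state rather than on the currently measured index.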
Speaker
Cota Nabeshima
CYBERDYNE Inc.
Title
Safety Techniques of Robot Suit HAL®
Abstract
Safety is an indispensable hurdle in putting a product into practical use. Our company, CYBERDYNE Inc. [1], has been developing Robot Suit HAL® [2] with safety as the highest priority since its inception in 2004. Our efforts led to two world-first safety certifications in 2013 [3, 4]: ISO/DIS 13482 [5], the first (draft) safety standard for personal care robots, and CE Marking indicating conformity to the Medical Device Directive [6] in the EU. Now, we are energetically opening a new market. As of February 2014, over 400 units of Robot Suit HAL® for Well-being are in use at over 160 sites in Japan, and over 40 units of Robot Suit HAL® for Medical Use are in use in Germany and Sweden.
In this talk, I will present the following topics:
(1) an introduction of CYBERDYNE Inc. and Robot Suit HAL®,
(2) how we assessed and controlled the risks of Robot Suit HAL®,
(3) how we verified and validated the risk controls,
(4) how we exploited the existing safety standards and what is missing in them, and
(5) the regulations to be complied with for wearable walking-assist robots.
Speaker
Takeshi Ando
Panasonic Corporation
Title
Robotics Technology and its Business to Support Human Life ~Development process using bio-signal data~
Abstract
In an elder-dominated society, many kinds of robots should be developed and used in hospitals, care facilities, and homes. We have been developing HOSPI (an autonomous delivery robot), the Robotic Bed (a combination of bed and wheelchair), the Head-care robot (an automatic shampooing robot), and others. To promote technical development related to human-support robots, it is useful to present quantitative and qualitative data based on user trials. These data are also important for promoting the business effectively. In this workshop, I introduce the development process of the Head-care robot using quantitative data, such as bio-signals related to comfort and relaxation. By using bio-signal evaluation in each development phase, we have developed suitable technologies for the parallel-link mechanism and the compliance control system.
Speaker
Anne E. Martin and James P. Schmiedeler
Department of Aerospace and Mechanical Engineering, University of Notre Dame
Title
Predicting Healthy Human and Amputee Walking Gait using Ideas from Underactuated Robot Control
Abstract
The ability to predict human gait, particularly impaired human gait, has the potential to improve rehabilitation/training outcomes and to reduce prosthesis/orthosis development costs. This work presents a walking model of moderate complexity that accurately captures both sagittal-plane joint kinematics and whole-body energetics for healthy human walking. The six-link, left-right symmetric model with hips, knees, ankles, and rigid circular feet accurately predicts normal human walking over a wide range of speeds using a torque-squared objective function. For unilateral transtibial amputee gait, one ankle joint is eliminated, yielding a five-link, asymmetric model that is used to quantify the differences between amputee gaits optimized for symmetry and efficiency.
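The torque-squared objective mentioned above has a simple form: integrate the sum of squared joint torques over one step. A minimal discrete-time sketch (rectangle-rule integration; the sample values are illustrative, not model outputs):

```python
# Sketch of a torque-squared gait objective, discretized with a simple
# rectangle rule. Torque samples below are illustrative only.

def torque_squared_cost(torque_trajectories, dt):
    """Approximate integral of sum-of-squared joint torques over a step.

    torque_trajectories: one inner list of per-joint torques [N*m]
                         per time sample
    dt:                  sample interval [s]
    """
    return sum(sum(tau ** 2 for tau in sample) * dt
               for sample in torque_trajectories)

# Two time samples, three joints (hip, knee, ankle) on one leg:
cost = torque_squared_cost([[10.0, 5.0, 2.0], [8.0, 4.0, 1.0]], dt=0.01)
```

In the gait-prediction setting, an optimizer searches over periodic joint trajectories that satisfy the model's dynamics and minimize this cost, which is what produces the speed-dependent gaits described above.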
Speaker
Yoichi Morales, Tadahisa Kondo, and Norihiro Hagita
Intelligent Robotics and Communication Laboratories, Advanced Telecommunications Research Institute International
Title
Comfortable Navigational Framework for Autonomous Vehicles
Abstract
This work proposes the use of a "human comfortable map" (HCoM) for autonomous passenger-vehicle navigation that, on top of being safe and reliable, is comfortable. A definition of comfort is presented, and a subjective measurement method was applied by directly asking participants how comfortable they felt after riding an autonomous wheelchair. The HCoM was built by extracting information from user preferences related to comfort level while navigating under different conditions in a corridor environment. These human-comfort factors are integrated into a geometric map generated by SLAM, and a global planner computes a safe and comfortable path that is then followed by the robotic wheelchair. We argue that human-comfortable paths are achievable through the availability of the HCoM.
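One way to read the HCoM integration is as a planner cost that weights geometric distance by a per-cell discomfort value from the comfort map. The cost form and weights below are an illustrative sketch, not the paper's formulation:

```python
# Sketch of comfort-aware edge costs for a grid planner: traversal cost
# combines path length with a discomfort weight taken from a comfort map.
# The weighting and discomfort values are illustrative assumptions.

def edge_cost(distance, discomfort, comfort_weight=2.0):
    """Cost of moving between adjacent map cells.

    distance:   metric length of the move [m]
    discomfort: comfort-map value in [0, 1] (0 = comfortable)
    """
    return distance * (1.0 + comfort_weight * discomfort)

# A path hugging a wall (high discomfort) can cost more than a slightly
# longer route through open, comfortable space, so the global planner
# prefers the comfortable detour:
near_wall = edge_cost(distance=1.0, discomfort=0.8)
open_space = edge_cost(distance=1.2, discomfort=0.1)
```

Feeding such costs to any shortest-path planner (e.g. A* over the SLAM grid) yields paths that trade a little length for passenger comfort, which matches the behavior the abstract argues for.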