The research of the Bio-Engineering team focuses on the development of a multi-fingered prosthetic hand system and related human-robot interaction technologies. The work includes five main parts:
1. Design and control of a multi-fingered mechanical hand.
2. Development of the electromyography (EMG) control system.
3. Data glove and virtual reality.
4. Safety and rehabilitation robotics.
5. Brain-Computer Interface.
In the design and control of the multi-fingered mechanical hand, the mechanical hand should be similar to the human hand in size and weight, and it should also be capable of dexterous manipulation. The total weight of the latest version of the multi-fingered mechanical hand, NTU-Hand IV (Figure 1), is 654 grams, and its workspace is shown in Figure 2.
Figure 1 NTU-Hand IV
Figure 2 Workspace of NTU-Hand IV
In the development of the EMG control system, we acquire EMG signals via surface EMG electrodes, extract features from the signals, and use a back-propagation neural network as the classifier to identify which posture is being performed. The goal of the EMG control system is for an amputee to directly control the multi-fingered mechanical hand to perform the postures he or she wants. The current EMG control system is implemented on a TI C3x DSP. To reduce the size and weight of the circuit board, we will integrate the whole system, including the recognition and control parts, onto a single chip using system-on-a-chip (SoC) techniques, making a portable EMG control system possible in the future.
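As an illustrative sketch of this pipeline (not the lab's DSP implementation), the fragment below computes three common time-domain EMG features from a signal window and trains a small two-layer network by back-propagation; the feature choices, layer sizes, and learning rate are assumptions for demonstration only.

```python
import numpy as np

def emg_features(window):
    """Compute common time-domain EMG features for one analysis window."""
    window = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(window))                             # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))                      # waveform length
    zc = np.sum(np.diff(np.signbit(window).astype(int)) != 0) # zero crossings
    return np.array([mav, wl, zc], dtype=float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BPClassifier:
    """Minimal two-layer sigmoid network trained by back-propagation."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)   # hidden activations
        self.o = sigmoid(self.h @ self.W2)
        return self.o

    def train_step(self, x, target):
        o = self.forward(x)
        # Delta rule: output-layer then hidden-layer error terms.
        d_o = (o - target) * o * (1 - o)
        d_h = (d_o @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * np.outer(self.h, d_o)
        self.W1 -= self.lr * np.outer(x, d_h)
```

In practice one feature vector per electrode channel would be concatenated and the output units would correspond to the hand postures to be recognized.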
In the research on the data glove and virtual reality, we control the position and force of the manipulator via tendons and transmit this information to both the host computer and the NTU-Hand IV. The operator can therefore remotely control the posture and force of the NTU-Hand IV, and the data glove can also display the status of the NTU-Hand IV, as shown in Figure 3.
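A minimal sketch of how a glove reading might be mapped to a finger joint command is shown below; the sensor range, angle limits, and linear calibration are hypothetical and stand in for whatever calibration the actual glove hardware uses.

```python
def sensor_to_joint_angle(raw, raw_min, raw_max, angle_min=0.0, angle_max=90.0):
    """Linearly map a raw flex-sensor reading to a finger joint angle (degrees).

    The reading is clamped to the calibration range so that sensor noise
    outside [raw_min, raw_max] cannot command an out-of-range joint angle.
    """
    raw = min(max(raw, raw_min), raw_max)
    frac = (raw - raw_min) / (raw_max - raw_min)
    return angle_min + frac * (angle_max - angle_min)
```

In a teleoperation loop, each mapped angle would be sent both to the virtual hand on the host computer and to the corresponding joint of the mechanical hand.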
Figure 3 Data glove and virtual hand
Safety and rehabilitation robotics
To deal with two major worldwide problems, an aging population and a shrinking workforce, next-generation robots will have to interact with people directly and naturally. We therefore focus on human-robot interaction, especially safe physical interaction, and aim to develop an intelligent, intrinsically safe robotic system that treats human intention, robot performance, and safety in unstructured environments as a unified optimality criterion for future technical challenges. Briefly, the five main research topics are:
- Interaction safety and performance criterion development
- Intrinsically safe actuation and mechanical design
- Impedance control for flexible joint robots
- Human intention recognition through a vision system
- Real time safety planning and control strategies
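To illustrate the impedance-control topic above, the sketch below implements a textbook joint-space impedance law; the gain matrices and trajectory values are placeholders, not the lab's actual controller for flexible joint robots.

```python
import numpy as np

def impedance_torque(q, qd, q_des, qd_des, K, B):
    """Joint-space impedance law: tau = K (q_des - q) + B (qd_des - qd).

    Rather than forcing exact position tracking, the robot behaves like a
    spring-damper around the desired trajectory, which limits contact
    forces during unplanned physical contact with a person.
    """
    q, qd = np.asarray(q, float), np.asarray(qd, float)
    q_des, qd_des = np.asarray(q_des, float), np.asarray(qd_des, float)
    return K @ (q_des - q) + B @ (qd_des - qd)
```

Lowering the stiffness matrix K makes the arm more compliant (safer on contact) at the cost of tracking accuracy, which is exactly the safety-versus-performance trade-off the criterion development topic addresses.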
Our ultimate objective is to integrate these topics into a realistic robotic system so that the robot can autonomously interact with humans, using an intention recognition system composed of emotion, joint attention, and behavior recognition subsystems, together with safe mechanism design, advanced control theory, and real-time safety planning strategies.
Meanwhile, focusing on therapy for both active and passive physical rehabilitation and augmentation, we investigate the dynamic properties of a general variable-stiffness rehabilitation system to provide a general perspective on physical interaction. A coupled elastic robotic system (CEBOT), based on a new general coupled elastic actuation (CEA) approach, is designed to provide an intrinsically adjustable stiffness capability that shapes the output force for each individual patient. With properly preset properties, a CEBOT can efficiently realize different desired training patterns in advance without dynamically adjusting its intrinsic stiffness. It also offers several valuable advantages, such as inherent safety, gentler treatment, optimal motion patterns, and the ability to be combined with functional electrical stimulation.
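The idea of a preset (rather than actively adjusted) stiffness profile can be sketched as follows; the piecewise-linear profile and its constants are purely hypothetical, chosen only to show how a fixed elastic design can shape output force without online stiffness control.

```python
def cea_torque(deflection, k_soft=2.0, k_stiff=20.0, threshold=0.1):
    """Hypothetical preset stiffness profile for a coupled elastic actuator.

    Near equilibrium the element is soft (gentle treatment for the patient);
    beyond a deflection threshold it becomes stiffer (firm guidance along
    the training pattern). The profile is fixed by mechanical design, so no
    dynamic stiffness adjustment is required during therapy.
    """
    if abs(deflection) <= threshold:
        return k_soft * deflection
    sign = 1.0 if deflection > 0 else -1.0
    return sign * (k_soft * threshold + k_stiff * (abs(deflection) - threshold))
```

Because the output torque depends only on measured elastic deflection, the same relation also gives an inherently compliant force estimate, which supports the safety claims made for the CEBOT design.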
In the future, based on this concept, we plan to design a whole-body augmentation system that can help both able-bodied and disabled people increase their mobility and capacity.
Brain-Computer Interface
In the last few years, many labs have begun to study communication interfaces between the brain and the external environment. The purpose is to let humans communicate with external devices using brain waves. For neurologically impaired patients, such as those with amyotrophic lateral sclerosis (ALS), stroke, or age-related disabilities, brain waves are a residual signal that can still be used for control. Even if such patients cannot move, they can use a Brain-Computer Interface to communicate with their environment.
Many kinds of brain signals can be used for a Brain-Computer Interface (BCI); our lab uses the P300 signal to control robots. The P300 is elicited with an oddball paradigm: it occurs in response to an infrequent stimulus within a random series of two stimuli. By detecting the P300, the BCI can control the elbow-rehabilitation robot, allowing severely impaired patients, such as those with stroke or spinal cord injury, to perform rehabilitation and supported exercises by themselves. The accuracy of detecting the P300 reaches 100%, and detecting a command takes 30 seconds on average.
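The standard way to make the small P300 deflection visible is stimulus-locked averaging. The sketch below shows this idea with hypothetical sample indices and thresholds; it is a simplification, not the lab's detection algorithm.

```python
import numpy as np

def average_epochs(eeg, stim_onsets, pre=50, post=300):
    """Average stimulus-locked EEG epochs (samples as time units here).

    The P300 is small relative to background EEG, so averaging many epochs
    time-locked to the rare (oddball) stimulus cancels unrelated activity
    and leaves the positive deflection near 300 ms visible.
    """
    epochs = [eeg[t - pre : t + post] for t in stim_onsets
              if t - pre >= 0 and t + post <= len(eeg)]
    return np.mean(epochs, axis=0)

def detect_p300(avg_rare, avg_frequent, window=slice(250, 350), margin=2.0):
    """Flag a P300 if the rare-stimulus average exceeds the frequent-stimulus
    average by `margin` (signal units) inside the expected latency window."""
    return (avg_rare[window].max() - avg_frequent[window].max()) > margin
```

Each detected P300 then maps to one command for the rehabilitation robot; averaging over more repetitions raises accuracy but lengthens the per-command detection time, which is the trade-off behind the 30-second average reported above.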
In the future, the goal of our lab's EEG research will be to reduce the time needed to detect a command. We will also study brain waves that require no external stimulus, such as the classification of motor imagery, and develop further useful applications, such as controlling a seven-axis robotic arm or a wheelchair, to make human life more convenient.
Fig.1 Control Panel
Fig.2 Rehabilitation Robot
Fig.3 Architecture of Brain Computer Interface and Rehabilitation Robot