Both exciting and a little unsettling, a South Korean research team has demonstrated that an exoskeleton power glove can be controlled using a camera and a machine learning algorithm. The research effort is a collaboration led by Professors Sungho Jo (KAIST) and Kyu-Jin Cho (Seoul National University), together with the Soft Robotics Research Center (SRRC) in Seoul, Korea.
Due to their many degrees of freedom, both orthotics and prosthetics can be tricky to control. Historically, developers and researchers have used various control methods:
- Buttons – either on a control panel or on a nearby surface
- Pressure – for example, a pressure plate in the shoe that activates a wearable based on a shift in the user’s center of gravity
- Air Pressure – such as the Innophys back-support exoskeleton, which activates when the user blows into a tube
- Biological Signals – electrical signals detected in the brain or along a nerve pathway, captured and used to control an actuator
- Eye Motion – tracking the eyes in order to interpret commands
- Initial Motion – sensitive rotational transducers that detect a change in a limb’s position and send a signal to the motion controller
All of the above control schemes share one thing in common: the user has to personally trigger their wearable robot (be it an exoskeleton or a powered prosthetic). Executing such commands can be challenging for anyone, but it is even more challenging for people with mobility impairments. People experiencing tremors, or those with partial or even complete paralysis, have fewer options still.
The machine learning model, Vision-based Intention Detection network from an EgOcentric view (VIDEO-Net), offers something new: let a computer interpret the intentions of the user and activate their wearable accordingly. This is a bold new step in human augmentation. It reduces the burden on the user to micromanage their wearables while simultaneously making the same devices accessible to those who would otherwise not be able to control them at all.
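To make the idea concrete, here is a minimal, hypothetical sketch of what a vision-based intention-detection control loop looks like, loosely modeled on the concept described above. The class and function names, the toy features, and the three-frame voting window are all illustrative assumptions, not the authors' actual VIDEO-Net architecture, which runs a neural network over raw egocentric video.

```python
# Hypothetical sketch of a vision-based intention-detection control loop.
# All names and thresholds here are illustrative assumptions, not the
# authors' actual API or model.

from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """A single egocentric camera frame, reduced here to two toy features."""
    hand_near_object: bool  # is the user's hand close to a graspable object?
    gaze_on_object: bool    # is the user looking at that object?


def detect_intention(frames: List[Frame], window: int = 3) -> str:
    """Return 'grasp' when the last `window` frames all suggest a grasp.

    A real system would run a trained network over raw pixels; this stub
    stands in for that classifier to show the surrounding control flow.
    """
    recent = frames[-window:]
    if len(recent) == window and all(
        f.hand_near_object and f.gaze_on_object for f in recent
    ):
        return "grasp"
    return "idle"


def control_step(frames: List[Frame]) -> str:
    """Map the detected intention to a glove command; the user never
    presses a button or flexes a muscle to trigger the device."""
    intention = detect_intention(frames)
    return "close_glove" if intention == "grasp" else "hold"
```

The key design point the sketch illustrates is that the trigger comes from the camera stream alone: once several consecutive frames agree that the user is reaching for and looking at an object, the glove closes on its own.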
If you are worried about an AI taking control of your powered exo, however, you have nothing to fear. VIDEO-Net can serve as yet another sensor, working in conjunction with one or more of the classical control schemes listed above. It is an exciting new tool that can make wearable robots more intuitive and user-friendly. As Bill Gates said upon his visit to the Wyss Institute, “…you need the guys that understand the software control part…”
For more, read the full article, “Eyes are faster than hands: Vision-based machine learning for soft wearable robot enables disabled person to naturally grasp objects,” in Seoul National University’s News and Forum, published on Feb 1, 2019.