Research exoskeletons exist that are deliberately designed to trip up and hinder the user. Why would anyone build a wearable robot meant to make a person fall? I recently came across an article that explains the design process behind one type of automatic response system in exoskeletons (see reference 1 below). In a nutshell, the system creates preprogrammed reactions to outside stimuli, modeled on the reactions of a healthy human being in identical circumstances. The article itself is fairly dry yet somewhat tongue-in-cheek (it does, after all, talk about the exoskeleton kicking the user). However, it raises a piece of exoskeleton technology that isn’t often discussed in mainstream conversation. And it’s worth discussing.
You see, when most people (that I’ve met) talk about exoskeletons, they tend to focus on the ‘cool factor.’ They talk about Iron Man, Gundam, mecha suits, or the combat suits from The Matrix Revolutions. Note that all of these applications focus on turning ordinary people into larger-than-life heroes, and all of them remain fiction for now (although several prototypes with lesser abilities do exist today). Occasionally, someone will bring up exoskeletons used to help people with paralysis or other disorders (like those from Ekso Bionics, Rex Bionics, and others). But invariably, the conversation seems to center on the size-to-strength ratio of servo motors, the capacity of battery packs, and the other guts of the machine. Or, at best, it covers the software hurdles that must be overcome for a wearable robot to walk in a straight line without falling over while taking cues from the user. All of these things are important – very important, in fact. But if I had to pick the one aspect of exoskeletons that will determine if and when they become an acceptable mainstream device for the general population, it would be seamlessness. By seamlessness I mean that if I needed to use an exoskeleton (any exoskeleton), it would both look and feel natural while I used it. It would be user-friendly. It would be stable and adaptable to the surrounding environment. Seamless. It would not be clunky, it would not be clumsy, and it would not be like a Tonka toy. Allow me to explain below.
Communication (the human-machine interface) is very important. To achieve more natural control and movement in exoskeletons, closed-loop control schemes are desirable, as opposed to open-loop ones (see reference 2 in the bibliography below). If all you can do is hit ‘play’ and ride the machine through a set of predetermined steps (open-loop control), you’ll simply fall over when you step on a patch of ice. First, the exoskeleton and user need to know when to react (i.e., sensing loss of traction on a patch of ice and providing haptic feedback, see reference 3); then they need to know how to react. Closed-loop control solves that problem: you use communication between the environment, the user, and the exoskeleton to determine movements. The three paths of communication are shown below. Software and computer control are integrated throughout each of these interfaces.
- User ← → Exoskeleton (driving vs riding the exoskeleton)
- User ← → Environment (driving in heavy rain/snow vs driving in sunshine)
- Exoskeleton ← → Environment (built-in reflexes)
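The open-loop versus closed-loop distinction above can be sketched in a few lines of code. This is a minimal illustration, not any real exoskeleton API: all names (`open_loop_step`, `closed_loop_step`, the slip sensor, the gain value) are hypothetical, and the "controller" is just a proportional correction on a joint trajectory.

```python
# Minimal sketch contrasting open-loop and closed-loop gait control.
# All names and values here are illustrative assumptions, not a real API.

def open_loop_step(trajectory):
    """Play back a prerecorded joint trajectory, ignoring the environment."""
    return list(trajectory)  # commands are fixed in advance; ice changes nothing

def closed_loop_step(trajectory, sense_slip, gain=0.5):
    """Adjust each commanded joint angle using sensed slip feedback."""
    commands = []
    for target in trajectory:
        slip = sense_slip()           # e.g., traction sensor: 0.0 = firm footing
        correction = gain * slip      # proportional correction toward stability
        commands.append(target - correction)
    return commands

# On ice (constant slip of 1.0), the open-loop commands never change,
# while the closed-loop commands are corrected at every step.
trajectory = [10.0, 20.0, 30.0]
on_ice = lambda: 1.0
print(open_loop_step(trajectory))            # [10.0, 20.0, 30.0]
print(closed_loop_step(trajectory, on_ice))  # [9.5, 19.5, 29.5]
```

The point of the sketch is the feedback path: the closed-loop version consults a sensor inside the loop, so the same commanded trajectory produces different joint commands on ice than on dry pavement.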
While researchers in America and Asia are trying to produce exoskeleton products, EU labs have focused on refining the user-exoskeleton interface. Without that interface, the exoskeleton simply will not work. Well, it may ‘work,’ but you’ll just be dragged along for the ride, or it won’t be able to adjust during a slip. But what about the latter two interfaces? My opinion is that the lack of maturity in these two areas is why most powered exoskeletons for able-bodied users still seem ‘clunky’ and remain largely a novelty. Without closed-loop communication in the user-environment and exoskeleton-environment interfaces, more focus and energy are required to pilot the exoskeleton (which means moving more slowly and cautiously). The ‘kick the user in the knee’ article and others (see reference 4) address the exoskeleton-environment interface in order to create automatic compensation (i.e., reflexes). This is what many of our muscles and limbs do every day without our knowing it. If you step on a patch of ice, you don’t consciously think through each placement of your body to correct the situation. Instead, your body and brain use mostly stored responses (reflexes) to remedy the issue. This is exactly why ‘practice makes perfect’ (see reference 5). But if you are paralyzed from the waist down and wearing an exoskeleton, you’ll need something to simulate those reflexes. However, none of this lets the user ‘feel’ perturbations from the environment. The user remains somewhat oblivious to many environmental stimuli and can feel as if driving blind in a snowstorm. Just imagine someone walking around a building with a blindfold on. You get the idea. That’s why haptic feedback is important.
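The "stored responses" idea above can be pictured as a lookup table of preprogrammed reactions keyed by detected stimulus, standing in for the reflexes an able-bodied person would execute without thinking. The stimulus names and responses below are hypothetical examples for illustration only; a real exoskeleton-environment interface would involve continuous control, not string labels.

```python
# Toy sketch of "stored reflexes" for the exoskeleton-environment interface:
# preprogrammed responses keyed by a detected stimulus. All names here are
# hypothetical placeholders, not part of any real exoskeleton system.

REFLEXES = {
    "slip_detected":  "widen_stance_and_lower_center_of_mass",
    "trip_detected":  "swing_leg_forward_to_catch_fall",
    "push_from_side": "step_out_toward_push_direction",
}

def react(stimulus):
    """Return the stored response for a stimulus, or fall back to a safe default."""
    return REFLEXES.get(stimulus, "slow_and_stabilize")

print(react("slip_detected"))  # widen_stance_and_lower_center_of_mass
print(react("unknown_bump"))   # slow_and_stabilize
```

The fallback branch matters as much as the table: a stimulus the designers never anticipated should still trigger a conservative stabilizing response rather than nothing at all.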
Even worse, an able-bodied user wearing a powered lower-body exoskeleton will be able to feel a change in the terrain, a slip, or a loss of balance. The user will begin adjusting but the exoskeleton will not, making the user much more likely to fall. This is why more research into the exoskeleton-human interface is required. It is also why every currently successful exoskeleton for work and industry is either passive (no motors) or has no more than one set of motors on the legs (see references 6 and 7).
To put all this in perspective, imagine that to drive your car you could only drive in heavy rain or snow, select only one fuel-to-air ratio, and stay in only one gear. On top of that, if you happened to get into a wreck, you would have to manually push a button to activate the air bags. Clearly, you would drive more slowly, more cautiously, and less efficiently. And if you had the misfortune of getting into an accident, your air bags would likely be of no use to you (unless you’re a ninja). That’s exactly the state of exoskeletons today (in my opinion) – functional but not optimized. As any engineer knows, optimization is not required for a proof of concept. It only comes about when you’re ready to make the system ‘pretty.’ For exoskeletons to go mainstream, it is my opinion that optimization must occur; until then, they are ‘clunky.’ The technology is there, but the fine tuning is only just beginning.
Once you create closed-loop communication systems for all three interfaces, you can better achieve motion that seems natural. If the user can ‘feel’ the ground, they can react better to it and send more useful signals to the exoskeleton. Ultimately, not everything can be preprogrammed, so empowering the user is key. Keep in mind, though, that all of this assumes the physical structure of the exoskeleton has enough degrees of freedom and the proper shape to complement the dynamics of the human body without being restrictive. If your structure doesn’t conform, nothing else in this article matters as far as ‘natural human movement’ is concerned. But that’s a discussion for another day.
1. Article about creating an automatic response system in exoskeletons (the main subject of this post): http://gizmodo.com/scientists-have-created-an-exoskeleton-that-kicks-its-w-1758334296 (ETH Zurich and the National Center of Competence in Research Robotics)
2. Book: Wearable Robots: Biomechatronic Exoskeletons, edited by Jose L. Pons, chapter 4, pg 88. There is a reason why this book is number one on our list of Books on Exoskeletons and Wearable Robotics.
3. Definition of haptic feedback: http://www.mobileburn.com/definition.jsp?term=haptic+feedback
4. Another example of research developing this sort of reflex in robots: http://www.extremetech.com/extreme/212010-when-people-can-feel-robot-movement-they-can-teach-human-reflexes
5. Muscle memory: http://www.wisegeek.org/what-is-muscle-memory.htm
6. Mind Walker EU Project: https://mindwalker-project.eu (including balance control in a brain-machine interface)
7. Balance Project: http://www.balance-fp7.eu/news.php (learning the fundamentals by using exoskeletons and robots that purposely unbalance the user)
Marcus Pyles is a Systems Engineer with a great love for all wearable robotics, including exoskeletons and powered prosthetics. He is the author of:
79 Wearable Robotics Companies & University Programs Around the World: A Look at This New Industry and Its Main Players, from Exoskeletons to Bionic Limbs (Amazon.com), and
Cyborgs Among Us: SEO Search Terms To Keep An Eye On The Wearable Robotics Industry: Know What to Search for & How to Be Searchable Online (Amazon.com)
Marcus is a guest writer for the Exoskeleton Report and we thank him for his time researching and writing this article.