The AI in robotic prostheses predicts the type of terrain users will be stepping on, quantifies the uncertainties associated with that prediction, and then incorporates that uncertainty into its decision-making.
Researchers at North Carolina State University have integrated computer vision and artificial intelligence into off-the-shelf robotic prosthetics to improve walking on different types of terrain.
The team built a device that attaches to the prosthetic limb to track movement and recognize different terrains. The AI can distinguish six terrain types: downstairs, upstairs, brick, grass, tile, and concrete.
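As a rough illustration of what such a six-way terrain classifier could look like, here is a minimal sketch in Python with PyTorch. The MobileNetV2 backbone, the input size, and the code structure are assumptions for illustration only, not the researchers' actual architecture; the class names come from the list above.

```python
import torch
import torch.nn as nn
from torchvision import models

# Terrain classes named in the article; the ordering here is arbitrary.
TERRAIN_CLASSES = ["downstairs", "upstairs", "brick", "grass", "tile", "concrete"]

class TerrainClassifier(nn.Module):
    """Hypothetical six-way terrain classifier for a leg-mounted camera."""

    def __init__(self, num_classes: int = len(TERRAIN_CLASSES)):
        super().__init__()
        # A lightweight backbone is assumed, since the model would have to run
        # on an embedded device attached to the prosthesis.
        self.backbone = models.mobilenet_v2(weights=None)
        # Swap the final layer for a six-class head.
        self.backbone.classifier[1] = nn.Linear(
            self.backbone.classifier[1].in_features, num_classes
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: a batch of camera images, shape (B, 3, 224, 224)
        return self.backbone(frames)

model = TerrainClassifier()
logits = model(torch.randn(1, 3, 224, 224))  # one dummy camera frame
probs = torch.softmax(logits, dim=1)         # per-class probabilities
print(dict(zip(TERRAIN_CLASSES, probs[0].tolist())))
```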
“Lower-limb robotic prosthetics need to execute different behaviors based on the terrain users are walking on,” said Edgar Lobaton, associate professor of electrical and computer engineering at North Carolina State University.
“The framework we’ve created allows the AI in robotic prostheses to predict the type of terrain users will be stepping on, quantify the uncertainties associated with that prediction, and then incorporate that uncertainty into its decision-making.”
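The paper's specific uncertainty-quantification method is not spelled out here, but one common way to attach an uncertainty estimate to a classifier's prediction is the normalized entropy of its softmax output. The sketch below shows that simpler stand-in, reusing the hypothetical `TerrainClassifier` from the earlier example; it is not a reconstruction of the team's framework.

```python
import math
import torch

def predict_with_uncertainty(model: torch.nn.Module, frame: torch.Tensor):
    """Return (predicted class index, top probability, normalized entropy)."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frame), dim=1).squeeze(0)
    # Predictive entropy: 0 when the model is certain, log(num_classes) when
    # it is maximally unsure; normalized here to the range [0, 1].
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
    normalized = entropy / math.log(probs.numel())
    return int(probs.argmax()), float(probs.max()), normalized

# Example call with the hypothetical classifier above:
# terrain_idx, confidence, uncertainty = predict_with_uncertainty(
#     model, torch.randn(1, 3, 224, 224))
```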
For the tests, cameras were mounted both on the leg and on headgear worn by the user. Lobaton said that while the head-mounted camera lets the system see farther ahead, the AI can recognize terrain changes within a few feet, so it should be able to manage without the headgear.
“If the degree of uncertainty is too high, the AI isn’t forced to make a questionable decision – it could instead notify the user that it doesn’t have enough confidence in its prediction to act, or it could default to a ‘safe’ mode,” said Boxuan Zhong, lead author and Ph.D. graduate from NC State.
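The fallback behavior Zhong describes maps naturally onto a thresholded decision rule. The sketch below is an illustrative stand-in, not the actual controller logic; the threshold value and gait names are made up for the example.

```python
# Hypothetical tuning parameter: how much uncertainty the controller tolerates
# before it stops trusting the terrain prediction.
UNCERTAINTY_THRESHOLD = 0.4

def choose_gait(terrain: str, uncertainty: float) -> str:
    """Pick a terrain-specific gait, or fall back to a conservative default."""
    if uncertainty > UNCERTAINTY_THRESHOLD:
        # In a real device this could be a haptic or audio cue to the user.
        print("Warning: low confidence in terrain prediction")
        return "safe_mode"    # conservative, terrain-agnostic behavior
    return f"{terrain}_gait"  # terrain-specific prosthetic behavior

print(choose_gait("grass", uncertainty=0.12))     # -> grass_gait
print(choose_gait("upstairs", uncertainty=0.83))  # warns, then -> safe_mode
```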
The team aims to test the framework on a robotic device in the near future to determine whether the AI system offers a real benefit to people using prosthetic limbs.
“We are excited to incorporate the framework into the control system for working robotic prosthetics – that’s the next step,” said Helen Huang, co-author of the paper.