Leveraging Geometric Modeling-Based Computer Vision for Context Aware Control in a Hip Exosuit

Detailed bibliography
Published in: IEEE Transactions on Robotics, Vol. 41, pp. 3462-3479
Main authors: Tricomi, Enrica; Piccolo, Giuseppe; Russo, Federica; Zhang, Xiaohui; Missiroli, Francesco; Ferrari, Sandro; Gionfrida, Letizia; Ficuciello, Fanny; Xiloyannis, Michele; Masia, Lorenzo
Format: Journal Article
Language: English
Published: IEEE, 2025
ISSN: 1552-3098, 1941-0468
Description
Summary: Human beings adapt their motor patterns in response to their surroundings, drawing on sensory modalities such as vision. This context-informed adaptive motor behavior has spurred interest in integrating computer vision (CV) algorithms into robotic assistive technologies, marking a shift toward context-aware control. Such integration has rarely been achieved so far, however, with current methods relying mostly on data-driven approaches. In this study, we introduce a novel control framework for a soft hip exosuit that instead employs a physics-informed CV method, grounded in geometric modeling of the captured scene, to tune assistance during stair negotiation and level walking. This approach promises a viable solution that is more computationally efficient and does not depend on training examples. Evaluating the controller with six subjects on a path comprising level walking and stairs, we achieved an overall detection accuracy of $93.0 \pm 1.1\%$. CV-based assistance provided significantly greater metabolic benefits than non-vision-based assistance, with larger energy reductions relative to the unassisted condition during stair ascent ($-18.9 \pm 4.1\%$ versus $-5.2 \pm 4.1\%$) and descent ($-10.1 \pm 3.6\%$ versus $-4.7 \pm 4.8\%$). This result follows from the adaptive nature of the device: the context-aware controller enabled more effective walking support, with the assistive torque increasing significantly while ascending stairs ($+33.9 \pm 8.8\%$) and decreasing while descending stairs ($-17.4 \pm 6.0\%$) compared to a condition without vision-enabled assistance modulation. These results highlight the potential of the approach, promoting effective real-time embedded applications in assistive robotics.
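
The record does not include the paper's implementation, but the control idea described in the summary, a purely geometric terrain classifier driving terrain-dependent assistance gains, can be sketched as below. This is a minimal illustration assuming a gravity-aligned, forward-facing depth camera; the function names (classify_terrain, assistance_gain), the elevation-profile heuristic, and all numeric thresholds and gain factors are hypothetical, not the authors' method.

    import numpy as np

    # Hypothetical context labels; the paper's actual state machine is
    # not given in this record.
    LEVEL, STAIR_ASCENT, STAIR_DESCENT = "level", "stair_ascent", "stair_descent"

    def classify_terrain(points, rise_threshold=0.10):
        """Geometric terrain detection from a forward-facing point cloud.

        points: (N, 3) array in a gravity-aligned camera frame
        (x right, y down, z forward), in metres. Following the summary's
        idea, the classifier is purely geometric: it builds a coarse
        elevation profile of the path ahead instead of running a learned
        model.
        """
        # Bin points by forward distance and take the median height in
        # each bin, yielding the ground elevation profile ahead of the user.
        edges = np.linspace(0.3, 1.5, 7)          # forward bins, metres
        idx = np.digitize(points[:, 2], edges)
        profile = np.array([
            np.median(points[idx == i, 1]) if np.any(idx == i) else np.nan
            for i in range(1, len(edges))
        ])
        slope = np.nanmean(np.diff(profile))      # mean rise per bin
        # y points down, so ground rising ahead makes the profile decrease.
        if slope < -rise_threshold:
            return STAIR_ASCENT
        if slope > rise_threshold:
            return STAIR_DESCENT
        return LEVEL

    def assistance_gain(context, base_gain=1.0):
        # Scale the baseline hip assistance by terrain context. The
        # direction of modulation mirrors the summary (more torque when
        # ascending, less when descending); the exact factors here are
        # illustrative, not the paper's values.
        return {STAIR_ASCENT: 1.34,
                STAIR_DESCENT: 0.83,
                LEVEL: 1.0}[context] * base_gain

For example, assistance_gain(classify_terrain(cloud)) would return a larger gain when the elevation profile ahead rises like a staircase; in the actual system the context estimate would modulate a gait-phase-based torque profile rather than a single scalar gain.
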
DOI: 10.1109/TRO.2025.3567489