Human upper-limb movement is underpinned by complex muscle dynamics, which are central to understanding motor control, rehabilitation, and ergonomics. Laboratory-based motion capture combined with electromyography (EMG) remains the gold standard for quantifying muscle activity, but these approaches are expensive, constrained to controlled settings, and impractical for routine clinical or large-scale use. Recent progress in computer vision has enabled the extraction of 3D kinematics from consumer-grade video systems. The open-source platform OpenCap has shown that lower-limb and whole-body kinematics can be reconstructed from smartphone recordings, with utilities for estimating joint forces. However, direct inference of upper-limb muscle dynamics remains largely unexplored.
This PhD project will address this gap by developing a computational framework for estimating upper-limb muscle activity from RGB-D video data. The work will extend existing kinematic pipelines with physics-informed neural networks and musculoskeletal modelling to predict muscle-level forces during upper-limb tasks. Validation will be performed against gold-standard laboratory data, including EMG and inverse dynamics, across tasks such as reaching, grasping, lifting, and rehabilitation exercises.
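To make the idea of a physics-informed objective concrete, the sketch below shows one way a training loss could couple a data term (predicted vs. measured muscle activations) with a physics term enforcing that predicted muscle forces, mapped through joint moment arms, reproduce the joint torques obtained from inverse dynamics. All function names, array shapes, and the weighting scheme are illustrative assumptions, not part of the proposed framework.

```python
import numpy as np

def physics_informed_loss(muscle_forces, moment_arms, id_torques,
                          emg_pred, emg_meas, lambda_phys=1.0):
    """Hypothetical composite loss for a physics-informed network.

    Data term: mean squared error between predicted and measured
    muscle activations (EMG). Physics term: predicted muscle forces
    (n_muscles,) mapped through the moment-arm matrix (n_joints x
    n_muscles) must reproduce the inverse-dynamics joint torques
    (tau = R @ F).
    """
    data_loss = np.mean((emg_pred - emg_meas) ** 2)
    tau_pred = moment_arms @ muscle_forces          # (n_joints,)
    phys_loss = np.mean((tau_pred - id_torques) ** 2)
    return data_loss + lambda_phys * phys_loss
```

In practice such a term would be one component of the network's training objective, with the moment arms and reference torques supplied by the musculoskeletal model and laboratory inverse dynamics respectively; the relative weight `lambda_phys` would need tuning per task.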
The project will deliver an open-source, scalable approach for estimating upper-limb muscle activity that is substantially cheaper and easier to deploy than current lab-based methods while preserving their accuracy. This has the potential to transform rehabilitation monitoring (e.g., stroke recovery), enable remote assessment, support simulations that tailor treatment pathways to individual patients, and open up large-scale research into everyday motor behaviour. By contributing back to the OpenCap ecosystem, the project will ensure reproducibility, foster community engagement, and accelerate the translation of biomechanics and machine learning research into practice.

