THE ARCHITECTURE
OF HUMAN MOTION
We pair Google MediaPipe pose estimation with sequential deep networks and ensemble classifiers so that every frame becomes training signal: real-time pose correction, workout classification, and muscle-group activation, all without sacrificing latency.
Models at a glance
Workout DNN
A feed-forward deep neural network ingests sequential pose features from MediaPipe streams and classifies each frame into one of 22 workout classes in real time at 98% macro F1.
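As a minimal sketch of what such a feed-forward head looks like: the layer widths, the 66-dimensional input (33 MediaPipe landmarks × 2 coordinates), and the random weights below are all illustrative assumptions, not TrueForm's published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Randomly initialised weights stand in for trained parameters.
W1, b1 = rng.normal(size=(66, 128)) * 0.05, np.zeros(128)
W2, b2 = rng.normal(size=(128, 64)) * 0.05, np.zeros(64)
W3, b3 = rng.normal(size=(64, 22)) * 0.05, np.zeros(22)

def classify(pose_features):
    """Map a (batch, 66) pose-feature array to 22 workout-class probabilities."""
    h = relu(pose_features @ W1 + b1)
    h = relu(h @ W2 + b2)
    return softmax(h @ W3 + b3)

probs = classify(rng.normal(size=(1, 66)))
print(probs.shape)  # (1, 22)
```

The softmax output is a probability distribution over the 22 classes, so the live HUD can threshold on confidence before switching workout labels.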
Muscle ensemble
A Random Forest classifier maps the same pose-derived features to seven muscle-group activation labels, holding 90% F1 while staying lightweight enough for live inference.
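A hedged sketch of that mapping with scikit-learn: the seven muscle-group names, the feature dimensionality, the hyperparameters, and the synthetic training data are all placeholders for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Seven hypothetical muscle-group labels; the real label set is not specified.
MUSCLE_GROUPS = ["chest", "back", "shoulders", "biceps",
                 "triceps", "legs", "core"]

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 66))                    # stand-in pose-derived features
y = rng.integers(0, len(MUSCLE_GROUPS), size=500)  # stand-in activation labels

# A shallow, modest-sized forest keeps per-frame inference cheap.
clf = RandomForestClassifier(n_estimators=100, max_depth=8, random_state=0)
clf.fit(X, y)

pred = clf.predict(X[:1])
print(MUSCLE_GROUPS[pred[0]])
```

Tree ensembles like this need no GPU and predict in microseconds per frame, which is why they pair well with a heavier neural workout classifier in the same live loop.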
MediaPipe vision
MediaPipe Holistic / Pose graphs deliver stable landmark estimates on-device; those landmark tensors feed both training and live serving of the low-latency regression and classification heads that power the HUD.
Soft Actor-Critic
Pose correction
A Soft Actor-Critic sequential policy outputs continuous adjustments for twelve joint positions, achieving roughly 5% mean absolute error against expert-labelled kinematics while keeping step times compatible with live coaching.
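A minimal sketch of the actor side of such a policy: a Gaussian head squashed through tanh so joint adjustments stay bounded, which is the standard SAC construction. The 66-dim state, the twelve-delta action, the ±0.1 action scale, and the random weights are assumptions, not the deployed model.

```python
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, ACTION_DIM, ACTION_SCALE = 66, 12, 0.1

# Linear layers stand in for the trained actor network.
W_mu = rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.05
W_logstd = rng.normal(size=(STATE_DIM, ACTION_DIM)) * 0.05

def act(state, deterministic=False):
    """Return bounded joint-position deltas for one pose-feature state."""
    mu = state @ W_mu
    log_std = np.clip(state @ W_logstd, -5.0, 2.0)  # keep the std well-behaved
    z = mu if deterministic else mu + np.exp(log_std) * rng.normal(size=ACTION_DIM)
    return ACTION_SCALE * np.tanh(z)  # squash into [-ACTION_SCALE, ACTION_SCALE]

deltas = act(rng.normal(size=STATE_DIM))
print(deltas.shape)  # (12,)
```

The tanh squash is what keeps every suggested correction inside a fixed safety envelope regardless of how far the raw policy output drifts.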
Sequential control
SAC reads the pose sequence and proposes joint deltas that track your programme’s safety envelope.
Classifier fusion
DNN workout labels and Random Forest muscle activations gate which corrections the policy prioritises.
ML CI/CD
Modular pipelines continuously integrate new checkpoints so research improvements ship without freezing the product.
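The fusion step above can be sketched as a simple gate: the DNN's workout label and the forest's active muscle groups jointly decide which joints the SAC policy may correct. The joint names and mappings below are hypothetical placeholders, not the product's real tables.

```python
# Which joints are correctable per workout (illustrative).
CORRECTABLE_JOINTS = {
    "squat":   {"knees", "hips", "lower_back"},
    "push_up": {"elbows", "shoulders", "core"},
}
# Which joints each muscle group emphasises (illustrative).
MUSCLE_TO_JOINTS = {
    "legs": {"knees", "hips"},
    "core": {"lower_back", "core"},
}

def gate_corrections(workout_label, active_muscles):
    """Keep only joints relevant to both the workout and the active muscles."""
    allowed = CORRECTABLE_JOINTS.get(workout_label, set())
    emphasised = set().union(*(MUSCLE_TO_JOINTS.get(m, set()) for m in active_muscles))
    # With no muscle signal, fall back to every joint the workout allows.
    return sorted(allowed & emphasised) if emphasised else sorted(allowed)

print(gate_corrections("squat", ["legs"]))  # ['hips', 'knees']
```

Gating this way means the policy never proposes adjustments for joints irrelevant to the detected movement, which keeps the live coaching output focused and fast.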
Precision Through Data
A breakdown of the headline metrics from a live TrueForm inference session.