pose
Peer dependency requirement
To use this plugin, you must install MediaPipe as a peer dependency:
npm install @mediapipe/tasks-vision
The pose plugin uses MediaPipe's Pose Landmarker and exposes landmark and mask data in ShaderPad-friendly GLSL form.
import ShaderPad from 'shaderpad'
import pose from 'shaderpad/plugins/pose'

const shader = new ShaderPad(fragmentShaderSrc, {
  plugins: [pose({ textureName: 'u_webcam', options: { maxPoses: 2 } })],
})
The plugin reads from the ShaderPad texture named by `textureName`. Initialize and update a texture with that exact name, or the detector will have no live source to read from.
Options
| Option | Meaning |
|---|---|
| `modelPath?: string` | custom MediaPipe model path |
| `maxPoses?: number` | maximum number of poses to detect |
| `minPoseDetectionConfidence?: number` | detection confidence threshold |
| `minPosePresenceConfidence?: number` | presence confidence threshold |
| `minTrackingConfidence?: number` | tracking confidence threshold |
| `history?: number` | history depth for landmarks and the pose mask |
Events
Subscribe with shader.on(name, callback).
| Event | Callback | Meaning |
|---|---|---|
| `pose:ready` | `() => void` | model assets are loaded and the plugin is ready |
| `pose:result` | `(result: PoseLandmarkerResult) => void` | latest MediaPipe result for the current analyzed frame |
shader.on('pose:result', result => {
  console.log(result.landmarks.length)
})
Uniforms
| Uniform | Meaning |
|---|---|
| `u_maxPoses` | configured maximum number of poses |
| `u_nPoses` | current detected pose count for the latest frame |
| `u_poseLandmarksTex` | raw landmark texture used internally by `nPosesAt()` and `poseLandmark()` |
| `u_poseMask` | body mask texture used internally by `poseAt()` and `inPose()` |
Most shaders should use the helper functions below instead of sampling u_poseLandmarksTex or u_poseMask directly.
Helper Functions
If history is enabled, every helper below also has an overload with a trailing int framesAgo argument. 0 means the current analyzed frame, 1 means the previous stored frame, and so on.
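The history overloads make simple temporal effects possible. A minimal sketch of a motion afterglow, assuming `history` was set to at least 5 in the plugin options and using the `v_uv`/`color` conventions from the examples on this page:

```glsl
// Compare the body mask now vs. four analyzed frames ago.
float now = inPose(v_uv);
float before = inPose(v_uv, 4);

// Pixels the body recently vacated get a brief cyan afterglow.
color.rgb += vec3(0.0, 0.6, 0.8) * max(before - now, 0.0);
```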
nPosesAt
int nPosesAt()
int nPosesAt(int framesAgo)
Returns the number of poses stored for the current or historical frame.
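For example, a sketch that brightens the output as people enter the frame, assuming the `color` convention used in the other examples on this page:

```glsl
// 0 poses -> dark, more poses -> brighter, clamped at 1.0.
float presence = min(float(nPosesAt()) * 0.5, 1.0);
color.rgb *= presence;
```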
poseLandmark
vec4 poseLandmark(int poseIndex, int landmarkIndex)
vec4 poseLandmark(int poseIndex, int landmarkIndex, int framesAgo)
Returns vec4(x, y, z, visibility).
- `x`, `y`: normalized landmark position in ShaderPad UV space
- `z`: MediaPipe landmark depth value
- `w`: landmark visibility / confidence
vec2 leftHip = vec2(poseLandmark(0, POSE_LANDMARK_LEFT_HIP));
vec2 rightHip = vec2(poseLandmark(0, POSE_LANDMARK_RIGHT_HIP));
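Building on the hip landmarks above, a minimal sketch that marks the hip midpoint of pose 0, gated on the `w` visibility component (assuming the `v_uv`/`color` conventions used elsewhere on this page):

```glsl
vec2 leftHip = vec2(poseLandmark(0, POSE_LANDMARK_LEFT_HIP));
vec2 rightHip = vec2(poseLandmark(0, POSE_LANDMARK_RIGHT_HIP));
vec2 hipCenter = mix(leftHip, rightHip, 0.5);

// Only draw when both landmarks are reasonably visible.
float vis = min(poseLandmark(0, POSE_LANDMARK_LEFT_HIP).w,
                poseLandmark(0, POSE_LANDMARK_RIGHT_HIP).w);
float spot = smoothstep(0.03, 0.0, distance(v_uv, hipCenter));
color.rgb = mix(color.rgb, vec3(1.0), spot * vis);
```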
poseAt
vec2 poseAt(vec2 pos)
vec2 poseAt(vec2 pos, int framesAgo)
Returns vec2(confidence, poseIndex).
- `x`: segmentation confidence for the sampled pixel
- `y`: the matching `poseIndex`, or `-1.0` when no pose matched
This is the fastest way to ask “which pose owns this pixel?” without manually decoding u_poseMask.
vec2 hit = poseAt(v_uv);
if (hit.y >= 0.0) {
  float confidence = hit.x;
  int poseIndex = int(hit.y);
  vec2 torso = vec2(poseLandmark(poseIndex, POSE_LANDMARK_TORSO_CENTER));
  color.rgb = mix(color.rgb, vec3(torso, confidence), confidence);
}
inPose
float inPose(vec2 pos)
float inPose(vec2 pos, int framesAgo)
Returns the confidence component from poseAt(). This is 0.0 when no pose matched, otherwise the pose-mask confidence for that pixel.
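A common use is dimming everything outside the detected bodies. A minimal sketch, assuming the `v_uv`/`color` conventions from the `poseAt()` example:

```glsl
float mask = inPose(v_uv);
// Keep body pixels at full brightness, dim the background.
color.rgb *= mix(0.25, 1.0, mask);
```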
Landmark Layout
The plugin exposes MediaPipe’s standard 33 pose landmarks plus six derived landmarks:
- POSE_LANDMARK_BODY_CENTER
- POSE_LANDMARK_LEFT_HAND_CENTER
- POSE_LANDMARK_RIGHT_HAND_CENTER
- POSE_LANDMARK_LEFT_FOOT_CENTER
- POSE_LANDMARK_RIGHT_FOOT_CENTER
- POSE_LANDMARK_TORSO_CENTER
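The derived hand centers are convenient for cursor-style interactions. A sketch for pose 0, assuming the usual `v_uv`/`color` conventions and the `w` visibility component described under `poseLandmark`:

```glsl
vec4 lh = poseLandmark(0, POSE_LANDMARK_LEFT_HAND_CENTER);
vec4 rh = poseLandmark(0, POSE_LANDMARK_RIGHT_HAND_CENTER);

// Skip landmarks MediaPipe is unsure about.
if (lh.w > 0.5) color.rgb += vec3(0.4) * smoothstep(0.04, 0.0, distance(v_uv, lh.xy));
if (rh.w > 0.5) color.rgb += vec3(0.4) * smoothstep(0.04, 0.0, distance(v_uv, rh.xy));
```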
Commonly useful named constants include:
- POSE_LANDMARK_LEFT_EYE
- POSE_LANDMARK_RIGHT_EYE
- POSE_LANDMARK_LEFT_SHOULDER
- POSE_LANDMARK_RIGHT_SHOULDER
- POSE_LANDMARK_LEFT_ELBOW
- POSE_LANDMARK_RIGHT_ELBOW
- POSE_LANDMARK_LEFT_HIP
- POSE_LANDMARK_RIGHT_HIP
- POSE_LANDMARK_LEFT_KNEE
- POSE_LANDMARK_RIGHT_KNEE
For the full MediaPipe landmark index map, use the upstream Pose Landmarker model reference.
This page covers the ShaderPad-facing API surface. For MediaPipe result object structure and model changes, use the upstream MediaPipe docs.