How EchoDepth Works

44 facial Action Units. FACS standard. VAD model. Six steps from input to decision intelligence.

01

Input

EchoDepth receives video input — live feed, recorded session, or uploaded content — from the deployment environment.

02

Action Unit Detection

EchoDepth detects and classifies 44 facial Action Units in real time using FACS-standard computer vision models.
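To make the detection output concrete, here is a minimal illustrative sketch of how a single frame of Action Unit results might be represented. The class and field names are hypothetical, not EchoDepth's API; the FACS facts are standard (AUs are numbered, e.g. AU12 is the lip corner puller, and intensity is scored on a 0-5 scale corresponding to FACS grades A-E).

```python
from dataclasses import dataclass

# Hypothetical container for one frame of Action Unit output.
# FACS numbers each AU (e.g. AU6 = cheek raiser, AU12 = lip corner
# puller) and scores intensity on a 0-5 scale (FACS grades A-E).
@dataclass
class AUFrame:
    timestamp_ms: int
    intensities: dict[int, float]  # AU number -> intensity, 0.0-5.0

    def active(self, threshold: float = 0.5) -> list[int]:
        """AU numbers whose intensity exceeds the detection threshold."""
        return sorted(au for au, v in self.intensities.items() if v > threshold)

frame = AUFrame(timestamp_ms=40, intensities={6: 2.1, 12: 3.4, 4: 0.2})
print(frame.active())  # [6, 12] -- AU6 + AU12 together is the Duchenne-smile pattern
```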

03

Cultural Calibration

AU patterns are calibrated against the relevant cultural cohort from EchoDepth's 14-cohort reference set across 6 countries.
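One plausible way to implement this kind of calibration is to normalize each AU intensity against the cohort's baseline statistics. This is an illustrative sketch only: the z-score method and the cohort numbers below are invented for the example, not EchoDepth's actual reference data.

```python
# Illustrative only: calibrate raw AU intensities against a cultural
# cohort baseline by z-scoring each AU against the cohort's (mean, std).
def calibrate(raw: dict[int, float],
              baseline: dict[int, tuple[float, float]]) -> dict[int, float]:
    """Return cohort-relative intensities: (value - mean) / std per AU."""
    return {
        au: (value - baseline[au][0]) / baseline[au][1]
        for au, value in raw.items()
        if au in baseline
    }

# Hypothetical cohort baseline: AU number -> (mean intensity, std deviation)
cohort = {6: (1.5, 0.5), 12: (2.0, 1.0)}
print(calibrate({6: 2.0, 12: 2.0}, cohort))  # {6: 1.0, 12: 0.0}
```

The same raw intensity can thus read as unremarkable in one cohort and strongly elevated in another, which is the point of calibrating before mapping to emotional state.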

04

VAD Mapping

Calibrated AU patterns are mapped to Valence-Arousal-Dominance coordinates, producing a three-dimensional emotional state output.
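A common way to sketch such a mapping is a weighted linear combination of AU intensities per VAD axis. The weights below are invented for illustration; EchoDepth's actual mapping is not public.

```python
# Invented per-AU weights: AU number -> contribution to
# (valence, arousal, dominance) per unit of calibrated intensity.
WEIGHTS = {
    6:  (0.4, 0.1, 0.0),   # cheek raiser: positive valence
    12: (0.5, 0.2, 0.1),   # lip corner puller: positive valence
    4:  (-0.3, 0.2, 0.1),  # brow lowerer: negative valence
}

def to_vad(calibrated: dict[int, float]) -> tuple[float, float, float]:
    """Linear AU -> VAD projection (illustrative, not the product's model)."""
    v = sum(calibrated.get(au, 0.0) * w[0] for au, w in WEIGHTS.items())
    a = sum(calibrated.get(au, 0.0) * w[1] for au, w in WEIGHTS.items())
    d = sum(calibrated.get(au, 0.0) * w[2] for au, w in WEIGHTS.items())
    return (round(v, 2), round(a, 2), round(d, 2))

print(to_vad({6: 1.0, 12: 1.0}))  # (0.9, 0.3, 0.1)
```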

05

Signal Generation

Emotional state data is processed into decision-relevant signals: Trust Score, Credibility Rating, Resistance Indicator, Confidence Score.
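As a hedged sketch of how VAD coordinates might condense into the four named signals, the formulas below are invented thresholds for illustration only, not the product's actual logic; they assume VAD coordinates in the range -1 to 1 and signals in 0 to 1.

```python
# Illustrative mapping from a VAD point to the four named signals.
# The formulas are assumptions made for this example, not EchoDepth's.
def clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

def signals(valence: float, arousal: float, dominance: float) -> dict[str, float]:
    return {
        "trust_score":          clamp((valence + 1) / 2),            # high valence -> trust
        "credibility_rating":   clamp((valence + dominance + 2) / 4),
        "resistance_indicator": clamp((dominance - valence + 1) / 2),
        "confidence_score":     clamp((dominance + 1) / 2),
    }

print(signals(valence=0.5, arousal=0.1, dominance=0.2))
```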

06

Output Delivery

Outputs are delivered via dashboard, API, or structured report — formatted for the specific deployment context and governance requirements.
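For the API delivery path, an analysis window might serialize to JSON along these lines. Every field name below is hypothetical, chosen to mirror the signals named in step 05; this is not EchoDepth's published schema.

```python
import json

# Hypothetical payload shape for one analysis window (illustrative
# field names, not a documented EchoDepth schema).
payload = {
    "session_id": "sess-001",
    "window_ms": [0, 5000],
    "vad": {"valence": 0.42, "arousal": 0.31, "dominance": 0.18},
    "signals": {
        "trust_score": 0.71,
        "credibility_rating": 0.64,
        "resistance_indicator": 0.22,
        "confidence_score": 0.59,
    },
}
print(json.dumps(payload, indent=2))
```

A dashboard or report generator would consume the same structure, which is why a single canonical payload per window simplifies multi-format delivery.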

EchoDepth real-time analysis in action
Platform Outputs →
Book a Demo →