FACS Methodology

The Facial Action Coding System (FACS): The Science Behind EchoDepth

By Jonathan Prescott, Founder & CEO, Cavefish · 28 March 2026 · 10 min read

The Facial Action Coding System (FACS) is the gold-standard scientific framework for measuring human facial movements. EchoDepth is built on it — analysing 44 Action Units to detect the compound signals that predict behaviour. This is a complete, plain-language explanation of what FACS is, where it came from, and why it matters.

What Is FACS?

The Facial Action Coding System was developed by psychologists Paul Ekman and Wallace Friesen in the 1970s and has been refined through decades of cross-cultural research. It is a comprehensive anatomical taxonomy of facial muscle movements — not expressions, but the discrete muscle contractions that produce them.

Each Action Unit (AU) corresponds to one or more specific facial muscles. AU1, for example, is the inner brow raise — produced by the frontalis muscle. AU6 is the cheek raiser — produced by the orbicularis oculi. These individual movements combine into compound expressions. The combination of AU6 and AU12 (lip corner pull) produces what Ekman termed the “Duchenne smile” — a genuine smile that involves the eyes as well as the mouth and is reliably distinct from a performed one.
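As a rough sketch of how such a compound signal can be read off AU measurements, the snippet below classifies a smile from per-AU intensity scores. The 0–5 intensity scale, the threshold value and the input format are illustrative assumptions, not EchoDepth's actual pipeline:

```python
# Hypothetical sketch: classifying a smile from Action Unit intensities.
# Intensities are assumed on a 0-5 scale (FACS grades A-E mapped to numbers);
# the 1.0 activation threshold is an illustrative choice.

def classify_smile(au_intensities: dict[int, float],
                   threshold: float = 1.0) -> str:
    """Return 'duchenne', 'social', or 'none' for one frame of AU scores."""
    au6 = au_intensities.get(6, 0.0)    # cheek raiser (orbicularis oculi)
    au12 = au_intensities.get(12, 0.0)  # lip corner puller (zygomatic major)
    if au12 < threshold:
        return "none"                   # no smile at all
    # AU12 with AU6 = Duchenne (genuine); AU12 alone = performed/social
    return "duchenne" if au6 >= threshold else "social"

print(classify_smile({6: 2.1, 12: 3.0}))  # -> duchenne
print(classify_smile({12: 2.5}))          # -> social
```

The point of the sketch is the structural one made above: the classification hinges on the combination of AUs, not on any single movement.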

The 44 Action Units EchoDepth Measures

EchoDepth implements all 44 Action Units across four anatomical regions:

AU 1–7 — Upper Face: brow raises, lid tighteners, nose wrinklers — signals of attention, concern, surprise and disgust

AU 9–17 — Lower Face: lip corner pulls, chin raises, lip tighteners — signals of happiness, contempt, sadness and uncertainty

AU 18–26 — Mouth Region: lip stretches, jaw drops, mouth opens — signals of fear, shock, engagement and speech patterns

AU 27–46 — Head & Eye: head tilts, blink rates, eye deflections — signals of interest, evasion, credibility and dominance

“The same expression can be produced by different AU combinations. Measuring AUs rather than expressions gives EchoDepth access to the signal beneath the signal.”

Why AUs Matter More Than Expressions

A smile is not a smile. The AU6+AU12 combination (Duchenne smile) reliably predicts genuine positive affect. AU12 alone — the zygomatic major pulling the lip corners up without orbital involvement — does not. Research published through the National Academies of Sciences on deception detection has consistently found that compound AU patterns are more reliable indicators of genuine emotional state than broad expression categories.

This distinction matters enormously in enterprise contexts. An executive presenting quarterly results may display confidence verbally while AU patterns around the eyes and mouth signal stress and suppressed anxiety. A buyer in a sales demo may nod agreement while AU patterns signal confusion and withdrawal. These signals are invisible to the unaided observer — and systematically missed. EchoDepth makes them visible.

Cultural Calibration — Why It Is Not Optional

Ekman's original research proposed six universal basic emotions with culturally invariant facial expressions. Subsequent research has refined this — demonstrating that while certain AU patterns have cross-cultural validity, the frequency, intensity and context of emotional expression vary significantly by culture.

An emotional AI system trained primarily on Western faces applied without calibration to East Asian, South Asian or African subjects will systematically misclassify emotional states. This is not a theoretical risk — it is a documented failure mode in uncalibrated systems. EchoDepth calibrates across 14 cultural cohorts in 6 countries precisely to avoid this failure.
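One common way to frame calibration of this kind is baseline normalisation: an AU reading is interpreted relative to the statistics of the subject's cohort rather than a single global norm. The sketch below shows that idea in its simplest form — the cohort figures are invented for illustration and nothing here describes EchoDepth's internal method:

```python
# Minimal sketch of per-cohort baseline calibration: z-score an AU
# intensity against the mean and standard deviation observed for the
# subject's cultural cohort, so the same raw reading can carry a
# different interpretation under a different baseline.

def calibrate(raw: float, cohort_mean: float, cohort_std: float) -> float:
    """Z-score a raw AU intensity against its cohort baseline."""
    if cohort_std <= 0:
        raise ValueError("cohort_std must be positive")
    return (raw - cohort_mean) / cohort_std

# The same raw intensity of 2.0 reads as strongly elevated against one
# baseline and roughly average against another (figures are invented):
print(round(calibrate(2.0, cohort_mean=1.0, cohort_std=0.5), 2))  # -> 2.0
print(round(calibrate(2.0, cohort_mean=2.2, cohort_std=0.5), 2))  # -> -0.4
```

A system that skips this step effectively treats one population's baseline as everyone's — which is exactly the failure mode described above.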

FACS in Enterprise — What It Enables

The practical implication of FACS-based analysis in enterprise contexts is a set of signals that have been unavailable until now. In financial services, EchoDepth detects credibility gaps in earnings presentations before they reach the market. In sales, it flags buyer disengagement within minutes of a demo starting. In defence, it provides operators with a quantified behavioural risk signal independent of subjective assessment.

None of this is possible with simpler emotional classification systems. It requires the granularity of full FACS implementation — which is why EchoDepth measures all 44 Action Units.

Frequently Asked Questions

What is the Facial Action Coding System (FACS)?

The Facial Action Coding System (FACS) is a comprehensive scientific framework for measuring human facial movements. Developed by psychologists Paul Ekman and Wallace Friesen, FACS codes the face into Action Units — specific muscle movements that combine to produce observable expressions. It is the gold standard for facial emotional measurement and the scientific foundation of EchoDepth's 44 Action Unit analysis.

How many Action Units are in FACS?

The full FACS system contains 44 Action Units covering the upper face, lower face and head/eye movements. EchoDepth implements all 44 Action Units, making it one of the most comprehensive FACS implementations in enterprise emotional AI.

What is the difference between FACS Action Units and facial expressions?

Facial expressions (smiling, frowning, grimacing) are the gross visible output of underlying muscle movements. Action Units are those underlying movements themselves. Because the same expression can be produced by different AU combinations — and because AU combinations can reveal emotional states that do not produce obvious expressions — measuring AUs is significantly more informative than classifying expressions.

Is FACS scientifically validated?

Yes. FACS has been validated across decades of psychological research and is widely used in clinical psychology, security research and affective computing. The scientific literature on FACS reliability and validity is extensive — including work published through the National Academies of Sciences and the Paul Ekman Group.

Jonathan Prescott
Founder & CEO, Cavefish Ltd. MBA, Bayes Business School. Builder of EchoDepth.
