Measurement System Analysis (MSA)

What Engineers and Quality Teams Actually Need to Know

1) What MSA Is (and Why It Exists)

Measurement System Analysis (MSA) evaluates whether your measurement system is good enough to make decisions. If measurements are unreliable, every downstream activity—SPC, capability studies, control plans, acceptance decisions—is compromised.

MSA answers one core question:
Can we trust the data used to control and judge the process?

If the answer is no, controlling the process is an illusion.


2) What Counts as a “Measurement System”

A measurement system is more than a gage.

It includes:

  • the gage or sensor

  • fixtures and part positioning

  • the operator

  • the measurement method

  • the environment (temperature, vibration, cleanliness)

  • calibration and maintenance

  • software and algorithms (for automated systems)

Ignoring any of these creates blind spots.


3) Where MSA Fits in the Quality System

MSA underpins:

  • Control Plans (every listed measurement must be valid)

  • SPC and control charts

  • Process capability studies (Cpk/Ppk)

  • PFMEA detection ratings

  • PPAP and launch readiness

A rule that never changes:
No MSA → no credible SPC or capability numbers.


4) Types of MSA (What to Use and When)

Variable MSA (Continuous Data)

Used for dimensional or numeric measurements.

Common studies:

  • Gage R&R (Repeatability & Reproducibility)

  • Bias

  • Linearity

  • Stability

Typical examples:

  • calipers, micrometers, CMMs, torque tools, pressure sensors


Attribute MSA (Pass/Fail or Visual)

Used when results are categorical.

Common studies:

  • Attribute agreement analysis

  • False accept / false reject analysis

Typical examples:

  • visual inspection

  • go/no-go gages

  • cosmetic checks

Attribute systems are inherently riskier and must be treated carefully.


Special Measurement Systems

  • Automated vision systems

  • In-line sensors

  • Software-based evaluations

These still require MSA—often more rigorous due to complexity.


5) Gage R&R — The Core Study

What Gage R&R Measures

  • Repeatability: variation when the same operator measures the same part multiple times with the same gage (equipment variation)

  • Reproducibility: variation between different operators measuring the same parts (appraiser variation)

  • Part-to-part variation: actual product variation

The goal is to ensure the measurement variation is small compared to part variation and tolerance.
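As a concrete illustration, the classic "average and range" estimate of these components can be sketched in a few lines. This is a simplified sketch, not the full AIAG worksheet: the constants are 1/d2* values from standard tables and depend on study size (the values shown assume 3 trials and 3 operators; k3 depends on the number of parts and is passed in).

```python
import math

# 1/d2* constants from standard MSA tables -- they depend on study size.
K1 = 0.5908   # repeatability factor for 3 trials per part
K2 = 0.5231   # reproducibility factor for 3 operators

def gage_rr(data, k3):
    """data[operator][part] = list of repeated trials; k3 = part-count factor."""
    n_ops = len(data)
    n_parts = len(data[0])
    n_trials = len(data[0][0])

    # Repeatability (EV): average within-cell range, scaled by K1
    ranges = [max(cell) - min(cell) for op in data for cell in op]
    ev = (sum(ranges) / len(ranges)) * K1

    # Reproducibility (AV): spread of operator averages, minus the
    # repeatability already buried inside those averages
    op_means = [sum(sum(cell) for cell in op) / (n_parts * n_trials)
                for op in data]
    x_diff = max(op_means) - min(op_means)
    av_sq = (x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = math.sqrt(max(av_sq, 0.0))   # a negative estimate is read as zero

    # Part-to-part variation (PV): spread of part averages, scaled by k3
    part_means = [sum(data[o][p][t]
                      for o in range(n_ops) for t in range(n_trials))
                  / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * k3

    grr = math.sqrt(ev ** 2 + av ** 2)   # total measurement-system spread
    tv = math.sqrt(grr ** 2 + pv ** 2)   # total study variation
    return {"EV": ev, "AV": av, "GRR": grr, "PV": pv, "TV": tv,
            "pct_grr": 100.0 * grr / tv if tv else float("nan")}
```

The key design point: GRR combines repeatability and reproducibility in quadrature, and the goal is for GRR to be small relative to TV.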


Key Metrics Explained Simply

%GRR (or %Study Variation)
How much of total variation comes from the measurement system.

%Tolerance
How much of the tolerance is consumed by measurement error.

Number of Distinct Categories (NDC)
How many meaningful “bins” of part variation the system can reliably separate; a value of 5 or more is generally expected.
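Given standard-deviation estimates for the measurement system and for part-to-part variation, all three metrics fall out directly. A minimal sketch, with illustrative inputs; the 6-sigma width for %Tolerance follows current common practice (older studies used 5.15):

```python
import math

def msa_metrics(sigma_grr, sigma_part, tolerance):
    """Inputs are standard deviations, except tolerance (a width)."""
    sigma_total = math.sqrt(sigma_grr ** 2 + sigma_part ** 2)
    pct_study_var = 100.0 * sigma_grr / sigma_total        # %GRR vs total variation
    pct_tolerance = 100.0 * (6.0 * sigma_grr) / tolerance  # tolerance consumed
    ndc = int(1.41 * sigma_part / sigma_grr)               # distinct categories
    return pct_study_var, pct_tolerance, ndc
```

Note that %Study Variation and %Tolerance can disagree: a system can look fine against a wide tolerance yet consume most of the observed process variation, or vice versa.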


Typical Acceptance Guidelines (Practical, Not Dogmatic)

  • ≤10%: generally acceptable

  • 10–30%: conditionally acceptable (risk-based decision)

  • >30%: not acceptable

These are guidelines, not laws. High-risk characteristics should be held to stricter standards.


6) Bias, Linearity, and Stability (Often Ignored, Often Critical)

Bias

Difference between measured value and true value.

  • Caused by calibration errors, method flaws, worn gages

Linearity

Change in bias across the measurement range.

  • A gage may be accurate at one end of the tolerance and wrong at the other

Stability

Change in measurement performance over time.

  • Tool wear, environmental changes, drift

If these are ignored, Gage R&R results can be misleading.
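Bias and linearity can be estimated together from reference standards measured across the range: bias is the mean deviation at each reference value, and linearity is the trend of that bias. A sketch with hypothetical reference values and readings:

```python
def bias_linearity(readings):
    """readings: {reference_value: [measurements at that reference]}.
    Returns per-reference biases and the least-squares slope/intercept
    of bias vs reference value (the linearity trend)."""
    refs, biases = [], []
    for ref, meas in sorted(readings.items()):
        refs.append(ref)
        biases.append(sum(meas) / len(meas) - ref)  # bias = mean - reference
    n = len(refs)
    mx = sum(refs) / n
    my = sum(biases) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(refs, biases))
             / sum((x - mx) ** 2 for x in refs))
    intercept = my - slope * mx
    return biases, slope, intercept
```

A nonzero slope is exactly the linearity problem described above: the gage reads correctly at one end of the range and drifts off at the other.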


7) Attribute MSA — The Hard Truth

Attribute systems are subjective by nature.

Common problems:

  • inspectors disagree with each other

  • inspectors disagree with themselves over time

  • standards are vague or poorly defined

Key indicators:

  • % agreement

  • false accept rate (shipping bad parts)

  • false reject rate (scrap/rework)

If attribute inspection is used for critical characteristics, it is a red flag.
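These indicators are simple to compute once each inspection decision is paired with a reference (“true”) disposition. A sketch with hypothetical pass/fail data:

```python
def attribute_scores(decisions, reference):
    """decisions/reference: equal-length lists of 'pass'/'fail' strings."""
    pairs = list(zip(decisions, reference))
    agreement = sum(d == r for d, r in pairs) / len(pairs)
    bad = [d for d, r in pairs if r == "fail"]   # truly nonconforming parts
    good = [d for d, r in pairs if r == "pass"]  # truly conforming parts
    false_accept = bad.count("pass") / len(bad) if bad else 0.0
    false_reject = good.count("fail") / len(good) if good else 0.0
    return agreement, false_accept, false_reject
```

The two error rates carry different risks: false accepts ship defects to the customer, while false rejects inflate scrap and rework cost.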


8) MSA and Control Plans (Direct Dependency)

Every measurement listed in the Control Plan must:

  • have a defined method

  • have passed MSA appropriate to its risk

  • be capable of detecting nonconformance reliably

If a Control Plan lists a measurement that has not passed MSA, the control is not valid.


9) MSA and PFMEA (Detection Rating Logic)

PFMEA detection ratings must reflect actual detection capability.

  • Strong detection = automated, validated measurement with proven MSA

  • Weak detection = manual inspection with poor repeatability

MSA results should directly influence detection scores and improvement actions.


10) Sample Size and Study Design (Where People Go Wrong)

Common mistakes

  • using too few parts

  • parts not covering full tolerance range

  • reusing the same parts without randomization

  • operators not following the real production method

Good practice

  • select parts that span expected variation

  • randomize measurement order

  • use real operators and fixtures

  • replicate real production conditions

Bad study design produces good-looking numbers that lie.
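Randomizing the run order is easy to automate. A sketch (the counts are illustrative, and real studies often randomize separately within each operator or trial rather than across the whole study):

```python
import random

def run_order(n_parts, n_operators, n_trials, seed=None):
    """Return every (operator, part, trial) combination in random order."""
    runs = [(op, part, trial)
            for op in range(n_operators)
            for part in range(n_parts)
            for trial in range(n_trials)]
    random.Random(seed).shuffle(runs)  # seeded for a reproducible study plan
    return runs
```

Presenting parts in this shuffled order (with identities masked) keeps operators from remembering earlier readings and unconsciously repeating them.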


11) MSA for Automated and Vision Systems

Automation does not eliminate MSA.

You must evaluate:

  • repeatability of sensor output

  • sensitivity to lighting, orientation, contamination

  • software thresholds and algorithms

  • false accept / reject rates

Automated systems often fail in edge cases unless properly validated.


12) When MSA Must Be Redone

Redo or update MSA when:

  • gage is replaced or repaired

  • software or firmware changes

  • fixture or method changes

  • tolerance tightens

  • environment changes significantly

  • abnormal trends appear in SPC data

MSA is not “one and done.”


13) MSA in Audits and PPAP

Auditors and customers expect:

  • documented MSA for critical measurements

  • link between MSA, control plan, and PFMEA

  • evidence that poor systems were improved or replaced

Missing or weak MSA is a common reason for PPAP rejection.


14) Typical MSA Improvement Actions

  • improve fixturing and part location

  • switch from attribute to variable measurement

  • increase gage resolution

  • improve operator training and work instructions

  • automate measurement where justified

  • redesign the characteristic to be easier to measure

Sometimes the correct action is changing the design, not the gage.


15) Common Myths About MSA

  • “The gage is calibrated, so it’s fine” → false

  • “Automation doesn’t need MSA” → false

  • “10–30% is always OK” → false

  • “MSA is only for audits” → false

MSA is about decision risk, not paperwork.


16) Practical Checklist: Is Your Measurement System Acceptable?

  • Can it clearly distinguish good from bad parts?

  • Is variation small relative to tolerance?

  • Is it stable over time?

  • Are operators consistent?

  • Is it validated for its actual use?

If any answer is no, the system needs improvement.