Resources
SDOH screening instruments: a working comparison
The US ecosystem of SDOH screening tools is heterogeneous. Different instruments cover different domains, target different settings, and were validated on different populations. This is a side-by-side reference for the instruments most commonly encountered in emergency medicine, primary care, and community health settings, with notes on when each is the better operational fit.
For program leaders selecting an instrument, the right starting point is to write down what domains your program will respond to, then pick the shortest instrument that covers them. Domain coverage you cannot act on is documentation overhead with no clinical return.
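That selection rule can be sketched mechanically. The domain inventories and item counts below are simplified illustrations keyed to the instruments compared on this page, not an authoritative crosswalk of what each instrument actually covers:

```python
# Illustrative only: simplified domain sets and item counts. Real domain
# mappings are richer; this sketches the rule "shortest instrument that
# covers the domains you can respond to," nothing more.
INSTRUMENTS = {
    "HVS":         {"items": 2,  "domains": {"food"}},
    "AHC core 10": {"items": 10, "domains": {"housing", "food", "transportation",
                                             "utilities", "safety"}},
    "PRAPARE":     {"items": 21, "domains": {"housing", "food", "transportation",
                                             "utilities", "safety", "employment",
                                             "education", "social isolation"}},
    "AHC-HRSN":    {"items": 26, "domains": {"housing", "food", "transportation",
                                             "utilities", "safety", "employment",
                                             "education", "financial strain"}},
}

def shortest_covering(required):
    """Shortest instrument whose domains cover everything the program
    can respond to; None if no instrument covers the set."""
    candidates = [(spec["items"], name)
                  for name, spec in INSTRUMENTS.items()
                  if required <= spec["domains"]]
    return min(candidates)[1] if candidates else None

print(shortest_covering({"food"}))                    # HVS
print(shortest_covering({"food", "transportation"}))  # AHC core 10
```

The point of the sketch is the direction of the decision: the response-side domain list is the input, and the instrument is the output, never the reverse.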
AHC-HRSN
Accountable Health Communities Health-Related Social Needs
- Steward
- Centers for Medicare & Medicaid Services (CMS)
- Items
- 10 core items, 16 supplemental items (26 total)
- Domains covered
- Housing instability, food insecurity, transportation, utility help needs, interpersonal safety
- Setting
- Patient-administered or clinician-administered. Designed for use across clinical and community settings.
- Validation context
- Developed and tested as part of the AHC model. Used widely in CMS-aligned reporting.
- When to choose it
- The de facto US baseline for hospital and ED-based screening. Shorter than PRAPARE; favored when integration into a structured EHR module is the goal.
PRAPARE
Protocol for Responding to and Assessing Patients' Assets, Risks, and Experiences
- Steward
- National Association of Community Health Centers (NACHC)
- Items
- 21 items across 16 core measures plus 5 optional measures
- Domains covered
- Personal characteristics, family and home, money and resources, social and emotional health. Optional: incarceration, refugee status, safety, domestic violence.
- Setting
- Designed for federally qualified health centers and primary care; adopted in some EDs.
- Validation context
- Field-tested across hundreds of community health centers. Has standardized FHIR-based mappings.
- When to choose it
- Stronger domain coverage than AHC-HRSN, but longer and more burdensome to administer in a single ED encounter.
AHC core 10
Accountable Health Communities core 10
- Steward
- CMS (subset of AHC-HRSN)
- Items
- 10 items
- Domains covered
- Housing, food, transportation, utilities, safety
- Setting
- ED-friendly short form. Patient self-administered on a tablet or kiosk in under three minutes.
- Validation context
- The same psychometric base as AHC-HRSN, applied to the core domains only.
- When to choose it
- Most ED programs that want a baseline screen settle here for time reasons. Loses the supplemental domain coverage but is operationally tractable.
HVS
Hunger Vital Sign
- Steward
- Children's HealthWatch
- Items
- 2 items
- Domains covered
- Food insecurity (only)
- Setting
- Two-question screen, often deployed alongside broader instruments to catch food insecurity rapidly.
- Validation context
- Validated against the USDA 18-item Household Food Security Survey Module with high sensitivity and specificity in pediatric and adult populations.
- When to choose it
- Single-domain screen; not a substitute for a multi-domain tool but a useful add-on for high-throughput settings.
WE CARE
WE CARE
- Steward
- Boston Medical Center / academic groups
- Items
- Variable, typically 7 to 10 domain-specific items
- Domains covered
- Food, housing, childcare, education, employment, heat, other psychosocial
- Setting
- Pediatric primary care; some emergency department implementations.
- Validation context
- Multiple studies of feasibility and downstream referral effects in pediatric populations.
- When to choose it
- Strong evidence base in pediatrics. Less common in adult ED settings.
Reporting an instrument's performance in your population
Published validation data is necessary but not sufficient: every instrument behaves differently in a new population. A program that wants to know whether the screen it is running is performing as expected should report sensitivity, specificity, positive predictive value, and negative predictive value against a reference standard within its own population.
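All four operating characteristics fall directly out of a 2x2 table of screen result against the reference standard. A minimal sketch, with made-up counts for illustration:

```python
def operating_characteristics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table of
    screen result against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # screen-positive among truly positive
        "specificity": tn / (tn + fp),   # screen-negative among truly negative
        "ppv":         tp / (tp + fp),   # truly positive among screen-positive
        "npv":         tn / (tn + fn),   # truly negative among screen-negative
    }

# Made-up counts: 80 true positives, 40 false positives,
# 20 false negatives, 360 true negatives
print(operating_characteristics(80, 40, 20, 360))
```

Note that PPV and NPV move with the prevalence in your population even when sensitivity and specificity hold, which is exactly why the published numbers are not a substitute for local ones.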
For multi-instrument comparisons, percent-positive rates differ across instruments even on the same patient sample. Inter-instrument agreement should be reported with Passing-Bablok or Deming regression for continuous outputs, or with confusion matrices and Cohen's kappa for categorical outputs.
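For binary screen outputs, Cohen's kappa is only a few lines. A sketch with made-up paired results from two hypothetical instruments administered to the same eight patients:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two categorical ratings of the same subjects."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    categories = set(a) | set(b)
    # Chance agreement from each rater's marginal category frequencies
    p_expected = sum((a.count(c) / n) * (b.count(c) / n) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Made-up paired results (1 = screen positive) on the same eight patients
inst_a = [1, 1, 0, 0, 1, 0, 1, 0]
inst_b = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(inst_a, inst_b))  # 0.5
```

Kappa is the right headline number here because raw percent agreement is inflated whenever both instruments have similar positivity rates by chance alone.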
Subgroup analysis is the part most published implementation papers skip. Completion rates, positivity rates, and operating characteristics frequently differ across demographic subgroups in ways that change which instrument is right for which population. The sample-size question for these subgroup analyses is non-trivial: samples adequate for the overall estimate are routinely underpowered for the subgroups.
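One quick way to see the problem is the normal-approximation sample-size formula for estimating sensitivity to a given margin. The expected sensitivity, margin, and subgroup prevalence below are illustrative assumptions, not targets:

```python
import math

def cases_needed(expected_sens, margin, z=1.96):
    """Reference-standard-positive cases needed to estimate sensitivity
    to within +/- margin (normal-approximation confidence interval)."""
    return math.ceil(z ** 2 * expected_sens * (1 - expected_sens) / margin ** 2)

def screens_needed(expected_sens, margin, prevalence):
    """Total subgroup patients screened so that enough of them are
    reference-standard positive."""
    return math.ceil(cases_needed(expected_sens, margin) / prevalence)

# Illustrative: estimating a sensitivity of ~0.85 to +/- 0.05 needs 196
# reference-positive cases; at 25% prevalence in the subgroup that means
# 784 screened patients, in that subgroup alone.
print(cases_needed(0.85, 0.05))          # 196
print(screens_needed(0.85, 0.05, 0.25))  # 784
```

Because the requirement scales with the inverse of subgroup prevalence, a sample sized for the overall population can fall short by an order of magnitude for a small subgroup.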
Match the instrument to the response
The most common implementation failure mode is screening for needs the program cannot respond to. If your program has no transportation referral pathway, screening for transportation insecurity creates documentation burden and erodes trust without producing a connected service. The right discipline is to start with the response side, then choose the screening instrument whose domains match.
For programs partnering with a home-based care vendor, the response-side checklist is concrete: which Z-codes the vendor can act on, what the dispatch latency is, and how outcomes flow back to the ordering EHR. The closed-loop referral anatomy reference covers the architecture.
Further reading
SDOH Z-codes for home-care referrals
Mapping a positive screen to the structured ICD-10 documentation.
Anatomy of a closed-loop referral
USCDI, Gravity Project, eReferral standards, and where most loops break.
USCDI SDOH data classes
What certified health IT must support, structured by data class.
PRAPARE (NACHC)
Primary source for the instrument and its FHIR mappings.
Selecting an instrument for your program?
The conversation should start with what your program will respond to. We help with both sides.