The National Resident Matching Program (NRMP) recently released the 2025 Main Residency Match Results and Data Report, documenting the outcomes of the largest Match in its history. According to the report, 52,498 applicants registered for the Match, while 47,208 submitted rank order lists and were considered active applicants. Programs offered a record 43,237 certified residency positions, including 40,041 first-year (PGY-1) slots. Nearly all were filled through the Match or the Supplemental Offer and Acceptance Program.
On paper, the data appear comprehensive. They capture volume, growth, and outcomes across specialties and applicant types. What they do not capture is how applicants moved through the system before those outcomes were determined.
That distinction matters.
Outcomes Without Process
The NRMP’s annual reports are designed to describe results, not decision-making. They show who matched, where, and in what numbers. They do not document how programs screened applications, how interview invitations were distributed, or how subjective judgments were made.
These stages are the most consequential filters in the residency pipeline, yet they are absent from the official Match tables.
The omission is structural, not accidental. The Match algorithm operates only after interviews are completed and rank lists are submitted. Everything that happens before that point remains outside the scope of the published data.
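The scope of the algorithm can be made concrete. The NRMP's procedure is a variant of applicant-proposing deferred acceptance (the Roth-Peranson algorithm, which extends the textbook form with couples and other constraints); the sketch below is the simplified textbook version, with all names and data shapes invented for illustration. Its inputs are finished rank order lists and nothing else, which is precisely why everything upstream of those lists is invisible to it.

```python
# Simplified applicant-proposing deferred acceptance. This is the
# textbook Gale-Shapley form, NOT the full Roth-Peranson algorithm
# the NRMP runs; all identifiers here are hypothetical.

def deferred_acceptance(applicant_prefs, program_prefs, capacities):
    """applicant_prefs: {applicant: [programs in rank order]}
       program_prefs:   {program: [applicants in rank order]}
       capacities:      {program: number of positions}"""
    # Precompute each program's ranking of applicants for fast comparisons.
    rank = {p: {a: i for i, a in enumerate(prefs)}
            for p, prefs in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}  # pointer into each rank list
    tentative = {p: [] for p in program_prefs}     # positions tentatively held
    free = list(applicant_prefs)                   # applicants still proposing

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue  # rank list exhausted: applicant goes unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)  # program did not rank this applicant
            continue
        tentative[p].append(a)
        if len(tentative[p]) > capacities[p]:
            # Program keeps its highest-ranked holds; the lowest re-enters the pool.
            tentative[p].sort(key=lambda x: rank[p][x])
            free.append(tentative[p].pop())
    return tentative
```

Note that the function never sees how an applicant came to rank a program, or why a program ranked an applicant: screening and interviewing happen entirely outside it.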
What the Numbers Suggest Indirectly
Some insight into pre-Match dynamics can be inferred from other NRMP publications, particularly the Charting Outcomes in the Match reports. These reports combine Match outcomes with applicant characteristics such as USMLE Step scores, number of contiguous ranks, and specialty choice.
Across multiple editions, the data show substantial overlap in test score ranges between matched and unmatched applicants, including between International Medical Graduates and U.S. MD seniors. Applicants with competitive scores exist in every category, yet match probabilities differ consistently by applicant type.
The numbers alone do not explain why this happens. They do suggest, however, that academic metrics alone cannot account for the observed differences in outcomes.

Rank Lists as a Proxy for Opportunity
The NRMP also publishes data on the average length of rank order lists. Applicants who match tend to rank more programs than those who do not. This relationship is often cited as evidence that ranking more programs improves match probability.
What the data do not show is how applicants arrive at longer rank lists in the first place.
Rank lists depend on interviews. Interviews depend on screening decisions that are not reported publicly. As a result, rank list length reflects not only applicant strategy, but also access to interview opportunities granted earlier in the process.
In this sense, rank list data function as a proxy for opportunity rather than a pure measure of competitiveness.
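The dependency chain described above can be made explicit with a toy simulation. Every number here is invented purely for illustration: `invite_rate` stands in for the unobserved screening stage, and the point is only that rank-list length is bounded by interview access before applicant strategy enters at all.

```python
# Toy illustration with entirely synthetic numbers. Two applicants apply
# to the same number of programs, but screening (which the published data
# do not report) yields interview invitations at different rates. Since a
# rank list can only contain interviewed programs, the invitation rate,
# not strategy, caps rank-list length.

import random

def max_rank_list_length(applications, invite_rate, seed=0):
    rng = random.Random(seed)
    interviews = sum(rng.random() < invite_rate for _ in range(applications))
    return interviews  # an applicant can rank at most the programs interviewed

len_a = max_rank_list_length(applications=60, invite_rate=0.30)  # favorable screening
len_b = max_rank_list_length(applications=60, invite_rate=0.10)  # unfavorable screening
```

Under these assumptions the second applicant ends up with a shorter possible rank list before making any ranking decision, which is the sense in which rank-list length proxies opportunity rather than strategy.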
What Surveys Add and What They Miss
To address gaps in outcome reporting, the NRMP conducts Program Director and Applicant Surveys. These surveys ask respondents to identify factors they consider important when selecting interviewees and ranking applicants.
According to recent survey results, letters of recommendation, perceived commitment to specialty, USMLE scores, and the Medical Student Performance Evaluation are frequently cited as influential factors.
However, survey data have limitations. They are self-reported and retrospective. They capture perceptions rather than verified decision logic. They are also not linked to individual Match outcomes in the published tables.
As a result, surveys provide context, but they do not resolve the central opacity of interview selection.
The Interview Stage Remains Unmeasured
Peer-reviewed research in graduate medical education has repeatedly identified interview selection as a decisive stage in the Match process. Many of the variables involved at this stage, including institutional familiarity, faculty advocacy, and qualitative assessments of fit, are difficult to quantify.
These factors do not appear in Match outcome data.
Two applicants with similar scores and credentials may experience very different interview access, yet those differences are invisible in the results tables. The data record the outcome, not the path.
Risk Without a Metric
Residency programs operate under accreditation standards, board pass rate scrutiny, and institutional accountability. These pressures shape how programs assess risk when reviewing applications.
Risk, however, is not a variable reported by the NRMP. It is a judgment made internally by programs and applied inconsistently across institutions and specialties.
The absence of this information does not imply uniformity. It means the most influential judgments in the process remain unmeasured.
Why Interpretation Requires Caution
The expansion of the Match has created the impression that increased capacity naturally leads to greater access. In numerical terms, the system has grown. In distributional terms, patterns remain familiar.
Outcome data alone cannot explain why. Without visibility into interview selection, informal signaling, and program-level decision rules, interpretations of Match results risk overstating what the numbers can support.
The data are accurate but incomplete.
Conclusion
NRMP Match reports provide essential information about residency placement outcomes. They document scale, participation, and final allocations.
They do not capture how decisions are made before the algorithm runs.
Understanding what the data do not measure is necessary for reading Match results responsibly. Until the intermediate stages of selection are reported with similar rigor, the Match will continue to be understood primarily through its endpoints rather than its mechanics.
The numbers tell a story. They do not tell the whole one.