Diversity Recruiting: Strategies and Tools That Actually Work
7 diversity recruiting strategies backed by data from 1,200+ companies. 90% of employers using skills-based hiring report improved diversity - see what works.
14 min read
Steven Lu
Diversity recruiting is the practice of building hiring processes that attract, evaluate, and hire candidates from underrepresented groups by removing systemic barriers rather than relying on good intentions alone. The strategies that actually work are skills-based hiring, structured interviews, inclusive job descriptions, blind screening, proactive sourcing from underrepresented talent pools, AI-powered candidate matching, and funnel-stage diversity metrics. McKinsey's 2023 analysis of 1,265 companies across 23 countries found that organizations in the top quartile for gender diversity are 39% more likely to outperform peers financially. That's up from 15% when the firm first measured it in 2015.
But there's a disconnect between intention and execution. Most recruiting teams know diversity matters. Fewer know how to build it into every stage of the hiring funnel - from the words in a job post to the way interviews are scored. And the regulatory ground is shifting fast. Federal affirmative action requirements for contractors were revoked in January 2025, while cities like New York now mandate annual bias audits on AI hiring tools.
This guide breaks down each strategy with real data, explains what's changed legally, and shows how to implement these tactics without a six-figure consulting budget.
TL;DR: Diversity recruiting works when it's built into process, not bolted on as a program. Skills-based hiring improves diversity for 90% of employers who adopt it (TestGorilla, 2024). Combine it with structured interviews, inclusive job descriptions, and AI sourcing tools that strip protected characteristics from candidate evaluation.
| Strategy | What It Fixes | Key Evidence | Difficulty |
|---|---|---|---|
| Skills-based hiring | Credential bias in screening | 90% report improved diversity (TestGorilla, 2024) | Medium |
| Blind screening | Name and demographic bias in resume review | Increases minority interview rates in most settings (HBR, 2023) | Low |
| Structured interviews | Affinity bias and gut-feel evaluation | 2x more effective at predicting performance (SHRM) | Medium |
| Inclusive job descriptions | Gendered language deterring applicants | Fills roles 3 weeks faster (Textio) | Low |
| Proactive diverse sourcing | Narrow talent pipelines | 850M+ profiles via AI sourcing (Pin) | Medium |
| AI-powered matching | Human bias at scale | Removes protected characteristics from evaluation | Low |
| Funnel-stage metrics | Invisible pipeline drop-offs | Identifies exact stage where bias occurs | Medium |
Companies in the top quartile for ethnic diversity outperform bottom-quartile peers by 27% financially (McKinsey, 2023). The business case has moved past debate into measurable returns across revenue, innovation, and talent retention.
BCG's landmark 2018 study of 1,700+ companies across eight countries established the baseline: organizations with above-average management diversity generated 19% higher innovation revenue - meaning a larger share of total revenue came from products and services launched in the prior three years. Subsequent research continues to confirm the pattern. Diverse teams don't just perform better on existing work. They create more new things.
The talent side is equally clear. According to an Eagle Hill Consulting survey conducted by Ipsos in 2023, 53% of U.S. workers say a company's DEI efforts are a key factor when deciding where to work. Among Gen Z workers, that number hits 77%.
Meanwhile, the problem isn't getting smaller. The EEOC logged 88,531 new discrimination charges in FY 2024 - a 9% increase over FY 2023. The agency secured $700 million for over 21,000 victims of employment discrimination, the highest monetary recovery in its recent history.
The question isn't whether diversity recruiting delivers ROI. It's whether your process is designed to capture it.
Executive Order 14173, signed January 21, 2025, revoked Executive Order 11246 - eliminating affirmative action requirements for federal contractors that had been in place since 1965 (Morgan Lewis, 2025). The OFCCP ceased all investigations under the old order within three days. If your company held federal contracts, your compliance obligations just changed dramatically.
Here's what didn't change: Title VII of the Civil Rights Act remains fully in effect. Discrimination in hiring based on race, color, religion, sex, or national origin is still illegal. Section 503 (disability) and VEVRAA (veterans) affirmative action requirements also remain intact because they're statutory, not executive-order-based.
New risks emerged, too. Federal contractors must now certify they don't operate programs that violate federal anti-discrimination laws. False Claims Act liability is attached to those certifications. That means getting diversity recruiting wrong - in either direction - carries real legal exposure.
While federal requirements contracted, state and local governments are expanding oversight. MultiState tracks 78 bills in 23 states (as of March 2025) restricting DEI in public institutions. Most don't yet reach private employers directly, but North Carolina's HB 171 extends restrictions to non-state entities receiving public funds.
On the AI compliance side, NYC's Local Law 144 requires annual independent bias audits of any automated employment decision tool (AEDT) used to evaluate candidates who live in the city. Penalties run $500 to $1,500 per day per violation. Illinois and Maryland have similar legislation in various stages. If you use AI in hiring, bias audits aren't optional - they're either required now or will be soon. For a deeper look at how AI can reduce bias when implemented properly, see our guide on reducing hiring bias with AI.
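For context on what those audits measure: the published results center on impact ratios - each group's selection rate divided by the highest group's selection rate. Here's a minimal sketch of that calculation, using hypothetical counts; the 0.8 cutoff shown is the familiar EEOC four-fifths rule of thumb, not a pass/fail threshold defined by Local Law 144.

```python
# Illustrative impact-ratio calculation of the kind NYC Local Law 144 audits
# report for automated employment decision tools (AEDTs).
# All counts below are hypothetical, not real audit data.

selected = {"group_a": 120, "group_b": 45, "group_c": 30}   # candidates the tool advanced
applied  = {"group_a": 400, "group_b": 200, "group_c": 150}  # candidates the tool evaluated

selection_rates = {g: selected[g] / applied[g] for g in applied}
top_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / top_rate
    # 0.8 is the EEOC four-fifths rule of thumb, used here only as a screening flag
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```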
The practical takeaway? Diversity recruiting isn't going away because of regulatory shifts. The approach just needs to focus on what's always been legal and effective: removing barriers, expanding talent pools, and evaluating candidates on merit.
Ninety percent of employers using skills-based hiring report improved diversity, up from 85% the prior year (TestGorilla, 2024). The same survey of 1,019 employers and 1,100 employees across eight countries found that skills-based hiring adoption hit 81%, up from 73% in 2023. It's the single most effective structural change a recruiting team can make for diversity outcomes.
Why does it work? Traditional hiring filters - four-year degree requirements, specific company pedigrees, industry tenure minimums - correlate with socioeconomic background more than they correlate with job performance. When you drop a bachelor's degree requirement and replace it with a skills assessment, you're not lowering the bar. You're measuring something that actually predicts success.
A 2024 study from UC Berkeley and the University of Chicago sent 83,000 fake applications to 100+ Fortune 500 companies. White-sounding names received callbacks roughly 9% more often than Black-sounding names across all employers. At the worst firms, the gap hit 24%. Skills-based screening eliminates the stage where name-based bias has the most impact: the resume review.
For a complete guide to this approach, see our article on skills-based hiring for recruiters.
Start by auditing every open role for degree requirements that aren't actually necessary. Pennsylvania, Maryland, Utah, and Colorado have already dropped degree requirements for most state government jobs. Major employers like Google, Apple, and IBM did the same years ago.
Next, define the actual competencies each role requires. Write job posts around those competencies rather than credentials. Use pre-employment assessments - coding challenges for engineers, writing samples for content roles, case studies for analysts - to evaluate what candidates can do rather than where they went to school.
This pairs naturally with AI candidate matching. Tools that search based on skills and experience rather than job titles and alma maters surface candidates who traditional filters would miss. Pin's AI, for example, scans 850M+ candidate profiles and never feeds names, gender, or protected characteristics into its matching algorithms - which means the results are based on what a candidate can do, not who they are.
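To make the idea concrete, here's a minimal sketch of skills-overlap scoring under that constraint - hypothetical data and a deliberately simple scoring function, not Pin's actual algorithm:

```python
# Illustrative skills-based matching: candidates are ranked on how many of the
# role's required competencies they demonstrate. Names, schools, and demographic
# fields are never passed to the scoring step. Hypothetical data only.

required_skills = {"python", "sql", "data modeling", "stakeholder communication"}

candidates = [
    {"id": "c-101", "skills": {"python", "sql", "airflow", "data modeling"}},
    {"id": "c-102", "skills": {"excel", "sql", "stakeholder communication"}},
]

def match_score(candidate_skills: set[str]) -> float:
    """Share of required skills the candidate demonstrates."""
    return len(required_skills & candidate_skills) / len(required_skills)

ranked = sorted(candidates, key=lambda c: match_score(c["skills"]), reverse=True)
for c in ranked:
    print(c["id"], round(match_score(c["skills"]), 2))
```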
Blind screening works - but only in specific conditions. A Harvard Business Review analysis of two decades of academic studies across Europe, Canada, and the U.S. found that anonymizing applications increased interview selection rates for women and ethnic minorities in most settings. But it backfired in contexts where employers were already actively trying to increase diversity.
About 20% of organizations currently use blind hiring, while roughly 60% of HR practitioners are familiar with it (HBR, 2023). The gap between awareness and adoption suggests many teams are uncertain about implementation.
Whether blind screening helps depends on context. It delivers the biggest gains where name and demographic bias in resume review would otherwise go unchecked, and it can backfire where teams are already actively working to increase representation, because anonymization removes the very information those teams were using to widen the pipeline.
The practical move: use blind screening for initial resume review, then pair it with structured interviews and skills assessments for later stages. Strip names, photos, school names, and graduation years from applications before they reach a hiring manager. But don't treat it as a standalone fix.
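A minimal sketch of that redaction step, assuming applications arrive as structured records with hypothetical field names; adapt the field list to whatever your ATS exports:

```python
# Blind-screening redaction sketch: fields that enable name, school, or age
# inference are dropped before an application reaches a reviewer.

REDACTED_FIELDS = {"name", "photo_url", "email", "school", "graduation_year"}

def redact(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {k: v for k, v in application.items() if k not in REDACTED_FIELDS}

application = {
    "name": "Jordan Smith",
    "email": "jordan@example.com",
    "school": "State University",
    "graduation_year": 2014,
    "skills": ["python", "sql"],
    "work_history": ["Analyst, 4 yrs", "Senior Analyst, 2 yrs"],
}

print(redact(application))  # reviewer sees skills and work history only
```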
Structured interviews are twice as effective at predicting job performance as unstructured ones (SHRM). When every candidate answers the same questions, scored against the same rubric by trained interviewers, the process filters for competence instead of chemistry. That distinction matters because "culture fit" - the most common unstructured criterion - consistently favors candidates who share the interviewer's background.
The mechanics are straightforward. Design 5-8 behavioral or situational questions tied directly to the role's required competencies. Create a scoring rubric with 3-5 performance levels and concrete behavioral anchors for each level. Train every interviewer on the rubric before they conduct a single conversation. Score independently before any debrief discussion.
What makes this hard isn't the design - it's the discipline. Interviewers naturally want to go off-script, ask follow-ups based on gut feeling, or weigh "how the conversation felt" alongside structured scores. That drift reintroduces exactly the bias you're trying to eliminate. The fix is accountability: review scoring patterns across interviewers regularly and flag anyone whose scores consistently diverge from the panel average.
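That review can be as simple as comparing each interviewer's average score with the panel average. A minimal sketch, with hypothetical scores and an illustrative 0.75-point threshold:

```python
# Calibration check described above: flag interviewers whose average scores
# diverge consistently from the panel average. Scores and threshold are hypothetical.

from statistics import mean

panel_scores = {  # interviewer -> rubric scores given across recent structured interviews
    "interviewer_a": [3, 4, 3, 4, 3],
    "interviewer_b": [2, 2, 1, 2, 2],
    "interviewer_c": [4, 3, 4, 3, 4],
}

panel_average = mean(score for scores in panel_scores.values() for score in scores)

for interviewer, scores in panel_scores.items():
    drift = mean(scores) - panel_average
    if abs(drift) > 0.75:
        print(f"{interviewer}: avg {mean(scores):.2f} vs panel {panel_average:.2f} - review calibration")
```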
If you're hiring at volume, AI tools for high-volume hiring help enforce structure by standardizing the logistics so interviewers can focus entirely on evaluation.
Job posts written with gender-neutral language fill positions three weeks faster than those with masculine-coded wording, according to Textio's analysis of postings written on its platform. Language shapes who applies. Words like "aggressive," "dominant," and "competitive" consistently deter women from applying, even when they're fully qualified for the role.
This isn't about watering down job requirements. It's about describing those requirements accurately. "Builds consensus across teams" communicates the same skill as "dominates cross-functional alignment" - but the first version attracts a broader, more diverse applicant pool.
Run every posting through a bias scanner before it goes live. Tools like Textio and Ongig flag gendered language, unnecessary jargon, and exclusionary requirements automatically. If you don't have access to a dedicated tool, do a manual pass for masculine-coded words, jargon, and requirements that aren't genuinely needed - or automate the basics with a short script like the sketch below.
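Here's what that basic automated check might look like - the word list is a small illustrative subset of terms commonly cited in research on gender-coded job language, not Textio's or Ongig's actual lexicon:

```python
# Minimal gendered-language check for job posts. A real tool uses far larger,
# validated word lists and context-aware scoring; this only flags exact matches.

import re

MASCULINE_CODED = {"aggressive", "dominant", "competitive", "ninja", "rockstar", "fearless"}

def flag_masculine_coded(posting: str) -> list[str]:
    words = re.findall(r"[a-z]+", posting.lower())
    return sorted(set(words) & MASCULINE_CODED)

posting = "We want an aggressive, competitive self-starter who builds consensus across teams."
print(flag_masculine_coded(posting))  # ['aggressive', 'competitive']
```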
The biggest mistake in diversity sourcing is relying solely on diversity-specific job boards. Most diverse talent is passive - they're employed, not actively browsing Diversity.com or Jopwell. You need to go where they already are, using tools that can surface candidates based on skills and experience without filtering by demographic proxies.
Pin's AI sourcing searches 850M+ profiles and evaluates candidates purely on qualifications. No names, gender, age, or other protected characteristics are fed to the AI at any point. This isn't a "diversity mode" bolted onto a standard search - it's how the system works by default. The result: recruiters see candidates who match the role's actual requirements, drawn from a talent pool that covers 100% of North America and Europe.
As John Compton, Fractional Head of Talent at Agile Search, put it: "I am impressed by Pin's effectiveness in sourcing candidates for challenging positions, outperforming LinkedIn, especially for niche roles."
Layer additional sourcing channels on top of AI-powered search to maximize reach into underrepresented talent pools.
The key insight: diversity sourcing isn't about finding "diverse candidates." It's about removing the barriers that keep qualified people from entering your pipeline in the first place. For more on sourcing methodology, see our guide to AI candidate sourcing.
You can't improve what you don't measure. Track representation at every stage of the hiring funnel: sourced, screened, interviewed, offered, hired. Most companies lose diverse candidates disproportionately between the on-site interview and the offer stage - meaning the problem isn't sourcing. It's evaluation.
McKinsey and LeanIn.org's 2024 Women in the Workplace report reveals the persistence of the "broken rung." For every 100 men promoted to manager, only 81 women receive that same first promotion. For Black women, the number drops to 54 - a regression to 2020 levels (McKinsey/LeanIn, 2024). Women hold 29% of C-suite positions overall, up from 17% in 2015, but women of color hold just 7%.
Vanity metrics - total diversity percentages across the company - hide where the pipeline breaks. Instead, track representation and stage-to-stage pass-through rates for each group at every point in the funnel: sourced, screened, interviewed, offered, hired.
Run these reports monthly. Share them with hiring managers. When a specific stage shows a drop-off for underrepresented candidates, that's where you intervene - not with training, but with process changes.
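Here's a minimal sketch of that stage-to-stage pass-through calculation, using made-up counts; in practice, pull the numbers from your ATS:

```python
# Funnel-stage pass-through tracking: for each group, compute the share of
# candidates who advance from one stage to the next, so drop-off points are visible.
# Counts are hypothetical.

STAGES = ["sourced", "screened", "interviewed", "offered", "hired"]

funnel = {
    "group_a": {"sourced": 300, "screened": 150, "interviewed": 60, "offered": 20, "hired": 15},
    "group_b": {"sourced": 200, "screened": 90, "interviewed": 30, "offered": 5, "hired": 4},
}

for group, counts in funnel.items():
    rates = []
    for prev_stage, next_stage in zip(STAGES, STAGES[1:]):
        rate = counts[next_stage] / counts[prev_stage]
        rates.append(f"{prev_stage}->{next_stage}: {rate:.0%}")
    print(group, " | ".join(rates))
```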
AI-powered recruiting tools can reduce bias at scale - but only when they're designed to do so. A 2024 study from the University of Washington and the Brookings Institution found that large language models used for resume screening favored white-associated names 85% of the time and never once favored Black male names over white male names. The bias isn't new. It's inherited from the training data.
That's why the design of AI hiring tools matters as much as whether you use them at all. The key distinction is what information the AI receives. Tools that feed candidate names, photos, educational institutions, or location data into their matching algorithms will replicate existing patterns of discrimination - faster and at larger scale than human reviewers.
Pin takes a different approach. Its AI evaluates candidates against role requirements without ever seeing names, gender, age, or other protected characteristics. The system has checkpoints at every step, with regular team reviews of AI outputs and third-party fairness audits. It's also SOC 2 Type 2 certified, meaning data handling meets independently verified security and privacy standards.
Pin's multi-channel outreach hits a 48% response rate across email, LinkedIn, and SMS - try Pin's automated outreach free.
For teams evaluating AI sourcing tools, the core questions are whether candidate names, photos, and other protected characteristics ever reach the matching model, whether the vendor runs independent bias audits on its outputs, and whether data handling is certified against a standard like SOC 2.
For a broader look at how AI works across the full hiring lifecycle, start with our guide to AI recruiting.
Theory is useful. Execution is what changes outcomes. In practice, the strategies above sequence into a single workflow: audit job posts for gendered language and unnecessary degree requirements, source on skills with AI search, screen blind at the resume stage, run structured interviews scored against a rubric, and review funnel-stage metrics monthly to catch drop-offs.
Start building diverse candidate pipelines with Pin's AI sourcing - free to try
The single most effective strategy is skills-based hiring. TestGorilla's 2024 survey of 1,019 employers found 90% report improved diversity after adopting it. It replaces credential-based filters that correlate with socioeconomic background rather than with job performance.
Yes, diversity recruiting is still legal. EO 14173 revoked affirmative action requirements for federal contractors, but Title VII still prohibits employment discrimination. Skills-based hiring, structured interviews, and inclusive job descriptions remain fully legal because they focus on removing barriers and evaluating merit.
AI reduces bias when it evaluates candidates on skills without processing names, photos, or gender. Pin's AI scans 850M+ profiles this way. Poorly designed AI can amplify bias - a 2024 Brookings study found LLMs favored white-associated names 85% of the time.
Track representation at every funnel stage: sourced, screened, interviewed, offered, hired. Most organizations lose diverse candidates between interview and offer. McKinsey's 2024 data shows only 81 women are promoted per 100 men at the manager level.
If you hire in New York City, bias audits are already mandatory: NYC Local Law 144 requires annual independent bias audits for automated hiring tools, with penalties of $500 to $1,500 per day. Illinois and Maryland are advancing similar laws.