GDPR and Recruiting: 7 Rules Every Hiring Team Must Know
GDPR has generated EUR 6.8B in fines since 2018, with 193 in the employment sector. Learn the 7 GDPR rules for recruiting teams - including AI-specific obligations.
14 min read
Jenn Vu
Updated At: Feb 24, 2026
The General Data Protection Regulation (GDPR) requires recruiting teams to have a lawful basis for processing every piece of candidate data they collect, store, or share - with fines reaching EUR 20 million or 4% of global annual turnover for violations. If your team sources candidates in the EU, stores resumes from EU applicants, or uses AI tools that process EU candidate profiles, these rules apply regardless of where your company is headquartered.
The enforcement numbers make the risk concrete. EU regulators have issued 2,781 GDPR fines totaling EUR 6.8 billion since May 2018, with 193 of those fines - worth EUR 360.9 million - targeting the employment sector specifically (GDPR Enforcement Tracker, February 2026). Data breach notifications across Europe hit 443 per day in 2025, a 22% jump from the prior year (DLA Piper, January 2026). Recruiting teams handle some of the most sensitive personal data in any organization, and regulators are paying attention.
TL;DR: GDPR requires a lawful basis for every candidate data touchpoint, with EUR 6.8B in cumulative fines since 2018 (GDPR Enforcement Tracker, 2026). Recruiting teams must manage consent, honor data requests within 30 days, limit retention to 6-12 months, and run DPIAs for AI tools. Covers 7 rules, enforcement actions, and a DPIA checklist.
GDPR applies to any organization that processes personal data of EU residents - and recruitment involves processing enormous volumes of it (GDPR Article 3). Names, email addresses, phone numbers, work history, education, salary expectations, interview notes, assessment scores: all of this qualifies as personal data under the regulation. Some of it - like health information, ethnic origin, or disability status that candidates may disclose - qualifies as special category data with even stricter protections.
For recruiting teams, GDPR compliance comes down to seven core obligations:
1. Establish a lawful basis for every type of candidate data you process
2. Collect only the data you need for the hiring decision (data minimization)
3. Tell candidates what you collect, why, and for how long (transparency)
4. Honor data subject requests - access, erasure, rectification - within 30 days
5. Limit retention of unsuccessful candidate data to 6-12 months
6. Use a legal transfer mechanism for EU-to-US data flows
7. Keep a meaningful human in the loop for automated screening decisions (Article 22)
The regulation doesn't distinguish between a 10-person startup and a Fortune 500 company. If you process EU candidate data, you're in scope. Here's what each rule means in practice.
Recruiters must establish a lawful basis for each type of candidate data, collect only what's necessary for the hiring decision, disclose their data processing clearly, and honor deletion or access requests within 30 days. These four rules govern every step of candidate data collection and processing.
Every piece of candidate data you process needs one of six lawful bases under GDPR Article 6. For recruiting, two matter most: legitimate interest and consent.
Legitimate interest is the most common basis for active recruitment. When a recruiter reaches out to a candidate about a specific open role, the company has a legitimate interest in processing that person's professional data. But the Information Commissioner's Office (ICO) notes that legitimate interest requires a three-part balancing test: the interest must be real and present, the processing must be necessary, and the candidate's rights must not override your interest.
Consent is required when you want to keep candidate data beyond a specific recruitment process - for example, adding someone to a talent pool for future roles. Consent must be freely given, specific, informed, and unambiguous. A pre-ticked checkbox on your careers page doesn't qualify. Neither does burying consent language in a privacy policy that nobody reads.
Getting this wrong is expensive. The Irish Data Protection Commission fined LinkedIn EUR 310 million in October 2024 for relying on invalid consent and unlawful legitimate interest claims for behavioral profiling of its members (Irish DPC, 2024). That fine didn't involve recruiting directly, but the principle applies whenever you profile candidates without a proper lawful basis.
Collect only the data you actually need for the recruitment decision. GDPR Article 5(1)(c) requires that personal data be "adequate, relevant and limited to what is necessary." In practice, this means you shouldn't ask candidates for their date of birth, marital status, nationality, or social security number at the application stage. You don't need it to evaluate whether they can do the job.
The same principle applies to sourcing. If your AI sourcing tool pulls in data points that aren't relevant to the role - social media activity, personal photos, family information - that's likely excessive processing.
Candidates must know what data you're collecting, why you need it, how long you'll keep it, and who you'll share it with. This information should appear in a clear privacy notice - not hidden in legal boilerplate. Under Articles 13 and 14, you must provide this notice at the point of data collection (for data collected directly from candidates) or within a reasonable period (for data sourced from third parties like LinkedIn profiles or candidate databases).
If you're using AI tools to screen or rank candidates, you need to disclose that too. More on this in the AI section below.
GDPR gives candidates a set of enforceable rights over their personal data. Recruiting teams must respond to these requests within 30 days (ICO, recruitment guidance).
| Right | What It Means for Recruiters | Deadline |
|---|---|---|
| Subject Access Request / Right of Access (SAR) | Candidate can request all data you hold about them | 30 days |
| Right to Erasure | Candidate can ask you to delete their data | 30 days |
| Right to Rectification | Candidate can correct inaccurate data | 30 days |
| Right to Data Portability | Candidate can request data in a machine-readable format | 30 days |
| Right to Object | Candidate can object to processing based on legitimate interest | Must cease unless compelling grounds exist |
Here's the practical challenge: if a candidate sends a Subject Access Request, can your team actually find all the places their data lives? Their resume in your ATS, notes in your CRM, messages in your team inbox, scoring data in your AI screening tool, records in your interview scheduling platform. Many recruiting teams discover their data is scattered across a dozen systems with no single view.
Build a data map before a request comes in, not after. Identify every system where candidate data is stored, who has access, and how to export or delete records from each one. The 30-day clock starts when you receive the request - not when you figure out where the data sits.
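A data map can be as simple as a dictionary of systems with an owner and an export path for each. The sketch below is a minimal illustration - the system names and exporter callables are hypothetical placeholders, not real integrations - but it captures the two things that matter: the deadline is computed from receipt, and a SAR is fulfilled by walking every mapped system, so a missing system surfaces as an error instead of a silent omission.

```python
from datetime import date, timedelta

# Hypothetical inventory of every system holding candidate data.
# The names and owners are placeholders for your real stack.
DATA_MAP = {
    "ats": {"owner": "recruiting-ops"},
    "crm": {"owner": "sales-ops"},
    "ai_screener": {"owner": "talent-eng"},
    "scheduler": {"owner": "recruiting-ops"},
}

def sar_deadline(received: date) -> date:
    """The 30-day clock starts at receipt, not at data discovery."""
    return received + timedelta(days=30)

def fulfil_sar(candidate_id: str, exporters: dict) -> dict:
    """Gather the candidate's records from every mapped system.

    `exporters` maps system name -> callable returning that system's
    records; a KeyError here means the data map has an unexportable system.
    """
    return {system: exporters[system](candidate_id) for system in DATA_MAP}

print(sar_deadline(date(2026, 3, 1)))  # 2026-03-31
```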
GDPR doesn't name an exact retention period for candidate data, but guidance from the European Data Protection Supervisor and the ICO is clear: unsuccessful candidate data should be retained no longer than 6 to 12 months unless you have explicit consent to keep it longer.
In November 2024, the ICO audited AI recruitment tool providers and found that some were retaining candidate data indefinitely to build large databases - without candidates' knowledge (ICO, November 2024). That's a direct GDPR violation. Your retention policy should specify exactly how long you keep data for each purpose: active applications (duration of hiring process plus a reasonable buffer), talent pools (with consent, typically 12-24 months with renewal), and interview records (6 months maximum for unsuccessful candidates).
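A retention policy like the one above only works if something enforces it. Here is a minimal sketch, assuming the retention windows from the guidance in this section; the purpose names and durations are illustrative and should mirror whatever your own policy document specifies.

```python
from datetime import date, timedelta

# Illustrative retention windows per purpose, following the
# 6-12 month guidance for unsuccessful candidates.
RETENTION = {
    "unsuccessful_application": timedelta(days=180),   # 6 months
    "talent_pool_with_consent": timedelta(days=365),   # renewable with fresh consent
    "interview_records": timedelta(days=180),
}

def purge_due(purpose: str, collected_on: date, today: date) -> bool:
    """True once a record has outlived its purpose's retention window."""
    return today > collected_on + RETENTION[purpose]

print(purge_due("unsuccessful_application", date(2025, 6, 1), date(2026, 2, 1)))  # True
```

Running a check like this on a schedule - and logging what was purged - is also evidence for regulators that retention limits exist in practice, not just on paper.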
If you're a US company sourcing EU candidates - or if your recruiting tools store data on US servers - you need a legal transfer mechanism. This is where many hiring teams unknowingly fall out of compliance. Uber was fined EUR 290 million by the Dutch Data Protection Authority (DPA) in July 2024 for transferring European drivers' personal data to US servers without adequate safeguards for over two years (GDPR Enforcement Tracker).
The current options for EU-to-US data transfers:
- EU-US Data Privacy Framework (DPF): US companies self-certify with the Department of Commerce; upheld by the European General Court in September 2025 but still facing a CJEU challenge
- Standard Contractual Clauses (SCCs): European Commission-approved contract terms signed between the data exporter and importer
- Binding Corporate Rules (BCRs): internal policies approved by a supervisory authority, mainly for transfers within a corporate group
Which mechanism should you use? If you're a US company, start with DPF self-certification - it's the fastest path. But don't rely on it alone. The pending CJEU challenge means the DPF could be invalidated (as happened twice before with Safe Harbor and Privacy Shield). Put SCCs in place as a backup so your data flows aren't disrupted if the legal landscape shifts again.
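The "DPF now, SCCs as backup" logic can be encoded directly into vendor due diligence. This sketch is an assumption-laden illustration - the vendor-record keys (`dpf_certified`, `sccs_signed`) are hypothetical fields, not a real API - but it shows the decision order recommended above: prefer both mechanisms, flag DPF-only vendors as at risk, and block transfers with no basis at all.

```python
def transfer_basis(vendor: dict) -> str:
    """Pick a lawful EU-to-US transfer mechanism for a vendor record.

    Keys are illustrative vendor-due-diligence fields, not a real API.
    """
    if vendor.get("dpf_certified") and vendor.get("sccs_signed"):
        return "dpf_with_scc_fallback"   # DPF today; SCCs survive an invalidation
    if vendor.get("sccs_signed"):
        return "sccs"
    if vendor.get("dpf_certified"):
        return "dpf_only_at_risk"        # exposed to the pending CJEU challenge
    return "transfer_blocked"            # no lawful basis: do not send EU data

print(transfer_basis({"dpf_certified": True, "sccs_signed": True}))  # dpf_with_scc_fallback
```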
For a detailed walkthrough on building compliant cross-border hiring processes, see our guide on how to hire EU talent as a US company.
GDPR Article 22 restricts decisions based solely on automated processing that produce legal or similarly significant effects. In recruiting, this means: if your AI tool automatically rejects candidates without any human review, that likely violates Article 22.
The restriction applies to AI resume screeners that auto-reject below a threshold score, chatbot-based screening that eliminates candidates based on keyword answers, and ranking algorithms that determine which candidates a recruiter never sees. The key word is "solely" - if a human recruiter reviews and makes the final decision, Article 22 is less likely to apply. But the human review must be meaningful, not a rubber stamp.
Candidates also have the right to request human intervention, express their point of view, and contest the decision. Your AI recruiting workflow needs a clear process for handling these requests.
EU regulators have issued 193 GDPR fines in the employment sector, totaling EUR 360.9 million, with individual penalties reaching EUR 310 million for a single company (GDPR Enforcement Tracker, February 2026). Three recent enforcement actions are directly relevant to how recruiting teams collect, store, and transfer candidate data.
LinkedIn - EUR 310 million (October 2024). The Irish Data Protection Commission found that LinkedIn processed member data for behavioral advertising using invalid consent mechanisms and unlawful reliance on legitimate interest. For recruiters, this is a warning about how candidate profiling data is collected and used. If your sourcing tool builds behavioral profiles of candidates without proper legal basis, you're in the same territory.
Uber - EUR 290 million (July 2024). The Dutch DPA found that Uber transferred European drivers' personal data to US servers for over two years without adequate safeguards after the Privacy Shield framework was invalidated. Any recruiting team using US-based tools to store EU candidate data without DPF certification or SCCs faces the same transfer risk.
Clearview AI - EUR 30.5 million (September 2024). The Dutch DPA fined Clearview AI for scraping facial images from the internet without consent to build a facial recognition database. This directly parallels how some recruiting tools scrape candidate profiles from public sources without a lawful basis - a practice the ICO flagged in its November 2024 audit of AI recruitment tools.
The pattern across all three: regulators are cracking down on data processing that happens without proper legal basis, without adequate transparency, or without appropriate transfer mechanisms. These aren't abstract risks. They're the exact activities recruiting teams perform daily.
What makes these fines especially notable: they target both the companies that collect data (LinkedIn, Uber) and the companies that build tools with that data (Clearview AI). As a recruiting team, you're potentially liable for your own data practices and your choice of vendors. Due diligence on the tools in your stack isn't optional - it's a compliance obligation.
The ICO audited AI recruitment tool providers in November 2024 and issued nearly 300 recommendations - all of which were accepted or partially accepted (ICO, November 2024). The findings revealed serious gaps in how AI tools handle candidate data.
Among the most alarming findings: some AI recruitment tools allowed recruiters to filter candidates by protected characteristics including gender, ethnicity, and age inferred from names and other data. Others retained candidate data indefinitely to build large databases without candidate knowledge. These practices violate both GDPR's data minimization and purpose limitation principles.
GDPR creates three specific obligations for teams using AI in recruiting:
1. Transparency (Articles 13-14): tell candidates that AI processes their data and explain the logic involved in meaningful terms
2. Automated decision-making restrictions (Article 22): no solely automated rejections without meaningful human review
3. Data Protection Impact Assessments (Article 35): run a DPIA before deploying AI screening, matching, or ranking tools
The European Data Protection Board (EDPB)'s Opinion 28/2024 (December 2024) added another layer: if an AI model was trained on unlawfully processed personal data, the model's deployment may itself be unlawful - even if the current processing appears compliant. For recruiting teams, this means asking your AI vendors not just how they process data today, but how they trained their models in the first place.
These GDPR obligations also overlap significantly with the EU AI Act, which classifies recruiting AI as high-risk under Annex III and adds its own compliance requirements enforceable by August 2, 2026. The Act also bans emotion recognition AI in the workplace entirely - if any of your interview tools analyze facial expressions or voice tone, that's already illegal. Teams that address both regulations together will avoid duplicating compliance work.
One thing worth noting: GDPR regulates what data you collect and how you process it. The EU AI Act regulates the decisions your tools make with that data. Both apply simultaneously. A recruiting AI tool that processes data lawfully under GDPR can still violate the EU AI Act if it lacks documentation, human oversight, or bias monitoring. Think of them as parallel requirements, not alternatives.
A Data Protection Impact Assessment (DPIA) is a structured process for identifying and minimizing data protection risks before they cause harm. GDPR Article 35 requires one for any processing likely to result in high risk to individuals - and AI-powered candidate screening, matching, and ranking almost always meets that threshold.
Most compliance guides skip the practical steps. Here's a seven-step checklist, following the structure of the ICO's DPIA template, for running a DPIA on your recruiting AI:
1. Identify the need: confirm the tool screens, ranks, or profiles candidates at scale - if so, a DPIA is required
2. Describe the processing: what data the tool ingests, where it comes from, how long it's kept, and who sees the output
3. Consult stakeholders: your DPO, the vendor, and where practical the candidates affected
4. Assess necessity and proportionality: could a less intrusive process achieve the same hiring outcome?
5. Identify and assess risks: discriminatory outputs, excessive data collection, indefinite retention, unlawfully sourced training data
6. Plan mitigations: human review gates, bias testing, retention limits, contractual commitments from vendors
7. Sign off and record the outcome: document the decision, assign an owner, and set a review date
The DPIA should be a living document. When the EU AI Act's high-risk obligations take effect in August 2026, your DPIA can serve as a foundation for the Act's required risk management system. For more on how data security certifications fit into this picture, see our guide on SOC 2 compliance for recruiting software.
GDPR-compliant recruiting software must include consent management, automated data retention, SAR handling, bias-free AI architecture, and a full audit trail. The ICO's November 2024 audit found that most AI recruitment tools failed on at least two of these five areas - particularly around data retention and protected characteristic filtering.
Pin is SOC 2 Type 2 certified with encryption at rest and in transit, and its public trust center documents compliance certifications transparently. That security foundation supports - though does not by itself satisfy - the data protection GDPR demands from the tools handling your candidate data.
Source candidates with GDPR-compliant AI - try Pin free
Evaluating your full recruiting tech stack against these criteria? Our guide to the best AI recruiting tools in 2026 includes compliance as a key evaluation factor.
**Does GDPR apply to US companies recruiting EU candidates?** Yes. GDPR has extraterritorial scope - any company that processes personal data of EU residents must comply, regardless of headquarters location. The EU-US Data Privacy Framework (upheld by the European General Court in September 2025) provides a legal transfer mechanism, but you still need to follow GDPR's core data processing rules. If you source from databases containing EU profiles, you're in scope.
**How long can you keep candidate data under GDPR?** ICO and EDPS guidance recommends retaining unsuccessful candidate data for no longer than 6 to 12 months. Longer retention requires explicit consent - for instance, when adding candidates to a talent pool. The ICO's November 2024 audit found some AI recruiting tools retaining data indefinitely without candidate knowledge, which regulators flagged as a clear violation.
**What GDPR rules apply to AI recruiting tools?** AI recruiting tools must comply with transparency requirements (Articles 13-14), automated decision-making restrictions (Article 22), data minimization principles, and DPIA obligations (Article 35). The EDPB's Opinion 28/2024 also established that AI models trained on unlawfully processed data may themselves be unlawful to deploy. These obligations layer on top of the EU AI Act's high-risk requirements.
**How large can GDPR fines get?** GDPR fines can reach EUR 20 million or 4% of global annual turnover, whichever is higher. In the employment sector, regulators have issued 193 fines totaling EUR 360.9 million since 2018. The largest fine relevant to recruiting - LinkedIn's EUR 310 million penalty in October 2024 - involved invalid consent mechanisms for member data processing.
**Do you need consent to source candidates?** Not always. Active sourcing for a specific open role can rely on legitimate interest rather than consent. However, you must conduct a legitimate interest assessment, be transparent about your data processing, and respect candidates' right to object. Storing sourced profiles in a talent pool for future roles does require explicit consent with a clear retention period.
GDPR compliance isn't a one-time audit. It's an ongoing operational requirement that touches every part of how you find, evaluate, and hire candidates. The regulation has generated EUR 6.8 billion in fines since 2018 (GDPR Enforcement Tracker, 2026), and employment-sector enforcement is accelerating. As AI recruiting tools become standard, the overlap between GDPR and the EU AI Act's recruiting compliance requirements creates a dual challenge that hiring teams need to address now.
Here's where to start:
- Build a data map of every system that stores candidate data, with an owner and an export/delete path for each
- Document the lawful basis for each type of processing, and replace any consent mechanisms that rely on pre-ticked boxes
- Set retention limits (6-12 months for unsuccessful candidates) and automate the purges
- Run a DPIA on every AI tool that screens, ranks, or matches candidates
- Verify the transfer mechanism (DPF certification plus SCCs as backup) for every US-based vendor in your stack