Luana Dragomirescu – DPO, Counsel SCA Popovici Nițu Stoica & Asociații
Ioana Cazacu – Managing Associate SCA Popovici Nițu Stoica & Asociații
Artificial intelligence (AI) is profoundly transforming the way companies recruit talent. With only one year to go before the provisions of EU Regulation 2024/1689 on artificial intelligence (the "AI Act") establishing additional conditions for the use of AI systems in recruitment take effect, employers must prepare for a complex new legal framework at the intersection of technology regulation and personal data protection law.
The time is therefore ripe for a brief overview of the legal requirements applicable to employers who will use AI systems in recruitment from August 2026 onwards.
In particular, this summary considers the use of AI systems that, in addition to processing data available internally at the employer level, collect and process (also) data from public sources (e.g., internet search engines, social networks) for recruitment purposes.
AI-assisted recruitment: a high-risk use case
According to the AI Act, AI systems intended for use in recruitment – including for targeted job advertising, CV screening, or candidate assessment – are classified as high-risk AI systems. This classification triggers strict obligations for employers in their capacity as "deployers" of these systems.
This also applies to systems that, in addition to the employer's internal data, collect and process information from public sources for selection purposes.
Key obligations for employers under the AI Act
Starting August 2, 2026, employers using high-risk AI systems must comply with the following requirements, set out primarily in Article 26 of the AI Act:
– use the system in accordance with the provider's instructions for use;
– assign human oversight of the system to persons with the necessary competence, training, and authority;
– to the extent the employer controls the input data, ensure that it is relevant and sufficiently representative for the system's intended purpose;
– monitor the operation of the system and report serious incidents to the provider and the competent authorities;
– keep the logs automatically generated by the system, to the extent these are under the employer's control;
– inform employees and their representatives, before putting the system into service, that they will be subject to its use.
Failure to comply with these obligations may result in fines of up to EUR 15,000,000 or, in the case of an undertaking, up to 3% of its total worldwide annual turnover in the preceding financial year, whichever is higher.
AI literacy – a requirement already in place
Since February 2, 2025, employers must ensure an adequate level of AI literacy for staff who operate or interact with such systems. This involves training tailored to staff members' level of knowledge and experience and to the context in which the systems are used.
GDPR compliance in the context of AI
The use of AI in recruitment inevitably involves the processing of personal data.
Even if the data comes from public sources (e.g., LinkedIn profiles, public posts), accessing, collecting, and using it constitutes personal data processing and falls under the General Data Protection Regulation (EU) 2016/679 ("GDPR").
GDPR compliance is not a merely formal exercise: it requires a responsible approach and transparency towards candidates, with the general data processing principles integrated at every stage of the AI-assisted selection process.
Some useful considerations for employers in the AI-assisted selection process are briefly outlined below:
As with any data processing, the use of AI systems in the analysed context must comply with the general principles of Article 5 of the GDPR, in particular those concerning limitation and proportionality:
– lawfulness, fairness, and transparency – candidates must be informed, in clear terms, that AI is used in the selection process and on what legal basis their data is processed;
– purpose limitation – data collected for recruitment may not be reused for incompatible purposes;
– data minimisation – only data strictly necessary for assessing suitability for the role should be collected, including from public sources;
– accuracy – data used to assess candidates must be accurate and kept up to date;
– storage limitation – candidate data must not be retained longer than necessary for the recruitment process.
A key risk in the use of AI systems for candidate selection remains the generation of incorrect or misleading results – for instance, where the system wrongly classifies a candidate as unsuitable even though they meet the relevant criteria. Such errors have direct consequences for the rights of the individuals concerned and may lead to the unfair exclusion of qualified candidates. To prevent such situations, it is essential that automated decisions are complemented by qualified human intervention and that the decision-making process is documented and regularly audited. Algorithmic transparency and the ability to review and correct AI decisions thus become key elements in ensuring a fair recruitment process that complies with legal requirements.
Conclusion: AI in recruitment – opportunity and responsibility
Artificial intelligence can streamline recruitment processes, but it comes with significant legal obligations. Employers should approach compliance with both AI Act and GDPR with diligence to avoid legal and reputational risks.
Recommendation: start early with an audit of your AI systems, a data protection impact assessment for each AI system used, training for staff responsible for AI oversight, and an update of internal policies and privacy notices for data subjects. Proactive compliance will make the difference between responsible innovation and exposure to penalties.