
The use of AI systems in recruitment - Obligations for employers at the crossroad between the AI Act and GDPR

by Popovici Nitu Stoica & Asociatii August 7, 2025

Website www.pnsa.ro

Luana Dragomirescu – DPO, Counsel SCA Popovici Nițu Stoica & Asociații

Ioana Cazacu – Managing Associate SCA Popovici Nițu Stoica & Asociații

Artificial intelligence (AI) is profoundly transforming the way companies recruit talent. With only one year to go before the provisions of Regulation (EU) 2024/1689 on artificial intelligence (AI Act) establishing additional conditions for the use of AI systems in recruitment take effect, employers must prepare for a complex new legal framework at the intersection of technology regulation and personal data protection law.

The time is therefore ripe for a brief overview of the legal requirements applicable to employers who will use AI systems in recruitment from August 2026 onwards.

In particular, this summary considers the use of AI systems that, in addition to processing data available internally at the employer level, also collect and process data from public sources (e.g., internet search engines, social networks) for recruitment purposes.

AI-assisted recruitment: a high-risk use case

According to the AI Act, AI systems intended for use in recruitment – including for targeted advertising, CV screening, or candidate assessment – are considered high-risk AI systems. This classification triggers strict obligations for employers as “users” or “deployers” of these systems.

This also applies to systems that, in addition to the employer's internal data, collect and process information from public sources for selection purposes.

Key obligations for employers under the AI Act

Starting August 2, 2026, employers using high-risk AI systems must comply with the following requirements:

  • use in accordance with the provider's instructions: taking appropriate technical and organizational measures to ensure that the systems are used in accordance with the instructions for use made available by the provider;
  • qualified human oversight: assigning human oversight to natural persons who have the necessary competence, training, and authority, as well as the necessary support;
  • input data quality: to the extent the employer exercises control over the input data, ensuring that the input data is relevant and sufficiently representative in view of the intended purpose of the system;
  • continuous monitoring: monitoring the operation of the system on the basis of the instructions for use provided by the provider;
  • keeping log files: retaining the logs automatically generated by the system, to the extent such logs are under the employer's control, for a period appropriate to the intended purpose of the system and of at least six months;
  • informing employees: before putting the system into service or using it at the workplace, informing the workers' representatives and the affected workers that they will be subject to the use of the system;
  • data protection impact assessment: where applicable, using the information about the system provided by the provider to comply with the obligation to carry out a data protection impact assessment under Regulation (EU) 2016/679 (GDPR);
  • transparency towards candidates: employers who make decisions, or assist in making decisions, relating to natural persons must inform those persons that they are subject to the use of a high-risk AI system.

Failure to comply with these obligations may result in penalties of up to EUR 15,000,000 or, in the case of an undertaking, up to 3% of its global annual turnover in the preceding financial year, whichever is higher.
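For illustration only, the "whichever is higher" rule works as a simple comparison between the fixed cap and the turnover-based cap. The sketch below uses a hypothetical helper name (`max_fine_eur`) and is not legal advice:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative maximum AI Act fine described above:
    EUR 15,000,000 or 3% of global annual turnover for the
    preceding financial year, whichever is higher.
    Hypothetical helper for illustration; not legal advice."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# Undertaking with EUR 2 billion turnover: 3% = EUR 60 million applies
print(max_fine_eur(2_000_000_000))  # 60000000.0
# Undertaking with EUR 100 million turnover: 3% would be only EUR 3 million,
# so the fixed EUR 15 million cap applies instead
print(max_fine_eur(100_000_000))    # 15000000
```

In other words, the fixed amount acts as a floor for the maximum penalty, while larger undertakings face a turnover-linked ceiling.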

AI literacy – a requirement already in place

Since February 2025, employers must ensure an adequate level of AI literacy for staff who operate or interact with such systems. This involves training tailored to the level of knowledge, experience, and context of use.

GDPR compliance in the context of AI

The use of AI in recruitment inevitably involves the processing of personal data.

Even if the data comes from public sources (e.g., LinkedIn profiles, public posts), accessing, collecting, and using it constitutes personal data processing and falls under the General Data Protection Regulation (EU) 2016/679 ("GDPR").

Compliance with the GDPR is not limited to securing data; it requires a responsible and transparent approach towards candidates, with the general data processing principles integrated at every stage of the AI-assisted selection process.

Some useful considerations for employers in the AI-assisted selection process are briefly outlined below:

  • Identifying a valid legal ground for processing: the processing must rely on one of the legal grounds under Article 6 of the GDPR (e.g., the candidate's consent, legitimate interest, a legal obligation). Legitimate interest may be invoked, but it must be supported by a necessity and proportionality analysis.
  • Clear information for candidates: candidates must be clearly and fully informed, in accordance with Article 14 of the GDPR, about what data is collected and from what sources, the purpose of the processing, the legal ground, the storage period, their rights (access, rectification, objection, erasure, etc.) and, where applicable, the existence of automated decision-making. This information should be provided in an information note available from the application stage or published in the recruitment notice.
  • Avoid non-transparent automated decisions: if the AI system automatically evaluates CVs, extracts data from external sources, creates profiles, or makes selection decisions, additional rules apply, such as: fully automated evaluation without human intervention is not permitted, a data protection impact assessment (DPIA) is required before implementing such a system, and algorithmic discrimination must be avoided (e.g., exclusion based on geographical criteria, career breaks, etc.).

As with any data processing, the use of AI systems in the analysed context must comply with the general principles of Article 5 of the GDPR regarding limitation and proportionality:

  • only data that is relevant and necessary for the stated purpose (e.g., professional experience, skills) may be collected;
  • data on family life, political opinions, health, religion, ethnicity, etc. may not be collected, except under very strict conditions;
  • the storage period must be limited and justified (e.g., 6-12 months for future recruitment);
  • data must be protected by appropriate technical and organizational measures; access to data must be limited to persons directly involved in the recruitment and selection process.

A key aspect in the use of AI systems for candidate selection remains the risk of generating incorrect or misleading results, known as "false positives". These occur when the AI system incorrectly identifies a candidate as unsuitable, even though they meet the relevant criteria. Such errors can have direct consequences for the rights of the individuals concerned and may lead to the unfair exclusion of worthy candidates. To prevent such situations, it is essential that automated decisions are complemented by qualified human intervention and that the decision-making process is documented and audited regularly. Algorithmic transparency and the ability to review and correct AI decisions thus become key elements in ensuring a fair recruitment process that complies with legal requirements.

Conclusion: AI in recruitment – opportunity and responsibility

Artificial intelligence can streamline recruitment processes, but it comes with significant legal obligations. Employers should approach compliance with both the AI Act and the GDPR with diligence to avoid legal and reputational risks.

Recommendation: start early with an audit of your AI systems, a data protection impact assessment for each AI system used, training for staff on AI oversight, and updates to internal policies and information notes to data subjects. Proactive compliance will make the difference between responsible innovation and exposure to penalties.
