
Inside Philippines: AI and data privacy in recruitment

AI is revolutionising recruitment in the Philippines – but can companies balance speed with ethics, data privacy, and candidate trust?

A growing number of Philippine companies are streamlining hiring with AI. From Metro Manila to regional business hubs, HR departments are deploying AI-powered tools to screen CVs, schedule interviews, and match candidates to roles with a level of efficiency that was unthinkable just a few years ago.

And it is not just in the Philippines. According to LinkedIn’s APAC Future of Recruitment Report, 69% of organisations in Asia-Pacific are already leveraging AI or machine learning for HR functions, and the Philippines is firmly in the vanguard of this shift.

But as AI opens new frontiers in hiring speed and precision, it also introduces complex dilemmas around data privacy, algorithmic bias, and candidate trust. The stakes are high: businesses that fumble these challenges risk legal penalties, reputational fallout, and a shrinking talent pool.

Moreover, the Philippines’ National Privacy Commission is making it clear that in the evolving digital job market, responsible AI is not just a nice-to-have – it is a compliance necessity and a competitive advantage.

As AI adoption soars, benefits and risks multiply

AI is now embedded in every stage of the hiring process for many Philippine companies. Tools like Qureos, Seek, Impress.ai, and locally tuned platforms such as Kalibrr automate everything from job postings to candidate screening and even video assessments. Kalibrr, for instance, offers AI-driven matching that it claims cuts time-to-hire by up to 50%.

Beyond recruitment, HR leaders are also embracing AI for workforce analytics, people management, and performance tracking, while automation platforms such as Zapier are gaining ground alongside chatbot solutions.

The surge in automation is largely about efficiency – AI eliminates repetitive tasks and enables HR professionals to focus on higher-value work. Yet, this efficiency comes at a price. As more personal data flows through these platforms, the risks of data misuse, algorithmic discrimination, and data breaches grow.

HR teams also face resistance to change, skills gaps, and rising ethical concerns. The challenge is not just building faster systems; it is building fairer, more transparent, and more secure ones.

Read: As HR sleeps, AI agents get busy

AI’s double-edged sword: Bias, consent, and trust

AI’s promise of objectivity in recruitment is tempered by its susceptibility to bias. If trained on flawed or incomplete data, AI tools can perpetuate and even amplify discrimination. For example, gender bias has been documented in systems that score women’s CVs lower due to underrepresentation in training sets, while ethnic bias can result in Filipino surnames being unfairly screened out.

Socioeconomic and geographic disparities are also baked into AI outcomes. Candidates from rural provinces often face barriers due to weak internet access and limited digital literacy, excluding them from opportunities in AI-first hiring platforms. Companies risk reinforcing these divides unless they proactively design for inclusion.

The trust gap is stark. A 2024 survey found that nearly half of Filipino job seekers believe AI is more biased than human recruiters. In the public sector, efficiency sometimes trumps fairness, eroding candidate confidence.

Data Privacy Act: Foundation and roadmap

At the core of AI governance in the Philippines stands the Data Privacy Act of 2012 (DPA), enforced by the National Privacy Commission (NPC). The DPA sets out clear requirements: transparency, legitimate purpose, proportionality, fairness, and lawfulness in all personal data processing. The NPC’s 2024 Advisory on AI, meanwhile, spells out how these principles apply to AI systems. Its mandates include:

  • Transparency: Companies must clearly state how and why AI is used, with information provided in plain language.
  • Data minimisation: Only collect what is strictly necessary; indiscriminate scraping of public data is not allowed.
  • Human oversight: Automated decisions that significantly impact individuals – like hiring or rejection – must be reviewable by a human.
  • Data subject rights: Candidates retain the rights to be informed, to object, to access and correct their data, to have it erased, and to contest decisions made by AI.

The DPA also requires organisations processing significant volumes of personal data to register with the NPC and appoint a Data Protection Officer. Non-compliance can mean fines up to USD 90,000 per violation, and criminal penalties including imprisonment for serious offences.

Informed consent: More than a checkbox

Securing meaningful consent in an AI-driven world is no small feat. The DPA requires that consent be freely given, specific, and informed, but AI’s complexity makes this difficult in practice. Many candidates feel they must agree to data processing if they want to be considered for a job, which is hardly voluntary.

The NPC is pushing organisations to go beyond perfunctory checkboxes. Companies must provide ongoing, accessible information about how AI is used and offer real alternatives or recourse – not just a one-time yes-or-no consent.

Notably, the NPC clarified that even publicly available personal data is protected under the DPA and cannot be indiscriminately used for AI training or other purposes.

Data security: Rising attacks and supply chain risks

AI recruitment systems collect and process large volumes of sensitive personal data, making them attractive targets for cybercriminals. In 2023, 84% of Philippine organisations reported negative impacts from supply chain cyberattacks, and 32% lacked systems to detect incidents within their vendor networks.

The DPA mandates prompt reporting of data breaches involving sensitive information. Concealing a breach is itself a punishable offence, with up to five years’ imprisonment and fines up to USD 18 million. Companies must implement strong encryption, restrict access to authorised staff, and conduct regular audits.

Moreover, they are accountable for the practices of third-party vendors – an especially tough challenge given the country’s documented supply chain vulnerabilities.

Data retention: When AI’s appetite meets legal limits

AI systems benefit from vast, diverse datasets for training and improvement. But the DPA’s data minimisation and retention rules are strict: personal data must not be kept longer than necessary for its declared, legitimate purpose, and indefinite retention is forbidden.

Employee records may be retained for up to ten years, but data on unsuccessful applicants, or data used to train AI models, must be anonymised or securely deleted once its purpose expires.

Read: In 2025, AI bias persists in HR tech

Human in the loop: The final backstop

No matter how advanced the AI, the DPA and NPC require that hiring decisions with significant impact involve meaningful human intervention. Candidates must be able to question, contest, and appeal automated outcomes.

AI should support, not replace, human judgement, especially in high-stakes scenarios. Privacy Impact Assessments are now a must for organisations deploying AI in recruitment. These help identify risks, determine where human review is needed, and ensure compliance with the DPA.

Building trust and competitive advantage

Compliance is only the start. Organisations that embed privacy-by-design, bias mitigation, and transparent communication into their recruitment AI will earn a decisive edge in attracting top talent. Best practices include:

  • Regularly auditing AI models for bias and fairness
  • Providing clear, plain-language explanations for AI-driven processes
  • Empowering Data Protection Officers and investing in ongoing privacy training
  • Rigorously vetting third-party vendors for data security
  • Offering candidates accessible channels to exercise their data rights and contest decisions

What’s next: Responsible AI as the standard

The future of AI in Philippine recruitment is bright but conditional. As the technology matures, so too will regulatory scrutiny and candidate expectations. The NPC’s proactive stance – coupled with potential new AI-specific legislation – signals that organisations can no longer afford to treat privacy and fairness as afterthoughts.

Companies that champion responsible AI, prioritise data subject rights, and foster a culture of trust will not only stay compliant – they will stand out in a fiercely competitive talent market.

The challenge now is to move from compliance to leadership, ensuring that technology empowers both employers and job seekers, building a recruitment landscape that is not just efficient, but equitable and secure.


Topics: HR Technology, Technology, Recruitment Technology, Artificial Intelligence
