AI Use Policy for HR Teams: What to Include and How to Roll It Out
Key takeaway
Most organizations that have deployed AI tools in HR did so without a written policy governing their use. This guide covers what an HR AI use policy needs to address, the specific requirements in jurisdictions with AI employment laws, and how to communicate the policy to managers and employees.
The absence of an AI use policy in HR is not a neutral state. It means that AI tools are being used inconsistently across the organization — some managers using AI to draft performance reviews without disclosure, some recruiters using AI screening tools without adverse impact analysis, some HR business partners inputting employee compensation data into public AI systems that may use it for model training. A written AI use policy doesn't prevent AI adoption — it channels it into documented, auditable, and legally defensible practices. This guide covers what needs to be in that policy.
The six elements every HR AI use policy needs
1. Approved tools and use cases
List the AI tools your organization has approved for HR use, and for what purposes. The list should be specific enough to prevent ambiguity — 'AI tools may be used for drafting' is too vague; 'Copilot for Microsoft 365 may be used for drafting job descriptions, policy documents, and employee communications, subject to the review requirements below' is specific.
Equally important: specify what is prohibited. Common prohibitions include: using public AI systems (ChatGPT, Claude via web) to process employee PII without a data processing agreement, using AI for final employment decisions without human review, and using AI systems not on the approved list for HR purposes.
2. Review requirements before use in decisions
Every AI output used in an employment decision — hiring, performance management, termination, promotion, compensation — requires documented human review before it is used. The policy should specify: who reviews the output, what they're reviewing for (factual accuracy, consistency, potential bias), and how the review is documented. A manager who submits an AI-generated performance review without editing it has not completed a review — the policy should make this explicit.
3. Candidate and employee disclosure
New York City Local Law 144 requires employers using AI tools in hiring to disclose this to candidates and conduct annual bias audits. Similar requirements are developing in other jurisdictions. Best practice everywhere: disclose in job postings when AI is used in the screening or selection process, and provide a mechanism for candidates to request human review. For existing employees, disclose AI use in HR processes (performance review drafting, chatbot responses) in the employee handbook update that accompanies policy rollout.
4. Data handling and PII restrictions
Employee data is some of the most sensitive data an organization holds. The policy should specify: which employee data (if any) may be processed by approved AI tools, which AI systems have data processing agreements that prevent use of your data for model training, and that employee PII (names, salaries, performance ratings, health information) may not be input into public AI systems without a DPA.
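To make the PII restriction operational, some teams add a lightweight pre-submission screen to any internal tooling that forwards text to an external AI system. A minimal, illustrative sketch: the patterns below are hypothetical examples, and a regex screen like this catches only obvious formats, so it is a backstop, not a substitute for a DPA or proper data-loss-prevention tooling.

```python
import re

# Obvious-pattern screens only: SSN-like and salary-like strings.
# Hypothetical patterns; tune to your organization's data formats.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "salary": re.compile(r"\$\s?\d{2,3},\d{3}\b"),
}

def contains_obvious_pii(text):
    """Return the names of any PII patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

# Block the submission (or route it for review) if anything matches.
hits = contains_obvious_pii("Jane's salary is $95,000 and her SSN is 123-45-6789.")
print(hits)  # both hypothetical patterns match this example
```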
5. Adverse impact monitoring for hiring AI
For any AI tool used in hiring (resume screening, candidate ranking, interview analysis), the policy should require quarterly adverse impact analysis by gender, race/ethnicity, and age. The analysis compares selection rates for candidates from different groups — a tool that selects male candidates at a materially higher rate than female candidates for the same role has an adverse impact problem, regardless of whether that was the tool developer's intent. Document the analysis and remediate tools that produce adverse impact.
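The selection-rate comparison described above is straightforward arithmetic: compute each group's selection rate, then divide by the highest group's rate to get an impact ratio, which is commonly checked against the EEOC's four-fifths (0.8) rule of thumb. A minimal sketch with hypothetical counts, for illustration only, not legal analysis:

```python
# Hypothetical quarterly data for one role: candidates the AI tool screened
# and candidates it advanced, broken out by one protected characteristic.
screened = {"men": 400, "women": 350}
advanced = {"men": 120, "women": 70}

# Selection rate per group = advanced / screened.
rates = {g: advanced[g] / screened[g] for g in screened}

# Impact ratio = each group's rate divided by the highest group's rate.
# The EEOC four-fifths rule of thumb flags ratios below 0.8 for review.
top = max(rates.values())
for group in sorted(rates):
    ratio = rates[group] / top
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rates[group]:.1%}, impact ratio {ratio:.2f} ({flag})")
```

With these numbers, men are selected at 30% and women at 20%, an impact ratio of 0.67, which would trigger the remediation step the policy requires. Run the same comparison for race/ethnicity and age bands each quarter.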
6. Accuracy and error monitoring
Designate someone responsible for periodic review of AI outputs for accuracy. For HR chatbots: monthly sampling of 20–30 chatbot responses for factual accuracy against source policy documents. For AI-generated content (job descriptions, policies, learning content): review before publication. For AI in performance processes: manager training on what to verify and a post-cycle survey asking managers whether AI-assisted reviews were accurate.
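The monthly chatbot sampling can be as simple as pulling a random slice of logged responses for manual review. A minimal sketch, assuming responses are logged as a list of records; the field names and log shape are illustrative assumptions, not a real chatbot API:

```python
import random

def sample_for_review(responses, n=25, seed=None):
    """Draw a random sample of logged chatbot responses for manual
    accuracy review against source policy documents."""
    rng = random.Random(seed)  # a fixed seed makes the audit sample reproducible
    k = min(n, len(responses))
    return rng.sample(responses, k)

# Hypothetical log entries; a real log would also capture the policy section cited.
log = [{"id": i, "question": f"q{i}", "answer": f"a{i}"} for i in range(200)]
batch = sample_for_review(log, n=25, seed=2024)
print(len(batch))  # 25 responses to check this month
```

Recording the seed alongside the month's findings lets the reviewer reproduce exactly which responses were checked.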
Rollout communication
The policy rollout requires two audiences: managers (who will use AI tools directly) and employees (who will be affected by AI-assisted decisions). For managers: a 30-minute training session covering approved tools, review requirements, and what not to do — followed by a written acknowledgment of the policy. For employees: an all-company communication explaining which AI tools are used in HR processes, what decisions are AI-assisted, and how to request human review.
Policy review cadence
AI capabilities and the employment laws governing AI both change rapidly. Put the AI use policy on a recurring review calendar: at least annually on paper, and in practice every 6–12 months, as new tools are deployed, new regulations emerge, and your organization's AI maturity evolves.
Does our AI use policy need to be reviewed by legal?
Yes. At minimum, employment counsel should review the sections covering candidate disclosure, adverse impact monitoring, and data handling before the policy is published. EEOC guidance, New York City Local Law 144, and emerging state laws create specific obligations that generic policy language may not address correctly.
What happens if an employee refuses to interact with AI tools?
For employee-facing AI tools (chatbots, AI-assisted HR services), provide a human alternative on request. The EEOC and state civil rights agencies have not issued formal guidance on AI opt-out rights, but providing a human fallback is both good employee relations practice and risk management. For AI tools used in hiring, candidate opt-out mechanisms are required in some jurisdictions.
Do we need to conduct an adverse impact analysis before deploying an AI hiring tool?
Best practice is pre-deployment analysis on the vendor's validation data, plus post-deployment monitoring of actual selection rates. New York City Local Law 144 requires annual bias audits by an independent auditor for covered employment decisions. Several other jurisdictions are implementing similar requirements. Conduct the pre-deployment analysis with your employment counsel and implement quarterly monitoring from day one of deployment.