Your Hiring Algorithm Has Been Making Decisions for Months. Can You Explain a Single One of Them?
The HR AI compliance window is closing. August 2026 is not a soft deadline. And the organizations that treat it like one will discover this in the worst possible way.
I want to open with something that happened during a discovery process in an employment discrimination case.
The plaintiff’s attorney asked the defendant company to produce the decision logic behind the hiring algorithm that had screened out the plaintiff’s application. The request was specific: not a general description of how the system works, but the specific reasoning applied to this specific candidate on this specific date.
The company turned to its vendor. The vendor provided documentation describing the model’s general architecture and the categories of signals it considers. The documentation did not, and could not, explain what happened to this particular candidate in this particular instance.
The company’s HR team, the legal team, and eventually the court were all looking at the same wall. The system had made a decision. Nobody could explain it.
This story is not an outlier. It is the story that employment attorneys are starting to tell more frequently, and it is the story that regulators in New York City, California, the European Union, and a growing number of other jurisdictions are building their enforcement frameworks around. The question they are asking is no longer “was this decision fair?” It is the harder, more immediate question: “Can you explain it?”
Most organizations cannot. And the window to build the infrastructure that would allow them to is closing.
The algorithm made thousands of employment decisions. The company can explain zero of them in the specific, individual terms that regulators and courts are now requiring. This is the HR AI compliance crisis that was always coming.
The Regulatory Picture, Assembled in One Place
Part of why organizations are behind on this is that the regulatory landscape is fragmented enough that no single piece of legislation created an obvious forcing function. The pressure has been building from multiple directions simultaneously, and the full picture is more urgent than any single regulation suggests when examined in isolation.
The Questa AI team mapped this landscape in “Why Regulators Are Watching Your HR Algorithms.” Let me assemble the relevant pieces here in one place, because the aggregate is what creates the urgency.
New York City’s Local Law 144 requires annual independent bias audits for automated employment decision tools, with results published publicly and candidates notified at least ten business days before the tool is used on them. The law applies when AI influences the decision, regardless of whether a human makes the final call. This is already in force.
California’s Civil Rights Council regulations, effective October 2025, extend anti-discrimination law to automated decision systems in employment. Employers must test proactively for bias, maintain four years of records, and provide alternative assessment processes for candidates who could be disadvantaged by the automated system. This is already in force.
The EU AI Act classifies employment AI as a high-risk application under Annex III. Conformity assessments, transparency documentation, human oversight mechanisms, and registration in the EU database are all required. The compliance deadline for existing high-risk systems is August 2026. That is not a long time away, and the work required to meet it is not trivial.
GDPR Article 22 gives data subjects the right to not be subject to solely automated decisions with significant effects, plus the right to human review and explanation. Employment decisions — hiring, promotion, performance rating, termination — have significant effects almost by definition.
The EEOC and state equivalents continue to treat AI-driven hiring tools as extensions of the employer under existing civil rights law. Disparate impact is a valid legal theory regardless of whether an AI regulation specifically applies. If your algorithm consistently produces racially or gender-skewed outcomes, you face liability under decades-old discrimination law that predates AI entirely.
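The standard screening heuristic here is the EEOC’s four-fifths rule: if one group’s selection rate is less than 80% of the highest group’s rate, the outcome is flagged for potential adverse impact. Here is a minimal sketch with made-up numbers — a real bias audit would use actual applicant-flow data and add statistical significance testing on top of this, but the arithmetic is this simple:

```python
# Minimal sketch of the EEOC four-fifths rule check for adverse impact.
# All counts are illustrative, not real applicant data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

groups = {
    # group: (selected, applicants) -- hypothetical numbers
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    # An impact ratio below 0.8 triggers the four-fifths flag.
    flag = "ADVERSE IMPACT FLAG" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

The four-fifths rule is a screening heuristic, not a legal conclusion — but an algorithm that fails it consistently is exactly the kind of evidence a disparate impact claim is built on.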
Read these together and the picture is clear: there is no jurisdiction where a major organization can deploy AI in HR today without meaningful compliance obligations. The obligations vary, but they converge on a few common requirements: explainability, human oversight, bias testing, and documentation. Organizations that have deployed HR AI without building these capabilities have a compliance gap, and the gap is getting harder to ignore.
The Explainability Problem Is Technical, Not Just Organizational
I want to spend some time on the technical dimension of this, because the explainability requirement is often treated as an organizational process question — who reviews the AI output, how do we document that review — when it is actually, at its root, an architectural question.
Most HR AI systems in production today are built on vector-based matching: candidates and job requirements are represented as numerical vectors, and the system identifies candidates whose vectors are most similar to the target. This approach works. It is computationally efficient and scales well across large applicant pools.
It also cannot explain its reasoning in terms a human reviewer, a regulator, or a plaintiff’s attorney would recognize as an explanation.
When you ask a vector-based system why it ranked Candidate A above Candidate B, the mathematically honest answer is that A’s vector was closer to the target vector in high-dimensional embedding space. That is not an explanation of the decision. It is a restatement of the output in slightly more technical terms. It tells you what happened but not why, and “why” is exactly what regulators are requiring.
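To make the opacity concrete, here is a minimal sketch of what vector-based ranking does mechanically. The vectors are random stand-ins for a real embedding model and the names are hypothetical, but the shape of the “explanation” this approach can produce is faithful: a similarity score, and nothing else.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 768-dimensional embeddings from some upstream model.
rng = np.random.default_rng(seed=42)
job_vector = rng.standard_normal(768)
candidates = {
    "candidate_a": rng.standard_normal(768),
    "candidate_b": rng.standard_normal(768),
}

# Rank candidates by similarity to the job vector.
ranking = sorted(
    candidates.items(),
    key=lambda item: cosine_similarity(item[1], job_vector),
    reverse=True,
)

for name, vec in ranking:
    score = cosine_similarity(vec, job_vector)
    # This score is the system's entire "reasoning": a single number.
    # Nothing in it identifies which skill or requirement drove the ranking.
    print(f"{name}: {score:.4f}")
```

The score tells you what happened. Nothing in it names the qualification, skill, or requirement that drove the outcome, which is exactly the gap the regulators’ “why” question exposes.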
This is one of the reasons the shift toward graph-based knowledge representation is relevant to HR compliance, not just to general AI accuracy. The broader architectural question is a topic of its own, but the HR compliance implication is specific: graph-based systems encode entities — candidates, job requirements, skills, qualifications, decision factors — as nodes with explicit relationships. When a graph-based system ranks a candidate, it can show you the traversal: this candidate was ranked this way for these reasons, based on these specific relationships between these specific nodes. That is an explanation. That is what EU AI Act Article 13 and GDPR Article 22 require.
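For contrast, here is an equally minimal sketch of the graph-based version. The nodes, edge types, and the toy list-of-tuples store are all illustrative — a production system would use a real graph database — but the output shows the difference in kind: a traversal you can read as a reason.

```python
# A toy knowledge graph: nodes are candidates, skills, and job requirements;
# edges are explicit, typed relationships. All names are illustrative.
edges = [
    ("candidate_a", "has_skill", "python"),
    ("candidate_a", "has_certification", "pmp"),
    ("python", "satisfies", "req_backend_development"),
    ("pmp", "satisfies", "req_project_management"),
    ("req_backend_development", "required_by", "job_123"),
    ("req_project_management", "required_by", "job_123"),
]

def explain_match(candidate: str, job: str) -> list[str]:
    """Collect every explicit path candidate -> skill -> requirement -> job.

    Each returned string is a human-readable decision factor, not a score.
    """
    reasons = []
    for subj, rel, skill in edges:
        if subj != candidate:
            continue
        for s2, rel2, req in edges:
            if s2 != skill or rel2 != "satisfies":
                continue
            for r3, rel3, j in edges:
                if r3 == req and rel3 == "required_by" and j == job:
                    reasons.append(
                        f"{candidate} --{rel}--> {skill} --satisfies--> {req} (required by {job})"
                    )
    return reasons

for reason in explain_match("candidate_a", "job_123"):
    print(reason)
```

Each printed line is a decision factor a human reviewer can verify, challenge, or override — which is what makes substantive oversight possible at all.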
The regulations make this point directly: explainability is not a product feature you add to a working system. It is an architectural property that has to be present in the system from the beginning. If your current HR AI vendor cannot tell you whether their system can produce individual-level decision explanations, the answer is probably no, and that is a conversation worth having before August 2026 rather than after.
You cannot provide meaningful human oversight of a system you cannot explain. And you cannot defend an employment decision to a regulator with “the algorithm recommended it.” These two facts together define the compliance problem.
August 2026: What the Deadline Actually Requires
The EU AI Act’s August 2026 deadline for Annex III system compliance is the most concrete forcing function in the current regulatory landscape, and it is worth being specific about what compliance actually entails.
The full-length piece on AI compliance in HR departments covers this in detail. Here is the version that matters for organizational planning.
The conformity assessment for a high-risk AI system is not a checkbox exercise. It requires:
• Technical documentation of the system’s logic — not the vendor’s marketing materials, but actual documentation of how the system makes decisions, what data it was trained on, what its known limitations and failure modes are, and how it handles edge cases
• Evidence of bias testing — specific testing against protected characteristics with documented results, conducted in conditions that reflect actual deployment
• Human oversight mechanisms — not nominal human review, but substantive review by people who understand what the system is doing and have the authority and training to override it
• Processes for affected individuals to obtain explanation and challenge decisions — a functioning complaints and appeal pathway, not a theoretical right (a sketch of the per-decision record that makes this workable follows this list)
• Registration in the EU AI database — which requires having the above documentation ready to submit
• Post-market monitoring plan — ongoing rather than point-in-time compliance
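What does documentation at the required granularity actually look like? Here is a minimal sketch of a per-decision record. The field names are my own illustrations, not a prescribed schema; the point is the unit of record: one entry per candidate per decision, capturing the factors actually applied rather than the model’s general architecture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a per-decision audit record. Illustrative field names,
# not a prescribed schema.

@dataclass(frozen=True)
class EmploymentDecisionRecord:
    candidate_id: str            # pseudonymized identifier
    job_id: str
    decision: str                # e.g. "advanced", "screened_out"
    decision_factors: list[str]  # the specific reasons applied to THIS candidate
    model_version: str           # exact version, so the logic can be reproduced
    human_reviewer: str | None   # who exercised oversight, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

record = EmploymentDecisionRecord(
    candidate_id="cand-8841",
    job_id="job_123",
    decision="screened_out",
    decision_factors=[
        "req_project_management: no satisfying qualification found",
    ],
    model_version="matcher-2.3.1",
    human_reviewer=None,  # a None here is itself compliance-relevant evidence
)
print(record)
```

Nothing about this schema is mandated. What matters is that a record like this exists for every decision and survives for the applicable retention period — California’s four years being the longest noted above.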
The conformity assessment process itself typically takes three to six months when you have the underlying documentation ready. If you do not have the documentation, you need to build it first, which means engaging with your vendor, auditing the system, and potentially redesigning workflows. Organizations that have not started this work as of April 2026 are already behind a reasonable timeline for August 2026 compliance.
The question is not whether to do this work. It is whether you do it on your own timeline, proactively, with the benefit of planning and resource allocation, or whether you do it reactively, under pressure from an enforcement action, a litigation discovery request, or a failed procurement qualification.
The Vendor Liability Problem Nobody Is Talking About Clearly Enough
There is a specific misunderstanding about HR AI liability that I want to name directly, because it is probably the most common source of organizational false comfort in this space.
The misunderstanding is this: “We use a vendor-supplied tool. If there’s a problem with the algorithm, that’s the vendor’s liability, not ours.”
This is incorrect in almost every jurisdiction where HR AI compliance obligations exist.
In the US, under existing civil rights law and the EEOC’s guidance, employers are liable for discriminatory outcomes produced by tools they use in employment decisions, regardless of whether those tools were supplied by a third party. Technology does not transfer liability. If a vendor’s algorithm produces disparate impact outcomes, the employer faces the legal consequences.
Under the EU AI Act, the organization deploying the AI system is the “deployer” and bears specific obligations regardless of who developed the system. The conformity assessment obligation, the human oversight requirement, and the documentation requirements apply to the deployer, not only to the developer. If the vendor’s system cannot produce the documentation the conformity assessment requires, the deployer must either obtain it, require the vendor to provide it, or stop using the system.
Under GDPR, the data controller — typically the employer — bears the accountability obligation for automated decisions affecting data subjects. The vendor is typically a data processor, which creates its own set of obligations, but it does not transfer the controller’s accountability.
The practical implication is that the questions about your HR AI system’s decision logic, bias testing, and explainability are questions you must be able to answer, and that means getting answers from your vendors that go beyond what their sales materials say. If your vendor cannot provide the technical documentation that a conformity assessment requires, you have a procurement decision to revisit before August 2026.
The HR Department as Shadow Regulator
There is a framing that has emerged in the employment law and HR compliance communities that I find clarifying: in the current environment, HR is increasingly functioning as the internal regulator of workplace AI.
The formal regulatory frameworks are patchwork. Federal guidance in the US is fragmented and in flux. State laws vary significantly. The EU AI Act applies in Europe but not universally. The GDPR applies broadly but its AI-specific application is still being developed through enforcement.
In the absence of a single clear regulatory framework, organizations that are getting this right are the ones where HR and legal have jointly assumed the governance function: cataloguing every AI touchpoint in the people workflow, assessing compliance exposure against the most stringent applicable standards, and building internal policies that would survive scrutiny in any jurisdiction where the organization operates.
This is not a technology project. It is a governance project. The technology team builds or procures the systems. HR and legal govern how they are used, what oversight processes exist, what documentation is maintained, and how affected individuals can understand and challenge decisions made about them.
The organizations that have done this well have three things in common. They assigned explicit ownership of AI governance at a senior level — not diffused across departments, but owned. They mapped the full scope of AI influence in employment decisions before designing the governance response — because the scope is almost always larger than anyone initially estimates. And they required vendors to provide documentation before procurement, not after a compliance question surfaced.
For the full technical and regulatory picture, “Regulators Are Coming for Your HR Algorithms. Here’s What That Actually Means for Your Organization” covers the specific frameworks in depth and includes a practical governance framework. The Substack format here lets me focus on the organizational decision: what to do with this information, and when.
The One Question That Cuts Through Everything
I want to close with the framing I find most useful for organizations trying to assess where they actually stand on this.
The question is not “are we compliant with the EU AI Act?” That question is complex, jurisdiction-dependent, and can produce a technically accurate answer that obscures serious underlying problems.
The question is: “If an EEOC investigator, an EU supervisory authority, or a plaintiff’s attorney asked us tomorrow to explain specifically how our hiring algorithm treated a specific candidate, could we provide that explanation in individual, specific, documented terms that would satisfy the legal standard?”
This question is useful because it is concrete, it is binary, and it cannot be answered with a reference to the vendor’s documentation or the system’s general architecture. Either you can produce an individual-level explanation or you cannot. If you cannot, you have a problem regardless of which specific regulation applies to you.
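If you want the binary nature of this in its most literal form: the question reduces to a lookup that either returns a documented, individual-level record or fails. A minimal sketch, assuming a per-decision audit store along the lines of the record sketched earlier (all field names illustrative):

```python
def explain_individual_decision(records, candidate_id, job_id):
    """Return the documented reasons for one candidate on one decision, or fail loudly."""
    matches = [
        r for r in records
        if r["candidate_id"] == candidate_id and r["job_id"] == job_id
    ]
    if not matches:
        # If this branch is reached, the honest answer to the regulator is "no".
        raise LookupError(
            f"No individual-level decision record for {candidate_id} on {job_id}; "
            "only aggregate or architectural documentation exists."
        )
    return [
        f"{r['timestamp']}: {r['decision']} because {', '.join(r['decision_factors'])}"
        for r in matches
    ]

# Hypothetical store containing a single record.
audit_store = [{
    "candidate_id": "cand-8841",
    "job_id": "job_123",
    "timestamp": "2026-03-14",
    "decision": "screened_out",
    "decision_factors": ["req_project_management: no satisfying qualification found"],
}]

print(explain_individual_decision(audit_store, "cand-8841", "job_123"))
```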
The organizations that answer “yes” to this question have made architectural and governance choices that make compliance possible. The organizations that answer “no” or “I’m not sure” have a planning project on their hands, and the timeline for completing it is shorter than it may appear.
The infrastructure that makes the answer “yes” is what Questa AI was built to provide: AI systems with explainable reasoning, local data processing that keeps sensitive employment data inside the organization’s own infrastructure, and audit trails designed for regulatory accountability rather than internal reporting. The compliance investment is not a cost center. It is the architecture that makes sustainable AI deployment in HR contexts possible.
The organizations that have built it will look back on this period as a strategic window. The ones that have not will look back on it as the window they missed.
If this issue was useful, please forward it to your CHRO, general counsel, or whoever in your organization is responsible for AI governance. This is the kind of decision that benefits from more people being in the room earlier.