I am Antoine Kanaan. I built Legal AI as infrastructure, not a toy

Mohammed Fathy

Antoine Kanaan on Building Legal AI That Lawyers Can Actually Trust

Antoine Kanaan does not talk about AI as magic. He talks about liability, governance and workflow. Speed matters to him, but discipline matters more. The through line in his thinking is simple: in legal, if trust breaks, you are finished.


How early pressure shaped his decision-making

When asked about his habit of moving fast, Kanaan is clear that it was not a personality trait but a necessity. Building in unstable environments, with tight capital and markets shifting weekly, left little room for hesitation. Decisions were made with 70 per cent of the information, then owned.

Pressure, he says, sharpens pattern recognition. You stop waiting for perfect data and start looking for signal. Over time, you learn what matters and what does not.

But speed has a cost. Move before consensus and you unsettle people who need longer alignment cycles. Push too hard and you overcorrect. “Speed compounds growth,” he says, “but it also compounds mistakes if you are not disciplined.” The lesson was not simply to move quickly, but to build internal guardrails that keep velocity from becoming recklessness.


What legal workflow he is actually replacing

On the question of what HAQQ really is, Kanaan pushes back on the “chatbot” label. In his view, that framing misses the point entirely.

The problem is not drafting. It is fragmentation. Today’s law firm workflow is scattered: drafting in Word, research in Lexis, time tracking in one system, CRM in another, conflicts in spreadsheets. AI is often bolted on top like a novelty.

HAQQ, he argues, replaces the invisible chaos rather than one isolated task. Matter creation, KYC, conflict checks, drafting, review, time capture, billing, strategy, audit trail. The AI is not a feature layered on top. It sits inside what he calls the operating system of the firm and learns how that firm actually works.

Most legal tech products optimise one function. Drafting. Research. Billing. They improve a task but leave the workflow broken. Kanaan’s claim is broader: rebuild the workflow end to end, and embed intelligence inside it. Only then does the system produce what he describes as a dynamic digital fingerprint, creating an AI Twin shaped by the firm’s own structured data.


The first 90 days that made adoption compound

When the conversation turns to scaling across thousands of firms in more than 80 countries, Kanaan does not credit marketing. He points to discipline in the first 90 days.

Three rules guided early adoption. First, onboard real lawyers, not “AI curious” users. Second, force concrete use cases: draft a contract, run a conflict check, generate a risk memo. Not “ask anything.” Third, structure data from day one.

If the operating system captures contextual, timestamped, role-based data, adoption compounds. If it does not, the AI resets every session and the value evaporates. Compounding, in his view, comes from turning usage into dependency through structured information. Without that, you have novelty. With it, you have infrastructure.
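The "contextual, timestamped, role-based" record he describes can be pictured as a simple event schema. This is a hypothetical sketch, not HAQQ's actual data model; the field names and the `MatterEvent` type are illustrative assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class MatterEvent:
    """One structured, attributable action inside a matter (illustrative only)."""
    matter_id: str   # context: which matter the action belongs to
    actor: str       # who performed it
    role: str        # role-based attribution, e.g. "partner", "associate"
    action: str      # concrete use case, e.g. "draft_contract", "conflict_check"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Every concrete use case from the three rules above becomes a record like this:
event = MatterEvent("M-2024-017", "a.smith", "associate", "conflict_check")
```

The point of the structure is that each session adds attributable, queryable history rather than resetting to zero.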


Slowing down to earn trust

Pressed on how he balances speed with the legal profession’s non-negotiables, Kanaan is blunt. “Trust in legal is oxygen. Lose it once, you are finished.”

The trade-offs were real. Certain features were slowed to build governance layers: human sign-off loops, immutable audit logs, role-based permissions. HAQQ refuses to train on client data. Compliance was overbuilt before it was marketed.
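An "immutable audit log" of the kind he mentions is typically made tamper-evident by chaining entries with hashes, so any edit to past history breaks verification. The sketch below is a minimal illustration of that general technique, assuming nothing about HAQQ's internals; the `AuditLog` class and its fields are invented for the example.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry hashes the previous one (illustrative)."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, actor, action):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"actor": actor, "action": action, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        # Recompute every hash; any retroactive edit breaks the chain.
        prev = self.GENESIS
        for entry in self.entries:
            body = {"actor": entry["actor"], "action": entry["action"], "prev": prev}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["hash"] != expected or entry["prev"] != prev:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("a.smith", "human_sign_off")
log.append("j.doe", "export_risk_memo")
```

Role-based permissions and sign-off loops would sit in front of `append`; the chain makes the resulting trail auditable after the fact.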

It cost time. It protected reputation.

For lawyers, confidentiality, competence, disclosure and oversight are obligations, not preferences. An AI system that ignores that reality is, in his words, “a liability machine”. He would rather delay release than compromise on auditability.


What can be productised, and what cannot

Asked to draw the line between automation and judgement, Kanaan is precise.

Structured reasoning can be productised. Clause comparison, risk ranking, compliance mapping, deadline tracking, conflict detection, pattern recognition across thousands of matters. These are systems problems.

What should not be automated is final judgement. Ethical calls. Strategy dependent on human nuance. Client counselling in emotionally charged situations. “AI can inform. It must not decide.” A machine cannot be accountable. A lawyer can.

The distinction is less about capability and more about responsibility.


The moment firms moved from curiosity to dependency

When asked about his biggest success, Kanaan does not cite user numbers. He describes a shift in perception.

The turning point was not an impressive demo. It was showing a partner a client-ready, 11-page risk memo exported in Word within minutes, structured exactly as a lawyer expects. Negotiation-ready output.

Adoption accelerated not because it was flashy, but because it was usable. The product crossed from interesting tool to something that could run a practice. His rule is simple: show, do not promise.


The positioning mistake that cost him months

Asked about failure, he points to an early messaging error. HAQQ was briefly positioned like generic AI: faster drafting, cheaper work.

It attracted the wrong audience: those looking for shortcuts rather than infrastructure. Focus drifted. Months were lost.

The corrective rule now is unforgiving: if your messaging can be confused with ChatGPT, you have failed. Legal AI, in his view, is not about speed for its own sake. It is about defensibility.


Leaving engineering to study law

When asked about dropping out of engineering at 18 to pursue law, Kanaan frames the best decision as perspective. He learned how lawyers think before attempting to automate them. That insider understanding shapes HAQQ’s architecture.

The worst misjudgement was underestimating how long credibility takes in legal. In technology, traction speaks loudly. In law, trust does. If he could revisit that period, he would invest earlier in institutional partnerships and bar-level endorsements.


Scaling culture without personal oversight

On the question of leading a team that has grown beyond his ability to coach everyone personally, Kanaan rejects the idea that culture is organic.

“You cannot personally coach 90 people.” So standards must be codified. Clear principles. Written playbooks. Extreme ownership. Fast internal demos. Brutal clarity about what good looks like.

Culture, for him, is enforced standards plus shared mission. If someone cannot explain why confidentiality and auditability matter in AI, they are misaligned.


Hiring for liability, not brilliance

When the conversation turns to hiring, his red flags are revealing.

Candidates who talk about models but not liability. Those who optimise for speed over defensibility. Anyone who says AI will replace lawyers. Anyone who cannot explain privilege, conflicts or regulatory risk. Anyone impressed by demos rather than audit trails.

Impressive CVs are easy to fake, he argues. Deep understanding of legal responsibility is not. One hallucinated clause can cost a client millions. If a builder does not respect that, they should not be in legal AI.

For Kanaan, that is the dividing line. Not technical talent, but whether you understand what is at stake.