What AI Can’t Replace
The skills AI cannot replicate are the ones firms are hiring for.
From where we sit in legal recruitment, the rise of AI in law is reshaping who gets hired, what skills are valued, and how candidates position themselves. But there is a more immediate problem playing out.
People are treating ChatGPT as though it were legal counsel. They are not consulting a lawyer; they are consulting a text predictor, then walking into negotiations, disputes, and employment situations with a level of confidence the tool has not earned.
AI has genuine utility. It is useful for getting across an unfamiliar topic quickly, sense-checking language, or doing background research before a substantive conversation. However, there is a meaningful difference between being informed and being advised, and a growing number of people are conflating the two in ways that create real risk.
From a talent perspective, this matters more than the broader conversation about AI and jobs tends to acknowledge. Below are the four problems we keep encountering, and what they tell us about where the legal market is heading.
1. AI does not engage with the facts of a matter
Legal outcomes turn on specifics, while ChatGPT produces a generic answer to a situation that is not generic. It cannot read your contract, review your correspondence trail, or account for the jurisdictional nuances of your specific matter. The details that carry the most legal weight are precisely the ones a generative AI tool will either approximate or ignore entirely.
Across the firms and in-house teams we work with, the professionals doing this well are those who use AI to increase efficiency while applying deep, fact-specific analysis that the technology cannot replicate. The ability to properly interrogate the facts of a matter has not diminished in value; if anything, it has increased.
2. Legal knowledge and legal judgement are not the same thing
When we are briefed on senior appointments, whether disputes specialists, employment partners, or commercial leads, the requirement that appears consistently is judgement.
Knowing when to push and when to settle. Understanding how a particular tribunal or regulatory authority tends to approach specific arguments. Reading the dynamics of a negotiation in real time. These are capabilities that are built over years of practice and direct client exposure. They cannot be produced by pattern-matching across a training dataset.
What we observe is that AI is, if anything, making this kind of strategic judgement more marketable, not less. As routine legal tasks become increasingly automated, the professionals who are advancing in this market are those whose value lies in areas the technology cannot reach.
3. The way people prompt AI reflects the outcome they want
Very few people approach a generative AI tool by asking it to identify the weaknesses in their position. The far more common approach is to ask what they can claim, what remedies are available to them, or whether they have a strong case. The tool obliges with a confident, well-structured response that validates the framing of the question.
This is a structural problem where the output reflects the input. A self-serving question produces a self-serving answer, and the polish of the prose lends it an authority it has not earned.
This is one reason why strong client-facing skills are becoming a more prominent differentiator in the candidates firms are seeking. Professionals who can deliver an honest assessment of a difficult position, clearly and constructively, are exactly what the market needs; AI will simply tell you what you asked it to confirm.
4. People interpret AI output selectively
When AI presents a range of possible outcomes or remedies, people tend to anchor on the most favourable figure. What begins as "possible remedies in comparable situations" becomes, in the reader's mind, a realistic expectation of what they are entitled to.
We are seeing the consequences of this in employment disputes, contract negotiations, and commercial disagreements, often at a point where matters have escalated further than they needed to. Clients arrive over-informed in some respects and significantly under-advised in others. The gap between those two states is where a great deal of legal professional value now sits.
There is merit in using AI to get informed, then consulting someone qualified who will give you an honest assessment of your position, including the parts you would rather not hear.
That distinction underscores why qualified legal professionals remain indispensable. The gap between readily available AI-generated information and genuinely considered legal advice is only widening.
For legal professionals, the market is actively looking for practitioners who can bridge that gap: those who can meet a client where they are, work with the expectations they have formed, and guide them towards a better outcome with clarity and credibility.