A lawyer I work with mentioned recently that they’d used machine translation for a client document. It had gone wrong. Not catastrophically, but enough to create confusion that took weeks to untangle. When I asked whether they’d flagged the issue or shared what they’d learned with colleagues, they looked at me like I’d suggested they make a formal confession.
What I’ve noticed from many recent discussions with lawyers is that they don’t like to talk about translation failures involving AI tools. Not publicly, not in professional forums, rarely even among peers. Why?
The obvious answer is liability. Admitting you used machine translation for client work, and that it produced an error no one caught, raises questions about competence, due diligence, and whether you met your professional obligations, much as hallucinations in court filings do. No one wants to invite scrutiny from clients, regulators, or insurers by documenting their mistakes in a LinkedIn post or anywhere else.
But there’s something else at play. Using AI for translation often happens quietly, without formal documentation or approval processes. A junior associate runs a document through a tool to get the gist. A partner uses it for a quick check on foreign-language correspondence. It’s framed as efficiency; almost none of it is viewed through the lens of risk. When something goes wrong, acknowledging the failure means acknowledging the decision to use the tool in the first place, and explaining why that seemed appropriate at the time.
What happens, though, when these failures stay hidden? We lose the data we need to understand how and when machine translation actually fails in legal contexts. We can’t identify patterns. We can’t develop guidelines for appropriate use. We can’t warn colleagues about specific types of documents or legal concepts where automated tools consistently produce misleading translations.
The silence creates a perverse information asymmetry. Technology vendors can promote their tools with confident claims about accuracy and reliability, while individual lawyers deciding whether to use those tools have access to marketing materials and demo scenarios, but not to the accumulated evidence of how these systems perform in actual legal work.
How do you assess risk when the failures aren’t visible?
My point is not to shame anyone who’s used AI translation tools. However, we do need to create space for honest discussion of when these tools work, when they don’t, and what the consequences look like when they fail in legal contexts.
Professional judgement includes knowing when to rely on technology and when human expertise is non-negotiable. But that judgement depends on having accurate information about how these tools perform in practice, not just vendor promises about capabilities.
Until lawyers can discuss translation failures without fear of professional consequences, we’re making decisions about AI adoption based on incomplete information. And clients are the ones who bear that risk.
#legaltranslation #duediligence #riskawareness