
Managing the Risks of AI in Expert Evidence: 10 Practice Standards for Lawyers and Experts - Ramune Mickeviciute, Hugh James LLP & Geoffrey Simpson-Scott, Hodge Jones and Allen LLP

23/10/25. We hope we do not exaggerate by saying that one of the biggest nightmares that any lawyer faces is being penalised for using or submitting false or inaccurate information.

Most of you will have heard of good cases stumbling not because of the law itself, but because of miscommunication or unclear expectations. In a world where technology exists to ease our professional lives, one might question how this still happens.

Artificial Intelligence (AI) has been presented as the perfect tool to ease the burden for many of us. However, it is not without its faults.

The use of AI is beginning to appear more often in expert evidence, particularly in clinical negligence litigation. From basic proofreading through to data analysis and even drafting, AI tools are becoming part of the professional landscape.

We consider how this can create issues in legal proceedings so that you do not end up in the same position as those who have already been caught out in this field. We are going to discuss the effects of AI in clinical practice and, in particular, its use when preparing expert evidence.

What is AI?

The term ‘AI’ is used very often these days; however, how many of us know exactly what it means? We often have to remind ourselves what it is, how we can benefit from it, and what dangers it might pose.

A more formal explanation is that AI is ‘a technology that enables machines to perform tasks that typically require human intelligence’. It encompasses a broad range of techniques, such as machine learning and deep learning, which allow systems to learn and adapt from data: models are created by training an algorithm to make predictions or decisions based on that data, without the computer being explicitly programmed for each specific task.

In more casual terms, it is a technology that teaches itself from large amounts of data and is there to help us in certain situations. AI can appear in the form of different applications designed to tackle specific or broad-ranging tasks.

For instance, when it comes to legal practice, AI can assist us with analysing information; written communication; drafting; extracting information from research; reviewing documents; summarising information; and spellchecking (amongst other tasks).

To put this in numbers, 96% of law firms are reportedly now using AI in some capacity.

While AI has the potential to speed up processes and uncover insights, it also comes with risks. Experts and lawyers alike need to be alert to these challenges, since unreliable or poorly explained AI use can damage credibility and even make evidence inadmissible. The reported cases and insights from England and other common law jurisdictions show that this is already a familiar story.

AI Use in Expert Evidence

One of the challenges that lawyers face is trying to control the use of AI by third parties involved in the case. Chief among these are our medical experts.

Doctors and medical staff started using AI years ago, and its use continues to grow across many areas of their profession, which makes it harder for us to keep track of. This is not limited to their clinical practice alone: experts are starting to use AI tools to draft their reports.

Experts use AI tools to make calculations and predictions, to gather additional data, and to draft the body of their reports.

While the use of AI is not prohibited, the danger lies in using it without double-checking that the data produced is indeed accurate for the specific case that we are dealing with. If evidence containing false data is submitted to the court and/or our opponent, we face serious sanctions.

We have identified several pitfalls and proposed solutions to help you address the problem by bridging the gap between what we intend to do and what then actually happens.

Pitfalls for expert evidence

To address the issues that we might face when dealing with expert evidence, we propose ten practice standards. Each one sets out a practical expectation followed by the reasoning behind it and, of course, a practical solution to deal with it.

  1. Ask Experts to Disclose AI Use Early

Practice Standard: Ask experts in your Letter of Instruction to confirm if they plan to use AI and how they intend to use it.

Reason: You need to know if AI has been used at all.

Transparency is the starting point. AI can influence many parts of a report, sometimes in subtle ways. If lawyers do not know where AI has been used, they cannot properly review its reliability. Asking the question up front, at the instruction stage, ensures everyone starts from a position of openness. The best lawyers do not necessarily work harder; they communicate earlier, document clearly, and anticipate uncertainty.

  2. Scrutinise AI Outputs for Accuracy

Practice Standard: Check AI-generated content carefully and make sure experts confirm that all key points are accurate.

Reason: False or made-up information can damage credibility.

AI is prone to “hallucination,” where it generates convincing but false information. This has already caused embarrassment in the courts. Every fact, citation, or figure derived from AI must be double-checked against reliable sources. If something cannot be independently verified, it should not form part of expert evidence. The answer is not more regulation; it is building trust and credibility through transparency.

  3. Push for Explanations of the Process

Practice Standard: Ask experts to explain how the AI reached its conclusions and what human checks they carried out.

Reason: AI processes can be hidden and hard to understand (the ‘black box’ issue).

Without knowing the inputs, prompts, or checks involved, it is impossible to judge whether AI results are credible. Experts should document how they used AI and what they did to confirm its accuracy. This mirrors the expectation that experts explain their methodology when carrying out specialist tests. Clarity is not just good ethics; it is good business. The more transparent our processes, the fewer disputes we face and the stronger the expert evidence becomes.

  4. Question the Data Sources

Practice Standard: Ask experts to explain where their AI tool got its data and how well it matches the patient group in question.

Reason: Biased data can lead to misleading or unfair conclusions.

AI is only as good as the data it is trained on. If that data excludes certain groups or is unrepresentative, its results will be skewed. Experts must show why the dataset is appropriate to the case at hand. Vague reassurance that “this tool is commonly used” will not withstand scrutiny. Trust grows in the quiet moments. The most lasting expert relationships are often built between deadlines via the clarity of our updates and the honesty of our expectations.

  5. Require Human Oversight and Clinical Judgement

Practice Standard: Make sure experts explain how their own clinical judgement or experience supports the use of AI in the report.

Reason: Relying too heavily on AI without human input is risky.

AI cannot replace the role of the expert. Judges decide the facts; experts apply their knowledge to assist the court. If an opinion is shaped by AI, the expert must still show how their personal experience and clinical judgement underpin the conclusions. The irony of modern practice is that whilst we have never had more data, we have never been more in need of human judgement and empathy.

  6. Confirm the AI Tool Was Current and Reliable

Practice Standard: Confirm that the AI tool used was up to date and reliable at the time, and that the expert accounted for any changes in the tool or data.

Reason: Older or inconsistent tools can give inaccurate results.

AI systems evolve quickly. A report based on an outdated or uncalibrated tool risks producing flawed conclusions. Experts should identify which version of a tool they used, when it was last updated, and how they ensured its reliability. Every challenge here is really a test of systems, not people. When errors arise, it is rarely about intent; instead it is about process design, communication, or follow-up.

  7. Distinguish Between Correlation and Causation

Practice Standard: Ask experts to clearly show why each step in their reasoning is more likely than not, rather than just a possible link.

Reason: Avoid confusing coincidence (correlation) with cause.

Spotting patterns is AI’s strength, but not every pattern demonstrates causation. In law, the test is probability, not possibility. Experts must explain why AI-identified links genuinely represent cause and effect, not just coincidence. It is often not about more expertise, but the better use of it. Applying expert insight consistently is what earns trust, both internally and with judges.

  8. Challenge Anything That Looks Odd

Practice Standard: Follow up on anything in the expert’s report that seems odd, unlikely, or hard to follow.

Reason: Blind trust in AI output leads to missing obvious weak links in an opinion.

When AI results seem to “fit” too neatly with assumptions, there is a risk they are accepted without question. Lawyers should treat AI outputs like any other evidence: probe unusual or unclear points and insist on clear reasoning. Behind every assumption lies a question of trust. Experts’ assumptions are only as effective as the confidence practitioners and judges place in those who apply them.

  9. Agree Standards of Compliance in Advance

Practice Standard: Agree early on with your expert what legal and professional standards apply to the use of AI.

Reason: Poor understanding of the rules can make evidence inadmissible.

Both legal requirements (such as CPR 1998, Part 35) and professional guidance are evolving. Agreeing standards at the outset avoids problems later. We can help experts with the legal framework, while both experts and lawyers must ensure their professional obligations are met and problems headed off. Every practitioner remembers a case that made them rethink how they communicate with experts and changed their perspective.

  10. Develop and Use a Consistent Checklist

Practice Standard: Use a simple checklist to assess how AI is used in expert evidence and whether it meets the required standards.

Reason: A consistent approach helps manage risk and avoid surprises.

It is one thing to understand the practice standard; it is another to see how it plays out under pressure. AI is here to stay, so ad hoc responses are not enough. A checklist gives both experts and lawyers a structured way to assess reliability, transparency, and compliance. Judges are more likely to value evidence that comes from a careful, documented process; moving us from principle to practice.

Cost Effectiveness

In many firms, it is not the big policy changes that build trust with new technology; it is the everyday habits that show reliability.

If you think about how you ran your cases before AI, it was not unorthodox to check all of the work done by others, including junior members of the team and experts. Using AI does not mean that we should abandon that routine; only that we sharpen our eyesight to notice some of these possible pitfalls. Small habits create big results.

We consider that the easiest way to start is to continue to know the facts of our cases and to keep an eye out for anything that seems unusual. As always, do thorough checks and ask the questions that you consider relevant. This is cost-effective and rewarding in the long term.

Sanctions

Sanctions for using false data are severe, so it is essential to ensure that your expert evidence is completely accurate. To give some flavour to all that we have raised above, reputational damage and losing cases are potential consequences. The sanctions that lawyers could face include, but are not limited to: fines imposed on law firms; unfavourable judicial rulings and comments; wasted or adverse costs orders; and regulatory outcomes such as being struck off.

We can face these sanctions even where it is the expert who has improperly used AI. The cases paint a clear picture: lawyers start with good systems, until real deadlines, health issues, workload and costs pressures test them and their ‘errors’ become public.

Conclusion

AI is reshaping the way expert evidence is prepared and challenged. While it can support data analysis and improve efficiency, it also introduces risks that cannot be safely ignored.

These ten practice standards provide a framework for responsible use: promoting transparency, insisting on verification, and keeping human judgement at the centre. Handled in this way, AI can be a useful tool without undermining the credibility of experts or the fairness of proceedings.

At the end of the day, we are in control of managing our use of AI and experts. It is our duty to check everything and ensure that evidence is accurate. The turning point comes when we stop feeling ‘managed’ by AI and start to understand how we can use it confidently. We hope that these practice standards assist you with your day-to-day job as much as they do us.

Ramune Mickeviciute, Solicitor at Hugh James LLP and Co-Author of ‘A Practical Guide to Fixed Costs in Clinical Negligence Cases’.

Geoffrey Simpson-Scott, Partner at Hodge Jones and Allen LLP and Author of ‘A Practical Guide to Clinical Negligence’ (Third Edition).

Both are available from Law Brief Publishing.

