The modernisation of the in-house legal function
Debbie Thomas
Monday 16 March 2026
In-house legal teams are modernising their operating models with the help of technology. In-House Perspective examines the questions that arise about how legal judgement is exercised, how decisions are made and how accountability is maintained within evolving operating models.
Across corporate and public sector settings, in-house legal teams have been modernising their operating models with the help of technology. The pace of change has accelerated in recent years, particularly since 2023, driven by a combination of post-pandemic changes to hybrid working practices, the wider adoption of collaborative and cloud-based tools, breakthroughs in generative AI and sustained cost scrutiny.
This modernisation is about more than efficiency or speed. As legal work becomes increasingly shaped by systems rather than individuals, questions arise about how legal judgement is exercised, how decisions are made and how accountability is maintained within evolving operating models.
In practice, modernisation includes the adoption of legal technology tools, such as document automation for rapid drafting, contract lifecycle management for end-to-end oversight, e-discovery for handling data at volume and integrated case management systems designed to streamline workflows.
Taken together, these systems and practices are reshaping how legal work is initiated, prioritised and undertaken within organisations. At the same time, they raise important questions about how in-house teams operate in this new environment and whether governance frameworks around accountability, escalation and record-keeping are keeping pace with the way legal services are now delivered.
From inbox-driven work to structured legal workflows
One of the clearest shifts in in-house legal operating models has been the move away from inbox-driven work towards structured, technology-enabled workflows. Janie Trice, Head of Legal at executive search and advisory solutions company WittKieffer, believes that contract lifecycle management (CLM) software has had the most significant impact on how in-house teams work, making specific reference to efficiencies, transparency and accountability. Combining this technology with AI allows the business to self-serve when dealing with contracts, freeing ‘the legal or procurement department to focus on higher level, more complex commercial contracting issues,’ she says.
Building on these points, Stacey Quaye, Senior Director and Global Head of Product Legal at financial technology company Tipalti, underscores that CLM, intake tools, playbooks – digital or codified sets of predefined rules, decision trees, templates, standard clauses and escalation protocols embedded into dashboards and automated workflows – and designed approval paths have helped teams shift from email chains to systems built for reuse.
“People are encouraged, with technological developments and playbooks, to self-advise, and then to visit the legal team in a form of query and queue system
Sally Wokes
Officer, IBA Corporate and M&A Law Committee
Thresholds are set so that parameters differentiate the routine from the exceptional, automatically flagging deals of a certain type, value or level of complexity. Approvals are embedded, and standardisation via templates and dynamic document workflows frees up lawyers for exceptions and strategy. The need for repetitive negotiations is removed, as deviations from the standard path trigger an explanation or escalation. Over time, intake data gathered by the system refines thresholds, creating a feedback loop in which the system itself evolves, optimising triage accuracy and reducing unnecessary intervention by lawyers.
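To make the mechanics of this kind of triage more concrete, the sketch below expresses threshold-based routing in simplified Python. The contract types, value limit and routing labels are assumptions for illustration only; they are not drawn from any particular CLM product or from the teams quoted in this article.

```python
from dataclasses import dataclass

# Hypothetical thresholds for rules-based triage of incoming contract requests.
# Contract types, the value limit and routing labels are illustrative only.
SELF_SERVE_VALUE_LIMIT = 50_000
HIGH_RISK_TYPES = {"outsourcing", "data_processing", "ip_assignment"}

@dataclass
class ContractRequest:
    contract_type: str
    value: float
    deviates_from_template: bool

def triage(request: ContractRequest) -> str:
    """Route a request to self-serve or to the legal review queue."""
    if request.contract_type in HIGH_RISK_TYPES:
        return "legal_review"          # certain types are always exceptional
    if request.value > SELF_SERVE_VALUE_LIMIT:
        return "legal_review"          # value threshold exceeded
    if request.deviates_from_template:
        return "legal_review"          # any deviation triggers escalation
    return "self_serve"                # routine: handled via templates and playbook

# A routine NDA under the value limit stays on the self-serve path.
print(triage(ContractRequest("nda", 20_000, False)))  # -> self_serve
```

The point of the sketch is that the routing logic is explicit and inspectable: the thresholds separating the routine from the exceptional live in code or configuration that can be reviewed, rather than in any one lawyer’s inbox habits.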
Quaye points to a wider paradigm shift from ad hoc, document-centric work to a platform-based operating model, where standardised workflows, AI automation and real-time data are replacing inbox chaos and Word documents with predictable, scalable delivery. In heavily regulated sectors such as financial services, ‘that matters because volume plus regulation plus scrutiny is the reality,’ she says. ‘When your contracts touch outsourcing, data, financial crime controls, schemes, consumer duty [and] resilience, you can’t run the whole thing on heroics and memory.’
Sally Wokes, an officer of the IBA Corporate and M&A Law Committee, explains that in-house teams are shifting away from free access to a ‘“front door” policy […] people are encouraged, with technological developments and playbooks, to self-advise, and then to visit the legal team in a form of query and queue system’. Under this arrangement, business teams self-serve for routine requests, while true exceptions, such as non-standard matters requiring human judgement, are queued for structured review by a lawyer.
Who owns decisions?
Legal technology is often framed as providing answers, but it also creates a more complex set of decision-making scenarios for in-house teams.
Quaye draws attention to how substantially AI has changed the landscape. ‘AI is now doing triage, spotting deviations, summarising risk, suggesting fallbacks and routing work,’ she says. ‘That’s not neutral. It changes when legal judgement is applied, how it’s applied, and sometimes who feels like they “made” the decision.’
In other words, when used for screening and pre-sorting, AI tools move from helper to active gatekeeper in the workflow. In such instances, judgement can become more template- or logic-dependent: as lawyers work within AI-suggested options, they see a narrower slice of matters, pre-ranked risks or workflow paths, rather than starting from first principles.
While these innovations can deliver scalability without headcount growth, they also pose serious questions around who calibrates the triage logic, who audits the playbooks and who owns decisions when lawyers only see the exceptions.
In-house lawyers may feel they followed the system. The business may feel the legal team approved the decision because the tool cleared it. The organisation may conclude that ownership rests with the system rather than with any individual.
Governance as a core design principle
As legal operating models evolve, governance takes on an increasingly central role. Quaye believes that governance issues such as accountability, escalation and record-keeping should be addressed very explicitly, ‘but only if you treat governance as a core design principle and part of the build, not as an afterthought’. She points to three conditions that consistently distinguish the most robust governance setups. One, where ‘decision rights are clear,’ in terms of who needs to approve, sign off or check, including what needs to be sent to the risk team, the compliance department or the board. Two, where ‘escalation is rules based’, using a traffic light system in which triggers are clearly defined. And three, where ‘record-keeping is automatic’, or built in by default, with audit trails that capture final decisions and reasons.
Such a tech set-up facilitates the process by hardwiring approval paths, capturing version history and maintaining an up-to-date log of who approved what and when. When tech is treated like a productivity tool without a decision-making framework around it and governance is informal, the ‘risk spikes with AI’, says Quaye, because if you leave the decisions to AI without human oversight, ‘you’ll move faster but you’ll be less defensible’.
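A minimal sketch of the rules-based, traffic-light escalation Quaye describes might look as follows. The trigger names and routing comments are hypothetical, chosen only to illustrate the pattern rather than to represent any real system.

```python
# Illustrative traffic-light escalation rules; the triggers and routing
# targets are assumptions chosen only to show the pattern.
RED_TRIGGERS = {"regulatory_reporting", "unlimited_liability", "sanctions_exposure"}
AMBER_TRIGGERS = {"non_standard_indemnity", "offshore_data_transfer"}

def classify(risk_flags: set[str]) -> str:
    """Map identified risk flags to a traffic-light escalation level."""
    if risk_flags & RED_TRIGGERS:
        return "red"      # must go to risk, compliance or the board
    if risk_flags & AMBER_TRIGGERS:
        return "amber"    # escalated to a named legal approver
    return "green"        # proceeds under the standard playbook

# Example: an offshore data transfer clause escalates to amber.
print(classify({"offshore_data_transfer"}))  # -> amber
```

In a well-governed platform, the resulting level, the named approver and the reasons would then be written to the audit trail automatically at the point of sign-off, rather than left in an email or chat thread.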
“AI is now doing triage, spotting deviations, summarising risk, suggesting fallbacks and routing work. That’s not neutral
Stacey Quaye
Senior Director and Global Head of Product Legal, Tipalti
While Trice agrees that the best CLM systems build accountability and escalation directly into their product as part of the user experience, she outlines a dual pathway to strengthen governance further. First, by asking lots of questions to pinpoint gaps or limitations in the system, before implementing manual processes or alternatives ‘to ensure net gains on governance’. Second, she stresses ‘getting the behaviours to change through a well-thought out change management process’ so that the software truly enhances governance. Through these approaches, Trice applies best practice by earmarking the accountability features the company cares about most, then highlighting them during rollout to obtain ‘excitement from the business about the technology’.
Hailing from a large, regulated organisation at an earlier stage of transformation, Kenny Robertson, Head of Innovation, Legal, Governance and Regulatory Affairs at NatWest Group, highlights his company’s plans to introduce spend management tools and AI pilots during Q1 2026. He explains that ‘governance issues, including accountability and record-keeping, in all cases have been front and centre, as has managing the bank’s emerging risk landscape around AI’. The team have introduced enterprise-wide tools to act as their front door and have collated ‘data points around workloads supported, and course correct[ed] away from low value, repetitive tasks onto those of more utility to stakeholders’.
These points are given wider context in the GC Legal Tech Pulse 2025 Insight Report, produced by Legal Business with Thomson Reuters and published in September, which surveyed over 150 general counsel (GCs) and highlighted the extent to which technology is transforming the work of in-house legal teams. The survey revealed that ‘almost a third of respondents admitted that they still find it “difficult” or “very difficult” to introduce new legal tech into their team’. The report references finding champions or ‘enthusiasts’ who can build internal support and speed up acceptance of the new technology within the legal team. In addition, several respondents made explicit reference to the fact that smart automation could have a transformative effect on compliance, particularly in a global context.
Risk enters the equation
The governance-by-design considerations referenced by Quaye play a critical role in managing risk where generative AI is involved. During 2025 in particular, there was a sharp acceleration in cases of lawyers submitting court filings containing fabricated legal authorities generated by AI – fake case names, citations, courts and dates, as well as quotes from non-existent judgments – in other words, AI hallucinations. For in-house teams, which routinely draft board papers, regulatory submissions, contract summaries and litigation strategies using similar tools, a hallucinated precedent could mislead executives, weaken defences in regulatory enquiries or contaminate internal decision-making.
Court cases in the US involving AI hallucinations – of which there are now over 800 – are being captured in a database run by legal researcher Damien Charlotin. Meanwhile, the rise of such cases has led to the creation of new guidelines from the American Bar Association (ABA), which emphasise ‘maintaining judicial integrity by ensuring that AI aids judicial functions without supplanting human judgement’.
One mitigation approach increasingly being discussed is the application of legal user experience (UX) design principles. These are guidelines adapted specifically for legal technology platforms, with a focus on making complex legal workflows intuitive, minimising the likelihood of mistakes through smart constraints and prompts, and achieving compliance by prioritising clarity, safety and efficiency for lawyers and business users.
In this way, design safeguards are embedded directly into the platform’s user experience – for instance, mandatory verification prompts that ask lawyers to check and confirm before they can use the system’s final outputs, clear highlighting of AI-generated content and built-in warnings prompting users to verify citations.
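As a simplified illustration of such a safeguard, the sketch below shows a gate that refuses to release AI-generated text until a named reviewer has confirmed it, and that labels the output as AI-assisted once released. The function, exception and labelling convention are hypothetical.

```python
class UnverifiedOutputError(Exception):
    """Raised when AI-generated content is used without human confirmation."""

def release_output(draft: str, ai_generated: bool, verified_by: str | None = None) -> str:
    # Mandatory verification prompt: AI-generated content cannot be released
    # until a named lawyer confirms they have checked it, including citations.
    if ai_generated and not verified_by:
        raise UnverifiedOutputError("Confirm citations and content before release.")
    label = f"[AI-assisted draft, verified by {verified_by}] " if ai_generated else ""
    return label + draft

# Example: a verified AI-assisted summary is released with a visible label.
print(release_output("Summary of indemnity positions...", True, "a.lawyer@example.com"))
```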
The ABA guidelines also warn against inputting sensitive data into AI systems, particularly where privacy protection isn’t assured. This is a critical concern for GCs handling confidential commercial information, personal data or privileged communications.
Trice considers that, given human nature, there’s the sense that ‘someone will always try to game the system’. Therefore, to protect against risk, during implementation it’s important to think about how a process might be abused or a rule evaded. Data itself is a risk factor: the more data held in the cloud or by software tools, the greater the chance of a breach when changes are made. Due consideration should also be given to the quality of information. Trice explains that ‘by their nature, CLMs create better “paper trails” to understand contract lifecycles, but [they are] only as good as the humans that put versions in’. The system may also be used to track other types of risk, such as revenue leaks, or to easily identify and track higher-effort deals.
Quaye builds on the human side of governance and risk issues. Dashboards and automated workflows can create a sense of ‘false comfort’, she says, by giving the impression that everything is under control, ‘but if your playbook is outdated or your risk tiering is wrong, you’ve just industrialised the wrong answer’ – automating the same incorrect answer at scale.
She describes the issue of ‘model risk creeping into legal’, a concern that echoes Trice’s warning that governance gains only materialise when systems are paired with behaviour change. Once legal teams use AI to sort, score and route work, they have effectively introduced decision-support models into the function. In sectors such as financial services, such models are tightly governed: they are validated, monitored and checked for drift against explicit tolerances, which define acceptable levels of error or variance.
In many legal teams, however, the same tools arrive under a ‘productivity’ banner, without the discipline of a tight governance structure to keep them in check, and model risk seeps into legal almost by accident. Quaye underscores the risk of what she refers to as ‘shadow rationale’, where the system records the approval but not the reasoning that led to it. Because that reasoning took place in conversations, it’s not formally recorded, so when an audit takes place, litigation arises or a regulator asks for the decision-making around why a risk was accepted to be reconstructed, the information cannot be provided, because ‘if your audit trail lives in Teams, it doesn’t really exist’.
Wokes – who’s a partner at Slaughter and May in London – explains that self-advising is not without its risks. Regular updates to playbooks and templates are required, alongside frequent training and spot checks for quality control.
Building robust systems and processes
One way to ensure a robust audit trail, Quaye explains, is to develop a simple ‘habit of capturing decision plus reason plus owner in one place, in a lightweight, structured way that people will actually use’. This serves the dual purpose of satisfying regulators and enabling organisations to clearly demonstrate the ‘why’ behind outcomes: the options considered, the mitigations chosen, the risk trade-offs explored and the escalation logic.
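A lightweight version of that habit could be as simple as one structured record per decision, as in the sketch below. The field names and the use of a CSV file are assumptions for illustration, not a prescribed schema.

```python
import csv
from dataclasses import dataclass, asdict, fields
from datetime import date

# A deliberately lightweight decision log: decision + reason + owner in one place.
# Field names and the CSV format are illustrative, not a prescribed schema.
@dataclass
class DecisionRecord:
    matter: str
    decision: str        # what was decided
    reason: str          # why: options considered, mitigations, trade-offs
    owner: str           # the accountable lawyer
    escalated_to: str    # e.g. risk, compliance, board, or "none"
    decided_on: str

def append_record(path: str, record: DecisionRecord) -> None:
    """Append one structured row so the decision can be reconstructed later."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(DecisionRecord)])
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(asdict(record))

append_record("decision_log.csv", DecisionRecord(
    matter="Supplier outsourcing agreement",
    decision="Accepted liability cap at 12 months' fees",
    reason="Commercial priority; mitigated by enhanced audit rights",
    owner="jane.doe@example.com",
    escalated_to="none",
    decided_on=date.today().isoformat(),
))
```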
Quaye believes that AI tools can themselves provide a rich audit trail, particularly when legal teams use playbooks and workflows properly, and that they can sharpen accountability: by tracing the path through the system, you can see the owner, the escalation and the sign-off. But she agrees that, ultimately, a simple approach of capturing the details in one place is the better option, given AI’s many outputs, prompts and overrides – deliberate decisions by a human lawyer to depart from what the system recommends or would do automatically. In practice, judgement often becomes fragmented across tools and interactions.
Trice highlights that CLM systems provide useful oversight of where a contract is in the process, in addition to tracking who’s doing what and whether something is early, late or on time. They also facilitate historical tracking, which helps in-house teams and the business understand ‘when they agreed to an exception, and with many systems, [they] can help you add additional accountability or rules to allow for those exceptions in similar circumstances,’ she says. ‘More than accountability, I think the transparency of the process and decision-making is far more transformative for an organisation.’
“More than accountability, I think the transparency of the process and decision-making is far more transformative for an organisation
Janie Trice
Head of Legal, WittKieffer
The direction of travel is clear. The GC Legal Tech Pulse 2025 Insight Report states that legal tech adoption is growing fast, with AI accelerating this trend. Meanwhile, corporate support for legal technology investment is growing. Quaye believes that modernising legal is inevitable and cautions that ‘the avoidable risk is modernising the operating model faster than the governance model. Speed is great […] until you have to explain your decisions later.’
Governance questions GCs should be asking
Drawing on input from the in-house lawyers interviewed for this article, the following questions reflect the issues practitioners say GCs should be considering as legal functions continue to modernise and transform.
These questions are not intended as a compliance checklist, but as prompts to support internal discussion, design decisions and assurance conversations.
Decision-making and accountability
- where are legal decisions being made today – by individuals, teams or systems?
- what decision rights and escalation triggers apply, and are these clearly defined and consistently understood?
- where AI or automated outputs influence decisions in practice, who remains accountable when things go wrong, and how can that accountability be evidenced?
Governance by design
- which governance risks have been explicitly designed into new tools or operating models, rather than addressed after implementation?
- how have principles such as security, privacy or performance been built into systems by design, and where are the gaps?
- what assumptions are being made about how tools will be used, and how confident are we that those assumptions hold in practice?
Escalation, scrutiny and future explainability
- how are legal decisions and their underlying reasoning being recorded – not just approvals or outcomes?
- would the organisation be able to explain, with evidence, why a particular legal risk was accepted, if asked by a regulator or court in two years’ time?
- are audit trails, records and decision logs designed to support future scrutiny, not just current operations?
Human behaviour, culture and unintended use
- how might individuals game, bypass or work around new systems, and what does that reveal about risk?
- how strong is the organisation’s culture of adherence to legal and governance standards once processes become faster or more automated?
- what complaints, challenges or resistance will probably surface after implementation, and from whom?
Managing up, out and ahead
- what will senior leadership or the board want to understand about these changes, particularly around risk, accountability and assurance?
- how are those concerns being anticipated and addressed, rather than reacted to?
- what evidence can legal leaders point to that transformation has reduced risk, and not simply increased efficiency?