Palantir and the rule of law

Technology company Palantir is embedded everywhere from NATO to the US immigration enforcement agency. Global Insight assesses the rule of law implications.
Something’s rotten with the state of AI. The problem is frivolity; the answer is Palantir. That’s the key message of The Technological Republic: Hard Power, Soft Belief, and the Future of the West, a recent book by the tech company’s co-founder and CEO, Alexander Karp, co-authored with its legal counsel and Head of Corporate Affairs, Nicholas Zamiska.
The great boom in innovation created by digital technologies has been squandered, they argue, in prose that often sermonises about the decline of America’s values – especially patriotism and a purposeful work ethic. Assessing the impact of Amazon, Facebook and Google, they write that ‘a generation of founders cloaked themselves in the rhetoric of lofty and ambitious purpose – indeed their rallying cry to change the world has grown lifeless from overuse – but often raised enormous amounts of capital and hired legions of talented engineers merely to build photo-sharing apps and chat interfaces for the modern consumer.’
Shunning their duty to support the US government in everything from the development of military software to medical research in favour of digital fripperies, the ‘tech bros’ have helped cede American and European technological supremacy to countries such as China, the book’s authors suggest. Only ‘a union of the state and the software industry’ will return the West to its prelapsarian, post-Second World War status, Karp and Zamiska conclude.
Another co-founder of Palantir, Peter Thiel, has also written and spoken regularly on the intersection between technology and business. In a recent column discussing the end of the ‘pre-internet past’, for example, published a few months after the election of Donald Trump as US president for the second time, he stated that ‘America is not an exceptional country. It is no longer even a great one’ and argued that ‘the future demands fresh and strange ideas.’
A long-time conservative libertarian, Thiel has advocated using technology to streamline and, in some cases, drastically reduce the size of the state, and those ideas have become more mainstream in Silicon Valley over the past few years. Long influential as a businessman in the US, he has seen his thinking gain wider traction. The US vice president, JD Vance, has credited Thiel as a major influence, including on his view that technologies have generally failed to drive positive social change.
A path less trodden
Two years ago, in a newspaper article, Karp said that the West had reached another ‘Oppenheimer moment’ – referring to J Robert Oppenheimer, the scientist appointed in 1942 to lead US efforts to build the first nuclear bomb. Given the potential of weapons powered by AI to destroy humanity, the technology industry had a patriotic duty to ensure that those tools fell into the right hands, Karp said.
In fact, Palantir has been on such a trajectory for many years. Founded in 2003 by a group including Karp and Thiel, the latter a co-founder of PayPal, it received early seed money from Thiel and from In-Q-Tel, the venture capital arm of the US Central Intelligence Agency (CIA). Taking its name from the ‘seeing stones’ in JRR Tolkien’s The Lord of the Rings fantasy epic, the company set out to help organisations understand, and act on, the burgeoning volumes of data created by integrated computing systems. By 2013, Palantir’s relationships with US government agencies were far-reaching, according to documents leaked to the media. As well as assisting combat troops on the ground, for example by discerning patterns in the deployment of roadside bombs, its software helped integrate the huge databases created by the CIA and the US Federal Bureau of Investigation.
Today, Palantir is a multi-billion-dollar company with clients around the world. Karp, for example, was credited as the first chief executive of a major Western business to meet the Ukrainian President, Volodymyr Zelensky, just three months after Russia’s invasion. Reportedly using the country as a testing ground for developing AI capabilities, Palantir is now embedded in Ukraine’s military and civic infrastructure – from targeting Russian personnel to collecting evidence of possible war crimes. Many other private companies have also been supporting Ukraine in its war against Russia. This has led, according to a 2023 speech by General Mark Milley, former Chairman of the US Joint Chiefs of Staff, to the most significant, fundamental change in the character of war ever recorded in history.
The company also boasts an impressive roster of governments and civic organisations as clients. The UK government, for example, appointed a consortium led by Palantir to build the National Health Service’s Federated Data Platform (the NHS FDP) – an attempt to integrate data from across the organisation so that healthcare professionals and patients can access records and make appointments more easily. Palantir’s work on the UK government’s Covid-19 vaccine rollout was decisive in understanding and controlling huge amounts of data at speed, according to its written evidence to the country’s parliament in 2022. More recently, NATO signed a contract with Palantir for an AI-powered battlefield command system that rapidly collates information and supports target tracking and decision-making.
No stranger to controversy
Despite Thiel and Karp positioning themselves as ideological outsiders to the Silicon Valley tech giants, commentators point to links with, and in some cases suggest political alignment with, President Donald Trump and his administration. For example, in a 2025 book on the relationship between the technology industry and the government, Owned: How Big Tech on the Right Bought the Loudest Voices on the Left, Eoin Higgins wrote that Thiel, Karp and Tesla CEO Elon Musk met Donald Trump in 2016. The relationship between these powerful technology moguls and the current president has blown hot and cold in the intervening years, but some commentators say that Karp’s recent focus on technology-powered militarism aligns him more closely with Donald Trump than with Kamala Harris, the presidential candidate whom Karp supported in the 2024 election.
Palantir’s impressive track record has attracted scrutiny of how transparent the company is, its closeness to individuals within governments – not only in the US but elsewhere – and the use of its systems in border control and surveillance technologies.
In May, former Palantir employees shared a letter stating that the software they had helped to develop was now being used by US Immigration and Customs Enforcement (ICE) to track the movement of migrants. That deal – reportedly worth around $30m – aims to speed up ICE’s deportation programme for ‘illegal aliens’, particularly those allegedly belonging to criminal organisations, as well as violent criminals and those who have overstayed their visas. In addition, ICE said it aimed to obtain real-time data on non-residents entering or leaving the country.
Palantir was chosen, in part, because ICE already used the company’s software in other areas, making the new system easier to deploy and integrate, according to ICE’s published rationale. But the additional purchase has also raised complaints from activists who argue that the Trump administration is violating human rights with its escalating deportation campaign – and by extension, they say, Palantir is complicit. Global Insight has approached Palantir for comment on this and other issues raised in this article but has not received a response.
Unchecked harms
One problem that opponents of large technology businesses have is that the law regulating AI is still in its infancy. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law did introduce a risk-based system for protecting human rights and democratic processes in 2024 (see ‘Creating the legal limits for AI’, Global Insight, December/January 2025). The EU, Israel, the UK and the US were early signatories. But the framework didn’t cover AI used in the interests of national defence and security, or research and development – some of the very areas where scrutiny is most needed.
Meanwhile, the EU’s AI Act aims to protect citizens from harm caused by such technology, but working through its exact taxonomy in practice will probably take years if the regime is to become robust, and may well entail settling multiple legal challenges. Even then, the rules apply only to those parts of international businesses that operate in Europe, creating a patchwork of regulation globally. One possible consequence, then, is that battlefield and other cutting-edge AI that potentially violates the rule of law could be used unchecked, while useful civic applications may be rejected by an increasingly suspicious public.
Signs of this trend are beginning to emerge. In 2024 the campaign organisation Good Law Project raised the alarm over how patient data extracted from the NHS FDP might be used, though there’s no evidence that Palantir plans to use any of that data for commercial purposes. The Project took the first step of legal action after the NHS published a heavily redacted version of its contract with Palantir, a document that was particularly opaque on how personal data would be used. When a second version was released following the Project’s intervention – though still with large areas redacted – it emerged that those terms hadn’t been agreed with Palantir even though the contract had already been signed, leading many activists and NHS specialists to become concerned about potential ‘mission creep’.
On its website, NHS England says of the FDP deal that ‘access to NHS health and social care data within the NHS Federated Data Platforms will be carefully controlled. Only authorised users will be granted access to data for approved purposes. The supplier will not control the data in the platform, nor will they be permitted to access, use or share it for their own purposes.’
It’s likely that companies such as Palantir will become ever more tightly woven into national and regional government and into public and private enterprises. The potential benefits for productivity and public administration are clearly huge. But, in their haste to develop cutting-edge systems, government officials and contractors need to be more transparent about how those arrangements are structured, who benefits and how data is used. If the public decides to opt out of sharing data with AI systems, the chances of those systems succeeding will plummet.
Arthur Piper is a freelance journalist. He can be contacted at arthur@sdw.co.uk
Image credit: Firman Dasmir/AdobeStock
AI and tech: in focus
The IBA has created a dedicated page collating its content on artificial intelligence and technology issues: