The misinformation threat to corporates

The spread of false information is a major issue for corporates. Misinformation can hurt organisations by damaging consumer trust and ultimately affecting their bottom line. In-House Perspective assesses the threat and how counsel can help counter it.
Until recently, the chances of a company suffering lasting damage from a deliberate misinformation campaign were thankfully low. But not anymore. Now, organisations can see their business nosedive in an instant if a lie gains enough traction: people come to doubt the quality, safety or ethics of a company and its products, and boycott them.
The problem for companies is threefold: the internet allows falsehoods to spread quickly; artificial intelligence (AI) can replicate them and make them believable with little effort; and people don’t necessarily need to believe the rumours or stories to dent a corporate’s good name and financial bottom line – they just need to react to them by thinking twice.
The spread of false information is a major issue, the seriousness of which shouldn’t be downplayed. The World Economic Forum’s (WEF) latest Global Risks Report cites misinformation and disinformation as one of the leading short-term risks that could fuel instability and undermine trust in authority. But it also warns that this growing trend could have a negative impact on corporates: misinformation and disinformation around some industries, for example, could stifle growth and sales. For sectors such as biotech this is a serious problem, with biohackers and other non-medical professionals touting unproven health remedies or performance-enhancing procedures while slamming those that are regulated and safe.
Furthermore, the WEF warns that some governments may foment aggressive misinformation and disinformation campaigns about goods and services from targeted countries, hardening public perceptions and leading to more frequent consumer boycotts of products. AI could exacerbate the issue further, the organisation warns, as algorithms programmed to highlight trending or popular content could prioritise reader engagement over accuracy and unintentionally promote misinformation in the process.
‘Misinformation doesn’t just distort reality,’ says Elika Dadsetan-Foley, CEO and Executive Director at business transformation consultancy Visions. ‘It erodes trust, fuels division, and creates lasting reputational damage. Companies that ignore this risk do so at their own peril.’
“Misinformation doesn’t just distort reality. It erodes trust, fuels division, and creates lasting reputational damage
Elika Dadsetan-Foley
CEO and Executive Director, Visions
Why and how companies become the target of malicious campaigns can vary. A marketing campaign or policy stance can hit a nerve with groups holding particular political views, for example, or a high-profile executive may voice an unpopular or controversial personal opinion on a social issue. A company may be doing business on the wrong side of a geopolitical divide, or may show signs of financial distress and simply become an inviting target for those seeking to sow chaos. Whatever the reason, false stories can gather momentum quickly – and cause serious damage.
The aftermath of misinformation
In 2023, academics from Cardiff and Stanford universities published a joint paper, Between brand attacks and broader narratives: How direct and indirect misinformation erode consumer trust, which examined the consequences of the spread of misinformation for company marketing campaigns.
The researchers distinguished between ‘direct’ and ‘indirect’ misinformation. The former category includes fake news, where false information is intentionally distributed online in a format designed to mimic legitimate sources, and fake reviews, where companies pay individuals to post favourable appraisals of their products, to the detriment of competitors. The researchers found that when consumers were exposed to direct misinformation, it could influence their decision-making, irrespective of whether or not they believed the false narrative.
Even when consumers were exposed to indirect misinformation – such as a legitimate brand’s ads appearing on clickbait news websites that peddle false stories – the researchers found that, by association, consumers could experience confusion, doubt and a general sense of vulnerability and mistrust towards the brand, which could affect their spending habits.
Misinformation can threaten companies in a range of ways, says Doil Son, a Member of the IBA Technology Law Committee Advisory Board. Besides consumer boycotts, false narratives can quickly shape public perception, erode brand trust and trigger stock price volatility, leading to a loss of investor confidence and reputational harm. They can also result in disengaged and polarised workforces, with employees leaving or refusing to join organisations they believe are misaligned with their values.
At least one company has recently suffered a social media backlash after misinformation circulated associating it with extremist political movements. Another suffered a stock price hit after a fake account on the tech platform X impersonated the company and deceptively suggested that its products would be given away for free.
Employee safety can also be placed in jeopardy by misinformation, depending on how emotive the falsehoods are, says Son, who’s Managing Partner at Korean law firm Yulchon. Misinformation campaigns that target specific industries can also create increased legal and regulatory scrutiny – as well as lead to legal claims – as authorities, shareholders and stakeholders demand increased assurance. ‘Misinformation – especially when amplified by generative AI – causes damage to reputation, finances and operations. It also incurs significant legal risks to companies,’ warns Son.
“Misinformation – especially when amplified by generative AI – causes damage to reputation, finances and operations
Doil Son
Member, IBA Technology Law Committee Advisory Board
Responding to misinformation
The sophistication of modern disinformation campaigns – which can include the use of deepfake videos, doctored images and AI-generated text – makes falsehoods more convincing and allows them to spread further and faster before organisations can respond.
While companies see disinformation as a serious enterprise risk and are investing in ‘disinformation security’ capabilities, the techniques used in such misinformation attacks are also constantly evolving. Unsurprisingly, executives acknowledge that the risk landscape is intensifying as a result of AI-driven content creation. ‘Misinformation is not just a PR problem: it’s a strategic business risk,’ says Dadsetan-Foley. ‘Companies must treat it like any other crisis: anticipate it, plan for it, and respond with clarity and consistency.’
The practices used to protect against and prepare for misinformation campaigns may include a combination of proactive monitoring, crisis planning and cross-functional response, says Verónica Volman, a senior lawyer at law firm RCTZZ in Buenos Aires. While flagging false content to platform administrators and requesting enforcement of their terms of service is the obvious first step, companies will need to be more proactive and should integrate disinformation scenarios into their crisis management and incident response planning, she says.
One leading practice is to develop a dedicated guide or protocol for misinformation attacks – which should be regularly updated – and conduct simulations to test it. Additionally, companies could invest in tools and teams to monitor online conversations, social media and news for any mention of the company or its key personnel. ‘Continuous brand monitoring can spot unusual spikes in negative sentiment or the emergence of damaging rumours,’ says Volman. ‘By detecting false stories quickly, the company can respond or seek removal before the misinformation gains wide traction.’
“By detecting false stories quickly, the company can respond or seek removal before the misinformation gains wide traction
Verónica Volman
Senior Lawyer, RCTZZ
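Volman’s point about spotting unusual spikes can be made concrete with a short sketch. The Python below is purely illustrative: it is not drawn from any tool mentioned in this article, and the function name, monitoring window and threshold are assumptions. It simply flags any day on which negative brand mentions jump well above a rolling baseline.

from statistics import mean, stdev

def spike_days(daily_negative_mentions, window=14, threshold=3.0):
    # Flag day i when its negative-mention count exceeds the mean of the
    # previous `window` days by more than `threshold` standard deviations.
    # (Hypothetical helper for illustration; not a vendor product.)
    flagged = []
    for i in range(window, len(daily_negative_mentions)):
        baseline = daily_negative_mentions[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if daily_negative_mentions[i] > mu + threshold * max(sigma, 1.0):
            flagged.append(i)
    return flagged

# A quiet fortnight, then a sudden surge of the kind that might signal a
# false story gaining traction and warrant escalation to the crisis team.
counts = [4, 6, 5, 3, 7, 5, 4, 6, 5, 4, 6, 5, 7, 4, 52]
print(spike_days(counts))  # [14]

In practice the counts would come from the kind of monitoring tools Volman describes, and a flagged day would trigger the cross-functional escalation process discussed below.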
Investing in technical tools to detect and counter false content is also a ‘must’, says Volman. ‘While no single technology can stop all fake content, a combination of content authentication, network security and threat intelligence tools can significantly raise an organisation’s defences. Engaging independent fact-checking organisations can also help,’ she says. Training employees and executives on how to spot deepfake videos or phishing emails that spread urgent false news will also assist in reducing the company’s potential vulnerability to certain disinformation tactics, adds Volman.
Other useful steps include establishing cross-functional teams that bring together communications/PR, marketing, information technology (IT)/security, legal and HR, as well as a clear guide on PR and communication. ‘How a company communicates externally during a misinformation incident can really influence the outcome of the campaign,’ says Volman. ‘Internally, companies should communicate with employees during such events, explaining the truth so that staff can act as ambassadors correcting customers or friends and not inadvertently spreading the rumour further.’
Martin Schirmbacher, Member of the IBA Technology Law Committee Advisory Board, says the most pressing challenge is that employees, customers and other stakeholders often consume and share information uncritically – especially when it appears in social media feeds or is amplified by AI-driven algorithms.
‘Younger staff, in particular, tend to take search engine results or AI-generated answers at face value without adequate verification,’ says Schirmbacher, who’s a specialist IT lawyer at German firm HÄRTING. ‘This creates a volatile information ecosystem where false narratives can gain traction rapidly. The consequences for businesses can be severe: loss of reputation, reduced trust, and in many cases, measurable financial harm.’
Schirmbacher says it’s important for companies to also recognise the risks of relying on inaccurate or misleading incoming information. The only sustainable way to address this is by building digital literacy across the organisation, he says. Employees need to be trained not to trust unreliable sources and to develop the ability to distinguish between credible and false information. Companies should establish trusted internal sources and ‘foster a culture of verification rather than one of uncritical forwarding or resharing’, he says.
‘Ultimately, combatting misinformation is not a one-time effort. It requires ongoing vigilance, cross-functional coordination, and a clear understanding that information integrity has become a core element of corporate risk management,’ says Schirmbacher. ‘Investing in employee awareness and digital literacy is key. Training staff to critically assess and report suspicious content helps build an informed internal line of defence – both as recipients of information and as potential inadvertent amplifiers of false narratives.’
Legal remedies
To protect against misinformation targeting them externally, businesses should adopt a multi-layered approach that integrates technological tools, legal strategies and internal readiness. First, says Schirmbacher, companies must actively monitor their digital footprint – across social media, news outlets and review platforms – to detect misleading or harmful content early. AI-driven monitoring tools can support the early identification of viral narratives before they escalate. Second, a robust and proactive communication strategy is essential. Companies should be prepared to respond quickly and transparently to counter false claims – ideally supported by pre-approved crisis communication protocols, he says. Third, legal remedies are a crucial tool for addressing false information.
‘Whether the misinformation appears on news websites, social media, competitor pages or private blogs, companies are not required to passively accept falsehoods published online,’ says Schirmbacher. ‘Legal action – whether through takedown requests or formal proceedings – should be an integral part of the corporate response toolkit.’
If false statements harm a company’s reputation or cause financial damage, defamation laws will probably apply, says Schirmbacher. Such statements include false allegations about products, management, working conditions or corporate values. Legal claims can be pursued in civil court against the originators of the misinformation, if identifiable.
However, pursuing anonymous or foreign actors remains a challenge in practice, Schirmbacher warns. Most major platforms – such as X, Meta, LinkedIn, TikTok and YouTube – have policies against impersonation, fake news and defamatory content. Takedown requests can be initiated based on terms of service violations. These are often the fastest remedies and are especially important when false content is going viral. Schirmbacher explains that under the EU’s Digital Services Act (DSA), companies have stronger rights to request the removal of illegal or harmful misinformation from online platforms through improved notice-and-action procedures. Platforms must respond transparently and promptly, and large platforms face additional obligations to mitigate systemic risks such as disinformation.
If misinformation involves fake accounts, spoofed websites or brand misuse, legal recourse may be available through trademark infringement, unfair competition or identity theft laws. Swift action – including cease-and-desist letters, domain seizures or court injunctions – is key to minimising exposure. ‘In-house legal teams and external counsel play a crucial role in assessing risks, coordinating responses, and ensuring that legal actions are proportional and well-documented. They can also engage directly with platforms, regulators and media, and help preserve evidence for potential litigation or insurance purposes,’ says Schirmbacher. ‘In the age of algorithmic amplification, legal risk management must be as agile as the misinformation it seeks to counter.’
“In the age of algorithmic amplification, legal risk management must be as agile as the misinformation it seeks to counter
Martin Schirmbacher
Member, IBA Technology Law Committee Advisory Board
Son agrees that there are several options available for legal recourse but cautions that how successfully they can be relied upon may depend on the jurisdiction where a complaint is brought. While defamation claims are an option, he says, in some cases companies may be able to claim tortious interference with contractual or business relationships. Meanwhile, if misinformation comes from competitors, remedies under unfair competition laws or consumer protection statutes may apply, and in some countries, criminal defamation, cybercrime laws or data protection regulations may offer avenues for enforcement or investigation, he adds.
But lawyers warn that it can be very difficult to hold any person or company to account for spreading misinformation via the internet or social media. Once interim and ‘soft’ measures – such as cease-and-desist letters – are exhausted, legal remedies may not always be helpful, and the level of recourse they provide can vary widely from country to country, says Lisandro Frene, Chair of the Platforms, E-commerce & Social Media Subcommittee of the IBA Technology Law Committee. In the US, for example, online platforms are shielded from civil liability for content provided by their users under Section 230 of the Communications Decency Act, which also protects them from liability for moderation activities they undertake in good faith to remove certain content. Fully tackling the problem depends on knowing who made the misleading or defamatory claims and, in the age of AI and social media, this is difficult to confirm.
As a result, the choice of the right jurisdiction and law for legal action can have a major impact on the outcome of the claim, especially when disinformation is spreading internationally, says Frene, who’s also a TMT partner at RCTZZ in Buenos Aires. Depending on where the company decides to claim, he says, it should analyse the legal framework regarding false advertising and unfair competition, brand protection and copyright laws, and fake content legislation, if enacted, as well as that relating to defamation – US courts, for example, will generally not enforce foreign defamation judgments that don’t meet the country’s free speech standards.
Also, it’ll be important to assess the channel through which a misinformation campaign has been spread to see what legal recourse might be available, adds Frene. ‘For example, even though the EU’s DSA is not a defamation law per se, it requires very large online platforms to have notice-and-action systems to remove illegal content quickly, which includes defamatory content, giving companies a mechanism to notify platforms of libellous disinformation and have it taken down EU-wide,’ he says.
He adds that there are other ways in-house counsel can help. For example, they can advise their organisations on the likelihood of success in court according to recent jurisprudence – and then recommend the best course of action – and document all available evidence to help external counsel fight the case. In-house lawyers can also make it clear to management that involving external counsel at an early stage will help protect the company’s reputation, says Frene.
Neil Hodge is a freelance journalist and can be contacted at neil@neilhodge.co.uk