
Mitigating AI Risk Relating to Political Influence

While we are witnessing exponential growth in the development and proliferation of generative artificial intelligence (generative AI) models, industry participants already foresee a number of critical risks. Although the benefits span diverse sectors such as healthcare, law, education, and science, the current pace of AI development leaves little room to fully assess potential negative externalities, specifically the potential for societal harm, according to a recent report by Georgetown University’s Center for Security and Emerging Technology, OpenAI, and the Stanford Internet Observatory.


Online and social media platforms still lack robust safeguards against exploitation by political influence operations: covert or deceptive efforts designed to mislead target audiences. According to the report, an “influence operation could have impact based on content if it (1) persuades someone of a particular viewpoint or reinforces an existing one, (2) distracts them from finding or developing other ideas, or (3) distracts them from carving out space for higher quality thought at all.”


Because advertisers, media outlets, and platforms already compete heavily for attention, distraction operations could exploit this competition with little effort, drowning out information needed for fact-gathering or decision-making with irrelevant, sensational content. In brief, a target does not need to be persuaded by misleading content; it is enough that they are never persuaded by some other, more important piece of information.


Potential mitigations require deeper consideration of how an influence operation might be disrupted, building on the following questions:

  • Model Design and Construction: How could AI models be built so they are robust against being misused to create disinformation? Could governments, civil society, or AI producers limit the proliferation of models capable of generating misinformation?

  • Model Access: How could AI models become more difficult for bad actors to access for influence operations? What steps could AI providers and governments take?

  • Content Dissemination: What steps can be taken to deter, monitor, or limit the spread of AI-generated content on social media platforms or news sites (see the sketch after this list)? How might the “rules of engagement” on the internet be altered to make the spread of AI-generated disinformation more difficult?

  • Belief Formation: If internet users are ultimately exposed to AI-generated content, what steps can be taken to limit the extent to which they are influenced?
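
To make the content-dissemination question more concrete, the sketch below shows one way a platform might gate distribution of suspected AI-generated posts. It is a minimal illustration, not a recommendation: the detector callable, the provenance field, and the 0.8 review threshold are all assumptions introduced here for the example, and any real deployment would depend on a platform's policies and the reliability of available detection or provenance signals.

```python
# Minimal sketch of a dissemination-side safeguard. The detector is supplied by
# the caller (e.g., a watermark check or classifier); all names and thresholds
# here are illustrative assumptions, not a real platform's moderation API.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Post:
    text: str
    provenance: Optional[str] = None  # e.g., content-credential metadata, if attached


def moderation_action(
    post: Post,
    detector: Callable[[str], float],  # estimated probability the text is AI-generated
    flag_threshold: float = 0.8,
) -> str:
    """Decide how a platform might handle a post suspected of being AI-generated."""
    if post.provenance:
        # Disclosed provenance: label the post rather than restrict it.
        return "label_as_ai_generated"
    if detector(post.text) >= flag_threshold:
        # High suspicion with no disclosure: hold for human review before wide distribution.
        return "hold_for_review"
    return "publish"


# Example usage with a trivial stand-in detector (always returns 0.0).
if __name__ == "__main__":
    print(moderation_action(Post(text="Breaking news..."), detector=lambda _: 0.0))
```

The design choice worth noting is that detection and policy are kept separate: the same gating logic can sit in front of a watermark verifier, a statistical classifier, or provenance metadata, whichever signal a platform actually has available.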


With the rapid adoption of large language models across various use cases, it may be virtually impossible to ensure that they will not be used to generate disinformation. Consequently, AI developers involved in the design and development of large language models have a responsibility to take reasonable steps to minimize the potential harm created by those models. Similarly, social media companies retain an obligation to take all appropriate measures within their control to combat the spread of misinformation, while policymakers should consider carefully how they can contribute to these efforts.



