<img src="https://ws.zoominfo.com/pixel/Gp2oBIDg9xlCLu0ZPKB4" width="1" height="1" style="display: none;">

The Threat of Generative AI to Constituent Communication, Trust, and National Security

A primer and set of recommendations

Generative Artificial Intelligence (AI), like OpenAI’s ChatGPT, presents a unique challenge to digital communication between lawmakers and constituents: it threatens to undermine trust and, at worst, poses a risk to national security. This document provides an overview of the risk and lays out a plan to mitigate it.

How AI can be used by bad actors to influence policy

Generative AI leverages neural networks to analyze vast datasets, learn patterns, and then generate new content based on that data. Using it is as simple as asking a question and requires minimal training or technical expertise.


For example, a bad actor could ask an AI tool: “Can you write a letter to my lawmaker explaining why we shouldn’t be funding Ukraine?” The results are generated in seconds, can be quite compelling, and are indistinguishable from what a real person could write (see the examples below).


AI can also scale rapidly, and the landscape is evolving quickly. For example, the next generation of AI tools (like AutoGPT, an AI agent that automates tasks) will be able to simultaneously:

  • Craft messages: AI agents can create thousands of unique, topic-specific messages, complicating identification and prioritization of genuine concerns.
  • Aggregate data: AI agents can analyze constituent data to create seemingly accurate profiles for personalized messages, making AI-generated content detection harder.
  • Synthesize and send messages: AI agents can coordinate to send messages through various channels, overwhelming systems and hindering lawmakers' ability to address authentic concerns.

The potential scale of messages submitted to lawmakers is unprecedented and is a challenge for everyone involved in the constituent communication ecosystem: lawmakers’ staffs, CWC, and vendors, including Constituent Management Systems (CMSs) and Delivery Agents (like Countable).

AI can now, or will very soon be able to, produce compelling voice and video as well.

Learn More: Countable's AI & Grassroots Advocacy Resource Center

Implications for digital constituent correspondence

The constituent communication ecosystem assumes that constituents authentically represent themselves, requiring only a name, address, and email. While spam is a problem and has grown, it has been made more manageable through process and technology innovations.

Generative AI is a quantum leap beyond what we’ve encountered previously and renders our current countermeasures inadequate. Bad actors can, today, send messages to lawmakers that appear to come from real people, making auto-generated content difficult to identify, and they can do so at incredible scale.

“Real world” scenarios

  • A foreign actor, like Russia, could generate messages supporting the withdrawal of support for Ukraine, potentially influencing lawmakers' decisions on a critical foreign policy issue.
  • A disgruntled hacker could flood a lawmaker's inbox with thousands of seemingly authentic messages from apparently real people, disrupting their ability to respond to genuine constituent concerns.
  • A political campaign could unknowingly leverage this technique via a third party to unethically counter its competition, undermining the democratic process.

What needs to be done

Because generative AI poses significant challenges to digital communication between lawmakers and constituents, it’s crucial to identify effective countermeasures. In the short term, staff can watch for telltale signs of AI-generated messages. For the long term, we’ve provided recommendations to address these concerns.

For lawmakers and their staff - signs to look for (a minimal automated sketch of these checks follows the list):

  • Look for changes to patterns around message composition:

    • Messages that exceed a typical message length.

    • Highly structured, overly organized messages can be an indication.

    • Patterns in messaging, e.g., if 100 messages that come in on a specific date all follow the same opening pattern:
      • “As a farmer…” / “As a teacher…” / “As a student…”

    • Ambiguity
      • Lack of timely, specific information.
      • The content of the letter relates only loosely to the issue at hand.

    • Factual Errors
      • Outdated information: for now, most AI training data lags behind current events.
      • For example, ChatGPT’s training data only goes as far as September 2021 (although a sophisticated operator can “teach” the AI new information).

    • Inconsistencies
      • Look for where the information contained in the message does not match what you know about the author (e.g., a “Mr.” writing as a mom).
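
As a minimal sketch of how some of these checks could be automated, the Python below flags messages in a batch that exceed a typical length or that share an opening pattern with many other messages. The thresholds and helper names are illustrative assumptions, not a production implementation, and checks like ambiguity or factual errors would still require human review.

```python
# A minimal, illustrative sketch of the composition checks above, assuming
# messages arrive as plain-text strings. Thresholds are hypothetical.
from collections import Counter

TYPICAL_MAX_WORDS = 400          # assumed cutoff for "exceeds a typical message length"
SHARED_OPENING_THRESHOLD = 100   # e.g., 100 messages opening the same way on one date

def opening_phrase(message: str, n_words: int = 3) -> str:
    """Return the first few words of a message, lowercased, for pattern matching."""
    return " ".join(message.split()[:n_words]).lower()

def flag_suspicious(messages: list[str]) -> list[tuple[int, list[str]]]:
    """Return (index, reasons) pairs for messages matching the signs described above."""
    opening_counts = Counter(opening_phrase(m) for m in messages)
    flagged = []
    for i, msg in enumerate(messages):
        reasons = []
        if len(msg.split()) > TYPICAL_MAX_WORDS:
            reasons.append("exceeds typical message length")
        if opening_counts[opening_phrase(msg)] >= SHARED_OPENING_THRESHOLD:
            reasons.append("shares an opening pattern with many other messages")
        if reasons:
            flagged.append((i, reasons))
    return flagged
```

In practice, the thresholds would need to be tuned against each office’s historical message volume.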

Long term solutions:

Solving this issue is not going to be easy and won’t happen overnight. At its core, we will need to work together to develop new mechanisms that establish the veracity of both the author sending a message and the message itself.

We recommend a joint effort building on the following: 
Coordinate:
  • We will need to work together to address this at all levels: lawmaker staff, CWC, CMSs, and Delivery Agents.
Educate:
  • Provide everyone, including Legislative Correspondents (LCs) and congressional staffers, with training and tools to assess these threats, and arm them with the resources they need to combat this in the short term.
Analyze:
  • Study patterns from historical constituent messages.
  • Compare datasets of AI-generated messages against constituent datasets.
Innovate:
  • Implement permission-based tools like user registration and authentication at the point of message sending (something Countable offers today) to heighten the likelihood of a user’s authenticity.
  • Develop new methods to detect and score the likelihood of automation (a rough sketch follows this list).
  • Innovate on AI detection tools. Existing technologies work but are easily circumvented.
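
One way to score the likelihood of automation, sketched below under the assumption that scikit-learn is available, is to look for near-duplicate messages within a batch: AI-assisted campaigns tend to produce many messages that are lexically very similar even when the wording varies. The scoring scheme and threshold here are illustrative assumptions, not a definitive detection method.

```python
# A minimal sketch: score each message by its highest TF-IDF cosine similarity
# to any other message in the same batch. The 0.9 threshold is a hypothetical
# starting point, not a validated cutoff.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def automation_scores(messages: list[str]) -> np.ndarray:
    """Return a 0-1 score per message; higher means more similar to other messages."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(messages)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)  # ignore each message's similarity to itself
    return sim.max(axis=1)

def likely_automated(messages: list[str], threshold: float = 0.9) -> list[int]:
    """Return indices of messages whose similarity score meets the threshold."""
    return [i for i, s in enumerate(automation_scores(messages)) if s >= threshold]
```

Similarity scoring of this kind would complement, not replace, identity verification at the point of sending, since a determined operator can prompt an AI to vary its wording.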

Conclusion

Generative AI has introduced new and significant challenges to digital communication between lawmakers and constituents. By being aware of the signs of AI-generated messages and taking proactive steps to educate, analyze, and innovate, lawmakers and their staff can better address these challenges and ensure that genuine constituent concerns remain a priority in the decision-making process.

Learn More

- Video explainer and demo of ChatGPT: Advocacy in the Age of AI: Navigating the Risks

- Security and safety countermeasures - Countable | Safety & Identity Features

- AI and Constituent Correspondence Resource Center

- Try ChatGPT - https://chat.openai.com/chat