Artificial intelligence (AI) refers to a computer’s ability to imitate human cognitive abilities like problem-solving and learning, and it has been all over the media recently. AI has countless applications and raises countless issues; it is part of our world and will likely become even more integrated into our everyday lives, in ways both visible and invisible. The League’s primary interest is in AI’s potential impacts on our elections.
The development of AI responds to several problems. A primary example is the enormous amount of data all of us produce simply by using the internet. There is value to be reaped by analyzing this data to understand consumers’ preferences, decipher people’s interactions with news headlines, and even identify voters’ political concerns. However, the volume of data generated is far too massive for humans to process alone, so AI uses targeted algorithms to make sense of it.
By learning how people interact with the headlines and information they see online, AI can also support those looking to manipulate public perception by spreading disinformation. AI has enabled a form of disinformation called a “deepfake”: an artificial image, video, voice recording, or other imitation that shows a person doing or saying whatever its creator directs. For example, a deepfake can be, and has been, used to create a fake video of a political candidate saying something deeply offensive or inappropriate.
How to Spot Mis- and Disinformation
The distribution of disinformation, especially online, has been used in our country’s recent elections to sow polarization and distrust in election results. It is crucial to address the many avenues of mis- and disinformation that circulate around an election, including emerging technologies like artificial intelligence and deepfakes, which are growing rapidly in both their prevalence and their resemblance to genuine audio and video content.
What LWVUS is Doing About AI and Disinformation
The League is closely monitoring AI’s potential impacts on our elections. In October 2023, the League supported a petition to the Federal Election Commission (FEC), arguing that the FEC should regulate “Deceptive AI Campaign Communications” like other deceptive campaign communications. LWVUS urged the Commission to heed the requests in the petition and issue explicit guidance that deceptive AI campaign communications meet the definition of “fraudulent misrepresentation” under the US Code.
The League’s positions on a citizen’s right to know, citizen participation, and campaign finance apply to the issue of deceptive AI; voters deserve access to true and complete information about elections and the candidates seeking their votes. The League is concerned that deliberately false, AI-generated content in campaign ads or other communications will undermine the role of voters and corrupt the election process by attempting to deceitfully sway voters’ opinions or suppress voter turnout through misinformation about election rules.
We must preserve the integrity of our electoral process by increasing transparency in our elections. The League seeks to ensure voters have the necessary resources to make informed voting decisions. In the last two years, almost 6.5 million people have used our digital, one-stop shop for voting information, VOTE411.org, for personalized resources about voter registration, candidate and ballot information, and more. However, it should not fall solely to nonprofit organizations to provide information and ensure transparency in our election process. The FEC should regulate deceptive AI campaign communications to break down barriers to voter participation, reduce the influx of disinformation in elections, and promote transparency.
What the Federal Government is Doing About AI and Disinformation
The 2024 election is less than a year away, and the Federal Election Commission must act to ensure enforcement of federal campaign laws like 52 USC §30124, which should be interpreted to include technological advancements like AI. However, the FEC petition is only one way to tackle the problem of AI and mis- and disinformation.
In October 2023, the White House released an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that seeks to govern “the development and use of AI safely and responsibly… advancing a coordinated, Federal Government-wide approach to doing so.” The Executive Order governs AI in accordance with eight guiding principles and priorities, which focus primarily on safety, privacy, equity, and jobs. Although the EO does not explicitly address elections, its priorities of protecting civil rights, promoting transparency, and ensuring equality of access align with League values. Advancing these priorities in the development of AI should result in safer elections.
Congress has held hearings in both the House and the Senate to give researchers, regulators, companies, and other stakeholders a platform to weigh in on how Congress should legislate artificial intelligence. The hearings covered many topics but highlighted some common themes, including caution about the consequences of regulating AI improperly and optimism about the opportunities AI could present.
Throughout all of these actions on AI, the values of transparency and public participation have been heavily emphasized. Transparency is not a partisan issue; providing voters with complete and truthful information about elections should be the minimum expectation of a healthy democracy. The possible harmful uses of artificial intelligence, including the fraudulent misrepresentation of political candidates or parties, threaten voters. We deserve to know that political advertisements are free of misleading information and fraudulent misrepresentation.
How You Can Counter AI Mis- and Disinformation
With elections approaching, it is important to watch for mis- and disinformation. We’re all susceptible to deceptive AI, especially given the sophisticated nature of some deepfakes, but you can take steps to spot it before you take part in its spread:
- When watching a video or listening to audio of a candidate or politician, consider that it may be fabricated and try to identify the source of the content. Research the source sharing the information and look for political affiliations. Ask yourself whether the source has the expertise to provide accurate information.
- Cross-check whether reliable news sources are reporting the same information or sharing the same video or audio content. It is unlikely that only one small source has the facts.
- Question the use of emotionally charged content. Reliable sources let the facts, not emotional language, shape your response. Check out some examples of loaded language.
Interested in learning more about mis- and disinformation? Stay in touch to get updates on our Democracy Truth Project, an effort to advance public understanding of our government and reduce inaccurate information.