Michigan to join state-level effort to regulate AI political ads as federal legislation pends


LANSING, Mich. — As Congress and the Federal Election Commission continue to debate regulations ahead of the 2024 election, Michigan is joining efforts to curb artificial intelligence and misleading media through state-level policies.

Campaigns at the state and federal level in Michigan will be required to make clear which political ads are generated by artificial intelligence under legislation expected to be signed by Democratic Gov. Gretchen Whitmer in the coming days. The legislation also prohibits the use of AI-generated deepfakes within 90 days of an election unless they carry a separate statement identifying the media as manipulated.

Deepfakes are fabricated media that misrepresent someone as having done or said something they did not. They are created using generative artificial intelligence, a type of AI that can produce convincing images, videos or audio clips in seconds.

There is growing concern that generative AI will be used in the 2024 presidential race to mislead voters, impersonate candidates and undermine elections at an unprecedented scale and speed.

Candidates and committees in the race are already experimenting with the rapidly advancing technology, which in recent years has become cheaper, faster and easier to use, and can create convincing fake images, videos and audio clips in seconds.

The Republican National Committee released an entirely AI-generated ad in April intended to show the future of the United States if President Joe Biden is reelected. Disclosing in small print that it was made with AI, the ad showed fake but realistic photos of boarded-up storefronts, armed military patrols and panic over immigration.

In July, a super PAC backing Republican Florida Gov. Ron DeSantis used an AI voice-cloning tool to imitate the voice of former President Donald Trump, making it seem as though he had narrated a social media post he made despite never saying the statement aloud.

Experts warn about what could happen if campaigns or outside actors decide to use AI deepfakes in more malicious ways.

So far, states including California, Minnesota, Texas and Washington have enacted laws regulating deepfakes in political advertising. Similar legislation has been proposed in Illinois, Kentucky, New Jersey and New York, according to the nonprofit advocacy group Public Citizen.

Under the Michigan legislation, any person, committee or other entity that distributes an advertisement for a candidate is required to clearly state if it is generated by AI. The disclosure must be in the same font size as the majority of the text in print ads, and in television ads it "must appear for at least four seconds in letters that are as large as the majority of any text," according to a legislative analysis from the state House Fiscal Agency.

Deepfakes used within 90 days of an election require a separate disclaimer informing the viewer that the content is fabricated to depict speech or conduct that did not occur. If the medium is a video, the disclaimer must be clearly visible and appear throughout the video's duration.

Campaigns could face up to 93 days in jail, a fine of up to $1,000, or both, for violating the proposed laws. The attorney general or a candidate harmed by the deceptive media may apply to the appropriate circuit court for relief.

Federal lawmakers from both parties have emphasized the need to regulate deepfakes in political advertising, and have held meetings to discuss it, but Congress has yet to pass anything.

A recent bipartisan Senate bill, sponsored by Democratic Sen. Amy Klobuchar of Minnesota, Republican Sen. Josh Hawley of Missouri and others, would ban "materially deceptive" deepfakes involving federal candidates.

Michigan Secretary of State Jocelyn Benson flew to Washington, D.C., in early November to participate in a bipartisan discussion on AI and elections, and to urge senators to pass Klobuchar and Hawley's federal deepfake legislation. Benson also encouraged senators to return home and push their state legislators to pass similar laws tailored to their states.

Benson said in an interview that federal law is limited in its ability to regulate AI at the state and local levels, and that states need federal funding to address the issues AI raises.

"All of this would be possible if the federal government gave us the money to hire someone to handle AI in our states, and at the same time teach voters how to spot deepfakes and what to do when they find them," Benson said. "That would solve a lot of problems. We can't do it on our own."

In August, the Federal Election Commission took a procedural step toward regulating AI-generated deepfakes in political ads under its existing rules against "fraudulent misrepresentation." Although the commission held a public comment period on the petition, it has not yet made any ruling.

Social media companies have also announced some guidelines aimed at curbing the spread of harmful deepfakes. Meta, which owns Facebook and Instagram, announced earlier this month that it will require political ads running on its platforms to disclose if they were created using AI. Google announced a similar AI labeling policy in September for political ads that play on YouTube or other Google platforms.


Swenson reported from New York. Associated Press writer Christina A. Cassidy contributed from Washington.


The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy. See more about AP's democracy initiative here. The AP is solely responsible for all content.

Copyright 2023 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.
