Proposed AI regulations would protect consumers
A bill passed by the Pennsylvania House of Representatives on Wednesday, April 10, is a good first step toward ensuring AI isn’t used to deceive consumers.
House Bill 1598 passed 146-54 and will be considered by the state Senate. The bill would add a definition to the law governing unfair and deceptive trade practices.
It would require any content generated by artificial intelligence, including text, images and video, to include a statement disclosing that AI was used.
The proposal is one of many at the state and federal levels; one industry group reported that as of early February, 40 states were considering AI regulations covering deceptive practices, discrimination and deepfakes.
It’s an important topic, and one that touches more of people’s day-to-day lives than they might realize.
“AI does in fact affect every part of your life whether you know it or not,” Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights, told the Associated Press.
“Now, you wouldn’t care if they all worked fine,” he continued. “But they don’t.”
That was made abundantly clear recently when reporters discovered that New York City’s new MyCity chatbot was giving people wrong answers to questions on topics from workers’ rights to landlord-tenant regulations.
New York has defended the initiative, saying it’s still in the testing phase. But that speaks to the larger issue: All generative AI is still in the testing phase.
HB 1598 is a sensible extension of existing truth-in-advertising law. Federal Trade Commission and state regulations already require businesses to be honest with consumers, covering everything from mandating the use of real products in advertising photos to prohibiting false or misleading health claims.
Requiring disclosure when AI was used to create content is a good first step toward protecting consumers, and we hope the state Senate takes up the bill soon.
— JK