US regulators plan to go after harmful AI business practices

U.S. market regulators say AI tools are subject to the same legal standards as any other business tools.
The Federal Trade Commission will come down hard on businesses whose AI tools harm consumers, officials said Tuesday.

Regulators are scrutinizing AI tools that businesses have used to make hiring decisions or to decide whom to lend money to. They're also watching tools that can generate text, images, voice and even video, trying to make sure consumers don't fall prey to mass deception or closely targeted misinformation.

FTC Chair Lina Khan also said during a press event that the FTC will keep big companies honest, as they race for resources in the growing AI field.

"A handful of powerful firms today control the necessary raw materials, not only the vast stores of data, but also the cloud services and computing power that startups and other businesses rely on to develop and deploy AI products," Khan said.


She warned that while AI may have novel abilities, regulators will still hold it to the same standards as other business tools.

"There is no AI exemption to the laws on the books," Khan said.

Regulators elsewhere in the world are also trying to keep up with AI's progress. Canada has investigated ChatGPT for potential privacy violations, and Italy blocked the program so it could investigate whether it ran afoul of EU data laws.

The EU's proposed AI Act would sort the growing uses of AI tools into different risk categories, banning those considered most dangerous outright and subjecting other high-risk applications, such as AI-powered hiring decisions, to stricter requirements.
