The two tech giants will no longer allow fake news sites to use their ad-selling services, the latest reaction to accusations that a flood of misleading internet content influenced voters during the U.S. presidential campaign.
Facebook says it will not place ads from fake news publishers on third-party apps or websites, because the content falls under the broader category of “illegal, misleading or deceptive” content.
“We have updated the [Audience Network Policy] to explicitly clarify that this applies to fake news,” a company spokesperson said.
CEO Mark Zuckerberg has rejected allegations that Facebook allowed fake news to influence voters ahead of the election, and the company has not announced any major changes that would help filter out inaccurate content on its own site.
“Personally, I think the idea that fake news on Facebook — of which it’s a small amount of content — influenced the election in any way is a pretty crazy idea,” Zuckerberg said Thursday.
Google, meanwhile, says it will also prohibit “misrepresentative content” from appearing on its advertising network.
“Moving forward, we will restrict ad serving on pages that misrepresent, misstate, or conceal information about the publisher, the publisher’s content, or the primary purpose of the web property,” the company said in a statement.
Google has also committed to tweaking its search algorithms. On Monday, the top result for “final election result” directed users to a fake news site with incorrect numbers.
“In this case we clearly didn’t get it right, but we are continually working to improve our algorithms,” Google said in a statement.
Google does not remove pages from its search results except when they contain malware or illegal content.
The moves are still unlikely to satisfy critics who argue that Facebook, Google, Twitter and other big internet companies must do more to stop fake news from appearing in search results and feeds.