WASHINGTON (dpa-AFX) - Google announced that it has made improvements to its Search algorithms to handle AI-generated deepfakes, citing 'a concerning increase in generated images and videos that portray people in sexually explicit contexts, distributed on the web without their consent.'
To protect people, Google has made the removal process for non-consensual imagery easier. Moving forward, when a user successfully requests the removal of an image, the tech giant will also filter explicit results on similar searches about that person. Moreover, the company will scan for and remove any duplicates of that image.
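Google has not published how its duplicate scanning works. As a rough illustration only, near-duplicate image matching is commonly done with perceptual hashes, as in the hypothetical sketch below; the function names, the 8x8 thumbnail input and the bit-distance threshold are assumptions for the example, not Google's implementation.

```python
# Illustrative sketch only: one common way to spot near-identical copies of a
# removed image is a perceptual (average) hash, which survives re-encoding and
# resizing. All names and thresholds here are hypothetical.

from typing import List

def average_hash(gray: List[List[int]]) -> int:
    """Compute a 64-bit average hash from an 8x8 grayscale thumbnail
    (pixel values 0-255). A real pipeline would decode and downscale first."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_duplicate(hash_a: int, hash_b: int, threshold: int = 5) -> bool:
    """Treat two images as duplicates when their hashes differ in only a few bits."""
    return hamming_distance(hash_a, hash_b) <= threshold

# Usage: hash the removed image once, then compare newly crawled images against
# it; close matches are filtered out as duplicates.
```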
Additionally, the Alphabet Inc. (GOOG)-owned company is updating its ranking systems for queries where there is a higher risk of explicit fake content appearing in Search.
'The updates we've made this year have reduced exposure to explicit image results on these types of queries by over 70 percent. With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,' Google Product Manager Emma Higham wrote in a blog post.
The company will also demote sites that have received a high volume of removals for fake explicit imagery.
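Google has not disclosed how the demotion signal is computed. The toy sketch below only illustrates the general idea of scaling a site's ranking score down as its rate of confirmed removals grows; the fields, weight and formula are assumptions, not Google's method.

```python
# Illustrative sketch only: a hypothetical ranking adjustment that penalises
# sites with a high share of confirmed removals for fake explicit imagery.

from dataclasses import dataclass

@dataclass
class SiteSignals:
    base_score: float            # relevance score from other ranking signals
    pages_indexed: int           # total pages indexed from this site
    explicit_fake_removals: int  # confirmed removals for fake explicit imagery

def demoted_score(site: SiteSignals, penalty_weight: float = 0.8) -> float:
    """Scale a site's score down as its removal rate grows."""
    if site.pages_indexed == 0:
        return site.base_score
    removal_rate = site.explicit_fake_removals / site.pages_indexed
    return site.base_score * (1.0 - penalty_weight * min(removal_rate, 1.0))

# Usage: a site with many removals relative to its size ranks well below an
# otherwise comparable site with none.
print(demoted_score(SiteSignals(base_score=1.0, pages_indexed=200, explicit_fake_removals=150)))  # 0.4
print(demoted_score(SiteSignals(base_score=1.0, pages_indexed=200, explicit_fake_removals=0)))    # 1.0
```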
'These changes are major updates to our protections on Search, but there's more work to do to address this issue, and we'll keep developing new solutions to help people affected by this content,' Higham added.
'And given that this challenge goes beyond search engines, we'll continue investing in industry-wide partnerships and expert engagement to tackle it as a society.'
Copyright(c) 2024 RTTNews.com. All Rights Reserved