The Australian eSafety Commission said Google's report provided the world's first insight into how users can exploit technology to create harmful and illegal content.
eSafety Commissioner Julie Inman Grant said the findings underscore how important it is for companies developing AI products to build in, and test the effectiveness of, safeguards that prevent this type of harmful, fabricated material from being produced.
Google used hash matching — an automated system that compares newly uploaded images against a database of known images — to identify and remove child abuse material created with Gemini. However, the Australian eSafety Commission said Google did not use the same system to remove terrorist or violent extremist content created with Gemini.
Under current Australian law, tech companies must periodically provide the eSafety Commission with information about their efforts to reduce harm or face penalties.
The most recent reporting period covered April 2023 to February 2024. The Australian eSafety Commission has fined Telegram and Twitter — later renamed X — for shortcomings in their reporting.
X has already lost one appeal against its A$610,500 ($382,000) fine but plans to appeal further. Telegram also intends to appeal its fine.
Since OpenAI's chatbot ChatGPT exploded into public awareness in late 2022, regulators around the world have called for more effective measures to ensure that bad actors cannot abuse AI to incite terrorism, commit fraud, create fake pornography, or carry out other abuses.
Source: https://nhandan.vn/google-phan-mem-ai-gemini-bi-lam-dung-de-tao-noi-dung-khung-bo-gia-mao-post863325.html