The Australian eSafety Commission assessed that Google's report provides the world's first insight into how users can exploit AI technology to create malicious and illegal content.
eSafety Commissioner Julie Inman Grant stated that this underscores the importance of companies developing AI products building in safeguards and testing their effectiveness to prevent the production of harmful and fake content.
Google used hash matching, an automated system that compares newly uploaded images against known images, to identify and remove child abuse material created with Gemini. However, the Australian eSafety Commission stated that Google did not use the same system to remove terrorist or violent extremist content generated by Gemini.
Under current Australian law, technology companies must periodically report on their harm-reduction efforts to the eSafety Commission or face penalties.
Most recently, companies were required to report to the commission on the period from April 2023 to February 2024. The Australian eSafety Commission fined Telegram and Twitter, later renamed X, for deficiencies in their reports.
X has already appealed its AUD 610,500 (USD 382,000) fine once, and the social media platform is expected to appeal again. Telegram also plans to appeal its fine.
Since the chatbot ChatGPT from the US company OpenAI exploded into public awareness in late 2022, regulators around the world have called for more effective measures to ensure that malicious actors cannot misuse AI to incite terrorism, commit fraud, create fake pornographic content, or engage in other abusive activities.
Source: https://nhandan.vn/google-phan-mem-ai-gemini-bi-lam-dung-de-tao-noi-dung-khung-bo-gia-mao-post863325.html