The panel discussion "AI for humanity: AI ethics and safety in the new era," held as part of VinFuture 2025 Science and Technology Week, brought together scientists, policymakers and inventors to discuss responsible AI development oriented towards humanistic values.
On the sidelines of the discussion, Professor Toby Walsh of the University of New South Wales (Australia), a Fellow of the Association for Computing Machinery (ACM), spoke about the ethical and responsible use of AI.
Responsible use of AI should be mandatory
- Professor, should the responsible use of AI be voluntary or mandatory? And how should we actually approach AI?
Professor Toby Walsh: I firmly believe that responsible use of AI should be mandatory. There are perverse incentives at the moment, with the huge amounts of money being made with AI, and the only way to ensure good behaviour is to have strict regulations in place, so that public interest is always balanced against commercial interests.
- Can you give specific examples from different countries of responsible and ethical AI applications?
Professor Toby Walsh: A classic example is high-stakes decisions, such as sentencing and parole decisions in the United States, where an AI system is used to make recommendations about a person's prison term and likelihood of reoffending.
Unfortunately, this system was trained on historical data and unintentionally reflected past racial biases, leading to discrimination against Black people. We should not let systems like this decide who gets imprisoned.
- When AI makes a mistake, who is responsible? Especially with AI agents, do we have the ability to fix their operating mechanisms?
Professor Toby Walsh: The core problem when AI makes mistakes is that we cannot hold AI accountable. AI is not human, and this exposes a gap in every legal system in the world: only humans are held responsible for their decisions and actions.
Suddenly, we have a new “agent” called AI, which can – if we allow it – make decisions and take actions in our world. This poses a challenge: who will we hold accountable?
The answer is to hold the companies that deploy and operate AI systems accountable for the consequences these “machines” cause.

Professor Toby Walsh - University of New South Wales shared at the seminar "AI for humanity: AI ethics and safety in the new era" within the framework of VinFuture 2025 Science and Technology Week. (Photo: Minh Son/Vietnam+)
- Many companies also talk about responsible AI. How can we trust them? How do we know that they are serious and comprehensive, and not just using "responsible AI" as a marketing gimmick?
Professor Toby Walsh: We need to increase transparency. It is important to understand the capabilities and limitations of AI systems. We should also “vote with our actions” – choosing to use services from companies that behave responsibly.
I truly believe that how businesses use AI responsibly will become a differentiator in the marketplace, giving them a commercial advantage. If a company respects customer data, it will benefit and attract customers.
Businesses will realize that doing the right thing is not only ethical, but it will also make them more successful. I see this as a way to differentiate between businesses, and responsible businesses are the ones we can feel comfortable doing business with.
'If we are not careful, we may experience a period of digital colonization'
- Vietnam is one of the few countries considering promulgating a Law on Artificial Intelligence. How do you assess this? In your opinion, what ethics and security challenges do developing countries like Vietnam face in AI development? And what recommendations do you have for Vietnam to achieve the goals of its AI strategy – to be among the leaders of the region and the world in AI research and mastery?
Professor Toby Walsh: I am delighted that Vietnam is one of the pioneering countries that will have a dedicated Law on Artificial Intelligence. This is important because each country has its own values and culture and needs laws to protect those values.
Vietnamese values and culture are different from those of Australia, China and the United States. We cannot expect technology companies from China or the United States to automatically protect Vietnamese culture and language. Vietnam must take the initiative to protect them.

Professor Toby Walsh warns that if we are not careful, we could experience a period of "digital colonization." (Photo: Minh Son/Vietnam+)
I'm mindful that in the past, many developing countries went through a period of physical colonization. If we're not careful, we could go through a period of "digital colonization." Your data will be exploited and you will become a cheap resource.
This risk arises if developing countries build their AI industries in a way that lets their data be exploited without them controlling or protecting their own interests.
- So how to overcome this situation, Professor?
Professor Toby Walsh: It's simple: invest in people. Upskill people and make sure they understand AI. Support AI entrepreneurs and companies, and support universities. Be proactive. Instead of waiting for other countries to transfer technology or guide us, we have to take the initiative and own the technology.
More importantly, we need to strongly advocate for social media platforms to create a safe environment for users in Vietnam, while not affecting the country's democracy.
In fact, there are numerous examples of how social media content has influenced election results, divided countries, and even incited terrorism.
- AI is developing rapidly in Vietnam. In recent times, Vietnam has introduced many policies to promote AI, but it is also facing a problem: fraud enabled by AI. How, in your view, should Vietnam handle this situation?
Professor Toby Walsh: For individuals, I think the simplest approach is to verify the information. When we receive a phone call or email, say from a bank, we need to check it again: we can call that phone number back or contact the bank directly to verify the information. Nowadays, there are many fake emails and fake phone numbers, and even Zoom calls can be faked. These scams are simple and cheap to carry out and take little time.
In my family, we also have our own security measure: a “secret question” that only family members know, such as the name of our pet rabbit. This ensures that important information stays within the family and is not leaked out.
- Thank you very much.
Professor Toby Walsh is an ARC Laureate Fellow and Scientia Professor of Artificial Intelligence at the University of New South Wales Sydney (UNSW). He is a strong advocate for setting limits to ensure AI is used to improve people’s lives.
He is also a Fellow of the Australian Academy of Science and has been named internationally among the most influential people in AI.
(Vietnam+)
Source: https://www.vietnamplus.vn/doanh-nghiep-su-dung-ai-co-trach-nhiem-se-mang-lai-loi-the-thuong-mai-post1080681.vnp