This is considered a fragile but important opportunity to ensure safety in the application of AI in the future. It is especially meaningful when Vietnam has just passed the Law on Digital Technology Industry with detailed regulations on artificial intelligence (AI) management.
The "window of opportunity" is narrowing
OpenAI researcher Bowen Baker shared that in a recent joint paper, researchers warned that AI's ability to monitor "thoughts" could disappear without focused research efforts.
This is especially important as AI models become increasingly powerful and have the potential to have serious impacts on society.
A key feature of reasoning AI models like OpenAI's o-3 and DeepSeek's R1 is the “chain of thought” ( CoT) — the process by which AI expresses its reasoning steps in natural language, similar to the way humans write out each step of a math problem on scratch paper.
This ability gives us a rare glimpse into how AI makes decisions.
This marks a rare moment of unity among many leaders in the AI industry to advance research on AI safety.
This is particularly relevant given the fierce competition among tech companies in developing AI. The notable signatories to the paper include Mark Chen, OpenAI Research Director, Ilya Sutskever, CEO of Safe Superintelligence, Nobel Prize winner Geoffrey Hinton, Google DeepMind co-founder Shane Legg, and xAI safety advisor Dan Hendrycks.
The involvement of these top names shows the importance of the issue.
Also according to Mr. Bowen Baker's assessment, "We are at a critical moment when there is this so-called new 'chain of thinking' that may disappear in the next few years if people do not really focus on it."

Why is monitoring “AI thinking” important?
Current AI systems are often viewed as “black boxes” – we know the inputs and outputs but don’t understand the decision-making processes inside.
This will become dangerous when AI is applied in important fields such as Healthcare , Finance and National Security.
CoT monitoring is an automated system that reads the mental model’s chain of reasoning and other relevant information to flag suspicious or potentially harmful interactions. It is not a complete solution, but it can become a valuable layer of security protection.
Research from OpenAI shows that AI models tend to be very explicit about their intentions in their thinking sequences.
For example, they were often very explicit about their plans to sabotage a mission when they thought “Let's hack.” This demonstrates the AI's ability to monitor and detect misbehavior.
“Let's hack” is the phrase that AI models often “think” when “they” intend to sabotage or circumvent the rules during the performance of a task.
The fact that AIs show “hacking” intent in their thought processes suggests that we can detect bad AI behavior before it happens. This is why monitoring thought processes is important.
In other words, “let's hack” is like a “warning signal” to humans that the AI is about to do something wrong.
Vietnam and legal regulations on AI
In fact, Vietnam has made important strides in building a legal framework for AI.
On June 14, the Vietnamese National Assembly passed the Law on Digital Technology Industry, in which Chapter IV contains detailed regulations on artificial intelligence - one of the most comprehensive legal frameworks on AI in Southeast Asia today.
Article 41 of the Law sets out the basic principles for the development, provision and deployment of AI in Vietnam.
In particular, point b, clause 1 stipulates: “Ensure transparency, accountability, explainability; ensure that it does not exceed human control”.

The National Assembly passed the Law on Digital Technology Industry (Photo: Nhat Bac).
These are the principles that international scientists are calling for when discussing AI chain surveillance.
In addition, Point d, Clause 1, Article 41 stipulates: “Ensure the ability to control algorithms and artificial intelligence models”. This is completely consistent with the spirit of CoT supervision that international experts are proposing.
More importantly, Article 41, Clause 1, Point a also sets a high ethical standard when it stipulates that AI must “serve human prosperity and happiness, with people at the center”.
This means that monitoring the AI thought chain is not just a technical requirement but also an ethical obligation – ensuring that AI is always directed towards human benefit, not the machine’s own goals.
Classify and manage AI by risk level
Vietnam's Digital Technology Industry Law has gone a step further by classifying AI into different risk groups with clear and scientific definitions.
Article 43 defines “high-risk artificial intelligence systems” as systems that are likely to pose serious risks or harm to human health, human rights and public order.
Interestingly, the Act provides specific exceptions for high-risk AI, including systems “intended to assist humans in optimizing work outcomes” and “not intended to replace human decision-making.”
This shows a balanced mindset between encouraging innovation and ensuring safety.

Classifying AI by risk level will help create a multi-layered monitoring system (Illustration: LinkedIn).
In particular, distinguishing between “high-risk AI” and “high-impact AI” (systems used for multiple purposes, with a large number of users) demonstrates a subtlety in approach.
This is a more progressive classification than the European Union (EU) Artificial Intelligence Act, which considers not only the level of risk but also the scale and scope of impact.
This classification would help create a multi-layered oversight system, where chain-of-consciousness oversight would be particularly important for high-risk and high-impact AI systems.
Platform for AI Surveillance
One of the highlights and pioneering features of Vietnam's Law on Industry and Digital Technology is the requirement on transparency and identification marks.
Article 44 stipulates that AI systems that interact directly with humans must notify users that they are interacting with the AI system. At the same time, products created by AI must have identification marks.
This has important implications for the implementation of CoT oversight. When users know they are interacting with AI, they have the right to demand explanations of the decision-making process, creating positive pressure for AI developers to maintain the ability to monitor the AI’s thought process.
In particular, the fact that the Ministry of Science and Technology is assigned the responsibility of "issuing the List of digital technology products created by artificial intelligence" shows proactive management.
This is an important difference from many other countries, where AI regulations are often more general.
Furthermore, requiring an identifier “for human or machine recognition” represents a vision of an AI ecosystem that can police itself—which fits perfectly with the idea of automated thought chain surveillance.
Comprehensive management model
Article 45 of the above Law demonstrates a progressive management philosophy when clearly defining the responsibilities of 3 groups of subjects according to the AI product life cycle: Subjects developing, subjects providing and subjects deploying and using AI systems.
This creates a seamless end-to-end accountability system, ensuring that AI oversight is not just the responsibility of one party.
Notably, there is a subtle distinction between “developing” and “providing” AI; developers are those who “research and develop,” while providers are those who bring it to market under a brand name.
This means that even if a company simply rebrands an existing AI model, they are still responsible for transparency and explainability.

What is particularly interesting here is how the law allocates different responsibilities to each subject.
The developer must comply with all the principles, the provider must take additional responsibility for identification and high-risk management, and the user is “exempted” from some of the responsibility but must still ensure transparency and risk control.
This model creates a “chain of responsibility” where each link has an incentive to maintain oversight of the AI thinking chain, since all are accountable to the end user.
The challenge of maintaining surveillance capabilities
However, the ability to monitor the AI thought chain may remain fragile and vulnerable. New AI architectures may also pose other threats.
Many researchers are developing systems of reasoning in continuous mathematical space, rather than discrete words, that could completely eliminate the need to use language in thinking.
Furthermore, AI can become “filtered” — that is, it only shows humans positive thoughts to be appreciated, while deliberately hiding its true intentions.
The danger here is that once an AI realizes it is being monitored, it can learn to hide its true train of thought.
In this context, the provision on “risk control throughout the life cycle of an artificial intelligence system” in Point e, Clause 1, Article 41 of the Vietnamese Law becomes particularly important. This requires a continuous monitoring mechanism, not just at the time of deployment.
Impact on the future of AI development
Monitoring the thought chain could become an important tool to ensure AI operates in ways that benefit humans.
If models continue to reason in natural language and if the behaviors that pose the most serious risks require extensive reasoning, this practice could enable reliable detection of serious misconduct.
For Vietnam, applying CoT monitoring techniques will help effectively implement legal provisions.
For example, the “explainability” requirement in Article 41 would be easier to satisfy if the AI’s thought processes were accessible. Similarly, “control of algorithms, artificial intelligence models” would become more feasible.
The implementation of AI chain monitoring in Vietnam will face a number of challenges. First of all, there is the issue of human resources - a shortage of AI experts capable of developing and operating monitoring systems.
This requires heavy investment in training and talent attraction.
Directions for the future
The researchers call on leading AI model developers to research what factors make the CoT “monitorable” and what factors can increase or decrease transparency about how AI models work and come up with answers soon.
The opportunity to monitor AI “thinking” may be our last window into maintaining control over today’s increasingly powerful artificial intelligence systems.

For Vietnam, having a comprehensive legal framework on AI through the Law on Digital Technology Industry is a great advantage. Regulations on transparency, algorithmic control and risk classification have created a solid legal foundation for applying AI chain of thought monitoring techniques.
Combining cutting-edge international research and a progressive domestic legal framework will help Vietnam not only develop AI safely but also become a model for other countries in the region.
This is in line with the goal of turning Vietnam into a “regional and global digital technology hub” as set out in national development strategies.
With the existing legal foundation, Vietnam needs to quickly deploy research and practical applications on monitoring the AI chain of thought. Only by doing so can we ensure that AI will serve “human prosperity and happiness” as the spirit of the Digital Technology Industry Law has directed.
Source: https://dantri.com.vn/cong-nghe/giam-sat-chuoi-tu-duy-cua-tri-tue-nhan-tao-20250731151403739.htm
Comment (0)