Youths risk ‘brainlessness’ with extreme AI dependence

At 2 a.m., Thu Phuong, 23, was still hunched over her keyboard in Hanoi. Following an argument with her boyfriend, she was asking a virtual assistant for advice on whether she should break up with him.

After two hours of pouring her heart out only to receive generic responses, she proceeded to prompt the AI to cast a digital horoscope to determine the fate of her relationship. “For over a year now I’ve asked AI before doing anything,” she says.

She uses the tool for everything, from choosing outfits that match her astrological element and planning her grocery list, to drafting text messages to her boss.

This dependency once landed her in the hospital.

When she had a fever caused by stomach ulcers, instead of seeing a doctor, she entered her symptoms into an AI chatbot and asked for a prescription. “The app gave me a standard cold prescription based on the signs I described. After two days of self-treatment, my condition worsened,” she says.

Thu Phuong, 23, of Hanoi asks AI to tell her fortune. Photo by T.T

Reliance on AI extends beyond personal lives, seeping deep into academic settings. Bao Minh, a sophomore at a Hanoi university, says his world has now shrunk to a single input box.

Instead of visiting the library, he opts to feed all his term paper prompts into machines. The 19-year-old says: “With just one good prompt, I get a complete essay instantly. It’s so fast that it makes me lazy to think.”

He admitted to receiving a warning for submitting an assignment filled with data fabricated by AI.

At his part-time job, too, colleagues have noticed something is off: Minh's messages consistently sound formulaic, robotic and devoid of emotion.

Many young people like Phuong and Minh are falling into the “AI Brainlessness” syndrome.

Instead of treating it as a supportive tool, they outsource full decision-making authority to artificial intelligence models, using AI not just to write code or emails but also for "asking" (seeking decisions) and "expressing" (confiding or venting).

Data from mobile app analytics firm Sensor Tower shows that in the first half of 2025 Vietnamese users logged 7.5 billion sessions on AI applications.

On average, a person engaged in 75 interactive sessions per month, far exceeding the global average of 50. A digital economy report published by Google in March also indicated that Vietnam leads Southeast Asia in AI adoption, with 81% of surveyed individuals interacting with it daily.

Behind this enthusiasm lies a tangible risk. A four-month study by the Massachusetts Institute of Technology in the U.S. found that frequent users of AI chatbots show signs of memory decline and reduced cognitive flexibility.

Chu Tuan Anh, director of Aptech International Programmer Education System, says this overreliance leads to “brainlessness” across three levels: cognitive laziness, where users completely forget the content they just processed shortly afterward; loss of skills, characterized by complete dependence on the tools; and the most dangerous level, “cognitive blindness” or the inability to distinguish right from wrong, blindly believing whatever the machine spits out.

“If machines are left to decide everything from what to eat to how to confess love, within just 6 to 12 months, young people will fall into a state of cognitive blindness, feeling their lives are meaningless due to a loss of personal identity,” he warns.

Dinh Ngoc Son, a lecturer at the Hanoi School of Business and Management, Vietnam National University, calls this a negative shift in cognitive structure, explaining that humans are transitioning from a cycle of “self-collection and self-analysis” to one of “asking questions and selecting options.”

“They fall into a state of knowing how to ask but no longer knowing how to think. The aspects that help humans mature, such as love, conflict and failure, are being scripted. If we consult a machine for everything before actually living it, we will gradually lose our ability to learn from life itself.”

On a broader scale, the consequences could be a declining workforce and a drop in productivity.

Anh says: “Without the right direction, in three to five years we will have a generation that knows how to use AI but does not know how to do anything else, losing our competitive edge against nations that have preserved original thinking.”

Some generative AI applications on mobile phones: Copilot, DeepSeek, Gemini, AI Hay, ChatGPT and Grok. Photo by Luu Quy

To remedy this, experts propose the “3T” rule: Think first (try to find the answer on your own beforehand), Tool – not Tutor (treat AI strictly as an instrument, not a teacher), and Teach back (express the learned information in your own words).

This is exactly the principle followed by Anh Duc, 29, a communications manager who uses AI daily to optimize his workload.

He always establishes a clear “cognitive framework” before typing a prompt. Upon receiving the output, he makes sure to cross-check the information. “You can’t ask AI to build a house if you don’t even know what a brick looks like,” he says.

Experts agree there are absolute "no-go zones" that should never be handed over to AI, such as moral decisions, marriage and core emotional experiences.

Technology might be able to draft a flowery love confession, but it cannot replace the genuine flutter of a human heart, they point out. “Technology is only truly progressive if humans use it to broaden their horizons, not to shut down their thinking,” Son adds.
