
Is AI making us stupid?


In Singapore, unlike in many other countries, there is no significant resistance to the adoption of artificial intelligence (AI).

Society at large, despite occasional misgivings about jobs, privacy, and skill gaps, has been generally supportive of the use of AI for both work and play.

The government’s updated Smart Nation policy also focuses on the extensive use of AI to build a better future for Singapore.

 

But it is important to add a caveat here.

While the use of AI has grown, there is also a growing realisation, both in government circles and among the public, that, powerful as it is, AI is not a silver bullet that can solve all of humanity’s problems, and that it has several downsides.

These downsides include the possibility of bias in AI programmes, incorrect output caused by what are known as AI hallucinations, and the ever-present risk of the technology being used unethically by malicious actors.

 

While these negatives are well documented, some academic studies and expert groups are focusing on a less discussed concern, one that has come to the fore especially since the advent of generative AI (GenAI) assistants.

The premise of these studies is simple: Is the use of AI, particularly GenAI, making us stupid? In other words, even as productivity increases due to the use of AI, are we turning into idiot savants?

At first reading, this would appear to be something straight out of pop psychology.

How can using something that has been described as the most important technological breakthrough in human history make us stupid? Aren’t governments around the world scrambling to get more people to use AI for their daily tasks?

Unfortunately, it is not pop psychology.

An impairment of brain functions? 

 

What these studies are saying is that the unrestricted and extensive use of ChatGPT, Gemini and other GenAI large language models (LLMs) for routine work could impair our brain functions.

The study that everyone has been talking about, and which has generated a considerable amount of controversy, was conducted by the Massachusetts Institute of Technology (MIT) and published last month.

It shows that those who extensively use GenAI to write essays and other academic work have lower brain activity than those who use only their own thoughts to do similar work.

The study notes that LLM users underperformed at the neural, linguistic, and behavioural levels, a result that the researchers conclude raises concerns about the long-term educational implications of relying on LLMs.

Another study, published in the journal Societies, found that younger participants showed higher dependence on AI tools and lower critical thinking scores than older participants.

 

Yet another study, this one by Microsoft and Carnegie Mellon University on the use of AI by knowledge workers, has this to say in its conclusion: “Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.  

 

“When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.” 

 


 

The same broad conclusion emerges from three different studies: extensive use of GenAI LLMs could lower our native critical thinking ability.

One popular analogy: you go to the gym and sweat it out lifting weights to build muscle. If you get a robot to lift the weights for you, you may lift heavier weights for longer, but that does nothing for your muscles.

A case of cognitive offloading 

 

Extensive use of GenAI and LLMs can result in a phenomenon known as cognitive offloading, in which cognitive skills atrophy because individuals stop practising them.

 

According to this research paper, cognitive offloading refers to the “extensive reliance on external tools, particularly AI, [that] may reduce the need for deep cognitive involvement, potentially affecting critical thinking”. 

 

Truth be told, this is not a problem that has surfaced only with AI and GenAI.

As a thought experiment, how many of us can instantly say that 12 multiplied by nine equals 108 without reaching for a calculator?

Interestingly, those who had to memorise multiplication tables by rote during their school days, like this writer, will also remember a point in their lives when the answer would pop out instantaneously.

 

The extensive use of calculators, even in examination halls for certain subjects, has resulted in the cognitive offloading of our mental math skills to a machine. 

 

One could argue that with AI, this cognitive offloading has become turbocharged even as we become more productive and efficient in our daily work. 

Not everyone thinks AI makes us stupid 

 

Not everyone thinks using GenAI models makes one stupid. Nvidia CEO Jensen Huang is one of them.

 

When asked on CNN to comment on the MIT study, Huang said: “I have to admit, I’m using AI literally every single day. And I think my cognitive skills are actually advancing.”

 

He made a very important point during the interview: “You have to think when using AI.”

 

He added that his skills were improving because he was not asking the model to think for him.

Instead, “I’m asking it to teach me things that I don’t know. Or help me solve problems that otherwise [I] wouldn’t be able to solve reasonably,” he said.

 

The Microsoft study cited earlier made a similar point: the solution is the proper use of the technology, so that our cognitive faculties are “preserved”.

 

The researchers said AI tools should be designed to encourage critical thinking in humans rather than replace it. 

 

However, an article on the study notes that AI development is expensive, and anything that adds to the bill is unlikely to be considered as tech companies scramble to make a buck and gain market dominance.

Government policy could help 

 

What Nvidia’s Huang and the Microsoft study say makes sense: users, including public sector officers, need to be taught how to use AI intelligently and to see it as just another tool in their toolbox, not a panacea for all problems.

Unlike in many other countries, where the use of AI has been pioneered by the private sector with the public sector playing catch-up, Singapore’s government agencies have been pioneers in the use of AI.

The government’s push to encourage the use of AI tools by public officials is well documented.

AI tools such as the AI assistant Pair, the AI writing assistant SmartCompose and a GenAI tool called AI Bots help public officials in their daily jobs. These tools are complemented by department-specific AI chatbots and LLMs used for various other tasks, including customer-facing ones.

 

Singapore is acknowledged as a leader in developing clear guidelines on the ethical use of AI.

Maybe it is time to look at drafting guidelines on the correct way to use AI, and at how these programmes should be tuned to ensure that users’ critical thinking abilities remain intact.

The objective should be not to replace humans with programmes but to complement them.




AI isn’t just for coders: 7 emerging non-tech career paths in artificial intelligence


Artificial intelligence is no longer the future. It’s already shaping how we live, work, and learn. From smart assistants to personalised learning apps and automated hiring tools, AI is now part of everyday life. But here’s something many students still don’t realise — you don’t have to be a computer science genius to build a meaningful career in AI.

In 2025, AI needs more than just coders. It needs people who understand ethics, design, communication, psychology, policy, and human behaviour. Whether you’re studying law, liberal arts, design, economics, or media, there is space for you in this fast-growing field. These emerging roles are all about making AI more responsible, more human, and more useful.

Here are seven exciting non-tech career paths in artificial intelligence that you can start exploring now.

AI ethics specialist

AI systems make decisions that can affect real lives — from who gets hired to who receives a loan. That’s why companies and governments need experts who can guide them on what’s fair, what’s biased, and what crosses a line. Ethics specialists work closely with developers, legal teams, and product leaders to make sure AI is built and used responsibly.

Best suited for: Students from philosophy, sociology, law, or political science backgrounds

Where to work: Tech companies, research institutes, policy think tanks, or digital rights NGOs

AI UX and UI designer

AI tools need to be easy to use, intuitive, and accessible. That’s where design comes in. AI UX and UI designers focus on creating smooth, human-centered experiences, whether it’s a chatbot, a virtual assistant, or a smart home interface. They use design thinking to make sure AI works well for real users.

Best suited for: Students of psychology, graphic design, human-computer interaction, or visual communication

Where to work: Tech startups, health-tech and ed-tech platforms, voice and interface design labs

AI policy analyst

AI raises big questions about privacy, rights, and regulation. Governments and organisations are racing to create smart policies that balance innovation with safety. AI policy analysts study laws, write guidelines, and advise decision-makers on how to manage the impact of AI in sectors like education, defense, healthcare, and finance.

Best suited for: Public policy, law, international relations, or development studies students

Where to work: Government agencies, global institutions, research bodies, and policy units within companies

AI behavioural researcher

AI tools influence human behaviour — from how long we scroll to what we buy. Behavioural researchers look at how people respond to AI and what changes when technology gets smarter. Their insights help companies design better products and understand the social effects of automation and machine learning.

Best suited for: Students of psychology, behavioural economics, sociology, or education

Where to work: Tech companies, research labs, social impact startups, or mental health platforms

AI content strategist and explainer

AI is complex, and most people don’t fully understand it. That’s why companies need writers, educators, and content creators who can break it down. Whether it’s writing onboarding guides for AI apps or creating videos that explain how algorithms work, content strategists make AI easier to understand for everyday users.

Best suited for: Students of journalism, English, media studies, marketing, or communication

Where to work: Ed-tech and SaaS companies, AI product teams, digital agencies, or NGOs

AI program manager

This role is perfect for big-picture thinkers who love connecting people, processes, and purpose. Responsible AI program managers help companies build AI that meets ethical, legal, and user standards. They coordinate between tech, legal, and design teams and ensure that AI development stays aligned with values and global standards.

Best suited for: Business, liberal arts, management, or public administration students

Where to work: Large tech firms, AI consultancies, corporate ethics teams, or international development agencies

AI research associate (non-technical)

Not all AI research is about coding. Many labs focus on the social, psychological, or economic impact of AI. As a research associate, you could be studying how AI affects jobs, education, privacy, or cultural behaviour. Your work might feed into policy, academic papers, or product design.

Best suited for: Students from linguistics, anthropology, education, economics, or communication studies

Where to work: Universities, research labs, global think tanks, or ethics institutes

The world of AI is expanding rapidly, and it’s no longer just about math, code, and machines. It’s also about people, systems, ethics, and storytelling. If you’re a student with curiosity, critical thinking skills, and a passion for meaningful work, there’s a place for you in AI — even if you’ve never opened a programming textbook.






Google AI Mode is getting a bigger AI brain from Gemini




  • Google has upgraded its AI Mode with the advanced Gemini 2.5 Pro
  • AI Mode has also added Deep Search, which can now run hundreds of background searches
  • A new calling tool built into Search lets Google call businesses on your behalf

Google is continuing its efforts to get you to use AI Mode when searching online, adding new and enhanced AI tools. The conversational search tool now offers Google’s Gemini 2.5 Pro AI model, along with the long-form report-writing tool Deep Search.

Google AI Pro and AI Ultra subscribers in the U.S. who are also part of the AI Mode experiment in Search Labs will now see an option to choose Gemini 2.5 Pro when asking tough questions as well.




Teachers gather to talk artificial intelligence in the classroom



HUNTSVILLE, Ala. (WHNT) — Our world is constantly evolving, and lately, a lot of that evolution has come in the form of artificial intelligence.

“This is the future,” Kala Grice-Dobbins said. “It’s not going away, and we want our teachers to be informed, but also our students to be informed.”


Grice-Dobbins is a cybersecurity teacher with the Madison County School System.

On Thursday, more than 150 teachers from across North Alabama gathered to talk about AI and its use in the classroom.

“It’s clearly a novel technology – new for kids, new for teachers, and they’re trying to figure out how to use it,” Randy Sparkman said. “So we’re just trying to bring resources and bring these Madison County districts, particularly, together to talk about strategies for using AI in the new school year.”

Sparkman is part of Mayor Tommy Battle’s AI task force, which put on the AI in education event.


Grice-Dobbins said she uses AI for help with things like lesson plans and recommendation letters.

“All of us use templates every day,” she said. “Why can’t it be our template to start with, and then we edit it because nothing’s perfect when it comes out.”

She said it’s easier than you might think to spot students who use the tool to plagiarize.

“It’s not going to be your top of the line type paper,” she said. “It’s not going to be written in their kind of language. It’s not going to have their kind of thoughts involved, and so the more you know your students, you’re going to know this is not you.”

Angela Evans is also a teacher. She said she’s already been using AI in her classroom for years.

She has a message for those who may be skeptical.

“Don’t be scared because change is nature,” she said. “We are going to progress our humanity. Our intelligence is going to continue to progress.”



