In Singapore, unlike many other countries, there is no significant resistance to the adoption of artificial intelligence (AI).
Society at large, despite occasional misgivings about jobs, privacy, and skill gaps, has been generally supportive of the use of AI for both work and play.
The government’s updated Smart Nation policy also focuses on the extensive use of AI to build a better future for Singapore.
But it is important to add a caveat here.
While the use of AI has grown, there is also a growing realisation, both in government circles and among the public at large, that AI, powerful as it is, is not a silver bullet that can solve all of humanity’s problems, and that it has several downsides.
These downsides include the possibility of bias in AI systems, wrong output due to what is known as AI hallucinations, and the ever-present risk of the technology being used unethically by malicious actors.
While these negatives are well-documented, some academic studies and expert groups are focusing on a less discussed area that has come to the fore, especially since the advent of generative AI (GenAI) assistants.
The premise of these studies is simple: Is the use of AI, particularly GenAI, making us stupid? In other words, even as AI boosts our productivity, are we turning into idiot savants?
At a first reading, this would appear to be something straight out of pop psychology.
How can using something that has been described as the most important technological breakthrough in human history make us stupid? Are governments around the world not scrambling to get more people to use AI for their daily tasks?
Unfortunately, it is not pop psychology.
An impairment of brain functions?
What these studies are saying is that the unrestricted and extensive use of ChatGPT, Gemini, and other GenAI large language models (LLMs) for routine work could impair our brain functions.
The study that everyone has been talking about, and which has generated considerable controversy, was conducted by researchers at the Massachusetts Institute of Technology (MIT) and published last month.
It shows that those who extensively use GenAI to write essays and other academic work display lower brain activity than those who rely solely on their own thinking to do similar work.
The study notes that LLM users underperformed at the neural, linguistic, and behavioural levels, a result that the researchers conclude raises concerns about the long-term educational implications of LLM reliance.
Another study, published in the journal Societies, found that younger participants showed a higher dependence on AI tools and lower critical thinking scores than older participants.
Yet another study, this one by Microsoft and Carnegie Mellon University on the use of AI by knowledge workers, has this to say in its conclusion: “Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort.
“When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.”
Three different studies, then, reach the same broad conclusion: Extensive use of GenAI LLMs could lower our native critical thinking ability.
One popular analogy: you go to the gym and sweat it out lifting weights to build muscle. If you get a robot to lift the weights for you, you may lift heavier weights for longer, but that does nothing for your muscles.
A case of cognitive offloading
Extensive use of GenAI and LLMs can result in a phenomenon known as cognitive offloading, which can lead to an atrophy of cognitive skills when individuals stop practising them.
According to this research paper, cognitive offloading refers to the “extensive reliance on external tools, particularly AI, [that] may reduce the need for deep cognitive involvement, potentially affecting critical thinking”.
Truth be told, this is not just a problem that has surfaced with AI and GenAI.
Consider a thought experiment: how many of us can instantly say that 12 multiplied by nine equals 108 without reaching for a calculator?
Interestingly, those who had to memorise multiplication tables by rote during their school days, like this writer, will also remember that there was a point in their lives when the answer would pop out instantly.
The extensive use of calculators, even in examination halls for certain subjects, has resulted in the cognitive offloading of our mental arithmetic skills to a machine.
One could argue that with AI, this cognitive offloading has become turbocharged even as we become more productive and efficient in our daily work.
Not everyone thinks AI makes us stupid
Not everyone thinks using GenAI models makes one stupid. Nvidia CEO Jensen Huang is one of them.
When asked on CNN to comment on the MIT study, Huang said: “I have to admit, I’m using AI literally every single day. And I think my cognitive skills are actually advancing”.
During the interview, he made a very important point: “You have to think when using AI”.
He added that the reason behind his skills improving was that he was not asking the model to think for him.
Instead, “I’m asking it to teach me things that I don’t know. Or help me solve problems that [I] otherwise wouldn’t be able to solve reasonably,” he said.
The Microsoft study cited earlier made a similar point: the solution lies in the proper use of technology, so that our cognitive faculties are “preserved”.
The researchers said AI tools should be designed to encourage critical thinking in humans rather than replace it.
However, an article on the study notes that AI development is expensive, and anything that adds to the bill is unlikely to be prioritised as tech companies scramble to turn a profit and gain market dominance.
Government policy could help
What Nvidia’s Huang and the Microsoft study say makes sense: users, including public sector officers, need to be taught how to use AI intelligently and to recognise that it is just another tool in their toolbox, not a panacea for all problems.
Unlike in many other countries, where the private sector has pioneered the use of AI and the public sector has played catch-up, Singapore’s government agencies have led the way.
The government’s push to encourage the use of AI tools by public officials is well-documented.
AI tools such as the AI assistant Pair, the AI writing assistant SmartCompose, and a GenAI tool called AI Bots help public officials in their daily jobs. These tools are complemented by department-specific AI chatbots and LLMs used for various other tasks, including customer-facing ones.
Singapore is acknowledged as a leader in developing clear guidelines on the ethical use of AI.
Perhaps it is time to draft guidelines on the correct way to use AI, and on how these programs should be tuned, to ensure that users’ critical thinking abilities remain intact.
The objective should be not to replace humans with programs but to complement them.