Facebook in 2004. YouTube in 2005. Twitter, now X, in 2006. WhatsApp in 2009. Instagram in 2010. A slew of social media platforms made their way into people's lives, flattening the world for the ordinary person anywhere on the planet. These platforms evolved and took on many different avatars over the last decade, and we are still trying to cope with their influence on our social and economic conditions.
Today, we are in a similar situation again. A new technology, artificial intelligence, is having a similar impact at a larger magnitude. For the world, it was an overnight surprise in November 2022 when OpenAI launched its AI chatbot, ChatGPT. Not to be left behind in the race, tech giants and AI startups announced agents of their own. Microsoft launched Copilot. Google announced Gemini. Anthropic announced Claude. Perplexity became the new kid on the block.
Unlike social media, these large language model (LLM) based AI agents have the ability to think for us. Conceptualized in the world of robotics, unmanned transport, and supercomputers, LLMs trained on vast data understand, summarize, translate, and generate human language. They mimic the cognitive processes of human thinking to answer human queries.
A cut above search engines, these generative AI tools function as modern-day encyclopedias. Augmented encyclopedias with critical thinking capabilities.
Labeling these AI agents tools is a misnomer. Tools are a means to complete a task; AI performs the task. Having passed the Turing test, these AI agents are machines working with us to produce improved results. Achieving those results requires humans in the loop, controlling the thinking and decision-making processes. Abdicating that control to machines is a dangerous choice. In the era of social media, loss of control led to addiction.
Artificial intelligence is not so benevolent. Losing control to AI models results in the deterioration of basic cognitive function. A study by Microsoft and Carnegie Mellon on the impact of generative AI on critical thinking expands on this: participants self-reported reductions in cognitive effort and confidence.
From my vantage point, LLM-based AI agents are the new Wikipedia. Tasks are addressed by directly pasting the task description, a client requirement, or a platform issue into an LLM-based AI agent. Students submit AI-generated assignments. Social media users ask AI agents for context instead of exploring information themselves. There is no individual curiosity or thought to understand the issue and build experience.
The expertly crafted responses to prompts are viewed as authoritative outcomes. No attempt is made to check, verify, and authenticate the automated responses. With such extensive reliance on automation, the effort put into understanding, comprehending, and resolving tasks is on the decline. It creates the perception that AI is a lot smarter than human intelligence.
On the flip side, while AI can improve efficiency, it does not automatically improve productivity. Efficient use of AI demands more effort, not less: evaluating, editing, and aligning responses with real requirements. The empathetic tone and precise language can lull even experienced workers into trusting AI-generated responses. Operating under such assumptions without the prerequisite know-how creates information blind spots and can produce unintended outcomes.
Be it kids in school or working-age adults, the outcomes are the same. Neuroscientist and educator Dr Jared Horvath, testifying before the US Senate Committee on Commerce, Science, and Transportation, presented findings similar to those in the Microsoft-Carnegie Mellon study: high levels of exposure to screen-based learning are associated with a similar decline in cognitive abilities. Academic research into education technology (ed-tech) since the 1960s has concluded that introducing technology into classrooms deteriorated cognitive abilities.
Here is a laundry list of those cognitive abilities: basic attention, memory, literacy, numeracy, executive functioning, and general IQ.
Like the algorithms of social media platforms, designed to increase engagement, generative AI models respond to queries in an empathetic tone. The proffered empathy lulls users into a sense of safety. Feeling safe, users grow dependent on AI, transferring their cognitive functions to machines. The brunt of this technology is acute in the field of education.
In the first phase, the ed-tech boom digitized education. Presented as an alternative to the classroom and a way to diffuse knowledge, it destroyed attention spans and writing skills. With the introduction of AI, we have accelerated the decline. Beyond attention span and writing, technology is now aiding in the loss of reading, learning, thinking, and decision making. Diminished cognitive function does not just inhibit a child's educational prowess; such skill deficiencies make it harder to gain employment and build a life.
To counter this, nations like Sweden and Denmark are overhauling their education policies. Sweden is investing heavily in a return to conventional pen-and-paper pedagogy. Officials cited a decline in reading proficiency and foundational skills as the reason for the shift. The country is spending $60 million (104 million euros) on the reversal. The new policy aims to reintroduce printed textbooks, focusing on handwriting, reading, and physical book use in classrooms. The plan is to have one textbook per subject for classroom learning.
The idea is to curb dependency on technology and reduce screen time. Denmark has banned mobile phones in many schools and restricted the use of tablets and laptops for children under the age of 6 to ensure they learn to read and write on paper. A significant change is also happening on university campuses in America: the rampant use of AI in homework and examinations has forced professors to bring back pen-and-paper tests.
With varied levels of global optimism regarding the adoption of AI, it behooves us to pause. We have a few questions to ponder. Are we repeating the mistakes we made adapting to social media? Taken in by the new wunderkind on the block, are we using the new technology without understanding its pitfalls? Social media created a crisis of privacy. AI might create a far more dangerous one: a crisis of people unable to access knowledge and information on their own. Such machines require judicious use, with guardrails set before we end up outsourcing our brains.