Image Credit - Freepik


The Human Algorithm: Navigating Our Future with Thinking Machines

Artificial intelligence permeates modern life, sparking both wonder and profound unease. As creatives, academics, and everyday citizens grapple with its burgeoning capabilities, a critical question emerges: how do we harness AI's potential without sacrificing the essence of human experience, creativity, and connection? The debate intensifies as these systems become more sophisticated, promising unparalleled efficiency while simultaneously challenging established norms in work, art, and even human interaction.

A growing chorus of voices urges caution, questioning the unchecked proliferation of generative AI. These systems, trained on vast datasets of existing human output, raise fundamental concerns about originality, intellectual property, and the very definition of creativity. The allure of machine-generated content clashes with a deep-seated human desire for authenticity and the unique spark of individual insight. This tension defines the current AI moment, a period of rapid advancement demanding equally swift ethical consideration and societal adaptation. The choices made now will undoubtedly shape the human-AI relationship for generations.

The Unsettling Mirror: When AI Misrepresents Reality

Ewan Morrison, a novelist, personally experienced the sometimes humorous yet disturbing unreliability of generative AI. Curious about its grasp of his own work, he prompted ChatGPT to list his published novels. The AI confidently presented twelve titles. Morrison has, in fact, written nine. The system, in its eagerness to fulfil the request, invented three entirely fictitious books, among them the provocatively named "Nine Inches Pleases a Lady." Morrison found that this concocted title borrowed its key wording from an evocative poem by Robert Burns. The experience solidified his distrust of such systems for factual accuracy. He wryly notes that his latest actual book, "For Emma," published in March 2025, explores technology's societal impact through the story of AI-enabled brain implants. Morrison consistently voices concerns about AI's impact on truth and its application within creative fields.

The Resistors: A Growing Movement of AI Sceptics

Morrison is not alone in his reservations. He observes the capabilities of systems like OpenAI’s ChatGPT but consciously avoids integrating them into his personal life or professional work. He belongs to an expanding group of individuals deliberately pushing back against the widespread embrace of generative AI. This collective encompasses those profoundly concerned about artificial intelligence's capacity for damage and its unconstrained influence, who are unwilling to contribute to its development by feeding it more data. Others have simply found current AI iterations to be unreliable or more cumbersome than beneficial. A significant segment fundamentally prefers human interaction and creation over automated alternatives. This resistance is not necessarily born of technophobia, but rather a critical evaluation of AI's true worth and its conceivable effects on society.

Online Discourse: Luddites, Hipsters, and Handcrafted Words

Online, proponents of AI often label individuals spurning it as uninformed traditionalists, or more bitingly, as self-satisfied trend-followers. Navigating these discussions reveals a spectrum of engagement with AI. Some friends praise ChatGPT's utility for child-rearing guidance, and professional colleagues incorporate it widely within their consulting practices, yet many individuals, after initial experimentation following its 2022 launch, have stepped back. The process of crafting articles through organic thought and human language, even if humorously depicted as coming from a bespoke creative workspace (or, frankly, a bed), offers a satisfaction that AI generation lacks. Direct conversations with interview subjects, person to person, build rapport and provide deeper, more subtle understanding than an AI might extract by scouring social platforms and academic documents. These personal interactions, sometimes punctuated by the delightful interruption of a pet, often spark laughter and genuine understanding – elements AI, even if used for transcription, cannot replicate.

The "Decel" Taunt and the Reality of AI Inaccuracy

On X, the platform where Ewan Morrison occasionally debates AI proponents, a frequent jibe aimed at doubters is "decel," an abbreviation of decelerationist. Morrison finds this amusing, particularly when people suggest he is the one failing to keep pace. He argues that no factor impedes "accelerationism" more thoroughly than not fulfilling pledged outcomes. Encountering a significant roadblock, he observes, is a very efficient way to decelerate. Studies show that artificial intelligence often provides incorrect answers to a substantial portion of questions. For instance, while some generative AI models can perform tasks like providing driving directions with apparent precision, they may lack a true underlying understanding of the environment, leading to failures when faced with unexpected changes or disruptions. This highlights a critical issue: AI's impressive outputs do not always equate to coherent understanding.

Superintelligence: A Venture Capital Fantasy?

His first foray into the AI discussion, Morrison recalls, was prompted by what he currently terms "exaggerated anxieties" regarding the possibility of super-intelligent and runaway AI. However, the deeper his investigation, the more he concluded that this narrative serves a specific purpose. He views the notion of artificial superintelligence as a narrative presented to worldwide investors, motivating them to channel vast sums into this pursuit. Indeed, substantial venture capital funding flows into generative AI, with significant growth projections for the industry. This endeavor, he thinks, represents a delusion, an outcome of venture capital funding that has become excessive.


Copyright Concerns and Algorithmic Stagnation

Beyond the grand narratives, practical threats loom. Infringements of copyright deeply trouble authors such as Morrison and Emily Ballou, his screenwriter wife. Systems of generative AI learn from pre-existing works, frequently without consent, thereby threatening their professional stability. Governments, including in the UK, have been examining proposals to clarify copyright laws concerning AI training, aiming to balance creator protection with AI innovation. However, many in the creative industries feel such proposals, particularly those leaning towards "opt-out" systems, unfairly burden creators and favour tech industry profitability. Within the entertainment sector, Morrison notes an alarming development: algorithmic processes increasingly determine project approval. He contends this confines creative individuals to reproducing previous achievements because algorithms naturally suggest "additional similar content"—their sole capacity. This algorithmic preference for repetition stifles originality.

A Litany of Grievances: From Job Losses to AI Weapons

The catalogue of Morrison's grievances concerning AI has expanded substantially in the past few years. He highlights the possibility of widespread unemployment, a worry reflected in forecasts such as the one from Bill Gates, who envisioned AI facilitating a work schedule of two days weekly. Tech addiction, the significant ecological impact of AI's computational demands, and detriment to educational frameworks—where student AI deployment is now common for activities like evaluations—are also key issues. How technology firms monitor individuals to tailor AI experiences presents another ethical problem for Morrison, alongside the appalling deployment of AI-assisted armaments in hostilities such as those in Ukraine and Gaza. He finds this application "ethically revolting." The global debate on lethal autonomous weapons systems continues, focusing on accountability and the morality of machines making life-or-death decisions.

The Environmental Toll of Digital Thought

Anxieties of a similar nature are voiced by April Doty, who narrates audiobooks, especially concerning the ecological burden of AI. Formidable computational energy is needed for every AI query and its corresponding answer. "It's maddening that disabling AI summaries in Google searches isn't possible," she says, underscoring how routine internet actions now play a part in harming the environment; this frustration has led her to explore alternative search engines. Troubled by the increasing ubiquity of AI and the lack of an "off switch," she consciously avoids using AI wherever she has the option. This proactive disengagement reflects a growing awareness of AI's hidden costs and a desire to mitigate its negative externalities, particularly the carbon footprint associated with training and running large AI models.


The Robotic Narrator: AI in Audiobooks

In her professional domain of audiobook narration, a profound concern for Doty is the quantity of literary works being vocalised by automated systems. The major audiobook provider Audible, under Amazon's ownership, declared not long ago its intention to let publishing houses produce audiobooks with its AI tools. Doty comments, "No one I'm aware of desires a mechanical voice for their stories," worrying this development will diminish the listening quality so much that individuals cease using audiobook services. Although AI has not yet displaced her professionally, some peers have experienced this, and she perceives it as a growing menace. People who narrate do not merely utter phrases; they perceive and convey the emotions beneath them, an ability artificial intelligence, without years of human existence, cannot emulate.

The Absence of an Author: Reading in the Age of AI

Emily M. Bender, a linguistics professor at the University of Washington and co-author of "The AI Con," presents multiple justifications for shunning large language models (LLMs) such as ChatGPT. Her main objection is her lack of desire to peruse material not authored by any individual. She further explains her reading motivation: to comprehend an individual's perspective on a topic, noting the absence of any "individual" within machines that mechanically generate text. For Bender, AI-generated text is merely a collage of words from many different people, lacking a singular, authentic authorial voice. This perspective challenges the notion of AI as a creator, reframing it as a sophisticated aggregator at best. Her critique underscores the value of human authorship as a conduit for understanding and connection.

Left Behind? Or Questioning the Destination?

Professor Bender chuckles when faced with the assertion from AI supporters that she is falling behind. She replies, "Absolutely not," and then poses her own query: "To what destination is everyone headed?" Her implication is clear: the destination may not be desirable. According to Bender, choosing synthetic media over genuine content made by humans results in diminished human bonds. This impacts individuals by diminishing their personal enrichment from engaging with others, and it weakens communities by eroding shared human experiences. She suggests this change in technology might represent a deliberate corporate plan to separate people, channeling all their engagements via company offerings. This highlights a concern that AI could foster dependence on platforms rather than direct human relationships.


AI in Academia: A Deprivation of Learning

Despite Professor Bender's well-publicised critiques of LLMs, even she has encountered students submitting AI-generated work. She describes this as "very sad." Her focus is not on policing or blaming students, but on helping them understand a crucial point: using an LLM robs them of a significant chance for education. Genuine learning happens through the struggle with tasks, investigation, and the formulation of personal ideas. The widespread use of AI among students is evident, yet support for developing AI skills and clear institutional policies often lag behind. Institutions grapple with formulating clear AI guidelines, and while many students acknowledge their existence, concerns about academic misconduct and the reliability of AI results persist.

Boycott or Individual Choice: The Path of Resistance

Regarding whether individuals ought to collectively refuse generative AI, Professor Bender reflects on the idea. She contemplates that "boycott" implies a structured political move, adding, "Certainly, that's a possibility." However, beyond collective action, she strongly maintains that individuals generally find themselves in a more favorable situation if they abstain from their use. This reflects a dual approach: recognising the potential for systemic change through organised efforts, while also empowering individuals to make choices that enhance their own well-being and intellectual development. Her stance encourages a conscious decoupling from technologies that may not serve genuine human needs, advocating for a more discerning engagement with the digital world. This perspective champions critical thinking over passive consumption of AI-generated content.

The Reluctant Adopter: AI in the Workplace

Some individuals, despite initial resistance, find themselves reluctantly considering AI adoption due to professional pressures. Tom, an IT professional in government service, refrains from using AI for his technological duties. However, he discovered colleagues leveraging it for other purposes, such as writing the annual appraisals that contribute to promotion decisions. A supervisor, whose remarkable self-assessment Tom had admired, disclosed using ChatGPT to finish it within ten minutes rather than several days. This supervisor recommended Tom follow suit, intimating that his professional advancement could otherwise be hindered. To Tom, such AI use seems dishonest, yet he worries that abstaining now puts him at a clear professional disadvantage, presenting a moral quandary where principles may need to yield to career aims.

Controlled AI Use: The "Grunt Work" Assistant

Others, while harbouring misgivings, adopt a strategy of limited and specific AI use. Steve Royle, a professor specializing in cell biology at Warwick University, utilizes ChatGPT to handle the laborious aspects of generating computer scripts for data interpretation. However, he strictly limits its application. He clarifies, "My preference is not for it to create programming from the beginning. Allowing that results in significantly more time spent correcting errors later." In his estimation, assigning excessive responsibility to AI proves to be an inefficient use of time. Beyond practical concerns, he worries that over-reliance on AI could cause his coding skills to atrophy. He rejects the assertion from supporters that specialized knowledge will eventually become obsolete for everyone, stating with conviction, "That is not a view I share."

Writing as Thinking: Rejecting AI for Text Generation

A substantial portion of Professor Royle's work entails drafting research documents and grant submissions. For these critical tasks, he declares with certainty, "Under no circumstances will I employ it for textual creation." Expanding on his rationale, he says, "For myself, the act of writing involves shaping thoughts; subsequent revision and refinement truly solidify the intended message." Having a machine undertake this intellectual labour, he believes, fundamentally misunderstands the purpose of writing. For Royle, writing is not merely transcribing pre-formed thoughts but is an integral part of the thinking process itself, a way to refine and solidify complex ideas. This perspective champions the cognitive benefits of human authorship.


AI: A Societal Misstep Hollowing Out Humanity?

Justine Bateman, who creates films and writes, provides a blunt evaluation: generative AI represents "among the most regrettable concepts society has produced." She detests what she sees as its effect of diminishing human capability. Tech firms, Bateman contends, are "attempting to persuade individuals of their inability to perform tasks accomplished effortlessly for many years – composing electronic mail, preparing presentations." Even straightforward, intimate actions, such as a parent inventing a bedtime tale about puppies for their child, could be delegated to AI.

She depicts a bleak outlook where people are reduced to "merely a biological shell—skin, organs, and bones, with nothing further," lacking understanding and consistently informed of their ineptitude. This, she contends, is the antithesis of what life offers. She notes people already cede decisions on holidays, attire, dating, and diet to AI, and warns that even grief might be outsourced, with AI enabling video calls with deceased loved ones. She is apprehensive this will result in a systematic emotional depletion of individuals well in advance of any worldwide disaster.

The Blender Analogy: AI as Theft and Regurgitation

Justine Bateman shows no inclination towards this AI-led path, declaring it "the utter antithesis of my aspirations as a motion picture creator and writer." She compares generative AI to a food processor: one inputs countless instances of the desired item, and it yields a "Frankensteinian portion" of the mixture. This operation, she maintains, constitutes at its core intellectual property theft and simple repetition. She states with absolute certainty, "No truly novel creation can emerge from it, due to its inherent characteristics." Bateman believes that any artist who uses generative AI is, in effect, "stopping their creativity." Her critique challenges the notion of AI as a creative partner, instead framing it as a derivative tool that homogenises artistic expression. She warns it may also induce a "mental block," making it harder for human artists to conceive their own unique concepts after exposure to AI-generated proposals.

Resisting AI in Film: The Credo 23 Initiative

Although certain animation houses, such as Studio Ghibli, have made public declarations against AI use, others seem keen to adopt it. Jeffrey Katzenberg, a co-founder of DreamWorks, made a contentious forecast in 2023: AI would reduce animated motion picture costs by up to 90 per cent. Conversely, Bateman anticipates that consumers will ultimately grow weary of material generated by AI, likening it to unhealthy processed food—initially attractive to a few, yet finally insubstantial, causing many to disengage. As a countermeasure, she founded Credo 23, an organisation and film festival focused on presenting motion pictures produced free of AI involvement. She equates Credo 23 with an "organic certification for cinema," guaranteeing viewers that artificial intelligence played no part in the film's production. Individuals, Bateman foresees, will "crave experiences that are unadulterated, authentic, and deeply human." The Credo 23 Film Festival aims to platform human-driven filmmaking, respecting the traditional process.


A Parallel Universe: Selective Tech Engagement

Within her private life, Justine Bateman consciously endeavors "to exist in a separate reality, striving to circumvent [AI] to the maximum extent." She stresses this is not an anti-technology stance. She explains, "My background is in computer science; I appreciate technology. I also enjoy salt, but I refrain from adding it to all foods." This nuanced approach advocates for mindful and selective technology adoption, rather than wholesale integration of every new innovation. It underscores the importance of human agency in deciding which tools serve genuine human purposes and which might detract from essential human experiences or values. Her perspective encourages a critical assessment of technology's role, rather than passive acceptance.

Technophiles with Conscience: A Common Thread

Interestingly, all individuals interviewed express a degree of technological inclination. April Doty characterizes herself as "quite technologically progressive," yet she places higher value on interpersonal human relationships, which she perceives AI as endangering. She bemoans, "Our collective drift resembles that of unthinking automatons towards a future that virtually no one genuinely desires." Professor Royle develops software and manages server systems, while concurrently defining himself as a "principled dissenter regarding AI." The expertise of Professor Bender is in computational linguistics; Time magazine in 2023 recognized her among the leading 100 figures in the AI field.

She states, "My identity is that of a technologist, yet my conviction is that communities ought to develop technology to serve their objectives, not large enterprises for corporate aims." She also remarks, laughing, "The historical Luddites were remarkable! That is a label I would proudly accept." Similarly, Ewan Morrison expresses regard for the Luddites, viewing them as "individuals who courageously defended the livelihoods crucial to their households and the vitality of their local groups." This common thread reveals that scepticism towards specific AI applications can coexist with a broader appreciation for technological advancement, driven by a desire for technology to serve humanity, not the other way around.
