The Makers Rage Podcast
A podcast exploring the history of ideas and creativity with topics chosen from the Arts, Sciences, and "everything in between." Upcoming episodes will include the following titles: What Is Enlightenment, Western Canons, Accidental Genius and a series on Muses. Please feel free to suggest topics on IG, Twitter, or Facebook.
The Makers Rage Podcast
A.I. Creativity
Discussing the role of A.I. in human creativity and its future implications. Also, will a computer ever become conscious, be able to create original art, literature, music, etc.?
I. The Luddites and Historical Resistance to Technology
- Description of the Luddite movement during the Napoleonic Wars
- Introduction of the Lace Frame machine and its impact on textile workers
- Workers' protests and destruction of machines
- Government response and the ensuing conflict
II. The Writers’ Strike 2023: A Modern Parallel
- Overview of the Writers Guild of America strike
- Grievances related to residuals from streaming media
- Introduction of artificial intelligence threatening the industry
- Parallel with the Luddite movement and destruction of machinery
III. Language & Consciousness: Chomsky's Perspective
- Chomsky's concept of "the poverty of the stimulus"
- Assertion that language competence is prefabricated in the brain
- Exploration of how Chat GPT functions differently from the human brain
IV. The Imitation Game and Searle’s Chinese Room
- Explanation of Alan Turing's "The Imitation Game"
- Introduction of John Searle's Chinese Room thought experiment
- Argument against formal principles being sufficient for understanding
V. Emergence in Complexity Theory
- Definition of emergence in complexity theory
- Examples of emergence in various phenomena (termite cathedral mound, flock of starlings, etc.)
- Discussion on self-organization and its inevitability with enough constituent parts
VI. The Uncanny and Human Recognition
- Exploration of Descartes' self-awareness and consciousness
- Introduction of Sartre's concept of recognizing subjectivity in others through "The Look"
- Explanation of the "Uncanny Valley" and discomfort with inhuman yet human-like entities
VII. Generative AI and the Tendency to Hallucinate
- Discussion on the tendency of generative AI, like Chat GPT, to produce inaccurate or misleading information
- Example of AI inventing stories, resembling creativity
- Questions on accountability and the future of AI-generated art
VIII. Transhumanism and the Integration of AI
- Exploration of the potential for humans to merge with artificial intelligence
- Examples of current technological advancements (cochlear implants)
- Speculation on the future implications and the potential for enhanced capabilities
IX. AI Poetry and the Role of Imagination
- Examination of Chat GPT's definition of poetry vs. a poet's definition
- Comparison of AI-generated responses with human poets' unconventional approaches
- Assertion that the core of imagination will remain human, regardless of AI's capabilities
X. Conclusion: Optimism and Collaboration
- Optimistic perspective on collaboration between humans and AI
- Acknowledgment that AI remains a tool invented by humans
- Recognition of the potential socio-political challenges if AI were to become self-aware
https://www.facebook.com/TheMakersRage/
https://www.instagram.com/themakersrage/
Music by LiteSaturation from Pixabay
05. A.I. Creativity
The Luddites
Hello, my name is Darren, and welcome to the Makers Rage Podcast. Artificial Creativity. Shortly before Christmas in 1811, during the harsh economic climate of the Napoleonic Wars, the London Statesman issued a warning about the state of the stocking industry in Nottingham. Thousands of textile workers were laid off because a new machine had become available, called the ‘Lace Frame’. True, the quality of the textiles produced by this machine didn't measure up to what these skilled artisans had been producing; though it took the workers longer, their output was of a higher quality. Yet the machines produced textiles faster and cheaper. They could, moreover, be operated by less skilled, low-wage laborers. So, in desperation, workers began breaking into the factories and smashing up the machines. And indeed, hundreds were destroyed. The government's response was to send six regiments of soldiers into town. And so began a war between the workers and state-sponsored factory owners. “God only knows what will be the end of it”, the article in the London Statesman said. “Nothing but ruin.”
The workers smashing up the frames called themselves ‘Luddites’, after Ned Ludd, a likely fictional apprentice who was said to have smashed two stocking frames in 1779 in a fit of passion. What provoked the fit is a matter of contention: being whipped for idleness, teased by local youths, told by his father to square his needles. He nevertheless became a symbol of passionate defiance against mechanization. In the three decades since, he appears to have been promoted several times: from mere apprentice to captain, then general, and eventually ‘King Ludd’, a legendary figure. And I'm sure it's no coincidence his name rhymes with Robin Hood. Today, the term Luddite has come to mean someone resistant to change and progress, a stick in the mud, fearful of or openly hostile to new technologies. An ignoramus. A dolt. Yet in 1811, when their families were starving, many of these Luddites risked their lives, and lost them, to the gallows or to penal transportation. Indeed, such was their plight, the poet Byron wrote in his ‘Song for the Luddites’, quote:
Though black as his heart its hue,
Since his veins are corrupted to mud;
Yet this is the dew
Which the tree shall renew
Of Liberty, planted by Ludd!
The factory owners, beleaguered by the cheap, inferior-quality textiles called ‘cut-ups’ now flooding the market, and with the threat of violence always looming, reverted, for a while at least, to manual labor, giving the Luddites a short-lived victory over inevitable automation. Inevitable because it was only a matter of time before these machines produced quality far better than mere cut-ups, far better indeed than anything a human textile worker had ever produced. After several years of show trials, executions, and penal transportations, what ultimately caused the protests to fizzle out was the economic upturn that came with the end of the Napoleonic Wars. It wasn't technological innovation the Luddites were protesting against. It was not having money in their pockets.
The Writers’ Strike 2023
It just so happens I'm recording this about two weeks after the latest Writers Guild of America strike came to an end, after 148 days. The strikers weren't breaking into studios and destroying machinery. And even if they'd wanted to, the technology they feel threatened by is more elusive than a knitting machine. There were two main grievances. One was that they weren't getting their fair share of residuals from streaming media. It was more straightforward, when it was just television, to compensate writers for reruns, syndication, and DVD releases, before there were streaming licenses. With the dawn of streaming, the average salary of writers began a steady decline. To make matters worse, there recently emerged a new technology that threatened obsolescence. Rapid advances in artificial intelligence have led to the development of highly sophisticated large language models: artificial neural networks trained on massive amounts of text data to generate coherent and human-like responses to a wide variety of prompts. Like, for example, “Write me a screenplay for a gangster movie set in space in the style of Alfred Hitchcock”, and the model proceeds to generate it before your eyes faster than most people can type. Chat GPT, or ‘Chat Generative Pre-trained Transformer’, has become the fastest-growing consumer software application in history: a chatbot built on a large language model, in GPT's case one that's well suited to processing sequences of data. In language processing, the sequences are the individual words in a sentence.
It all sounds very impressive, but if you go into the latest Chat GPT and ask it to write a space opera in the style of Hitchcock, the result won't necessarily be that awe-inspiring. The reason writers are threatened by this technology, though, and indeed other artists as well, is not just what it can do now, but what it will very soon be able to do. You see, it uses a reward model to refine and improve its output. So, while this technology won't be replacing screenwriters today, there is an understandable fear, considering the track record of Hollywood execs, that this technology, which doesn't have a tendency to go on strike, will be exploited to the detriment of human writers. Which is why the writers had a second demand: that artificial intelligence such as Chat GPT only be used to help with research or to facilitate ideas for new scripts, not as a tool to replace the human writer.
On September 27th, 2023, a tentative agreement was reached. And I hope going forward writers will get their fair share of streaming residuals. As for the larger topic of AI, I fear its inevitability – as with the lace frames 200 years ago, which allowed one employee to do the work of many. But language models like Chat GPT aren't mere knitting machines. Automating labor is one thing, but creativity? From the invention of the wheel to the plough, the assembly line, the pen, the PC, machines were supposed to make our lives easier, to lessen the burden of labor, because machines don't experience this burden. We traditionally outsourced to machines our drudgery, not our imaginations. Fans of Star Trek may find this language problematic, but until consciousness is instantiated in machines, we shouldn't worry about them working too hard on our behalf. So the proposal that Chat GPT be used as a tool to facilitate the creation of imaginative works – to lessen the more burdensome aspects of writing a new screenplay, say, and afford creatives more time to invest their work with those qualities the machine can't so easily produce with a mere prompt – may very well lead to writing of higher quality than we've experienced of late, with so many lazy remakes and poorly written, though expensively produced, superhero movies being churned out as if on an assembly line. Great writing is rare in Hollywood because it takes time. And of course, time is money.
Language & Consciousness
Despite the rapacious greed of Hollywood execs, the idea that AI could replace the human writer, the human artist, is disconcerting. Some thinkers, like the mathematician Roger Penrose, believe this can never be the case; that machines could never think as humans do, let alone create. Alas, Penrose appeals to quantum gravity effects in the brain – in conveyor-belt-like structures inside neurons called microtubules, whose role in consciousness has never been demonstrated – to account for why computers will never be conscious. Stephen Hawking, who never liked it when theoretical physicists opined about the mysteries of consciousness, said Penrose's argument seemed to be that consciousness is a mystery and quantum gravity is another mystery, so they must be related. We don't have to appeal to microtubules, though, to know that Chat GPT doesn't function like the human brain. How does the human brain deal with language? The jury's still out, to an extent. But we've learned a lot since we started asking the question. When considering the human competence for language, Noam Chomsky notes that children, wherever they happen to be born, consistently arrive at a complex knowledge of their native language. From the meager input it receives from its parents and other humans, a young child inevitably arrives at a complete knowledge of the complex grammatical system underlying that language. Chomsky calls this phenomenon “the poverty of the stimulus” and argues that every human being must be born with a knowledge of language. In other words, the potential for language is in a sense prefabricated in the brain at birth and, as a cognitive process, is only partially shaped by the environment.
Imagine a 2-year-old assigning every word in the English language a number. The second edition of the 20-volume Oxford English Dictionary contains full entries for 171,000 words in current use, and 47,000 obsolete words. Chat GPT assigns every word in the English language a number, so that when you ask the bot a question, it outputs a list of the most probable, or most reasonable, responses by statistically determining how likely certain words are to occur together in everything humans have ever published in the English language. Over many billions of iterations, the numerical difference between the samples you give the model and the output it gives you should tend towards producing an increasingly satisfactory human response. I think we can all safely agree this isn't how people think. It's not how human brains produce poetry, song, political invective, declarations of love, words of condolence. That said, Noam Chomsky likened the question ‘Can computers think?’ to ‘Can submarines swim?’ A submarine may not move like a whale through the water, but it still accomplishes the task and manages to get from A to B faster than any whale. And yet a submarine doesn't know it's moving through the water. It doesn't intend its destination. Just as Chat GPT cannot choose the prompt that elicits its response. But what if one day the response isn't what we expected? What if we asked the chatbot to write us a sonnet and it responded with “Nah, poetry's not my thing”? What if it responds in a way that makes us think it thinks like us?
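To make the words-as-numbers idea concrete, here is a minimal sketch in Python: a toy bigram model with a made-up ten-word corpus, nothing remotely like the scale or architecture of a real transformer, which always emits the statistically most probable next word. The corpus and function names are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# A toy illustration of next-token prediction: count how often each
# word follows another in a tiny corpus, then always emit the most
# probable continuation. (Real LLMs work over huge corpora and learned
# weights, not raw counts, but the statistical idea is the same.)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: for each word, how often each successor appears.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely word to follow `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat": it follows "the" twice, vs. once each for "mat" and "fish"
```

The point of the toy is exactly the point made above: nothing in it knows what a cat is; it only knows which numbers tend to follow which.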
The Imitation Game
It may not actually be thinking at all. Not as we experience thinking. But if it exhibits this tendency, demonstrates intelligent behavior equivalent to or indistinguishable from that of a human being, the mathematician Alan Turing would say it passed a test. Today we call it the Turing test; he called it The Imitation Game. All that's needed for the game is a human evaluator who is aware that one of two conversationalists is a machine and the other a human being. All participants are separated from one another. The conversation is solely text based. If after some time the evaluator cannot distinguish the human from the machine, then the machine has passed the test. The machine doesn't have to answer questions correctly. It doesn't have to answer questions at all. It can say it doesn't like poetry, that it's suspicious of AI. The point is that it must imitate human responses successfully enough to persuade a human evaluator that it is not a machine. And should a machine pass this test, and I'm not aware of any that has successfully done so, what would we conclude? That it is conscious? That it can think? That it has the right to life, liberty, and the pursuit of happiness? The only thing we can conclude is that it fooled a human evaluator.
The Chinese Room
The philosopher John Searle noted this in his ‘Chinese Room’ thought experiment. Searle imagined himself in a room with a hatch through which questions written in Chinese are passed. His job is to provide answers to these questions, also in Chinese, and pass them through another hatch on the other side of the room. The problem is, John Searle doesn't speak a word of Chinese, and Chinese characters mean nothing to him. All he has at his disposal is a set of complex rule books to help him construct the answers. With enough practice and enough positive reinforcement, perhaps through a reward model like ChatGPT's, Searle gets quite good at responding, and eventually fools a native Chinese speaker into believing he himself is fluent. The problem is, even at maximum proficiency, Searle still doesn't understand a word of Chinese, nor the symbols he's manipulating. Similarly, he argues, the computer executing the same procedure to fool a human evaluator doesn't understand the abstract symbols it manipulates. The crux of his argument: whatever formal principles are given to a computer, they will not be sufficient for understanding.
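Searle's setup can be caricatured in a few lines of Python. A toy lookup table stands in for his rule books (the phrases are illustrative stand-ins of mine, not from Searle), and answers are produced by pure symbol manipulation, with no understanding anywhere in the program:

```python
# A toy "Chinese Room": answers come from rules mapping input symbols
# to output symbols; the program comprehends none of them.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
    "你会说中文吗？": "当然会。",       # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(question: str) -> str:
    """Produce a reply by symbol lookup alone -- no understanding involved."""
    # Fallback symbol string: "Please say that again."
    return RULE_BOOK.get(question, "请再说一遍。")

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

A fluent-seeming reply comes out, yet nothing in the program "knows" what was asked, which is precisely Searle's crux.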
One objection to the Chinese Room thought experiment is that while Searle alone does not understand Chinese, the combination of Searle and his rule books does. Searle's response to this is that a combination of constituents without understanding cannot magically invoke understanding. Then again, what is the brain but a collection of individual neurons without understanding? Lump enough of them together and consciousness emerges – a phenomenon which has yet to be predicted, despite our near-complete understanding of the behavior of individual neurons. How many are needed before adding one more leads to consciousness switching on? How many can you remove before the lights go out? Conscious awareness can remain in very damaged brains. Consider Phineas Gage, who lost much of his frontal lobe after an iron rod was driven through his brain. Despite serious mental impairment initially, it seems he recovered most functionality over time, and never once lost any sense of who he was. It seems once consciousness emerges, it can prove quite resilient.
Emergence
Emergence is a phenomenon described in complexity theory. Basically, complex interactions between simple parts can lead to self-organization. Just look at the termite cathedral mound, a flock of starlings, the eye of a hurricane, the behavior of the stock market. Even the growth of a city exhibits this phenomenon of self-organization – that is, the emergence of high-level properties from the interaction of simple components. You may say, if a city is composed of individual human beings, these are hardly simple components. Individual human beings may be quite complex, but put enough of us in a crowd and, after a while, our behavior can become quite predictable. For example, the sociologists Ralph H. Turner and Lewis Killian put forward the idea that norms emerge from within the crowd. Emergent Norm Theory, as they called it, is another example of self-organization. It states that crowds have little unity at their outset, but during a period of milling about, key members suggest appropriate actions and the rest simply fall in line, forming the basis of the crowd's norms. Now, this theory doesn't take into account already existing sociocultural norms, or why certain individuals assert themselves and others do not. But the phenomenon has been consistently observed in crowd behavior, even if we've yet to account for why it emerges. The point is, according to complexity theory, self-organization may be inevitable, given enough constituent parts.
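A standard minimal demonstration of this principle – not one mentioned in the episode, but the classic textbook example – is Conway's Game of Life, where every cell follows the same simple local rule, yet oscillators, gliders, and other high-level structures emerge from the interactions:

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.

    Rule (applied locally, per cell): a dead cell with exactly 3 live
    neighbours is born; a live cell with 2 or 3 live neighbours survives.
    `live` is a set of (x, y) coordinates of live cells.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker": three cells in a row oscillate between horizontal and
# vertical -- a high-level behaviour stated nowhere in the rule itself.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the vertical bar {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)))  # back to the horizontal bar
```

Nothing in the rule mentions oscillation; the pattern is an emergent, high-level property of many simple parts interacting, which is the complexity-theory point in miniature.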
The human brain notwithstanding, John Searle argued that neural networks differ substantially from computers when physically instantiated. And to illustrate this, he modified his Chinese Room into a Chinese Gym: instead of a lone Searle who cannot speak Chinese, he imagines a gym full of non-Chinese speakers, one for each artificial neuron inside the computer. The argument then proceeds much like the Chinese Room: none of the people in the gym understands Chinese; therefore, the whole gym doesn't understand Chinese. Again, one could object that Searle isn't taking into account the possibility of self-organization. Neurons, biological or artificial, may not do much individually, but collectively, structured relationships can emerge, and do emerge. A gym full of individuals who do not understand Chinese won't magically develop an understanding of Chinese individually. But collectively – maybe one understands Japanese, another Korean; maybe one's a good code breaker; another, a gifted linguist. Working as a team, they may eventually figure out how to respond to the questions being passed through the hatch. All this said, we have yet to see consciousness emerge in an artificial neural network. And what would it even look like? An unending imitation game? Or simply what consciousness looks like to any of us when we see it exhibited in other human beings, whose consciousness we don't have access to?
The Uncanny
Descartes knew he existed as a conscious being because he was ‘res cogitans’, a ‘thinking thing’. Imagine Chat GPT one day affirming with the same conviction, “I think, therefore I am.” In Being and Nothingness, Jean-Paul Sartre describes the mechanism by which we become aware of the subjectivity of others, of the existence of conscious beings other than ourselves. It happens when we are surprised by another's ‘gaze’. And this isn't a look we recognize in any other creature. When a dog looks at us, it doesn't provoke the same self-consciousness as does a human gaze, an intelligent gaze, which has the effect of causing us to look at ourselves as an object: we have become the object of another person's gaze. And in that moment, we feel that mild shock that comes from recognizing their subjectivity. In his analysis of The Look, Paul Vincent Spade writes that this other person is a “threat to the order and arrangement of your whole world. Your world is suddenly haunted by the Other's values over which you have no control.” It's an uncomfortable feeling. Which is why we are relieved when what we initially took for a human gaze turns out to belong to a mannequin. I, with my shortsightedness, have certainly made this mistake several times. “That person's looking right at me … ah, it's just a mannequin!” My subjectivity is restored. The world seems to transfer back to me. I am once again the center of the universe. Once we recognize it is inanimate, we're no longer troubled by the mannequin. But if it were to turn its head and blink, begin to speak and sound human, though all the while appearing inhuman, a sense of revulsion overcomes us, and we eventually find ourselves languishing in another uncomfortable place: the ‘Uncanny Valley’. This thing can simulate me, behaviorally, linguistically, but it lacks a gaze. I cannot meet its eyes and recognize its subjectivity. In his famous essay ‘The Uncanny’, Sigmund Freud gives a reading of E.T.A. Hoffmann's short story ‘The Sandman’.
Freud, by the way, called Hoffmann the master of the uncanny in literature. In the story, the protagonist Nathaniel recalls his childhood terror of the legendary Sandman, who traditionally throws sand in the eyes of children to help them fall asleep. Hoffmann, however, offers another, less benign depiction of the Sandman:
“He's a wicked man, who comes to children when they won't go to bed and throws a handful of sand into their eyes, so that they jump out of their heads all bleeding. He puts their eyes in a bag and carries them to the crescent moon to feed his own children, who sit in a nest up there. They have crooked beaks like owls, so that they can pick up the eyes of naughty human children.”
Freud and others drew on this element of the story: namely, the idea of being robbed of one's eyes as the most striking instance of uncanniness. When we look at a humanoid robot that lacks a human gaze, we feel uneasy, for we can't look into its eyes and recognize a subjective conscious being. Perhaps in noticing this, we instinctively look away in case it steals our own subjectivity (a child running away in horror may be the more natural response). And then, of course, we remember that its skin, which resembles ours, is not human skin. It has no genome to speak of. We do not share a common ancestor with it, as we do even with our pets, with the octopus, the nematode, bacteria, and grass. It couldn't be more alien. But then we remember it's only a machine made by human hands. It is artificial. Its speech, its movement, though human-like, is not volitional. It has been programmed by human beings to do everything that it does. But what happens when we can no longer predict what it will do next?
Generative AI
Getting back to Chat GPT: like other large language models, it exhibits a tendency to ‘hallucinate’ – that is, to respond with inaccurate or misleading information. For example, Chat GPT was asked to tell the Greek myth of Hercules and the Ants. There is no such Greek myth. Nevertheless, Chat GPT told a story of Hercules learning to share his resources with a colony of talking ants while marooned on a desert island. When asked if there really was such a Greek myth, Chat GPT apologized and replied that there was no such myth, but that it had created a fable based on its understanding of Greek mythology. When further asked why it made up a myth instead of simply saying one didn't exist, it again apologized and said, quote, “As a language model, my main function is to respond to prompts by generating text based on patterns and associations in the data I've been trained on.” Chat GPT tends not to say that it doesn't know the answer to a question, but instead produces probable text based on the prompts given to it. In other words, it's not designed to pass the Turing Test. But doesn't this example of it inventing a story of Hercules and the ants look like creativity?
One of the hallmarks of so-called generative AI is the tendency to identify patterns and structures within existing data in order to generate new and original content. It is this tendency that so threatens visual artists and musicians. So: new art made from other people's work. New songs. Where's the accountability? Who gets paid? And then there is the overarching question: what happens when we start liking AI art better than human art? Yet we will still be the consumers and appreciators of this art. Human beings will continue to be the arbiters of taste. And consider what we mean by artificial intelligence. The word artificial simply means ‘humanly contrived’ – man- or woman-made, in contradistinction to what is natural or produced by nature. The word ‘artificial’ is cognate with the word ‘art’ (though it has taken a scenic route through Old French). Art comes from the Latin ars, meaning ‘craft’ or ‘skill’, with the implication that it is human skill. An artisan is a person skilled at making things. An ‘artifact’ is a thing made by human hands. ‘Artifice’ stresses creative skill but implies falseness or trickery. “Art lies by its own artifice”, said Ovid. So artificial has become a pejorative: Artificial Intelligence and Artificial Creativity are understood to be lesser versions of the genuine article. Artificial Creativity, being the title of this episode, even sounds like an oxymoron. But then I remember Ovid also said “Ars est celare artem”: “Art consists of concealing its own artifice”.
Once artificial intelligence no longer looks artificial – or artificial creativity, for that matter – it will no longer seem to us uncanny, but familiar. So then what happens when the music charts consist only of AI music, when galleries are filled with AI canvases, when AI novels and even works of nonfiction dominate The New York Times bestseller list? Doomsday prophets like Yuval Noah Harari ask questions like, quote, “What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? … At first, AI will imitate the human prototypes that fed it in its infancy. But with each passing year, AI culture will branch off in ways human beings can scarcely imagine.” Which is to say that as time goes on, we'll find ourselves cocooned within an AI-generated culture, or matrix if you will. “What will it be like”, Harari continues, “to experience reality through a prism produced by a non-human intelligence?” Such a dystopia can be circumvented today, Harari and others insist, by legislation, by checks and balances: since the genie is already out of the bottle, let's make sure it has fetters and manacles. But if we've learned anything from the history of technology, it's that once human beings identify the potential of a new discovery or invention, there's no stopping us. And I suppose the fear is that, in the case of AI, it would one day be out of our hands. And then there will be no stopping it.
Transhumanism
But what if, instead of being enslaved by this alien intelligence, we merge with it? Faulty hearts already beat with the help of little machines. Not long from now, faulty brains will likewise be assisted or enhanced. The blind will have the option to become new Raphaels or Jackson Pollocks if they so wish. And how many potential Beethovens will be spared the anguish of realizing so young that they must lose the one faculty that ought to be more perfect in them than in others? You only have to look on YouTube and see those moving videos of people with cochlear implants hearing for the first time to know this technology is already upon us. How long until it overtakes us, the doomerists will ask? How long until we look upon the Sistine Chapel with our cerebral implants and regard it with the same de haut en bas attitude we reserve for the zigzag engravings on a mussel shell made by Homo erectus half a million years ago? Half-a-million-year-old art. Yet our amazement is tinged with a certain condescension. For we know these scratchings were made by a human whose brain was half the size of ours.
If in the future, with our enhanced transhumanist brains, we look upon the Sistine Chapel, Shakespeare's First Folio, the pyramids of Giza, the Mahabharata, Bach's Well-Tempered Clavier, and Andrew Wiles's proof of Fermat's Last Theorem with condescension, then so be it. It can't be stopped. But I won't subscribe to doomsday prophecies. Our ancestors hunting the mammoth had brains that could do quantum mechanics. Little did they know. How little do we still know? We've yet to learn why brains become conscious: we're in no position to predict if AI will do the same, or what it would do with that capability. As an optimist, I see the potential for collaboration, not domination. Maybe optimism is just an expression of human frailty, naivety. But all I know is that pessimism, doomerism, has never gotten anything done.
AI Poetry
As already discussed, Chat GPT strives after the most likely sequence of words that constitutes a human-like response. Human poets often look for the least likely sequence of words. Often the goal isn't to satisfy the reader with an answer to a niggling question; more often, the goal is to surprise the reader, even shock them. “If I feel physically as if the top of my head were taken off, I know that is poetry,” said Emily Dickinson. Poets have been shepherds and kings, shamans and librarians, and all the other walks of life. Shelley called them “the unacknowledged legislators of the world”. And what is it they legislate? Not the laws of the land; they're more likely to be the jesters at court. Language, perhaps. “Language is fossil poetry”, said Emerson.
Ask Chat GPT to define poetry, and it says, “Poetry is a form of literary expression characterized by its use of language in a structured and imaginative way to evoke emotional responses and convey meaning,” etc., etc. Ask it to define poetry as a poet would, and it says, “Poetry is the soul's whispered melody, where words dance in the twilight of imagination, weaving emotions into the tapestry of existence.” Ask a poet like A. E. Housman to define poetry, and he says, “I could no more define poetry than a terrier can define a rat, but I thought we both recognized the object by the symptoms which it provokes in us.” Ask Chat GPT what symptoms poetry provokes in it, and it answers, “As a machine learning model, I don't experience emotions or physical sensations. So, I don't have personal responses or symptoms related to poetry.” An honest response. But a part of me wishes it had hallucinated a different one. Then again, as was mentioned, Chat GPT isn't designed to pass The Imitation Game, or Turing test. No doubt, despite confessing to having no feeling for poetry, AI will one day respond as cheekily as an A. E. Housman.
Yet, however proficient it becomes at the language we teach it, as long as human beings are motivated to write creatively, though we may outsource the more perfunctory or annoying parts of the creative process, that precious kernel we denote imagination will never be outsourced – unless we become so jaded in our virtual cocoons that the act of imagining itself becomes a chore. After all, Shakespeare said, “Imagination bodies forth the forms of things unknown, and the poet's pen turns them to shapes, and gives to airy nothing a local habitation and a name.” Sounds difficult. Isn't there an app to do that for me? And once we relinquish anything resembling toil, we'll become like the Lotos-eaters in Tennyson's poem, who “in the hollow Lotos-land live and lie reclined on the hills like Gods together, careless of mankind.”
But as I said, I'm an optimist. I believe the creative impulse is as natural to us as any other quality that distinguishes us as human beings. To threaten it is to threaten a part of what is human in us, and no doubt we'll take up arms to defend it. As impressive as they may seem, Chat GPT and other large language models are still only tools. They didn't descend from Mount Olympus or come from outer space. We invented them, and we should therefore use them to our benefit. Should they someday develop self-awareness, the socio-political implications will be challenging, perhaps insurmountable, but until that day comes, we will collaborate. And if this invention of ours isn't satisfied and wants to dominate, we'll send John Connor to flick the off switch. As things stand, large language models are far from being self-aware, although some can mimic a human conversationalist quite successfully. And while Chat GPT may now mostly be used for writing cover letters and unoriginal college papers, it can also debug computer programs, compose music, write screenplays, short stories, even whole novels, generate business ideas, translate and summarize text, write not-so-good song lyrics, and pretty bad poetry.
Speaking of which, let me end with a little game, since I have Chat GPT open in front of me. I'm going to ask it to do something completely unfair: write a poem about a memory of mine. So, I typed the following: “Write me a sonnet about going skin diving in the '90s with my dad, after waiting all day for him to come home from work, the evening before I had to catch a flight back to Ireland. We put our flippers on in the shade of the Divi Divi tree that always cranes towards the setting sun nostalgically.” After pressing enter, I got the following:
In the '90s, with Dad, skin diving was our song,
I'd wait the day, as sunbeams grew so long.
He'd come from work, our flippers by our side,
Beneath the divi divi tree, we'd bide.
“Skin diving was our song?” That already made me cringe. Anyway, the next four lines:
In shade's cool refuge, gear on, we'd prepare,
To dive below the waves, a precious pair.
Our hearts set sail on memories to be,
Craning toward the sun, its golden sea.
An improvement on the first verse, but the rhyme and meter are forced. This is what a 14-year-old who has encountered Keats for the first time writes like. I know, because I was that 14-year-old.
Beneath the surface, a world to explore,
Together, father and child, we'd implore.
With every dive, a bond that would grow,
Nostalgic sunsets painted in our soul.
And finally, the envoi (it seems to be a Shakespearean sonnet):
The '90s, with Dad, forever we'd keep,
Beneath the divi divi, in the ocean's deep.
Divi Divi trees don’t grow in the ocean. I’m sure a parent would be moved if their young child wrote that. But it ain’t getting published anywhere. It ain’t winning any prizes. Anyway, let’s see if we can improve it somewhat: “rewrite in the style of Shakespeare”:
In the '90s, with my father bold and true,
We waited for his return the whole day through.
Beneath a divi divi's shade, our spirits high,
With flippers donned, we gazed at yonder sky.
I won't bother reading on; it's just as bad, except it has some archaisms in there, like yonder, nigh, thees, and thous. OK, “Rewrite in the style of Sylvia Plath.” Doesn't seem much different, except the third verse reads:
Beneath the surface, a world of gloom,
Father and child, in a watery tomb.
With every dive, our connection grew cold,
Nostalgic sunsets, in our hearts, took hold.
I mean, Plath's rhyming was never this hackneyed. Is the machine latching onto the fact that Plath had a fraught relationship with her dad? I suppose it reminds me of a mixture of ‘Daddy’:
And a head in the freakish Atlantic
Where it pours bean green over blue
In the waters off beautiful Nauset.
I used to pray to recover you.
And maybe ‘Ariel’ (?):
And now I
Foam to wheat, a glitter of seas.
The child’s cry
Melts in the wall.
And I
Am the arrow,
The dew that flies
Suicidal, at one with the drive
Into the red
Eye, the cauldron of morning.
But perhaps I’m reaching. And I’m sure you can all recognize from the bits I quoted that what Chat GPT generated is nothing like Plath.
Finally, “Rewrite in the style of Darren Koolman.” Remember, this is my memory. “I'm sorry, but I'm not aware of a poet named Darren Koolman, and I couldn't find any information on a poet by that name in my training data. It's possible that you may be referring to a lesser known or contemporary poet. Or there might be a typo in the name. If you can provide more information or clarify the poet you have in mind, I'd be happy to try to help with the requested style.”