What can a Learning Experience Designer bring to your content that no one (or nothing) else can? Our resident respected writer and LXD Andrew investigates whether tech can do better in exactly 1000 words and 1 Nick Cave reference.
The year is 2023 and Artificial Intelligence (AI) has arrived.
AI has been a sci-fi staple for far longer than I’ve been alive, promising everything from sentient cars to robots growing human batteries, so it always seemed as fanciful to me as teleportation or time travel. The chatter around ChatGPT had made me sceptical but curious, so I had to try it out.
I began by asking a question to help me research my current project. Within a few seconds, I understood why there had been so much excitement. With widening eyes and a creeping smile I saw the content I needed being created in a matter of seconds. It was well structured, comprehensive, convincing, and written using clear language. Most of the content was what I was expecting, but there were some points I hadn’t considered, or even heard of before.
My excitement turned to dread as I began to contemplate the unthinkable. If ChatGPT can research and write about a subject this quickly, will it replace me before I’m due to retire?
Fortunately for me, AI has a few limitations, so I won’t be consigned to the dustbin of history without a fight.
Firstly, the quality of the answers given depends entirely upon the data being used. Incorrect, incomplete or misleading datasets will lead to inaccuracy. Bias in datasets, even when unintentional, will produce biased and even discriminatory results. For example, facial recognition systems from IBM, Microsoft and Amazon misclassified the faces of Oprah Winfrey, Michelle Obama and Serena Williams, while having no trouble with white men.
It has been reported that the designers of many current AIs are mostly white men under 40 without disabilities, who grew up in high socioeconomic areas, often with similar educational backgrounds. The resulting AIs are trained on narrow datasets that aren’t representative. For instance, a US government dataset of faces collected for training AIs was 75% male and 80% lighter-skinned. The developers most likely didn’t notice, because they had little experience of diversity themselves. While the datasets will inevitably become more comprehensive and inclusive, and the results from AI will improve, it seems that understanding nuance is a uniquely human trait.
In the above example, it’s fair to assume there was never any intention to discriminate or harm; the misclassification can be attributed to the technology being in its infancy. Everyone involved – and the machines themselves – are still learning. However, there are already many examples of AI datasets being deliberately corrupted to promote ideological agendas. As with so many innovations, we have more to fear from the people who may abuse it than from the technology itself.
So any information from a chatbot needs to be checked against other sources, just as it would if you had found it using a search engine, or even books. Perhaps even more importantly, it’s up to you to interpret that source material. Does it ring true for you? Does it reflect your own experiences? Does it confirm, or cause you to question, your own beliefs? Chatbots don’t have meaningful experiences or beliefs, and never will, so this is insight only a human can provide.
This leads us neatly on to storytelling, another of the things that make us human. You can instruct ChatGPT to write a story, about a particular subject or in a particular style, and it will look like a story. There will be a beginning, middle and end. There will be some sort of plot, and characters that perform their functions as expected. However, it will always miss the point. It will always be an artificial plant, a clip-on tie, a market-stall Rolex. Stories tell of love and heartbreak, joy and tragedy, journey and adventure – experiences and feelings that a computer cannot ever understand, and because of that, can never effectively communicate. AI can attempt to emulate a great writer, but as humans we know the difference. Sometimes it’s hard to say why or how, but we just do. Nuance.
“What ChatGPT is, in this instance, is replication as travesty… Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer.”
NICK CAVE’s reaction to a song created ‘in the style of Nick Cave’ by ChatGPT.
I wanted to explore a couple of the shortcomings of AI not to disrespect or devalue it, but to try to uncover the areas where it will be useful. As I mentioned earlier, I was able to create a structured, comprehensive text in moments. I will use that structure. I will check those facts. I will add the ‘human touch’, and ultimately it will mean I have produced a better piece of work in a shorter time. If you haven’t already, I would encourage you to try it yourself, to be curious and experiment, to find out if there are ways it can be useful to you*.
I believe we are approaching a golden age of AI, in which it can be used to enhance our work and improve our productivity – provided we always keep the limitations, and there are far more than I’ve identified here, in mind.
In the future, as AI becomes even more credible and convincing, we could be tempted to treat it as infallible and leave out the ‘human touch’. As writers, we must evolve with the technology and become ever more adept at scrutinising our sources. If we don’t, the word of the machine will become the new gospel and it’s only a matter of time before AI judges us to be redundant and starts sending bodybuilders back in time to stop us from ever existing.
*If you’d like an idea to get started, try typing “How will AI affect learning writers in 1000 words” into ChatGPT and compare the results to this piece. You may find you prefer the AI piece to mine, but I tried my best. I’m only human.
Talk to us today if you would like to find out how we can help you with your L&D offering!