In a tech world saturated with hype and bluster, the recent arrival of ChatGPT feels like that rare, precious thing – the real deal. If you have spent any time on the internet in the last 10 days or so, you will have seen people playing around with this remarkable tool, executing often highly complex tasks with implausible speed and accuracy.
For the uninitiated, ChatGPT is an AI-powered chatbot developed by a US company called OpenAI (which counts Elon Musk among its founders). It’s been trained on a vast amount of data to generate human-like text, answer questions, provide information and create its own content. Judging from the excitement of IT professionals, it also seems unfathomably good at solving a wide variety of coding challenges – again, in a matter of seconds.
Now, an AI-powered chatbot is not new, but this iteration feels like a step change. Set against the chatbots used in areas like corporate customer services, it feels like comparing a tricycle to Concorde. No wonder tech entrepreneur Aaron Levie described ChatGPT as ‘one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward’.
Unsurprisingly, a slew of blogs and articles has already appeared asking whether ChatGPT is coming for your job. Well, it depends, but clearly the bot’s ability not just to retrieve salient information, but to answer complex questions and produce realistic text, has all manner of implications for the labour market – not least for low-level software jobs.
Everything from legal work to corporate strategy, marketing and all manner of clerical jobs could be in the bot’s sights. Journalists, too, feel the AI’s hot breath on their necks, as it produces irritatingly life-like prose about pretty much any subject under the sun.
Ask the bot ‘Why did Napoleon lose the Battle of Waterloo?’, say, and it will produce a brisk, well-referenced and balanced summary of why hubris, strategy and conditions conspired against the French Emperor. Little wonder academics and teachers are wondering whether this spells ‘the end of essays’.
Of course, it’s not perfect. ChatGPT itself warns users that it ‘may occasionally generate incorrect information’. When I asked it to compare Tony Blair and Liz Truss, for instance, it told me that ‘Both Blair and Truss are members of the Labour Party’.
It’s a bot’s capacity for error, incidentally, that is one of the reasons Google hasn’t released its own, apparently even better chatbot, LaMDA – for the company is understandably wary of damaging its reputation by releasing a product that confidently gives out incorrect information.
The text ChatGPT produces can also be a bit stilted and unengaging. We know that first hand, because we got ChatGPT to write a piece for CapX on why capitalism is so great. It was a worthy effort, but unlikely to claim the Orwell Prize any time soon, if we’re honest.
There have also been concerns about its ability to produce biased, harmful or ethically dubious content, though I can’t say I’m massively concerned about that. Who, after all, is seriously seeking moral counsel from a chatbot?
As for the time-honoured arguments about destroying jobs and disrupting society, it’s obviously too early to make hard and fast predictions. But viewing it as a zero-sum ‘AI vs human’ game strikes me as wrongheaded. Even in this relatively early incarnation, ChatGPT looks like absolutely supercharging productivity in some crucial parts of the economy by getting otherwise time-consuming tasks done in moments and leaving the bigger picture stuff to humans.
While pundits pore over the potential implications of the tech, I would implore you to take some time to just play around with this remarkable new bit of kit. Because as well as being transformational, disruptive and, yes, kind of scary, it’s also really rather fun.