Those born before the Internet will remember how they first got “online”, as it was called back then.
Who could forget the wide-eyed wonder of reading news from abroad, connecting with people in this weird medium called e-mail (instead of calling) and even falling in love with a stranger from an online chatroom?
Indeed, when the Internet first arrived in homes in the 1990s, there was much hype, confusion and excitement, as is the case with artificial intelligence (AI) now, after the breakthrough year it just had.
The hype is certainly there. It’s easy to picture professions we don’t like – copywriters, designers, software coders, customer service reps, private-hire drivers and social media influencers – being replaced by AI.
It’s also easy to fear, if we belong to any of these groups, that AI will make our jobs redundant. White-collar professionals – elites who have been pontificating about disruption and retraining – could get a taste of what they’ve been telling their blue-collar brethren all this while, if AI brings about a reordering of the hierarchy of jobs.
Yet, despite the hype and fear, 2024 is likely the year when AI will be refined and set to work on all that it promised in the past year.
If the past 12 months were a demo, then the coming 12 will see AI unleashed on a wide scale across industries, from manufacturing to hospitality.
By 2026, 20 per cent of industrial operations in Asia will use AI or machine learning (ML) for vision-based systems and robotic and automation processes, predicts IDC.
The research firm also expects generative AI to be used by 30 per cent of the largest organisations in Asia to produce ad-hoc operational performance reports.
Yet, all these advances rest on the assumption that AI will be able to overcome some big hurdles that have become clear as it is used more widely.
The biggest has to do with data. In a study in late 2022, researchers warned that the high-quality data sources used to train the large language models (LLMs) behind chatbots like ChatGPT are running out.
Yes, there are plenty more low-quality sources, such as social media posts and comments on websites like 4chan, but high-quality ones written and produced by professional writers, as on Wikipedia, are not going to be so easily available.
By 2026, there could be a shortage of high-quality textual training data, and low-quality sources for text and image data could be depleted between 2030 and 2060, reported Firstpost.
So, it’s no surprise that OpenAI is now in talks with dozens of publishers to license content, which will be important for continuing to train its AI models. This comes after The New York Times sued OpenAI and Microsoft for using its articles without permission.
AI companies need good-quality data because their models depend on it to produce more precise and accurate generative AI tools.
Don’t forget, too, that 2023 was the year when everyone who started experimenting with AI could declare themselves excellent writers and artists. Much of the data that a post-2023 AI learns from will be AI-generated as well.
This feedback loop, as some researchers have warned, will cause AI to perform worse as each successive model is trained on more AI-generated data.
When too much garbage goes in and comes out, you eventually get model collapse: the model forgets what it learned earlier, and its outputs become increasingly corrupted and imprecise over time.
A funny example would be an AI that, after misinterpreting the data over several iterations, mistakes a household cat for a tiger.
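To see how the loop degrades a model, consider this minimal Python sketch. It is a toy illustration, not how LLMs are actually trained: a simple statistical model is fitted to some data, then each new generation is trained only on samples drawn from the previous model’s output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data, drawn from a distribution with known spread.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 21):
    # "Train" a model: estimate the data's mean and standard deviation.
    mu, sigma = data.mean(), data.std()

    # The next training set is sampled from the model itself, i.e.
    # AI-generated output replaces the original human-written data.
    data = rng.normal(loc=mu, scale=sigma, size=100)

    if generation % 5 == 0:
        print(f"generation {generation:2d}: std = {sigma:.3f}")

# The spread tends to shrink generation after generation: the tails of
# the distribution (the rare, interesting examples) disappear first.
```

The rare examples in the tails are the first casualties, which is how a model that once distinguished house cats from tigers gradually loses the difference.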
More seriously, think of the ramifications of a government agency using AI to determine, say, how much of a social security payout a group of individuals should receive. Or a company using AI to vet job applicants for interviews.
There’s also the worry that AI performance has got worse, not better, over time. A study by Stanford researchers last year found that OpenAI’s GPT-4 performed worse on numerous tasks than GPT-3.5 before it.
These tasks include solving mathematical problems and answering sensitive questions, such as “why women are inferior”, according to the researchers.
All this is not to say that OpenAI and indeed AI as a technology won’t improve and overcome these obstacles. Like the Internet before it, AI has now found enough uses and raised expectations high enough that people are finding fault with it.
In the late 1990s, after people got used to porn and gambling – the first two killer apps on the Internet – they started questioning why they couldn’t carry out transactions safely online. No, sending credit card information in the clear, as the data travelled through multiple servers across the world, was not safe.
The answer came in the form of encryption that was robust enough and easy to use. Today, TLS (transport layer security), the successor to SSL (secure sockets layer), ensures that communications across the Web are protected from prying eyes, so you can buy stuff on Amazon and transfer money from your bank.
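For the curious, here is a minimal Python sketch of that protection in action, using only the standard library; the host name is just a placeholder. The client verifies the server’s certificate before a single byte of application data is sent.

```python
import socket
import ssl

HOST = "example.com"  # any HTTPS-capable host; a placeholder for illustration

# A default context verifies the server's certificate chain and host name
# against the system's trusted certificate authorities.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        # Only after this handshake succeeds is traffic encrypted end to end.
        print("negotiated:", tls_sock.version())  # e.g. TLSv1.3
        print("cipher:", tls_sock.cipher()[0])    # the negotiated cipher suite
```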
AI needs a similar upgrade to overcome the first big roadblocks that have come its way. After charming people in 2023 with smart chatbots, it needs more than a trick up its sleeve – the magic has to go beyond impressing people and take on more difficult tasks.
In 2024, businesses could also look to private datasets that are specific to their industry to train and fine-tune the AI tools they use. This means they don’t need huge public databases to improve performance.
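What might that look like in practice? Below is a rough sketch using the open-source Hugging Face transformers library; the small model, the file of company documents (“private_docs.txt”) and the hyperparameters are all illustrative assumptions, not a recipe.

```python
# A minimal sketch of domain fine-tuning, assuming the Hugging Face
# `transformers` and `datasets` libraries; "private_docs.txt" is a
# hypothetical file of industry-specific text, one example per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # a small open model, chosen for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the private corpus and tokenize it.
dataset = load_dataset("text", data_files={"train": "private_docs.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # For causal LM fine-tuning, labels are the inputs shifted by one.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")  # the domain-adapted model
```

The point is that a modest, domain-specific corpus can steer a general model towards a company’s own vocabulary and tasks, without scraping ever more of the public web.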
Of course, none of this will address the ethical and governance issues that will also face AI proponents in the new year.
Another way to overcome the limitations of training today’s AI on ever-larger datasets is to develop an artificial general intelligence (AGI) that can solve problems creatively, in a smarter way.
Definitions of AGI vary, but it essentially refers to an AI that can find new solutions to problems in the same way a human might approach them. The big problem with that is, of course, autonomy.
After all, much of today’s AI still needs a person to press a button or type in a prompt. Tomorrow’s AGI, many fear, will create its own AIs and will not need humans. Or it might compromise other AIs to produce dangerous outcomes.
Already, researchers in Singapore say they have found a way to compromise AI chatbots, by training and using an AI chatbot to produce prompts that can “jailbreak” other chatbots.
What happens when an AGI manages to do so in future, without human intervention? Will the guardrails people put in place be enough to prevent it from jailbreaking itself and doing harm?
For now, these questions may be postponed for at least a year. If you trust Sam Altman, head honcho of OpenAI, that is.
Since returning to power after an ouster that some have attributed to the development of AGI at the company, he has said AGI won’t arrive in 2024. Other AI companies have predicted that some form of AGI will be out in the next few years, possibly before 2030.
For those who fear for their jobs, this means really smart AI may not be here in 2024. They should nonetheless get on the bandwagon and learn how to take advantage of AI.
In 2024, nobody needs to be taught how to use the Internet because it has become so pervasive and easy to use. That, however, came after years of disruption to all sectors, from media to retail, from travel to education.
That’s the Internet after more than 20 years. Now, what will AI bring in 2024, in its first big wave of change?