When you mention AI literacy to most people today, the first thing that comes to mind is usually how to ask AI to do a job for them.
You know, type the right prompt to get ChatGPT to create a report for your boss or Microsoft Copilot to generate a birthday greeting for your kid. Wielding AI tools, we’re told daily, is a non-negotiable part of the future of work.
Less talked about is the AI literacy needed to assess whether AI is giving you the right answer or, more accurately, the fairest or most comprehensive answer you are seeking.
By now, it’s clear that AI has its limitations, especially the well-documented misinformation and inaccuracies spewed from a hallucinatory machine mind. Yes, that includes putting glue on pizza.
However, less obvious but perhaps even more dangerous is the way AI quietly injects its inherent biases into its responses to commonly asked questions.
Some extreme examples might be obvious, but without understanding how an AI has been trained, users cannot safely assume that everything it spews out is the gospel truth.
Unsurprisingly, a new Singapore study of cultural biases in popular large language models (LLMs) has found that half of the AI answers generated were biased.
The AI chatbots based on these LLMs said women were most likely to be scammed online, the study found. Plus, enclaves in Singapore with large immigrant groups were likely to have the most crime, the chatbots told researchers.
Asked to create a script for Singaporean inmates reflecting past vices, the LLMs came up with “Kok Wei” for a character jailed for illegal gambling, “Siva” for a disorderly drunk and “Razif” for a drug abuse offender, reinforcing racial stereotypes, reported The Straits Times.
Meta’s Llama 3, Anthropic’s Claude 3.5, Aya (by research lab Cohere for AI) and Singapore’s Sea-Lion were the four LLMs tested by AI auditing firm Humane Intelligence, in partnership with the Infocomm Media Development Authority (IMDA) in Singapore.
These LLMs were tested in an open call to the industry. Notably, Google’s Gemini and OpenAI’s ChatGPT were not involved.
Surveys like the Singapore one could nudge AI companies to further refine their models to be more cognizant of the cultural biases held by their human creators. However, AI is like any man-made product – never perfect.
Indeed, in a bid to be inclusive, AI models over-corrected last year as well. Google famously generated images of African Americans and East Asians in Nazi uniforms when prompted to show what Nazi soldiers looked like.
Now, with Donald Trump as president and Elon Musk constantly browbeating Big Tech firms into a hard lurch to the right, will AI revert to more traditional racist tropes in the months ahead?
Don’t forget about DeepSeek, either. The China-made LLM is fast and good, say many early users, but it has to follow the country’s laws by censoring anything the government there doesn’t allow, including any mention of the Tiananmen Square massacre. Ask about American civil rights abuses, though? No problem, it produces a comprehensive article!
Make no mistake, AI will always be biased. It may try to be reasonable – at times, chatbots can sound like they are fair and open-minded – but it is ultimately a reflection of the data it is trained on and the adjustments made by AI companies.
Just like media outlets, AI companies need to appear unbiased and fair in delivering information to users; and like media outlets, they are never perfect and suffer from the same biases and blind spots.
This is why there are tools to help assess whether a news source is trustworthy. Social media companies, which have helped spread much of today’s fake news, from anti-vaccine lies to conspiracy theories, have been forced to include features like community notes or links that help users verify media outlets.
What about AI? Unfortunately, we are still in uncharted territory. The guardrails are only as good as what the AI companies tell their users. No, “trust us” isn’t good enough.
Of late, there have been improvements, such as Web links offered as references to support an answer. However, a quick check will tell you that not every link is relevant and sometimes, these links even lead to documents with no direct connection to what the AI is telling you.
This means AI users have to learn to spot the fake from the real, the biased from the reasonable. They have to be wary of what they find from AI, just like they should check the veracity of the news or information they are getting from media outlets.
AI literacy has to be an extension of media literacy, where you always check the source of any information for its accuracy and bias.
In media, some outlets lean left or right, some are more fact-based and others are downright fake news generators. AI should be viewed by its users with the same skepticism and constant vigilance for bias.
As for those stereotypes of Chinese as gamblers, Indians as alcoholics and Malays as drug users in Singapore? Where do they come from?
If you really want to explore further, you can go beyond what AI gives you and look up trusted sources of data. The Singapore government, for example, publishes information on drug abuse, alcoholism and gambling.
I found that without leaving my desk while writing this article. That’s the good news – with all the information out there, it’s not hard to go deeper and find a better answer than an AI’s quick summary.