At one point during a hearing on fake news yesterday, it seemed as if Singapore’s Home Affairs Minister K Shanmugam was speaking up for Facebook users.
Why hadn’t the social media giant come forward earlier to tell users they faced a data breach, he asked a Facebook representative during a three-hour discussion.
In reply, the company’s vice-president for public policy for Asia-Pacific, Simon Milner, admitted it had made a wrong call and should have let people know.
Head honcho Mark Zuckerberg had also “owned that decision” and pledged to better secure users’ data, he added.
The to-and-fro before the Select Committee on Deliberate Online Falsehoods comes right smack in the middle of the worst scandal yet for Facebook.
It finally owned up to a serious data breach last week, after The New York Times and The Guardian reported that a third-party data firm, Cambridge Analytica, had tricked users into sharing their data and misused it to help swing the 2016 United States presidential election in Donald Trump’s favour.
As much as the incident is about data protection, it also shows how difficult it is to trust or compel Facebook to root out fake news, or the players that rely on its algorithms to target users.
Facebook today is an advertising medium more powerful than it wants to admit, so powerful that it is struggling to prevent its platform from being weaponised for political ends.
A person who hates a political party can be targeted with messages about how unfairly he has been treated under the current system. These “dark Facebook posts” – items seen only by those selected as part of an ad campaign – are effective to a fault because they exploit existing prejudices with laser precision.
The same can be said of a person with strong patriotic feelings, who can be counted on to vote another way: keep blasting him with images of how well the country is doing and remind him that change is dangerous.
The crux of the issue is whether Facebook did enough to curb outfits such as Cambridge Analytica, a political consultancy linked with Trump. Under the glare of daylight, the social media giant now admits it did not.
But what else could it have done? It said it had suspended the offending company but did not verify that Cambridge Analytica had deleted the data it had collected earlier. As it turned out, that data was misused.
For every Cambridge Analytica that is found out, there may be 10 other companies effectively targeting Facebook users by simply making use of all the tools it provides – legitimately.
If you dive into your Facebook account settings, you’ll be surprised how much data you’ve been casually sharing, simply by liking a page or surfing to a site while signed in to your Facebook account.
If you haven’t opted out, advertisers who pay to reach people with your interests will keep targeting you. This is legitimate, by the way.
What Cambridge Analytica did was to mislead some 50 million users and harvest their information without their permission.
Perhaps Facebook could have told users earlier that their data was in danger of being misused, as Shanmugam suggested. But would that make people quit Facebook or delete their accounts, as many are threatening to do now?
That’s unlikely, because users are so invested in the social network. There are many guides to reduce one’s exposure by sharing less with advertisers, but to leave Facebook altogether is hard because of the network effects it has accrued.
Even for governments that are seeking to use legislation to rein in fake news, Facebook is a double-edged sword. It is a tool they themselves use to reach out to citizens.
Clearly, Facebook can no longer claim to be the innocent social network it once was, say, a decade ago. Today, its powerful analytical tools, plus its dominant position, mean it cannot simply say that it’s a neutral party to what people do on its platform. It has a “moral obligation”, as its regional representative said yesterday.
Some countries, such as Germany, have forced Facebook to play a more active role in combating fake news, say, by rooting out fake accounts spreading it.
Equally, there is danger in depending on a commercial entity to be the arbiter of truth. Not only is its ability to police information and accounts imperfect, it also does not have to answer to its users, who have few alternatives today.
There is no better way forward than for users to be aware of what they are sharing and to discern real news from the fake themselves. Ultimately, they have to take the lead.
Whether this is through education about sharing one’s data, or boosting media literacy to spot fake news, the responsibility has to lie with users.
Governments can and should compel social media firms to be more transparent about their practices and whatever new tools they deploy for advertisers. These are just not clear enough today.
At the same time, making compliance too onerous for Facebook would render the effort impractical. Nor should the social media firm be relied on to root out fake news, because it cannot do so all the time.
At yesterday’s hearing, Shanmugam also pointed out that Singapore did not have the levers that the United States or European governments might have over Facebook to compel it to take on this role.
That should be the starting point when discussing how much the likes of Facebook can and should do here. With their network and reach, they have to do more, but the rest depends on users learning to be savvy.
CORRECTION at 23/03/2018, 7:23pm: In an earlier version of the article, we misspelt Simon Milner’s name. This has been corrected. We are sorry for the error.