Online disinformation is an industry that needs regulation, says boffin

Malaysian fake news laws didn't work well, so Big Tech should have to do better at spotting and stopping bad actors

Society should treat disinformation as the product of an industry worthy of regulation, not a crime committed by individuals, according to Dr Ross Tapsell, a senior lecturer and researcher at the Australian National University's College of Asia and the Pacific.

"Trust in media is on the decline," said Tapsell, who further iterated that this trend was worldwide, but heightened in Southeast Asia and Malaysia. The researcher added that those who do not trust the media will turn to unofficial channels to seek out information and news.

Tapsell serves as the director of ANU's Malaysia Institute – one of the world's leading academic institutions for the study of Malaysia, established both to further an understanding of Malaysian studies and to foster relationships between Australia and Malaysia.

Malaysia is considered to be at the forefront of digital media industry advancement, and some studies say the country has the highest per-capita WhatsApp usage in the world. With that high usage, and a cultural tendency to discuss public issues in digital realms, comes a trend of campaigning and marketing through WhatsApp and similar online platforms. And with such operations comes negative campaigning that often falls into the category of misinformation and disinformation.

In 2018, Malaysia introduced an anti-fake news bill, the first of its kind in the world. According to the law, those publishing or circulating misleading information could spend up to six years in prison.

The law put online service providers on the hook for third-party content, and anyone could make an accusation. This was problematic because fake news is often neither concrete nor definable, existing in an ever-shifting grey area. Any fake news regulation brings a host of freedom of speech issues with it, and raises questions about how the law might be used nefariously – for example, to silence political opponents.

Tapsell described problems he saw with the bill from the ground level in Malaysia, including citizens threatening others online with police action and a deterioration of public discourse.

The law was repealed in 2019 after it came to be seen as an instrument to suppress political opponents rather than one protecting Malaysians from harmful information. But early in 2021, Malaysia used emergency powers to again make spreading fake news about COVID-19 an offence, punishable with fines of up to RM100,000 (US$23,800) and possible imprisonment.

Having seen this all play out before, Tapsell argued that disinformation should not be treated as an illegal act, but rather an industry – with players having their own motives, entrepreneurial or otherwise. And like any industry, it should be regulated.

"Rather than adopting the common narrative of social media 'weaponisation', I will argue that the challenges of a contemporary 'infodemic' are part of a growing digital media industry and rapidly shifting information society," wrote Tapsell in the description of his seminar.

But how to regulate such a thing? The ANU researcher said solutions are not found in laws and increased police intervention, but rather "through creating and developing a robust, critical and trustworthy digital media landscape".

One example lies within the electoral process, where watchdog agencies have taken on the role of monitoring digital campaigns during election periods. Tapsell called for such efforts to be properly funded, in the same way that TV advertising is regulated.

As for Big Tech's role in calling out fake news, Tapsell believes the industry could be more effective at the task than other existing actors.

"It's not a bad thing to name and shame bad actors, and it's a good thing Facebook is starting to do these things," said the boffin. He explained that while local journalists and researchers could be retaliated against for exposing fake news purveyors, it was harder to retaliate against a large company and platform, and therefore safer for Big Tech to take on the role.

He recognised the efforts social media platforms make in identifying and stopping coordinated behaviour, and in working to make such actions less lucrative, but felt there was still more to do.

"They still rely too much on independent researchers in the field to expose these," said Tapsell. "It's not our job to do this – it's the job of the big companies in Silicon Valley." ®