



Over 80% of Singaporeans encounter harmful online content, majority back stronger regulation
More than 84% of Singapore residents have encountered harmful online content in the past year, while one in three (33%) reported experiencing harmful online behaviour, according to the Ministry of Digital Development and Information (MDDI).
The findings, drawn from the "Perceptions of digitalisation survey" and the "Smart nation policy perception survey" conducted earlier this year, revealed that more than three in five respondents (62%) supported stronger regulation to protect users from online harms, even if it meant less online freedom.
The most frequently encountered harmful content included materials supporting illegal activity such as scams (33%), followed by sexual content (26%), violent content (24%), cyberbullying (20%), and content causing racial or religious tension (16%).
Respondents most often encountered such content on Facebook, YouTube, and Instagram, as well as on messaging apps such as WhatsApp and Telegram.

Among those who reported harmful online behaviour, catfishing was the most common, mostly occurring on WhatsApp (56%) and Facebook (41%). Other frequently reported behaviours included unwanted sexual messages and online harassment.

Despite the high exposure, most users chose not to report harmful content or behaviour. Around 82% skipped or closed the content, while nearly a quarter (23%) took no action. In cases of harmful online behaviour, nearly four in five (79%) blocked the user, and nearly half (46%) reported the content or user to the platform.
MDDI noted that such inaction may stem from past experiences where platforms were slow to act on user reports. According to the Infocomm Media Development Authority’s (IMDA) "Online safety assessment report 2024", most social media services took an average of five days or more to respond to harmful content reports, significantly longer than stated in their public commitments.
To tackle online harms, the government has rolled out several measures, including IMDA’s Code of Practice for Online Safety – Social Media Services introduced in July 2023, and a new Code of Practice for Online Safety – App Distribution Services, which came into effect in March 2025.
These codes require designated social media services and app stores to implement safeguards such as content moderation systems and age assurance measures to protect young users.
A new Online Safety (Relief and Accountability) Bill will also be tabled in the first half of 2026 to establish an Online Safety Commission, which aims to provide timely help to victims and hold perpetrators accountable. MDDI added that public education efforts will continue, including workshops and webinars with community and industry partners to promote safer digital habits.
Major platforms are also tightening safeguards as regulatory scrutiny around online safety continues to grow. Earlier this month, Google announced at its Safer with Google event that it will roll out age assurance solutions across its products in the first quarter of 2026. The new feature is designed to better distinguish between younger users and adults, ensuring that those under 18 receive more age-appropriate experiences.