A large majority of Singapore residents have come across harmful material or faced troubling behaviour online in the past year, according to new findings from the Ministry of Digital Development and Information (MDDI).

The twin surveys, released on Friday, October 10, reveal deep concerns about digital safety and a growing demand for stronger regulation, even at the cost of reduced online freedom.
Widespread exposure to harmful online content
More than four in five Singapore residents — 84 percent — reported encountering harmful online content in the past year. The ministry’s Perceptions of Digitalisation Survey found that much of this exposure happened repeatedly, at least a few times a month.
The most common type of harmful content was material that supported illegal activity, including scams or the sale of prohibited items.
About a third of respondents, or 33 percent, said they came across such content frequently. Sexual and violent material followed closely at 26 and 24 percent, respectively. Cyberbullying affected one in five, while 16 percent encountered content stirring racial or religious tensions.
Facebook, YouTube among top sources
Respondents said they came across harmful material across a wide range of platforms. Facebook was named the most common site where people encountered such content, at 57 percent. YouTube and Instagram followed with 46 and 41 percent, respectively.
Short-form platforms also featured, with TikTok cited by 36 percent, X by 15 percent, and Reddit by 13 percent.

Messaging apps were not spared. WhatsApp was identified by 38 percent of respondents as a channel where they encountered harmful content, while Telegram was mentioned by 22 percent.
MDDI noted that the prevalence could reflect the wide reach of these services, but said the spread of harmful material across nearly all popular platforms remained a serious concern.
Catfishing tops list of harmful online behaviours
About one in three people said they had personally experienced harmful online behaviour in the past year. The most common of these was catfishing — a form of deception in which someone creates a fake identity online to trick others, often to gain money, emotional control or romantic attention.
According to the survey, 71 percent of those who experienced harmful online behaviour said they were victims of catfishing. The practice was most common on WhatsApp, where 56 percent reported it, and Facebook, where 41 percent did.

Experts say catfishing thrives on emotional vulnerability. Victims often believe they are speaking to someone genuine, only to discover that the person’s profile picture, name or entire persona was fabricated. The experience can be emotionally distressing, leading to embarrassment, loss of trust and, in some cases, financial exploitation.
Unwanted sexual messages were the next most common harmful behaviour, experienced by 27 percent of respondents, while online harassment was reported by 16 percent. Other incidents included identity theft and threatening messages.
Few users report harmful content
While harmful content and behaviour are widespread, many users do not report them. More than four in five respondents — 82 percent — said they simply skipped or closed the harmful content. Nearly a quarter took no action at all.
Only around one in three said they reported the content or user to the platform. Among those who experienced direct harm, such as catfishing or harassment, blocking the offending user was the most common response, done by 79 percent.
Past experiences with slow or ineffective responses may have discouraged users from reporting. The Infocomm Media Development Authority’s (IMDA) Online Safety Assessment Report 2024 found that most major platforms took five days or longer to act on user reports of harmful content—much slower than the timelines they publicly claim in their annual reports.
Push for stronger online safety rules
Public sentiment now leans towards tougher laws. The Smart Nation Policy Perception Survey found that 62 percent of respondents, or about three in five, support stronger regulation to protect users from online harm, even if it means less freedom in the digital space.
MDDI said the government, industry and community are already working together to create a safer online environment. The IMDA introduced the Code of Practice for Online Safety – Social Media Services in July 2023, requiring major social media platforms to set up systems to shield users, especially children, from harmful material.
In March 2025, a new Code of Practice for App Distribution Services came into effect, requiring app stores to reduce users’ exposure to harmful content and implement age assurance systems. These safeguards are to be fully operational by March 2026.
A new Online Safety (Relief and Accountability) Bill is also expected to be tabled by the first half of 2026. The Bill would establish an Online Safety Commission to help victims obtain timely assistance and to hold perpetrators accountable.
The Perceptions of Digitalisation Survey was conducted from November 2024 to February 2025, while the Smart Nation Policy Perception Survey ran from March to May 2025. Each survey involved 2,008 Singapore citizens and permanent residents aged 15 and above, and was representative of the resident population by gender, age and race.