Singapore’s scam landscape looked better on paper in 2025 than it had in years. Total scam and cybercrime cases fell 24.8% to 41,974, and funds lost dropped from S$1.1 billion in 2024 to S$913.1 million. But beneath those headline figures, a more concerning trend emerged: the median loss per case rose from S$1,389 in 2024 to S$1,644 in 2025.
For Pamela Ong, Country Manager of Singapore and Asia at ESET, that divergence between volume and severity was not a contradiction. It was a signal.
“What we’re seeing is not necessarily a reduction in attacker activity, but a change in approach,” Ong said in an email interview with HardwareZone on the sidelines of Gitex AI Asia 2026. “Instead of relying purely on volume, attackers are focusing more on precision. They are using better reconnaissance, more convincing impersonation, and increasingly AI-generated content to target victims more likely to yield higher returns.”
Her comments came as ESET showcased its upcoming AI protection capabilities at Gitex AI Asia 2026, held in April in Singapore. The new features, first demonstrated at RSAC 2026 and set to launch later this year, were designed to address a threat landscape that Ong described as evolving faster than most organisations were prepared for.
Phishing remains a key entry point, but AI enhances its effectiveness.
Pamela Ong, Country Manager of Singapore and Asia at ESET
Photo: HWZ
Ong was careful to point out that the fundamentals of the threat landscape had not changed as dramatically as some of the technology headlines might suggest. “Most successful breaches still begin with socially engineered emails and malicious links,” she said. ESET’s own data showed that phishing remained the leading cause of corporate breaches in Singapore, accounting for close to a third of detected threats. “This tells us attackers are still relying on familiar entry points but are executing them more effectively.”
What had changed substantially was the quality of those attacks, and AI was the reason. In the first five months of 2025, a third of phishing emails were found to contain high volumes of text, a hallmark of large language model generation. Ong said the implications for how people judge whether a message is legitimate were significant and largely uncomfortable.
“Large language models have made it much easier for attackers to produce well-written, coherent, and contextually relevant messages at scale. As a result, many phishing emails today look professional and credible, which removes one of the most obvious warning signs that people previously depended on.”
The practical advice she offered was a shift from visual assessment to behavioural interrogation. “Instead of focusing on how a message looks, it’s more important to focus on what it is asking you to do. Is it creating urgency? Is it asking you to click a link, download an attachment, or share sensitive information? Does the request match the sender’s usual behaviour?” Verification, she said, was now the more reliable defence, not grammar or formatting.
AI-powered scam campaigns are scaling and becoming harder to contain.
ESET’s H2 2025 Threat Report had flagged a range of AI-assisted threats, including the Nomani scam network, which used AI-generated videos to spread fraudulent content across multiple platforms simultaneously. Ong described the scale of automation behind these campaigns as one of the more striking findings in recent threat intelligence.