INFORMATION TECHNOLOGY | Contributed Content, Singapore
Jeff Hurmuses

For Singapore's Businesses Tomorrow, it's AI vs. AI


In 2017, the renowned physicist Stephen Hawking warned that AI could be the last big event in human history unless we learn how to avoid its risks. Like him, I am concerned about the dangers of AI going awry.

Make no mistake – AI is transformative and can improve business operations on multiple fronts, particularly in automating work processes, detecting fraud, and running analyses. As Singapore works towards becoming a Smart Nation, many initiatives are already in place across different industries to encourage and improve the use of AI. The AI Singapore (AI SG) national programme was set up in 2017, with up to $150m to be invested over five years to catalyse and boost the nation's AI capabilities; come 2023, all ministries and their related agencies will be on board with at least one AI project.

Singapore has indeed come a long way in the development of AI technology. In healthcare, deep learning models have been developed to pinpoint the roots of cancer; self-driving cars are being trialled on the roads; and AI is being used to detect drowning incidents and fraudulent SkillsFuture claims, as well as in the speech recognition and chatbot systems we have become so familiar with in daily life.

In the field of cybersecurity, AI can give organisations advanced solutions that better identify, quarantine, and remediate malware and other threats without straining IT resources. For instance, AI has proven effective in helping a local private healthcare institute avoid 90% of targeted attacks involving malware or abnormal behaviour within its systems, and we expect such technology to gain popularity and become more holistic in the years to come.

All these advancements are encouraging, and I truly believe that AI can change our society for the better. But there are always two sides to every coin, and we need to be ready to handle both. That's why we need to identify the potential risks of AI, adopt the best possible practices and management, and prepare for its consequences in advance.

When AI goes awry: identifying the risks and consequences
There are three potential risks associated with the use of AI technology.

  1. Stability: Problems with stability can arise quickly when machine learning bots pick up ‘bad’ behaviour from unsupervised learning and produce unfavourable results. It is not surprising that cybercriminals will leverage this vulnerability, probing security vendors and tools for their weakest links in order to trick detection systems into misidentifying threats. If attackers successfully take over a system, they damage the vendor's reputation in the market; if their attempt goes undetected, they could train the platform to churn out false positives.
  2. Social engineering: Cybercriminals already use AI for social engineering, churning out fake news and deepfakes that could make spam and phishing campaigns far more convincing.
  3. Privacy: Data privacy concerns also arise when bad actors gain control of systems to track every activity and every move by users of AI-enabled devices – consider the exposed Orvibo server breach, or the Google and Amazon devices found listening in on user conversations.

If and when cybercriminals unlock the backdoor to use AI for malicious activities, we could expect to see harder-to-detect malware, more precisely targeted threats that can lead to very destructive, network-wide malware infections, more convincing clickbait, as well as more cross-platform malware.

Preparing for the possible implications of AI-weaponisation: best practices and management
Preparing for a common threat requires a common approach where all parties play an active role. This includes governments, international regulators and industry bodies, security experts as well as businesses of all types.

Artificial intelligence strategies, frameworks and R&D resources need to be put in place. Several countries, including the US, have already developed AI strategies that promote research and development while protecting people's safety and privacy. In Asia, Singapore is amongst the first to introduce a framework guiding private sector organisations on the ethical use of AI solutions.

Whilst this is a good start, no clear path has been outlined towards technical standards that would minimise AI risks, and there are no international standards to promote and protect these priorities. With countries like the US and Singapore looking to lead the way, a more robust plan for developing standards and regulation must be provided to avoid attacks on trusted systems.

The private sector ought to look towards upskilling or hiring talent adept at both the technology behind AI and cybersecurity, able to manage incidents caused by AI. This will enable businesses to fully leverage AI's potential benefits, be it smarter voice assistants, more sophisticated cybersecurity threat detection, enhanced automation, or the many other uses that will emerge as this technology advances.

Overall, we need to step up our efforts and be ready for the possible implications of AI-weaponisation before it turns into the next global threat. Until that day comes, we just have to keep learning and fine-tuning the AI playbook along the way.

The views expressed in this column are the author's own and do not necessarily reflect this publication's view, and this article is not edited by Singapore Business Review. The author was not remunerated for this article.



Jeff Hurmuses

Jeff Hurmuses is the Area VP for Asia Pacific at Malwarebytes. He leads Malwarebytes' Asia Pacific team in proactively protecting people and businesses against dangerous threats such as malware, ransomware, and exploits that often escape detection. With 30 years of experience in the IT industry, he understands the unique security needs of Small & Medium Enterprises (SMEs) in the Asia Pacific region, especially in the context of a rapidly evolving threat landscape with emerging dangers such as under-the-radar malware and Artificial Intelligence.
