In 2017, famous physicist Stephen Hawking warned that AI could be the last big event in human history unless we learn how to avoid its risks. Like him, I am concerned about the dangers of AI going awry.
Make no mistake – AI is transformative and can improve business operations on multiple fronts, particularly in automating work processes, detecting fraud, and running analyses. As Singapore works towards becoming a Smart Nation, many initiatives are already in place across different industries to encourage and improve the use of AI. The AI Singapore (AI SG) national programme was set up in 2017, under which up to $150m will be invested over five years to catalyse and boost the nation's AI capabilities; come 2023, all ministries and their related agencies will be on board with at least one AI project. Singapore has indeed come a long way in the development of AI technology. In the healthcare industry, deep learning models have been developed to pinpoint the roots of cancer; there are also trials of self-driving cars on the roads, as well as uses of AI in detecting drowning incidents and fraudulent activities related to SkillsFuture, and in the speech recognition and chatbot systems we have become so familiar with in daily life.
In the field of cybersecurity, AI can help organisations deploy advanced solutions that better identify, quarantine, and remediate malware and other threats without straining IT resources. For instance, AI has proven effective in helping a local private healthcare institute avoid 90% of targeted attacks involving malware or abnormal behaviour within its systems, and we expect such technology to gain popularity and become more holistic in the years to come.
All these advancements are encouraging, and I truly believe that AI can change our society for the better. But there are always two sides to every coin, and we need to be ready for both. That is why we need to identify the potential risks of AI, employ the best possible practices and management, and prepare for the consequences in advance.
When AI goes awry: identifying the risks and consequences
There are several potential risks associated with the use of AI technology.
If and when cybercriminals harness AI for malicious activities, we can expect to see harder-to-detect malware, more precisely targeted threats capable of very destructive, network-wide infections, more convincing clickbait, and more cross-platform malware.
Preparing for the possible implications of AI-weaponisation: best practices and management
Preparing for a common threat requires a common approach where all parties play an active role. This includes governments, international regulators and industry bodies, security experts as well as businesses of all types.
Artificial intelligence strategies, frameworks, and R&D resources need to be put in place. Several countries, including the US, have already developed AI strategies that aim to promote research and development while protecting the safety and privacy of their people. In Asia, Singapore is among the first to introduce a framework guiding private sector organisations on the ethical use of AI solutions.
Whilst this is a good start, no clear path forward has been outlined for the technical standards that would minimise AI risks, and there are no international standards to promote and protect these priorities. With countries like the US and Singapore looking to lead the way, a more robust plan for developing standards and regulation must be put in place to prevent attacks on trusted systems.
The private sector ought to look towards upskilling or hiring talent adept at both the technology behind AI and the cybersecurity know-how needed to manage incidents caused by AI. This will enable businesses to fully leverage AI's potential benefits, whether to develop smarter voice assistants, build more sophisticated cybersecurity threat detection, enhance automation, or serve many other uses as the technology advances.
Overall, we need to step up our efforts and be ready for the possible implications of AI-weaponisation before it becomes the next global threat. Until that day comes, we must keep learning and fine-tuning the AI playbook along the way.
The views expressed in this column are the author's own and do not necessarily reflect this publication's view, and this article is not edited by Singapore Business Review. The author was not remunerated for this article.
Jeff Hurmuses is the Area VP for Asia Pacific at Malwarebytes. He leads Malwarebytes' Asia Pacific team in proactively protecting people and businesses against dangerous threats such as malware, ransomware, and exploits that often escape detection. With 30 years of experience in the IT industry, he understands the unique security needs of Small and Medium Enterprises (SMEs) in the Asia Pacific region, especially in the context of a rapidly evolving threat landscape that includes emerging threats such as under-the-radar malware and weaponised Artificial Intelligence.