Singapore was found to be the most targeted country in Southeast Asia for business email compromise scams, with more than 1 in 4 companies experiencing such scams in the past year. These scams involve attackers initiating or intercepting communication with a company decision-maker who can release funds or conduct wire transfers.
Today, in most of these financial fraud incidents, cybercriminals engage directly with human beings to convince them to divulge sensitive information, send money, or compromise their credentials. This is referred to as ‘social engineering’, and it’s one of the most effective techniques criminals use. In fact, in the past year, there’s been a noticeable shift toward financially motivated cybercrime (80%), with 35% of these incidents occurring as a result of human error.
Imagine being your company’s financial controller. You’ve made sure that you’ve paid a vendor, only to meet that vendor later on and be told that payment hasn’t been received. A few weeks ago, you updated the vendor’s payment details and made a transfer after being instructed to do so via phone and email. You suddenly realise that you’ve been the unsuspecting victim of a multi-phased social engineering attack.
Another scenario would be an email from your boss saying: “Hey Jasmine, I just got a call from Amanda, our contact at Singapore Sporting Goods. They didn’t get our cheque and are withholding the shipment we talked about last Monday. I need you to transfer the money today. Attached is the amount and wire details. Know you have a spin class tonight, but you need to get this done before you leave.” With personal references and familiar language, you assume the email is legitimate and promptly transfer the money.
To address such threats, the Monetary Authority of Singapore recently issued a set of guidelines aimed at protecting users of electronic payments from fraud, errors and security threats. Yet as these incidents continue to happen, two questions keep surfacing: who should be held responsible, and what could have been done to prevent them? We’ll aim to address both below.
All the user’s fault?
It’s always easy to berate a user for falling prey to social engineering attacks. But in truth, no one – not even cybersecurity experts – can assuredly say that they are immune to social engineering. It’s therefore more important to consider the context behind why these things happen.
Cybercrime continues to evolve and has become a full-fledged industry in its own right. Criminals targeting firms in Singapore, for example, no longer just reuse exploit kits from attackers in the US or Europe. Just as an ambitious enterprise would find a local partner to expand overseas, global syndicates now collaborate with local parties to create phishing websites in native jargon or to cash out wire transfers locally. In addition, creative criminals will always find a way to obtain the information they need to initiate hyper-relevant communications.
Are we leaving employees to sink or swim?
On the need for awareness, we have observed that many such education programs train users on what to be afraid of, but not necessarily on how to do their jobs with confidence.
Companies must analyse and pinpoint which processes produce loss. If the accounting department gets an email from a vendor that says: “I have changed my bank account number,” what are the internal processes to verify that this is true? Why would an accountant assume the message came from the vendor when it might have been from an imposter?
Rather than immediately reacting to an urgent wire transfer request at 10pm that used familiar language, the accountant should be trained in an established process that verifies payment credentials. It might take 10 to 15 additional minutes, but it’s much better than trying to chase down thousands of dollars that you probably won’t get back.
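The verification step described above can be sketched in code. The following is a minimal, purely illustrative example — the vendor names, account numbers, and rule are all invented, not a real system or the authors’ prescribed process. The key idea it captures: any request quoting bank details that differ from those already on file is held until confirmed out-of-band, via a phone number from the existing contract rather than one supplied in the email itself.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor: str
    account_number: str  # account the requester asks us to pay into
    amount: float

# Bank details previously verified and kept on file (illustrative data)
ACCOUNTS_ON_FILE = {"Singapore Sporting Goods": "123-456-789"}

def requires_callback(req: PaymentRequest) -> bool:
    """Flag any request whose account differs from the one on file."""
    return ACCOUNTS_ON_FILE.get(req.vendor) != req.account_number

def process(req: PaymentRequest, callback_confirmed: bool = False) -> str:
    """Hold mismatched requests until verified via a known phone number."""
    if requires_callback(req) and not callback_confirmed:
        return "HOLD: verify new bank details via known phone number"
    return "RELEASE: payment approved"
```

Under this sketch, the 10pm “I have changed my bank account number” email lands in the HOLD branch automatically — the accountant’s judgment under pressure is no longer the only control.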
Organisations must therefore teach users the right way to do things and practise risk management. You cannot simply throw users into a position where they can cause damage or injure themselves and leave them to sink or swim. We’d sum up this necessary risk management process with a scuba diving training analogy:
• Verify that people have received an adequate level of training
• Put them in a safe environment and test their skills before letting them out into open water
• In open water, limit how much damage they can do by taking them only to safe areas
• Ensure extra assistance is on hand to monitor students
• Have insurance in case accidents happen
Should we blame the security team?
If a single user’s error can cause significant damage and financial loss, then that error is a symptom of the problems throughout the security architecture. In the same way security teams don’t solely rely on firewalls to keep out attacks, they can’t rely on users to be the ‘human firewall’. A layered approach that incorporates the following must be considered:
• Endpoint systems that can detect phishing or malware attacks
• Limits on unnecessary user access, so users are never in a position to compromise information
• Monitoring of the company’s critical databases and ‘crown jewels’ for misuse and abuse
• Pre-planning and preparation so that, in the event of an incident, the fire can be put out as quickly as possible
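The second layer above — limiting unnecessary user access — amounts to a deny-by-default permission check. Here is a minimal sketch; the roles and permission map are invented for illustration, not taken from any real product:

```python
# Deny-by-default access control: an action is permitted only if the
# user's role explicitly grants it. Roles and actions are hypothetical.
PERMISSIONS = {
    "accountant": {"view_invoices", "submit_payment"},
    "intern": {"view_invoices"},
}

def allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in PERMISSIONS.get(role, set())
```

The design choice that matters is the default: an unknown role or unlisted action is denied, so a phished intern account simply cannot release funds.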
Security teams must also muster the courage to tell their C-suite executives what they need to know. For example: “We need your approval to implement monitoring software around your system to reduce the likelihood of a successful attack. Whilst you might not like these tighter controls, we need to protect the company’s most important assets, and you are a pathway to those assets.”
A good organisation would have established processes and controls to ensure the user is not in a position to make a mistake or take the bait. If the user is in a position to make mistakes due to the nature of their role, the company must then specifically lay out how they should do their job and the penalties for not doing so.
At the end of the day, the prevention of financial crimes using social engineering attacks is a collective responsibility that involves integrating technology, process and the right type of awareness training. Organisations that get this right will find that cybersecurity isn’t a showstopper, but an enabler for better businesses instead.
The views expressed in this column are the author's own and do not necessarily reflect this publication's view, and this article is not edited by Singapore Business Review. The author was not remunerated for this article.
Ira Winkler, CISSP, is President of Secure Mentem and Author of “Advanced Persistent Security”. He is considered one of the world’s most influential security professionals and was named “The Awareness Crusader” by CSO magazine in receiving their CSO COMPASS Award. Ira has designed, implemented and supported security awareness programs at organizations of all sizes, in all industries, around the world. He is an established speaker at major industry events, having addressed “30+ Years of Security Awareness Efforts: What Have We Learned?” and “Holistically Mitigating Human Vulnerabilities and Attacks” at RSA Conferences in San Francisco and Singapore.
Etay Maor is an Executive Security Advisor at IBM Security, where he leads security and fraud fighting awareness and research. Previously, Etay was the Head of Cyberthreat Research at RSA Labs where he managed malware research and intelligence teams and was part of cutting-edge security research. He is an established speaker at major industry events like RSA Conference in Singapore, leading key discussions like “White Hats' and Cybercriminals’ New Tool: Artificial Intelligence” and “Facing Financial Fraud Head On: Cybersecurity Best Practices”.