If my first day at Black Hat was all about insider threats and ransomware, my second was focused on attackers. I had the opportunity to speak with a number of experts about how attackers are hitting systems and, arguably more importantly, why they are doing so.
So what motivates cybercriminals? Experts from Deloitte told me that, unsurprisingly, they are motivated by money. When I asked about penetration of critical infrastructure systems, an expert from FusionX said that adversaries may be “preparing the battlefield” in case they ever go to war with the United States.
And when I asked why an adversary would release information stolen during the DNC hack, an expert theorized that those adversaries were trying to send a message to Hillary Clinton – rather than trying to discredit her. It was truly an eye-opening day.
The day’s briefings covered social engineering attacks, which rely on tricking people into giving the attacker information or access, rather than on purely technical means of achieving the same goals.
For example, take telephony scams: robocalls that trick people into giving up payment information by posing as a legitimate caller, such as a financial institution or a political campaign. Scams like this reportedly cost Americans about $320 million every year.
Aude Marzuoli, a presenter at Black Hat, developed a machine learning algorithm that helps determine which robocalls are scams, and even which scam campaign they are part of, based solely on the audio content of the call itself. Arguably the most interesting conclusion Marzuoli reached with this technique is that 51% of all malicious robocalls originate from just 38 distinct scam infrastructures.
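Marzuoli did not walk the audience through her exact pipeline, but the general idea (reduce each call recording to an acoustic fingerprint, then cluster the fingerprints so that calls from the same infrastructure land in the same group) can be sketched in a few lines of Python. The MFCC features, DBSCAN clustering, file names, and parameters below are my own illustrative choices, not her implementation.

```python
# Minimal sketch: group robocall recordings into likely campaigns by
# clustering acoustic fingerprints. MFCC + DBSCAN are stand-ins here,
# not necessarily the presenter's actual pipeline.
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

def fingerprint(path, sr=16000, n_mfcc=20):
    """Reduce one call recording to a fixed-length acoustic fingerprint."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Mean and std of each coefficient over time -> 2 * n_mfcc values
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def cluster_calls(wav_paths, eps=1.5, min_samples=3):
    """Return a campaign label per recording; -1 means 'no cluster found'."""
    features = StandardScaler().fit_transform(
        [fingerprint(p) for p in wav_paths]
    )
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# Usage (hypothetical file list):
# labels = cluster_calls(["call_001.wav", "call_002.wav", "call_003.wav"])
# Recordings that share a label likely come from the same scam campaign.
```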
There was also a discussion of spear-phishing, a type of attack that uses tailored messages to trick individuals into opening infected attachments or clicking on infected links. Once that happens, the spear-phishers are usually able to gain a foothold in the target system in order to pursue their goals.
Why, exactly, do people click on these links in the first place? One presenter, Arun Vishwanath, found that while companies spend a great deal to train people not to open strange email attachments, a significant percentage still will. The key, he found, is not a lack of knowledge but differing cognitive approaches to everyday work behaviors. Factors as varied as personal beliefs about cybersecurity, the cognitive shortcuts people use when assessing new information, and even work habits all play a role in determining how susceptible someone is to a spear-phishing attempt.
This information was fascinating in its own right, but it became even more compelling when combined with one of the most interesting briefings I saw at Black Hat: “Weaponizing Data Science for Social Engineering.”
In this briefing, two researchers, John Seymour and Philip Tully, demonstrated how they built a fully automated spear-phishing bot that selects and attacks targets through Twitter. Essentially, the bot identifies relatively high-value people using their engagement metrics on Twitter, reads their past tweets, and uses that information to craft a new tweet sent directly to the target.
That tweet is about a topic the target cares about (it is built from the target’s own tweets, remember) and contains a shortened link that can be used to deliver a malware payload. The approach effectively combines the success rate of spear-phishing with the ease of regular phishing.
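The talk focused on results rather than code, but the first step they described, picking targets worth the effort based on engagement metrics, is simple enough to sketch. The account fields, weights, and scoring below are hypothetical illustrations of that selection idea only, not the researchers’ actual tool; the tweet-generation and payload-delivery pieces are deliberately omitted.

```python
# Minimal sketch of the target-selection idea only: score accounts by
# simple engagement metrics and keep the highest-scoring ones.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    followers: int
    avg_retweets: float   # average retweets per recent tweet
    avg_replies: float    # average replies per recent tweet

def engagement_score(a: Account) -> float:
    """Crude 'value' score; the weights are illustrative, not from the talk."""
    return 0.5 * a.followers + 100 * a.avg_retweets + 50 * a.avg_replies

def top_targets(accounts, n=10):
    """Rank accounts by engagement score and return the n highest-scoring."""
    return sorted(accounts, key=engagement_score, reverse=True)[:n]

# Usage with made-up accounts:
sample = [
    Account("alice", followers=12000, avg_retweets=8.0, avg_replies=3.0),
    Account("bob", followers=400, avg_retweets=0.2, avg_replies=0.1),
]
print([a.handle for a in top_targets(sample, n=1)])  # ['alice']
```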
Using Twitter as a spear-phishing platform was new to many people in the audience, including me. The researchers even had audience members tweet at the bot, and it generated some extremely convincing tweets right in front of us.
If there’s one thing I learned from my two days at Black Hat, it’s that you really can’t trust anything on the Internet.