Author: Anup Ghosh


Expecting to Get Hacked? A Strategy to Turn the Tide


In a sign of the times, Dark Reading published an article, “Most Companies Expect to Be Hacked in the Next 12 Months”, describing the results of a survey of enterprise security professionals. This survey is another data point in the trend of rising disillusionment and defeatism in security. It is worth noting, but more importantly it is worth addressing head-on by changing the way security operations does its business and the underlying security technology base it uses.

Three points jumped out from the survey:

  1. The level of pessimism in the industry about being breached is at an all-time high: 52% of security professionals responded that they are likely to be breached in the next 12 months.
  2. An ever-creeping sense of defeatism is taking over the security industry. “Security is finally waking up to the new reality that it’s more of a question of ‘when’ than ‘if’.”
  3. Disillusionment with traditional endpoint security technologies is at an all-time high: 67% of respondents said they are evaluating their endpoint anti-malware software, either to augment it or to replace it altogether.

Clearly this level of pessimism and rising defeatism is symptomatic of a broken security technology base. This particular study found that 70% of companies surveyed said they were successfully breached in the last 12 months, with 22% saying they were successfully hacked six or more times.

The traditional security paradigm in the large enterprise space is Prevent, Detect, Respond. Prevention via traditional security approaches is widely acknowledged as a failed strategy. This in turn has given birth to the Detection industry, which has succeeded only in producing prodigious alerts and data dumps that under-staffed and over-worked security teams now have to wrestle with. Following the failures in Prevention and Detection, successful breaches have naturally ensued, which drives the real money-grabber – Response. Response teams, whether organic or out-sourced, are a huge expense in security operations. Even worse, Response comes after — and typically long after — a breach has occurred, and potentially after the loss of key data such as customer records or proprietary plans.

Fundamentally, I believe this survey points to the need to change the traditional paradigm of Prevent, Detect, and Respond to Contain, Identify, and Control.

Re-thinking the Security Strategy: Containment, Identification, Control

Containment is a core architectural strategy for mitigating damage from successful exploits of applications and compromised devices. Much as submarines are designed to compartmentalize hull breaches, containment strategies can compartmentalize successful network breaches. And just as a hull can withstand only so much force before it breaks, software applications and users are the weak links on enterprise networks that will often break under pressure and exploitation. Containment limits the damage from software exploits and from users who fall victim to spear-phishing and other online attacks. The key measure of success in containment is limiting the compromise of applications and devices so that no sensitive data is successfully breached.

Detection is really a euphemism for collection. Most detection strategies today simply collect voluminous data and put the onus of interpretation and analysis on humans who are ill-equipped to do so. In fact, what security ops teams really care about is not detection, but rather rapid identification of threats in their enterprise that have evaded traditional prevention mechanisms. Empowering security teams to rapidly identify compromised devices and adversaries on their networks is far more useful than simply collecting data. The key measure of success in identification strategies is reducing adversary dwell time on networks from weeks and months to minutes.

Response is inherently a reactionary and expensive activity. It is the most expensive security dollar spent, and it is almost always unbudgeted. Without adequately trained response teams, enterprises must out-source incident response, which means paying incredibly expensive rates to outside firms for only the temporary benefit of cleaning up after an incident. Rather than chase adversaries that have been colonizing the network for long periods of time, strategic security ops teams seek to gain control over network breaches by staying in front of the adversary as it attempts to move laterally through the network. Control strategies work hand-in-hand with Containment and Identification strategies to rapidly eradicate identified threats on the network.

Calling All Change Agents

We know that change is hard to implement in organizations. But we also know that doing the same thing over and over again while expecting a different result is the definition of insanity often attributed to Einstein. Clearly the survey referenced in the article points to despair and rising defeatism in security teams. We need to heed the warning signs and fundamentally re-think how we do security at the enterprise level, along with the aging security technology base in use today. Today’s strategic security leaders and change agents are adopting Containment, Identification, and Control strategies to enable security teams to regain the upper hand.

As always your comments and feedback below are welcome.


Anti-Vaxxers & Preventable (Cyber) Infections



Many of us are acutely aware of the measles outbreak in the US that started in Disneyland and has spread eastward. The title image from Vaccine Preventable Outbreaks Are Real shows historical data where measles outbreaks (in red) outside the US dominated measles outbreaks inside the US. Below is a 2015 heat map of measles outbreaks alone.

Clearly the measles outbreak in the US is dominating measles outbreaks worldwide in 2015.

For those of us who grew up with the Measles, Mumps, Rubella (MMR) vaccine as a mandatory, standard vaccine for school-age children, the spread of measles has been perplexing, as it was once considered a conquered disease in the United States, along with polio.

The Twitterati and public opinion have cast blame at parents who have chosen not to vaccinate their children against preventable diseases like measles. The so-called anti-vaxxers refuse to vaccinate their children largely out of fear, uncertainty, and doubt about potential adverse side-effects of vaccines. The effect is that unvaccinated children, when exposed to highly contagious diseases like measles, contract and spread them, putting the community at risk.

Vulnerable Apps & Preventable Infections

While many of us feel placed at risk by parents who choose not to vaccinate their children against preventable diseases, the same phenomenon occurs in cyber security today.

The most common example most security professionals relate to is patching known vulnerable software. The analogy seems apt at first blush. Take for example the distribution of exploits against vulnerable apps shown in the Kaspersky pie chart below.

The first thing that jumps out is unpatched (vulnerable) Oracle Java and browsers collectively account for 87% of all exploits Kaspersky sees (as reported in Kaspersky Security Bulletin 2014 report). Incidentally, Kaspersky notes that unpatched Java alone accounted for 90% of the exploits in 2013.

These unpatched vulnerable applications could be protected against known exploits by patching them, similar to vaccinating people can be effective in preventing disease. Likewise, we understand network compromise occurs when an intruder compromises a machine through a vulnerable app, and then uses this beach head to find other vulnerable machines on the network to compromise. In other words, one vulnerable member of the community of machines puts the others at risk, similar to unvaccinated members in human communities.
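This beachhead dynamic can be sketched as a simple reachability computation over a network graph: one unpatched machine exposes every unpatched machine reachable from it, which is the herd-immunity argument in code. The topology and host names below are hypothetical, purely for illustration.

```python
from collections import deque

def reachable_compromise(edges, patched, entry):
    """BFS over a network graph: an intruder landing on `entry` spreads
    to any adjacent unpatched machine, which becomes the next beachhead.
    Returns the full set of compromised machines."""
    if entry in patched:
        return set()
    compromised = {entry}
    queue = deque([entry])
    while queue:
        host = queue.popleft()
        for neighbor in edges.get(host, []):
            if neighbor not in patched and neighbor not in compromised:
                compromised.add(neighbor)
                queue.append(neighbor)
    return compromised

# Hypothetical flat network: one unpatched workstation exposes the rest.
edges = {
    "ws1": ["ws2", "fileserver"],
    "ws2": ["ws1", "fileserver"],
    "fileserver": ["ws1", "ws2", "db"],
    "db": ["fileserver"],
}
print(reachable_compromise(edges, patched=set(), entry="ws1"))
print(reachable_compromise(edges, patched={"fileserver"}, entry="ws1"))
```

With no machines patched, compromising one workstation reaches the entire network; patching (vaccinating) the fileserver alone contains the spread to the two workstations.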

In the reality of enterprise networks, patching Java and browsers on Patch Tuesday every month is simply not feasible for most organizations. Have you ever wondered why your company is running IE8 and Java 1.6? How hard can it be to patch them? It turns out that patching (vaccinating) Java and IE oftentimes creates adverse effects such as breaking enterprise apps. The anti-vaxxers in this case have a legitimate point about not vaccinating these apps with patches, because of the adverse reaction of breaking critical enterprise apps. Regardless of how you feel about security, you still want payroll to process, even if that means running unpatched Java.

Protecting Your Network Against Preventable Infections

We know that patching Java and Internet Explorer is hard in the enterprise. We also know that in many cases a window of opportunity exists where vulnerable apps are being exploited before patches are available. For instance, since January 2015, three known 0-day exploits (i.e., exploits against previously unknown vulnerabilities) have been actively used against Adobe Flash plug-ins in browsers. These exploits are already incorporated in exploit kits like RIG, Angler, and HanJuan, making for easy work for cyber criminals using malvertising on popular sites to deliver ransomware.

The Ponemon Institute recently published a study where they found:

Users’ insecure web browsers cause the majority of total malware infections. The web browser is a common attack vector that can severely impact their organization’s security posture. On average, a user’s insecure web browser is the cause of 55 percent of the total malware infections.

We also know that these infections are wholly preventable, both exploits against known vulnerabilities and exploits against unknown vulnerabilities (0-days). As proof, the FessLeak campaign Invincea has tracked since October 2014 consisted of exploits against known vulnerabilities and 0-day exploits delivered via malvertising. Invincea detected and blocked these exploits for users running Dell Protected Workspace Powered by Invincea. Over 1.8 million Invincea users worldwide are protected against these known and unknown exploits, and against the infections they cause across a range of vulnerable apps.

21st Century Cyber Security Anti-Vaxxers

So it’s the 21st century. You vaccinate your children against preventable diseases because the technology exists. In turn, you expect your neighbors to do the same for their school age children with the expectation that if enough members of your community vaccinate, herd immunity will develop limiting the spread of preventable infectious diseases.

It’s the 21st century in cyber security too. Are you still using 20th century anti-virus technology to protect your endpoints? Are you still waiting on patches or beholden to unpatchable enterprise apps? Are you still relying on signatures and IOCs that trail the threat by days and sometimes weeks?

It’s the 21st century. We have technology that solves this problem, just as we have vaccines that prevent measles, mumps, and rubella. More than 1.8 million users are now vaccinated against known and unknown exploits. In turn they are protecting the other members of their community on their networks, and through the threat intelligence they produce using Invincea.

Now it’s your turn. You can help build herd immunity by adopting 21st century security technology that inoculates you against preventable infections such as Java, IE, and Flash exploits delivered through web-based drive-bys, malvertising, and spear-phishing. There is no doubt in my mind that this technology will be adopted on a mass scale, following the typical adoption curve of new technology. The question is: do you wait for the herd to adopt, or do you lead your herd?


Game Change: Three Reasons Why #SonyHack Will Change Security


UPDATED: on 12/19/14, the FBI officially declared North Korea to be the aggressor behind the Sony Pictures Entertainment hack. The evidence published is circumstantial and probably would not stand up to scrutiny in a court of law. However, we do not know what other out-of-band information, such as SIGINT, HUMINT, and intelligence from other nations’ intelligence agencies may have played into this determination. We do know it is highly unusual to conclusively determine attribution of an attack, especially this soon after the attack has occurred. A good treatment on this topic is covered in this KrebsOnSecurity post.

Let’s be honest. As wild as this year has been in InfoSec, none of us, and I mean nobody, anticipated the events that unfolded this week with the Sony hack:

  • A major studio cancels a theatrical release with big time Hollywood actors because hackers threatened violence in movie theaters.
  • The White House declares it’s a national security issue and leaks intelligence that North Korea is behind the attack.
  • An attack that began as an extortion attempt against Sony for money becomes an international incident, where now the White House and Congress are talking about “proportional responses”.

Really? It sounds like a Hollywood script, a movie in a movie type movie.

As this was still unfolding, I was asked to appear on Fox Business with Maria Bartiromo on 12/17/14 — prior to Sony canceling the theatrical release and prior to the White House declaring it was North Korea on background — to comment on Cyber Terrorism aspects of this. For what it’s worth, I don’t think this rises to the level of cyber terrorism. See the interview here:

Tactics Not Malware Are the Story

The #SonyHack was not a run-of-the-mill corporate hack, like we see every week in just about every sector. Most corporate hacks we see are focused on either customer data (credit cards, medical records, social security numbers, passwords, bank account information, email addresses) or company proprietary documents. The former for fraud, the latter for corporate espionage.

The #SonyHack was that unusual bird, a black swan if you will, designed to destroy the Sony brand via name & shame tactics. Tactics that played brilliantly to the media, an industry all too eager to publish salacious details, no matter how inappropriate, to draw eyeballs.

The #SonyHack is the equivalent of detonating a nuclear bomb on a network that employed four key stratagems: capture, destroy, extort, and publish. The hackers captured and exfiltrated hundreds of terabytes of data then torched the network with wiper malware. After extorting Sony, they leaked pre-release movies, published sensitive files and then corporate email. This is HBGary (name & shame) meets CryptoLocker (extortion) meets Edward Snowden (publish through leaks and media sensation) meets Shamoon/DarkSeoul (destroy infrastructure).

This is not to say the exploits or the malware were sophisticated — they were not. The software and components employed are readily available. No zero-days are known to have been used, and no animals were harmed in this production. However, the tactics employed — capture, destroy, extort, publish — combined with a savvy campaign to name & shame Sony executives while lathering the media into a frenzy over salacious details, public extortion, leaked movies, an ever-changing agenda and demands, and threats of violence invoking 9/11, show a mastery of the American psyche and media while creating utter chaos for crisis management.

The story that will be told about the #SonyHack — and surely there is a Hollywood script being written as we speak — will be one of a savvy band of hacktivists, likely with insider help, executing a well-thought-out plan hell-bent on destroying a great consumer electronics brand.

Game Change

In a field that is still in its infancy — Information Security — the #SonyHack, I believe, will emerge as a Game Change moment: a defining moment that causes significant change in behavior. The last big ones were the Mandiant APT1 report (2013) and the RSA Security keys compromise (2011), which caused companies to stand up and take notice of something called APTs.

You ask, why not Target, Home Depot, and the other big breaches of the past year? Not to diminish the significance of these events, but the reality is that corporations, and now the public, are conditioned to the loss of customer data. Customers typically do not experience the losses themselves. Corporations absorb losses beyond insurance coverage for fraud. Awareness rises, but not enough to cause companies to change established patterns of behavior: check-the-box, compliance-driven security.

The #SonyHack is different — the Black Swan of attacks that may become the new norm. It is different because extortion is involved. It is different because the intent was to destroy the company, not just steal its sensitive data. It is different because the networks were torched. It is different because email was leaked and executives publicly humiliated, and because ultimately Sony capitulated. Capitulation will embolden hackers. And there is little doubt in my mind this same attack can be replicated over and over again at will against other companies because:

(1) most companies are not equipped to deal with targeted attacks, let alone {capture, destroy, extort, & publish} Sony-style attack tactics,

(2) the likelihood of being properly attributed and caught is slim, and

(3) the aggressors’ efforts worked and demands were met.

The impact will be far reaching beyond Sony. I expect at the first Board meeting of the New Year at every major corporation, the question will be asked “What are we doing to make sure we don’t become the next Sony?” I suspect that checking compliance boxes won’t answer the mail. I suspect that doing the same as we’ve always done will not suffice.

Rather, CISOs, Chief Risk Officers, and Chief Executive Officers will need to think outside the box about how to protect their company, and ultimately their shareholders’ value, from Sony-style attacks. This will involve a combination of getting better talent, better intel on threats, understanding the risk for their enterprise against targeted attacks, establishing processes for incident response and crisis management, and upgrading technology to meet the threat of targeted attack.

If you are a CISO, be prepared to tell the Board how you will need to upgrade to defeat Sony-style attacks in 2015 and beyond. They will be all ears.


A Visual Analysis of OpCleaver in Cynomix

Security firm Cylance released a report called Operation Cleaver in early December 2014 detailing an extensive campaign by Iranian cyber forces targeting the critical infrastructure of major industrial countries since 2012. While the release of Operation Cleaver last week was largely overshadowed by the news of the Sony breach by the Guardians of Peace, the #OpCleaver campaign is worth a deeper look because the targets included airlines, airports, transportation systems, hospitals, utilities, telecom, defense firms, oil and gas, the military, and other critical infrastructure in Saudi Arabia and several Middle East countries, the UK, France, Germany, Canada, China, South Korea, and the United States.

A Growing Threat in Iran

The adversary Cylance alleges is a combination of private engineering companies based in Tehran and state-sponsored hacking teams, collectively called Tarh Andishan. Cyber warfare is not new to Iran. Iran was the target of Olympic Games, an alleged US-led operation to slow down Iran’s nuclear weapons development program, initially disclosed by David Sanger in the New York Times and subsequently in Confront and Conceal, a book on the Obama presidency. Stuxnet’s discovery is also recounted in Countdown to Zero Day by Kim Zetter. Stuxnet (2009-2010) was just the first of several cyber campaigns against Iran. Duqu (2009-2011) and Flame (2012) were both believed to be nation-state-sponsored malware directed against Iran as well.

The Shamoon attack against Saudi Aramco and Qatari RasGas, believed to be perpetrated by Iran in 2012, has been spotlighted since last week’s similar destructive attacks against Sony. Since the Shamoon attacks, Iran is believed to have rapidly developed cyber capabilities.

More recently, the US concluded that Iran had extensively hacked the Navy Marine Corps Intranet (NMCI) unclassified networks for a period of four months, discovered in September 2013. This incident, which caught the US Navy by surprise and caused widespread downtime on NMCI, is believed to be part of #OpCleaver.

Cynomix as a Tool for Analyzing #OpCleaver Malware

Much to Cylance’s credit, they released the executables attributed to #OpCleaver to VirusShare, which made them available to download for community analysis. Invincea researcher Giacomo Bergamo downloaded 102 samples associated with #OpCleaver and ingested them into the Cynomix analysis engine.

Cynomix is a community analysis engine, free to malware researchers everywhere, for uploading, analyzing, or browsing malware samples. Cynomix automatically extracts capabilities of the code (described in plain English) based on a crowd-sourcing algorithm linking code to public source documents. Cynomix also clusters samples with like samples based on code-sharing relationships, and presents an interface to visualize the relationships among malware samples and to understand their capabilities and attributes, including images, IP addresses, and hostnames. Finally, Cynomix auto-generates Yara signatures that can be used in other network devices to defend against these threats.
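To illustrate the string-based side of capability extraction, here is a minimal sketch in Python. The capability dictionary and hint strings below are hypothetical stand-ins; Cynomix’s actual algorithm crowd-sources capability descriptions from public documents rather than using a fixed lookup table.

```python
import re

# Hypothetical capability hints; a real system would learn these
# mappings from public source documents rather than hard-code them.
CAPABILITY_HINTS = {
    "accesses USB device": ["usbstor"],
    "modifies Windows services": ["createservicea", "sc create"],
    "data transfer; uploads file": ["ftp put", "httpsendrequest"],
}

def printable_strings(data: bytes, min_len: int = 4):
    """Extract printable ASCII strings, the raw material any
    string-based capability inference starts from."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

def infer_capabilities(data: bytes):
    """Match extracted strings against the hint table and return
    the set of inferred capability names."""
    strings = {s.decode().lower() for s in printable_strings(data)}
    found = set()
    for capability, hints in CAPABILITY_HINTS.items():
        if any(h in s for h in hints for s in strings):
            found.add(capability)
    return found

# Hypothetical sample bytes containing two telltale strings.
sample = b"\x00\x01MZ...USBSTOR\\Disk...\x00HttpSendRequest\x00"
print(infer_capabilities(sample))
```

Run on the toy bytes above, the sketch infers USB access and file-upload capability from the embedded strings; a fully packed or encrypted sample would yield no strings and therefore no capabilities, which is exactly the limitation discussed for obfuscated samples.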

Analyzing malware is out of the reach of most cyber security professionals because it typically involves debugging/disassembly tools and custom environments for analyzing, recording, and making sense of low-level API calls.

The goal of Cynomix is to apply machine learning and visualization to the problem of analyzing malware to put it within reach of most cyber security professionals with an interest in understanding malware. Cynomix can quickly reveal malware’s capabilities and potentially its lineage without requiring any more expertise than driving a browser.

In the following, we show how to use Cynomix for analyzing the set of malware samples associated with Operation Cleaver. This is but one view and with Cynomix there are many ways of analyzing the data. We strongly encourage you to sign up for a free account on Cynomix and see for yourself, then become a contributing member of the Cynomix community.

Visualizing the Capabilities of #OpCleaver Samples

Once a sample set is ingested and tagged in Cynomix, it is easy to literally see its capabilities. In Figure 1 below, each sample is denoted by its SHA1 hash, with its color-coded capabilities displayed to the right. Mousing over any given color bar displays the name of the capability, e.g., “data transfer; uploads file”. In addition, if the capability is present in any other samples, it is highlighted in those samples as well.


Figure 1: Visualizing #OpCleaver capabilities in Cynomix

By visually inspecting the samples, you can see that the samples are grouped together by like capabilities. This was configured with the Sort By grid_bin setting beneath the samples (see Video 1 below). Note that any included image resources are shown as well.

Browsing the list view of the #OpCleaver samples in Cynomix below, we see the various capabilities in each sample highlighted. In addition, several of the samples include images in their resources sections, which Cynomix displays. Images are often included in malware samples as icons to fool users into opening or clicking on them.

Video 1: Inspecting the #OpCleaver samples’ capabilities in List view

Clicking on any sample brings up a window that summarizes the sample’s “home page” information, including its capabilities, images, and Yara signature among other information. Clicking on the capabilities will show you the confidence Cynomix has in the inference as well as links to articles from which these capabilities were inferred.

Visualizing the Clustering of #OpCleaver Samples

One of the most useful features of Cynomix is automatically clustering like samples together. In Figure 2 below, all 102 #OpCleaver samples are shown in the CyNet view of Cynomix, which shows the shared-code relationships of the samples in clusters. Each cluster represents tightly coupled samples based on shared code.


Figure 2: Visualizing #OpCleaver samples in CyNet view

In the right pane of Figure 2 above is a series of panels that show extracted capabilities, IP addresses, and host names from all the samples. Hovering over any of the capabilities or attributes will highlight the samples that include them. Checking a feature, such as IP address 174.36.195.158 in Figure 2, will add a tag to the highlighted samples and bring up the list view of those samples in the panel across the bottom.

Video 2: Browsing capabilities of #OpCleaver samples in CyNet view

Video 2 above shows how the CyNet view of Cynomix reveals interesting capabilities of the samples. Hovering over each capability highlights the samples that have it. For example, hovering over “remote desktop capability” highlights one cluster of samples, which are likely remote administration trojans (RATs). Other extracted capabilities are notable, such as engaging anti-virus (probably to disable it), using Tor (probably for anonymity), and, in some samples, containing a Lua interpreter. Ironically, Lua came to light among malware analysts when analysis of the Flame malware revealed that parts of its higher-order logic were coded in Lua.

In addition, we can see samples connecting to specific hard-coded IP addresses and host names. These are automatically extracted and can be highlighted by mousing over and analyzed in the list-view pane below by clicking on them.

#OpCleaver & Community Analysis

The release of #OpCleaver — and just as importantly, its samples — allows the community of malware researchers and their constituencies to understand the nature of the threat through crowd sourced analysis. Cynomix is a tool built for community analysis by and for malware researchers to quickly analyze malware, understand its capabilities, its lineage, and extract actionable information such as hostnames, IP addresses, and Yara signatures. These in turn can be used to detect these threats elsewhere.
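To make the Yara output concrete, the sketch below builds a minimal rule from strings extracted from a sample. The rule name, hash, and strings are hypothetical, and real signature generators weight string rarity and uniqueness rather than taking a simple majority, as this sketch does.

```python
def make_yara_rule(name, sha1, strings):
    """Build a minimal Yara rule text from strings extracted from a
    sample. The condition requires a majority of the strings to match,
    a deliberately simple stand-in for smarter rule generation."""
    lines = [
        f"rule {name}",
        "{",
        "    meta:",
        f'        sha1 = "{sha1}"',
        "    strings:",
    ]
    for i, s in enumerate(strings):
        lines.append(f'        $s{i} = "{s}"')
    lines += [
        "    condition:",
        f"        {(len(strings) // 2) + 1} of them",
        "}",
    ]
    return "\n".join(lines)

# Hypothetical sample hash and strings, for illustration only.
rule = make_yara_rule(
    "opcleaver_sample",
    "0000000000000000000000000000000000000000",
    ["USBSTOR", "HttpSendRequest", "Tarh Andishan"],
)
print(rule)
```

The generated rule text can then be fed to any Yara-compatible scanner on other hosts or network devices to hunt for related samples, which is the sharing workflow described above.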

Cynomix only gets more useful as more users add malware samples, perform analysis, and share their insights with others. We hope you use this resource we provide to you free of charge to support the efforts of cyber defenders everywhere.



An Initial Look at the Regin Malware in Cynomix


Early last week, Symantec disclosed nearly simultaneously with other security firms a sophisticated espionage campaign dating back to 2011 against the European Commission, European Council, Belgacom (a Belgian telco), and a Belgian cryptographer. What makes this campaign particularly interesting is the speculation that the code was written by US and British intelligence (NSA and GCHQ, respectively) and that the code is described as highly sophisticated, similar to Flame and Stuxnet in complexity.

A good summary of the campaign was written by Kim Zetter, author of Countdown to Zero Day, and Andy Greenberg from Wired: “Researchers Uncover Government Spy Tool Used to Hack Telecoms and Belgian Cryptographer”. The article definitely hypes the sophistication of the code:

The researchers have no doubt that Regin is a nation-state tool and are calling it the most sophisticated espionage machine uncovered to date—more complex even than the massive Flame platform, uncovered by Kaspersky and Symantec in 2012 and crafted by the same team who created Stuxnet.

“In the world of malware threats, only a few rare examples can truly be considered groundbreaking and almost peerless,” writes Symantec in its report about Regin.

More fascinating, Glenn Greenwald’s new publication, The Intercept, published an article, “Secret Malware in European Union Attack Linked to U.S. and British Intelligence”, that links the campaign to Operation Socialist, a GCHQ code-named operation against Belgacom in 2010, according to documents leaked by Edward Snowden. More significantly, The Intercept published an archive of 33 samples of malware associated with Regin, making them available for public scrutiny and analysis. This is a remarkable example of a publication publishing code from a prominent attack campaign, which in turn should enable crowd-sourced analysis of the malware (more on this below), and sets an example that hopefully others will follow.

A Missing Link?

Operation Socialist seems to fill in a key missing piece of the campaign that the collected malware has not yet revealed: how did the hackers get onto the target networks? According to The Intercept, Operation Socialist directed Belgacom employees to spoofed LinkedIn pages, more likely through spear-phishing than a MiTM attack. It’s pretty easy to get someone to click on a LinkedIn invite if it is spoofed from the right person (your boss’s boss, for instance). Invincea does this often in our proof-of-concept engagements (with permission) to show how easy it is to successfully spear-phish employees. It works every time.

Analyzing the Regin Malware in Cynomix

Thanks to the archive of Regin published by The Intercept, we imported the Regin samples into Cynomix. Cynomix is an automated malware clustering and analysis system designed to allow researchers to upload and crowd-source analysis of malware. You can get a free account to upload, analyze, or view other people’s analysis of malware. For a good introduction to Cynomix’s capabilities, see the recorded webcast on Cynomix.

While we have not formally analyzed the Regin malware, we have uploaded the samples to Cynomix and can quickly see some interesting attributes of the code. Invincea Labs Researcher Giacomo Bergamo produced the following views of the Regin family in Cynomix.

Clustering & Differentiating the Regin Samples


Figure 1: Clustering Regin Samples

Cynomix automatically clusters like pieces of code based on the similarity of code from one sample to another. You can see a major cluster in the lower left of the view, comprising 20 samples that correspond to the samples labeled as 32-bit Loaders in The Intercept article. The edges between samples indicate code-sharing relationships. The tight clustering means there is a high degree of code sharing, from which we can infer either that this cluster performs more or less the same function, or that the samples are packed and obfuscated with the same packer, which will tend to yield the same type of clustering.
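One way such shared-code clustering can work, sketched under assumptions (Cynomix’s actual algorithm is not public in this level of detail): represent each sample as a set of code features, draw an edge between samples whose Jaccard similarity crosses a threshold, and take connected components as clusters.

```python
def jaccard(a, b):
    """Jaccard similarity between two feature sets (e.g., hashed
    basic blocks or imported API names extracted from each sample)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_sharing(samples, threshold=0.5):
    """Single-linkage clustering via union-find: link two samples when
    they share enough code, then return the connected components."""
    names = list(samples)
    parent = {n: n for n in names}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if jaccard(samples[a], samples[b]) >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for n in names:
        clusters.setdefault(find(n), set()).add(n)
    return list(clusters.values())

# Hypothetical feature sets: two loaders share a packer stub,
# while the orchestrator shares almost nothing with them.
samples = {
    "loader_a": {"stub1", "stub2", "xor_loop", "resolve_api"},
    "loader_b": {"stub1", "stub2", "xor_loop", "drop_file"},
    "orchestrator": {"dispatch", "vfs_read", "resolve_api"},
}
print(cluster_by_sharing(samples))
```

In this toy example the two loaders cluster together while the orchestrator stands alone, mirroring the loader cluster versus orchestrator pair seen in Figure 1. Note the packer caveat from the text applies here too: samples packed with the same packer would share stub features and cluster even if their payloads differ.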

Likewise, the two samples from the 32-bit Orchestrator show a code sharing relationship (shown in the upper right hand portion in Figure 1).

There are several samples in the same set of 32-bit loaders that did not cluster (seen in the upper right hand portion of Figure 1), which indicates functionality differentiated from the other 32-bit loader samples.

Viewing Capabilities of Regin

Cynomix will automatically infer capabilities from the strings it extracts using a web-based crowd-sourcing algorithm. Interestingly, Cynomix could extract capabilities from only one of the samples in the main 32-bit loader cluster:

Sample: 7d38eb24cf5644e090e45d5efa923aff0e69a600fb0ab627e8929bb485243926 (SHA1:e0895336617e0b45b312383814ec6783556d7635)

Capabilities detected: Accesses device drivers, accesses serial device, accesses USB device

The lack of extracted capabilities from the other samples in the cluster indicates that they were probably encrypted or obfuscated, which is consistent with the reporting on the Regin malware family: one sample runs to chain-decrypt the other pieces of malware in turn.

Cynomix also provides a faceted view of the samples to see capabilities automatically extracted from the sample set.


Figure 2: Extracted capabilities from Regin malware

The faceted view in Figure 2 above shows the capabilities extracted for each sample in the set. Each row represents a sample, with its hash and color-coded capabilities. Hovering over each color bar shows the extracted capability, and the length of the bar indicates the confidence in the inference. The first observation is that only 11 of the 33 samples had any capabilities extracted at all, which likely means the rest were completely encrypted or obfuscated. Even for the samples with extracted capabilities, the list is likely a subset of the code’s actual capabilities, because the remaining functions were likely obfuscated as well.

For convenience (assuming you do not have an account yet on Cynomix), the color code is:

  • Light yellow: accesses device drivers
  • Dark blue: accesses serial device
  • Azure: engages registry
  • Red: modifies Windows services
  • Green: Paypal related
  • Light blue: accesses USB Device

Note that the bars associated with Red, Green, and Light Blue are short, which indicates low confidence in these extracted capabilities.

Take Aways

The main goal of this blog is to highlight two very important developments relating to nation-state campaigns and, more broadly, breaches:

  1. Publishing the malware allows for crowd-based analysis. The Intercept set a great example by making the code broadly available. Hopefully, others will see the benefit of this and follow suit. The set of code from Regin (or Operation Socialist) is still incomplete; over time, we hope more of the code set is uploaded. Ideally, the unpacked/decrypted/deobfuscated code will be made available, which will make the analysis much stronger.
  2. Cynomix provides a community platform for uploading, analyzing, and sharing code. Cynomix allows for annotation of code as analysts contribute their insight, including HUMINT. As more code gets uploaded to Cynomix, e.g., Stuxnet/Flame or other APT intrusion sets, we may be able to identify traceability to actors based on code lineage and re-use.

We are at the very beginnings of understanding the Regin malware set. Hopefully others will contribute both code and their analysis to the set to facilitate greater understanding and awareness in the community.


You’ve Been Breached… Now What?


The US State Department disclosed this weekend that its systems were compromised by unknown hackers. Cheers for not already blaming the Russians on background. This breach disclosure follows recent disclosures by the White House, the US Postal Service, and the National Weather Service that their systems were breached.

Normally, there is a collective groan and charges of ineptitude when a breach is disclosed. Counter-intuitively, we should interpret the rash of breach disclosures in the Fed sector as a positive sign that the Feds are improving their breach detection capabilities.

No one should be surprised that the unclassified systems of Government agencies are compromised. Almost every Government agency has unclassified networks that connect either directly to the Internet or through a proxy. Either way, access to email and the Web is essential to Federal agencies doing their job. Email and the Web also provide the most common vectors of compromise: spear-phishing and drive-by download exploits from compromised sites and malvertising.

The Feds should be congratulated for discovering the breaches on their networks, and even more so for disclosing them. It highlights they are now looking and successfully finding adversaries on their networks.

While finding the adversary is important, there are more steps on the path to better and responsible enterprise security:

1. Disclosing (or admitting) a breach is a good first step, but you can do better. Full and rapid disclosure has public-good potential to protect far more people and organizations from similar attacks. Take the opportunity to disclose as much as possible about the attack so others may be able to protect themselves. Disclosure should include the method of exploit (e.g., spear-phishing, watering-hole attack, malvertising), the code found, command and control, and other indicators the community can use to look for similar breaches.

2. Shutting down email and Web access is an old-school, knee-jerk reaction that often does more harm than good. It is, in fact, a self-inflicted denial of service. Senior executives will often demand instantly cutting off access to the Net in an attempt to stem the bleeding, so to speak, from a compromise. Resisting the urge to pull the plug will keep your organization operational through the incident and enable you to study your adversary’s tactics and techniques to learn more about their motives and potentially their identity. While it is popular in Federal disclosures to say “no classified systems were compromised,” the reality is that the unclassified systems, including email and the Web, are mission-critical systems of incredible importance. Shutting them down can bring agency business to a halt, impacting citizens’ business in addition to the agency’s mission.

3. Every crisis is an opportunity. Use the opportunity to re-think your security strategy. If the exploit vector was email or Web (as it is in 95% of external breaches), then use the opportunity to get protection from web-based drive-bys and spear-phishing attacks. Since existing security systems (usually signature-based) did not adequately protect against the threat, implement signature-free systems to protect against targeted attacks and 0-days. Brief executive management on Internet isolation strategies that allow employees to access Internet content, while preventing untrusted Internet content from accessing the host operating system and proprietary data. Internet isolation strategies also prevent exfiltration of data from extant compromised machines by cutting off command and control and data paths from malicious code to the Internet.

Good luck in your mission. Security is not an end state, but a process of learning, adapting, and innovating against an adversary that is doing the same.



A Musical Look at Malvertising Pwn-age

In October 2014, we uncovered a concerted campaign against US Defense companies called Operation DeathClick. The infection vector was novel for targeted attacks — malvertising. DeathClick used online advertising and its ability to render ads targeted to specific IP address ranges to drop malware-laced ads onto users’ machines visiting from particular companies. These were drive-by attacks, i.e., no clicks were required.

A couple of weeks later, we discovered a new trend, simultaneously reported by security vendor ProofPoint, of malvertising being used to drop ransomware, particularly CryptoWall. We updated the DeathClick whitepaper accordingly. Earlier, in July, I posted a prediction that ransomware would be a game changer for the security industry. Little did I expect at the time that malvertisers would go whole hog and start infecting people with ransomware just by visiting popular sites online.

Since then, Invincea has seen a deluge of malvertising reported through our direct customers and also through Dell Data Protection | Protected Workspace. These reports are culled and analyzed by Invincea Director of Malware Forensics Patrick Belcher. Pat tweets them regularly as @BelchSpeak on Twitter, so follow him to see which sites are infecting visitors on any given day. Bear in mind, all of these infection reports come from protected users. Were it not for Invincea/Dell Protected Workspace, these users would be compromised and their data held for ransom.

To give you an idea of what we are seeing (in our admittedly limited sample of our users reporting infections into us), we put together this musical collage of various websites compromising their visitors by serving up malicious ads.

If you think this is just plain wrong and want to do something about it, you can: (a) protect your enterprise with Invincea, (b) protect your personal machine with Sandboxie, and (c) tell the powers that be (the website owners and the advertising networks that, often unwittingly, display the malicious ads) that the industry needs to do something about this. When you use Invincea, you are actually helping to fight back by exposing an otherwise invisible threat and the website owners unwittingly involved.


Russia is the New Black

In case you didn’t notice, it’s cool to be hacked by the Russians. First there were the rumors, never confirmed by JP Morgan Chase, that the venerable bank was hacked by the Russian government… well, maybe. The rumored Russian operation managed to compromise JPMC servers by compromising an IT admin’s account.

Once the rumored Russian involvement was leaked, it seemed to open the floodgates to a series of exposures of Russian operations by various security companies.

In mid-October, we learned from iSight Partners about the Sandworm group out of Russia, which exploited a zero-day (since patched by Microsoft) via spear-phishing with poisoned PowerPoint attachments. The exploit dropped BlackEnergy, a malware variant used in cybercrime.

Then there was the disclosure by Trend Micro of Operation Pawn Storm, which again suggested that maybe the Russians were behind it. Not to be outdone, FireEye released a report on APT28, a follow-up to their APT1 report from February 2013. While it did not provide clues about what is happening with APT2 through APT27, the report tracks exploits for APT28 dating back to 2007, which it speculatively attributes to the Russian government or Russian government-sanctioned groups, based on circumstantial evidence of time zones, targets, and malware design.

And the coup de grâce this week was the disclosure by the Washington Post of the White House network breach, once again possibly linked to the Russians. This week’s White House breach disclosure follows a remarkably similar disclosure two years ago. The talking points then were: (1) it’s the Chinese, (2) it was an unclassified network, and (3) it was a spear-phish. Today’s news is treated very differently because it was allegedly the Russians, and therefore sophisticated.

My take-away from the news is that Russia is the new black. If you are going to be hacked, it is best to say it was the Russians, because Chinese cyber ops are no longer considered sophisticated.


Cyber Security Awareness Month

I’ll say it up front, your security program does not work because it is based on three common myths we hold as unquestionable truths in enterprise security:

Myth 1: We can patch our way to security

Myth 2: We can train our users to not do “stupid” things

Myth 3: We can defeat targeted attacks by sharing signatures.

Don’t act surprised. We are in a technology field that is hyper-accelerated not just by technology advancement, but by adversaries constantly shifting tactics. If you held these truths in the year 2010, it’s time to update not only your security program, but also your thinking.

If you already accept these as the myths they are, then you can stop reading now, or absorb them to dispel them with your enterprise security colleagues. If you hold these myths as truths, read on.

Myth 1: We can patch our way to security

Let’s face it: the foundation of every major enterprise security program begins and ends with patching. If you haven’t patched your software, the conventional wisdom says you are negligent in your job (by corporate governance, compliance, and reputation). It makes perfect sense: if you know you have a vulnerability and a patch exists, then patch it already. Better yet, patching is the one security control we can actually measure: what percentage of vulnerabilities lie unpatched? Not only is it measurable (a rarity in security), but it can also be used in performance evaluations. These are the very reasons why patching is the foundation of many security programs and compliance regimes, and why you face ridicule when your software lies unpatched.

So why can’t we patch our way to security? Start with this chart:

[Chart: top unpatched software programs and their vulnerability counts]


Notice that Java 1.7.x remains 42% unpatched even with 145 vulnerabilities. That doesn’t even cover all the Java 1.5 and 1.6 still running in enterprises. Is it criminal negligence on the part of CISOs and IT staffs that Java remains unpatched, especially given that the vast majority of exploits are Java-based? If you live outside the enterprise operations space, this seems ridiculous and negligent, especially since most software is auto-updating and Java represents an obvious attack surface. In fact, the programs ranked 1, 3, 6, 7, and 8 on the unpatched-software list also correspond to the top attack surfaces on endpoints today (not a coincidence, of course). Clearly, if we kept these programs patched, we would have much more secure enterprise networks.

Well, it turns out this is not a human behavior or motivation problem for enterprise IT. The reality is that these programs are not patched because application incompatibilities with legacy enterprise server apps force enterprises to run unpatched, older versions of software on the desktop. Java is the biggest culprit, as so many back-office applications such as payroll, timesheets, finance, CRM, and intranet applications were developed on versions of Java that are no longer current and are incompatible with current versions.

Two take-aways:

1. Many applications in the enterprise space, including the most exploited ones, remain unpatched for enterprise IT application compatibility reasons. We cannot patch our way to security for this reason.

2. A security strategy based on patching software is inherently flawed. Enterprises must find ways of running unpatched software such as Java, Adobe plug-ins, Adobe Reader, and Internet Explorer while mitigating the risk of this attack surface. See isolation and containerization strategies to mitigate this risk.

Myth 2: We can train our users to not do “stupid” things

Most large enterprise security programs include security training for users. Many have posters on walls showing what a phishing email looks like. Computer-based training (CBT) platforms and campaigns, and a cottage industry of spear-phish training, have emerged specifically to train and test users to recognize phishing campaigns.

The popularity of security training is predicated on the myth that we can teach users to make the Internet a safer place, if only they won’t be, well, human. And since this is Cyber Security Awareness Month, that makes me the bearer of bad news for all the CSAM people who think focusing on security this month will make our networks oh so much more secure. Better yet, victim blaming and shaming is all the rage in security, and it is convenient to point the finger at the user. Rather than putting in place a security program that works, we can deflect by blaming the victims, the users, for doing what comes naturally (clicking on links and opening attachments) and, in many cases, what is expected in their roles.

We cannot untrain millennia of evolved human psychology to get humans to ignore the fundamentals that phishing and spear-phishing campaigns appeal to:

  • Trust in other humans
  • Fear that not acting will result in something bad
  • Greed that by taking some action they will become richer or better off
  • Desire that leads us to do things that fulfill us in, well, other ways

Effective phishing and spear-phishing campaigns take advantage of these human emotions and frailties to get users to click on links and open attachments. In fact, as the chart below from Verizon Business data shows, the odds of getting a user to click on a link as part of a phishing campaign approach 100% asymptotically as the number of emails (targeted users) reaches 17.
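The arithmetic behind that near-certainty can be sketched quickly. If each recipient independently clicks with probability p, the chance of at least one click across n emails is 1 - (1 - p)^n. The per-recipient probability used below is illustrative, not taken from the Verizon data:

```python
def p_at_least_one_click(p, n):
    """Probability that at least one of n recipients clicks,
    assuming an independent per-recipient click probability p."""
    return 1 - (1 - p) ** n

# p = 0.25 is an illustrative assumption only
for n in (1, 5, 10, 17):
    print(n, round(p_at_least_one_click(0.25, n), 4))
```

Even with a modest per-recipient click rate, the campaign-level probability climbs above 99% by the time 17 emails are sent, which is why attackers win with so little effort.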


Two take-aways:

1. Users will click on links and open attachments. It is only a question of when. Phishing and spear-phishing campaigns are always successful for this reason. You are guaranteed a click or an open, with little marginal cost for each additional email sent.

2. A security strategy based on training users not to click on links or open attachments will fail. You must find a security solution that accounts for the fact that users will click on links and open attachments, because they will. Strategies that defeat attacks regardless of which links a user clicks address this problem.

Myth 3: We can defeat targeted attacks by sharing signatures.

The final myth that most large enterprise security people don’t want to hear today is that we can defeat targeted attacks by sharing signatures. I am a big advocate for sharing threat intelligence and full disclosure on the back of breaches. However, the myth to pierce is that sharing signatures of targeted attacks will defeat these attacks against organizations elsewhere.

Let’s face it, most public-private consortia are based on the notion of sharing attack signatures. The anti-virus industry was predicated on the notion that an attack signature taken from one victim can be used to protect the rest of the herd not yet victimized by that attack. Very large consortia, public policy, and an entire industry are predicated on this one myth: that by sharing attack signatures we can defeat an adversary elsewhere.

Unfortunately, this does not apply to targeted attacks, i.e., attacks aimed at a specific organization. When organizations are targeted, exploit toolkits are used to generate unique payloads that defeat anti-virus engines as well as network-based signature scanning. Capturing the signature and sharing it among other institutions will only stop the laziest or most ignorant of attackers; the rest simply generate another unique payload, rendering the shared signatures useless. This is why the security industry has moved to detection of compromise (post-breach) instead of prevention: prevention-only approaches based on signatures no longer work.
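A toy example makes the point. If the shared "signature" is a hash of the payload (real AV signatures are more nuanced, and this payload is a stand-in), a one-byte change by the exploit kit is enough to make the shared signature useless:

```python
import hashlib

# A payload stand-in; real exploit kits regenerate entire binaries
payload = bytearray(b"MZ\x90\x00 ...same malicious logic, new wrapper...")
signature = hashlib.sha256(payload).hexdigest()  # the "shared signature"

mutated = bytearray(payload)
mutated[-1] ^= 0xFF  # a trivial one-byte mutation

# The shared signature no longer matches the next victim's payload
print(signature == hashlib.sha256(mutated).hexdigest())  # prints False
```

The mutated payload behaves identically but evades any defense keyed on the original hash, which is why per-victim unique payloads defeat signature sharing.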

However, all is not lost from sharing; rather, what is shared matters. Sharing tactics, techniques, and procedures (TTPs) is more important than sharing signatures. Sharing code, not just its signature, can be equally useful. Sharing human intelligence about the adversary, if any is available and vetted, is valuable. Sharing network command-and-control indicators can be useful, though their value diminishes with time as domains and exploit landing pages become transient, with half-lives measured in hours.

Two important take-aways:

1. Sharing signatures of attacks is not particularly useful in defeating targeted attacks. If you are in a sector (finance, defense, government, healthcare, manufacturing, technology, energy, critical infrastructure, among others) that is targeted, sharing signatures of attack is insufficient.

2. Building on point 1 above, demand useful intelligence from public-private consortia, including TTPs, code, HUMINT, and adversary command-and-control infrastructure. Better yet, demand or incentivize real-time intelligence from breach victims.


What a White House Fence Jumper Teaches us About Securing our Networks

Last Friday, September 19, a man jumped the fence (the outer perimeter) of the White House complex, sprinted for the front door, and made it in. The brazen yet pedestrian attack that breached the vaunted defenses of the White House can teach us much about how we secure our networks.

As reported in the Washington Post, that single act collapsed five different rings of security at the White House complex designed to prevent just such a frontal attack from succeeding. The White House had five distinct layers: a plainclothes surveillance team meant to spot fence jumpers, a gatehouse guard, a SWAT team patrol, an attack dog, and a front-door guard, not to mention the physical barriers of the fence and the front door, which, incidentally, was not locked.

In information security, it passes as conventional wisdom that layered security is the right approach to addressing the threats we face. If you look at a well-funded enterprise security architecture, you can probably point to a router/firewall, a DMZ, a web application firewall, a network sandbox, a web proxy, an IDS, an email security solution, and maybe an application monitoring firewall all at the perimeter trying to protect against a frontal assault. On the endpoint you have a suite of anti-virus software, personal firewall, disk encryption, and maybe DLP and single sign-on solutions.

When you list it all out, you can easily come up with a dozen different layers of security in a sophisticated enterprise, some of which I have omitted above. The flaw in the conventional wisdom is the assumption that these layers overlap, i.e., that to compromise the machine and breach the targeted data, one must sequentially defeat each of these obstacles. Does anyone besides corporate boards believe this is the case? Anyone in information security?

To illustrate how flawed the layered security model is, see how a Flash or Java exploit sails through all of those layers of security with a single click on an email link by a single user. Or how just visiting popular websites can compromise a machine, and by extension a network, in spite of layers of defense. We know this because in our large enterprise deployments, we are the last layer of security, the one protecting the network from the user’s actions. We see exploits like those linked to above that have sailed past every one of these enterprise security defenses at the perimeter. We know now what the Secret Service is finding out: if the layers are non-overlapping or ineffective, it doesn’t matter how many you have. A single click on a link by a single user, or a single fence jumper, can breach your defenses in a matter of seconds.

It’s popular in security circles to say you can’t stop a professional who is determined to get on your network. The implicit message of this platitude is that it is also not worth trying. I often counter, “would we leave our doors unlocked, let intruders into our house, and then look months later to see what they stole?” Because that is the state of security today. Choosing how you spend your scarce security dollars needs to reflect a risk-based approach weighing your threats, the cost of the investment, and the cost of what is at stake in a breach. Spending more on preventing the breach means spending less on responding to it, which is often an unbudgeted seven-figure cost, or more. As I’m fond of saying, the most expensive dollar spent in security is the one spent on incident response. The earlier in the adversary’s kill chain you can stop the attack, the less money spent and embarrassment saved. Someone else said it better: an ounce of prevention is worth a pound of cure.

As the White House learned, and as security professionals everywhere should take away: non-overlapping or ineffective security layers and unlocked doors do not make for a good security strategy. They can, however, lead to significant loss of IP, jobs, fines, IR costs, and embarrassment.