Posts Tagged ‘vulnerabilities’


Paving Over the Proprietary Web: The Java Security Bigger Picture

Written by Andrew Jaquith. Posted in Blog Post


Perhaps you’ve heard about the recently disclosed Java 7 zero-day exploit. The flaw allows a remote attacker to take complete control of a computer. It has been incorporated into many exploit kits. The Department of Homeland Security regards the Java exploit as sufficiently serious to recommend “disabling Java in web browsers until adequate updates are available.” Oracle’s fixes — aren’t.

Many of my colleagues at other security firms have spilled a lot of ink describing why this particular Java exploit is bad. It is indeed that bad; Apple, for example, has forced down an update that blocks the Java 7 plugin from executing in the browser at all, at least until Oracle is able to distribute an update. If you are in the habit of keeping Java switched on in your browser, you should turn it off — of course. But that isn’t always possible. Client-side Java, for example, powers GoToMeeting. Many other companies — including my own — rely on client-side Java for critical functions. So one cannot simply rip it out, or mandate that it be banned. Reality has a habit of messing up the best-intended recommendations. But make no mistake, at some point very soon Java on the client needs to go. CIOs, please take note.

Client-side Java is part of the web’s proprietary past, and its time is ending. That proprietary past also includes ActiveX and Flash, two other technologies that saw widespread adoption in the early 2000s. That all three of these technologies came of age at roughly the same time isn’t a coincidence; they all filled gaps in the web experience. ActiveX was Microsoft’s way of adding native client functionality to a then-crude web experience; client-side Java (Swing, Java Web Start, etc.) did the same. Flash and its cousin Shockwave provided smooth video and animations.

Since 2005, though, the native web has changed dramatically, and for the better. HTML 5, CSS and JavaScript toolkits have been the major catalysts of a revolution in web design. The canvas element added to HTML 5, for example, allowed standards-compliant browsers to draw shapes, create and fill paths, and animate objects. This, plus the video element, freed designers from needing Flash. Cascading Style Sheets (CSS) Levels 2 and 3 gave designers increasingly pixel-perfect control over the placement and appearance of content — a task made even easier with CSS pre-processors such as LESS and Sass, and with kitted CSS assemblies such as Twitter Bootstrap. On the JavaScript front, third-generation toolkits such as jQuery made it simple to make websites dynamic and responsive. You can do all of these things for free, without needing to buy any of the various Studios from Adobe or Microsoft.

The slow-motion revolution in how the Web is made means that the raison d’être for proprietary web technologies is going away. Like a lumbering concrete mixer, HTML5 and JavaScript are slowly paving over the parts of the web that had previously been occupied by Flash, ActiveX and Java. Ironically, the vendors of these proprietary technologies have, in their own ways, added limestone, clay and water to the paving machine.

Microsoft, for example, turned an entire generation of web developers against it with its long, and ultimately fruitless, resistance to robust CSS support in Internet Explorer. Although modern versions of IE are highly standards-compliant, Internet Explorer did not pass the Acid3 web standards test until September 2011. Any web developer who has been working with CSS for more than five years can probably regale you with stories of the massive hacks needed to make older Microsoft browsers work with standards-based websites.

The roots of Adobe Flash’s decline are a little different. Nothing was “broken” with Flash, functionally speaking.[1] Two related events resulted in a decline in Flash usage: Steve Jobs’s public refusal to add Flash support to the iPhone and successor iOS devices, and Google’s decision to convert its vast library of YouTube clips to HTML 5-compatible WebM and H.264 formats.

These actions, plus the increasing viability and efficiency of WebM and H.264, meant that you didn’t need Flash video any longer. This has clear implications for customers. For customer-facing websites, you can (and should strongly consider) retiring Flash video in favor of H.264. This is a quick win; the re-encoding process is relatively quick and painless. That said, the need is not as urgent compared to Java. Adobe’s security team (under the leadership of my former @stake colleague Brad Arkin) has upped the tempo of bug fixes, adopted auto-update, and is taking security seriously enough that Flash has become less risky than it had been. Still, if you can remove a dependency on a third-party component that needs to be maintained and updated in addition to the base operating system, why wouldn’t you?

Java, on the other hand, is simply a mess. From a pure features perspective, Java’s caretaker parent, Oracle, no longer employs the kind and number of Java engineers that will keep it up-to-date — never mind put it back on the cutting edge. Most of the Java engineers and visionaries, such as James Gosling, Josh Bloch, Tim Bray, Amy Fowler, and Adam Bosworth — the people I learned from and looked up to while I was learning Java and J2EE — left long ago for greener pastures. Although server-side Java is still widely used, nobody I know would consider it for greenfield development for use with a browser.[2]

From a security standpoint, it is hard to see why Oracle would be Johnny-on-the-spot with security fixes. As my other (!) former @stake colleague David Litchfield has pointed out, the company doesn’t have the best track record on security. We can reasonably assume that fixing client-side Java security holes isn’t anywhere near the top of Oracle’s priority list. And even if it becomes so because screaming customers demand it, legacy products get legacy engineers. That’s just the way it is.

The same goes for Microsoft’s ActiveX. Developers don’t use it for new web-based projects, and the company has for several years recommended that developers use other technologies[3] to make dynamic websites. The risks associated with ActiveX continue to be high, no doubt because ActiveX controls are basically chunks of native code written by vendors of varying skill, remotely triggered by websites that may or may not be under the user’s control. (What could go wrong with that?) To be sure, Microsoft has done as much as any vendor in the industry to set the standard for responsible and secure development practices, and it has responded relatively quickly to the various ActiveX security issues that have popped up over the years. But as with client-side Java, it’s legacy technology maintained by legacy engineers.

It is much, much easier to talk about how the slow-moving concrete machine that is the modern web — HTML 5, CSS, and JavaScript — will slowly pave over the proprietary web. It is harder to state with confidence what it will mean for security. However, one may hazard a few guesses. The decline of these three technologies should increase the overall level of security over time. Logic dictates that a browser festooned with fewer proprietary plugins is a more secure browser. Put differently: migrating older websites to use CSS, HTML 5 and JavaScript support will have the effect of concentrating the attack surface by reducing the number of parties who must defend that surface. Over time, the broad public ought to be better served by having Apple or Google or Microsoft be responsible for the entire web browsing experience — including security.

But in the short term, it won’t be so clean. Based on vulnerability counts — an imprecise metric at best — the “younger guys” don’t score well. For example, the US National Vulnerability Database shows that the WebKit browsing engine had over 198 disclosed vulnerabilities last year. Internet Explorer? Just 61. Meanwhile, ActiveX, Java and Flash had 73, 169 and 67, respectively. I draw no conclusions from these data other than the simplest one — increased use of native browser capabilities is likely to increase risks in the short term, even as the decreased use of proprietary technologies decreases them. At some point the two lines will cross.

In the meantime, the cement truck keeps rumbling.

[1] Functionality aside, Flash’s security track record has been poor for a while.

[2] Java development is alive and well on the Android platform, of course.

[3] It’s fair to say that Microsoft has been all over the place on this subject over the last 10 years: DHTML, XAML/Silverlight, and now Windows 8-style apps.


The Year in Review: 2012

Written by Andrew Jaquith. Posted in Blog Post


As the song goes, It’s The Most Wonderful Time of the Year. It’s the time of the year we write out our holiday cards, buy presents, think kind thoughts of our friends and family, and wax nostalgic.

Security is a big enough deal that it, too, warrants reflection and (dare I say it), a little bit of nostalgia. It’s the gift that keeps on giving. In that spirit, let’s dig up some of the tastiest chestnuts from the preceding 11 months, and gently roast them where appropriate. Given my sense of humor it’s going to be, shall we say, a dry roasting.

Here’s what got our attention in 2012. As is customary and appropriate, we spent a lot of time worrying about malware. The cloud — with all of its opportunities and challenges — was the second most important topic on our minds, along with mobile security. As you might expect, given our customer base of over 1,800 banks and credit unions, we analyzed financial services topics in depth. A variety of other topics got our attention, notably October’s National Cyber-Security Awareness Month and Mac security.

Each of these topics takes time to review. So, let’s get nostalgic.


In 2012, it was clear that malware continued to be a problem for many companies. Of all of the topics we wrote about in 2012, we wrote about malware the most. Malware concerns came in four categories: web malware, new attacks, legacy malware and administrator-targeting malware:

  • Web malware — because of the ubiquity and reach of ad networks, attackers have made it a priority to infiltrate and infect ad servers. My colleagues, analysts Evan Keizer and Grace Zeng, wrote extensively about a banner-ad infection campaign that caused compromised ad servers to inadvertently serve malware. Unfortunately there are no easy fixes for banner infections; webmasters (and their colleagues in marketing) must be extremely vigilant.
  • New attacks — the Flame malware family, which some have called the most sophisticated malware ever found, was discovered in May by our friends at Kaspersky and widely covered. We thought it was notable enough to write about, too. Just to show that I don’t have a monopoly on bad puns, my colleague Rick Westmoreland asked, “Flame: Is it getting hot in here?”
  • Legacy malware — we saw campaigns targeting old-school programs like Symantec’s venerable pcAnywhere. (If you are asking yourself, “do they still make that?” you aren’t alone.) Malware targeting Microsoft’s Remote Desktop Protocol (RDP) also spread rapidly; we felt it was dangerous enough to issue an advisory.
  • Administrator-targeting malware — the most insidious malware campaign we saw in 2012 was one targeting Plesk, an administrative console for website operators. This was a little scarier than most campaigns because it obviously targeted people who already have a high level of privileges — your IT guy. This is the kind of thing that presages an industrial espionage campaign, a topic I covered at length in my webinar “The Hype and Reality of APTs,” something you should watch. (Ed: I am not joking. Really, go watch this; it deflates the APT hype balloon.)
In addition, we gently ribbed the anti-virus industry in an amusing post (Ed: to me, anyway) called “The Best and Worst Data-Driven Security Reports of 2011,” where I made fun of the silliness that comes with the periodic rash of AV “threat reports,” while celebrating the genuine good stuff, such as the Verizon Data Breach Investigations Report.

Cloud security

In 2012, cloud security topics were right up there with malware in our consciousness. Call me crazy, but to me “the cloud” is a fancy name for hosted services mashed up with virtualization, and juiced up with instant-on provisioning and elastic usage billing. It’s a new — and welcome — twist on an old concept. Companies want to use the cloud in areas where it makes sense — for hosted email, productivity, and sales automation — but they want to do it only when they can be assured that their data is secure.

My colleague Grace Zeng wrote about a key class of cloud risks: the security of servers in the cloud. She performed experiments in which she placed 12 unprotected servers in the Amazon cloud and watched what happened. The headline: on average, your new cloud servers will start seeing scans, probes and potential attacks within an hour! Scary stuff — if you haven’t already, you should read these posts.

On the positive side, Perimeter created a series of video blog posts called the Cloud Owners’ Manual that took strong points of view on how companies should think about the cloud, and what they should be asking their vendors. Looking spiffy in a suit, I spoke on camera about key customer concerns about the cloud, and gave prescriptive guidance on the cloud in general, customer fees, data protection, data privacy, contractual terms, and contract termination. As an analogy, I compared cloud security requirements to car safety belts. Did you know that, according to official US DOT statistics, people now drive faster and have fewer accidents since the advent of car safety technology? It shows how safety gear is a precondition for faster, safer driving. To put it differently: confidence requires security. And by analogy: so it is with the cloud.

Mobile security

From iPhones to iPads to Galaxies, mobile devices continued to move to the top of IT security managers’ list of concerns. Beyond the sheer proliferation of devices, we observed four key trends:

  • Bring your own device. When I was an analyst at Forrester, my then-colleague Natalie Lambert coined the term BYOD and wrote quite a bit about it. That was four years ago. Now, it’s the hottest thing in IT. What do companies do about it? For our part, Perimeter answered the bell in September when we unveiled our Cloud MDM service in partnership with AirWatch. In the service, we included strong default policies and a unique BYOD Kit that provides prescriptive guidance for all of the areas employers need to worry about: data rights, support, confiscation, and many other topics. We think the right solution to BYOD is holistic, and encompasses the domains of policy, technology and law.
  • Developer ecosystem concerns. In September, developer Blue Toad had 12 million Apple unique device identifiers (UDIDs) stolen. This shined a spotlight on a fragmented, shadowy part of IT: the thousands of smallish, contract mobile app developers, very few of whom are likely following mobile app security best practices. Watch for this topic to explode in 2013 as the Mobile Backend-as-a-Service (MBaaS) category heats up.
  • Data privacy. In the first quarter, we saw a controversy erupt over the Path app, which was uploading customer address book records to its servers unbeknownst to customers. I called Path an example of “nosy apps” and characterized data privacy as the “third rail of mobile.” These kinds of negative stories had an immediate impact on handset makers. Apple, for example, added significant opt-in controls to iOS 6 that require customers to explicitly authorize app access to address books, photos, calendars, tasks, Facebook account information and much more.
  • iOS has been a benefit to security. Speaking of Apple, did you know that iOS is now over 5 years old? In that time, customers have gotten used to the idea of vendor-controlled app marketplaces, digitally signed and trusted operating system runtimes, and locked-down devices. We have Apple to thank for popularizing the concept, building on the kinds of concepts RIM and Symbian had initiated. See my in-depth 5-year iOS security retrospective for details about why I think iOS is overall a huge net win for companies and consumers alike.

Financial services

Banks, credit unions, broker-dealers and other financial institutions continue to be a significant part of Perimeter’s customer base. We noted many, many threats to financial services customers in 2012. The rash of distributed denial-of-service (DDoS) attacks in September prompted us to issue a critical advisory to our customers. We followed up on the DDoS story in October; my colleague Rick Westmoreland called it “the new reality” for financial services firms.

In July, we published our first Financial Services Threat Report, covering the first half of 2012, which described the most important threat trends our customers were facing in the year to date. We will be doing more of these reports, and our second-half report will be coming out after year-end. To help our credit union customers, Andrew wrote a three-part series on credit union security topics.


Beyond these four main themes, Perimeter noted several other trends. We weighed in on this newfangled concept called “cyber security,” which is what happens when government-type people get their hands on an otherwise perfectly acceptable phrase — that thing that most of us used to call “information security” — and dumb it down. I suppose cyber-security is, to paraphrase Deng Xiaoping, Security With Government Characteristics.

Whatever you choose to call it, we helped celebrate National Cyber-Security Awareness Month in October with four posts by my esteemed colleague Mr. Mike Flouton.

Midway through the year, Perimeter E-Security CEO Tim Harvey and actor/entrepreneur/restaurateur Robert De Niro hosted an exclusive New York event for 75 select partners and customers. The event featured an inspiring talk by two active duty Navy SEALs about building a high-performance, elite team capable of executing the most difficult missions. Tim’s summary of the event is here — in which he describes the key ingredients for success. For the record, I spoke at the event as well, but let’s face it: De Niro and the two Navy SEALs were hard acts to follow. It was a great event, though!

Lastly, Perimeter wrote about those devices your executives and developers are probably now carrying: Macs. In October, we released a survey showing that Mac usage is up, and that security concerns are increasing. Earlier in the year, we alerted customers to something rather rare but important: a real-life Mac Trojan outbreak in the wild, the Flashback Trojan.

Wrapping up

As I noted at the top of this post, security is the gift that keeps on giving. That’s good and bad. It’s bad for the obvious reason: the threats, concerns and challenges that got our (and the industry’s) attention affect companies and their customers everywhere. If security were a solved problem, we wouldn’t need to spend the time, attention and effort that we do.

I choose to be positive, though. Security threats and challenges are also good things. They remind us that, as professionals, we need to keep upping our game. New business frontiers such as mobile cause us to expand our horizons, become more involved with our colleagues and take the longer view.

As we look ahead to 2013, we are thankful for the continued support of our customers, colleagues and families. We at Perimeter wish you, dear reader, all the best this holiday season.


New exploit campaign targeting Parallels Plesk — admins, beware!

Written by Richard S. Westmoreland. Posted in Blog Post

Last week, the Perimeter Security Operations Center added specific SIEM detection for an active exploit kit campaign dubbed RunForestRun. An updated diary article from SANS provides a good summary of the campaign:

  • The campaign uses an underground web server called Sutra TDS that makes analysis difficult.
  • Two different sets of URLs are used for redirection, and successful redirection only happens when the cookies from the previous stage are set (this evades direct analysis of the final URL).
  • Successful exploitation via the Blackhole Exploit Kit drops ZBot (ZeuS).
  • Unsuccessful exploitation serves fake AV in the local language.

As we dug deeper to trace the campaign’s point of origin, we confirmed that these exploits originated from legitimate sites with compromised JavaScript files. The sites were compromised through vulnerable versions of Parallels Plesk (versions prior to 10.4) and ongoing attacks against the site-management platform. Unmask Parasites summarized the attacks as follows:

  • Feb-Mar 2012: attackers gained admin access to Plesk, planted backdoors and stole user databases (which contained passwords in plaintext).
  • June 2012: hackers used the stolen credentials to modify .js scripts with obfuscated code.

The .js scripts are appended with a heavily obfuscated routine that causes remote content to be loaded when the site is visited:

...obfuscated code...
...obfuscated code...

In other cases, we have observed similar code prepended to HTML files. We are still verifying whether this is part of the same campaign or another compromise altogether. Other third-party analyses of the campaign also show other exploit kits in use, such as Nice Pack and RedKit.

The script generates a pseudorandom domain with a .RU suffix, and at the start of the campaign loads /runforestrun?sid=cx from that domain.  At the time of this writing the URL is now /runforestrun?sid=botnet.

Based on referrer information, the page either continues a series of redirections or displays a fake error stating that the domain has been suspended.
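To make the mechanism concrete, here is a hypothetical sketch of how a domain-generation algorithm (DGA) of this general kind works. The hash, seed string and character mapping below are invented for illustration; this is not the actual RunForestRun JavaScript, only a toy model of the technique:

```python
import hashlib

def generate_domain(seed, day):
    """Derive a pseudorandom .ru domain from a shared seed and a date index.

    Illustrative only -- a stand-in for the obfuscated routine the
    campaign injects, not a reproduction of it.
    """
    digest = hashlib.md5(f"{seed}:{day}".encode()).hexdigest()
    # Map the first 12 hex characters onto lowercase letters a-p.
    name = "".join(chr(ord("a") + int(c, 16)) for c in digest[:12])
    return name + ".ru"

# Attacker and malware can independently compute the same rendezvous
# domain for a given day, so there is no fixed URL to blacklist.
url = "http://" + generate_domain("campaign-seed", 1234) + "/runforestrun?sid=cx"
print(url)
```

Because the domain changes with the date index, defenders chasing yesterday's domain are always one step behind, which is why blocking whole classes of domains (as in the Fortigate recommendation below) can be more effective than blacklisting individual URLs.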

Recommendations for customers

The RunForestRun campaign is particularly dangerous because it targets sites running Plesk. Who are the typical users of Plesk? Privileged administrators, of course. As such, we regard this campaign as a deliberate strategy to compromise highly privileged employees. That places it in a more serious risk category than your garden-variety malware campaign.

For IT administrators who use Plesk to manage their websites, we recommend that you:

  • Confirm you are not running versions prior to 10.4.
  • If you have been using an older version, upgrade immediately, review all your web files for compromise, and reset your accounts’ passwords.

For Perimeter E-Security managed security customers, we have implemented these protections:

  • Fortigate Antivirus is providing partial client-side protection with JS/Iframe signatures
  • Fortigate Web Content Filtering can block Unrated sites and all *.RU domains if requested
  • IDS Monitoring and 24/7 SaaS Security Monitoring benefit from SIEM correlations created to track this activity, with 24/7 security analysts to escalate alerts and assist with log analysis

Due to the complexity of this campaign, we recommend keeping local desktop antivirus software frequently updated. You should also use web browser plugins, such as NoScript, to block unauthorized scripts.


Five Years On, iOS Security Has Been a Huge Win for Customers

Written by Andrew Jaquith. Posted in Blog Post

Apple introduced the original iPhone in June 2007, a little more than five years ago. It’s appropriate at this point in time to ask whether Apple’s then-new, untested mobile platform has lived up to its promise as a secure platform. F-Secure’s Mikko Hypponen, one of the few consistently rational voices in the anti-virus vendor community, believes iOS has been good for customers from a security perspective. As he tweeted yesterday:

iPhone is 5 years old today. After 5 years, not a single serious malware case. It’s not just luck; we need to congratulate Apple on this.

On the opposite side of the argument is Sophos’ Josh Long. Although he concedes that Apple’s App Store is “relatively safe,” he argues that Apple could do a better job vetting and patching, and that the risks of jailbreaking are still high:

Security researcher Charlie Miller has previously figured out how to break the App Store anti-malware model using a flaw in the iOS code signing enforcement mechanism, and there have been reports of developers working around other App Store restrictions with clever tricks; see the Security Now! episode 330 transcript and search for “vetting.”… The history of jailbreaking iPhones and iPads has provided plenty of evidence that smartphone users are being made to wait too long to get security updates for their devices.


Why I was lazy about changing my LinkedIn password

Written by Andrew Jaquith. Posted in Blog Post

I have a confession to make. I did not change my LinkedIn password until today, more than two weeks after LinkedIn disclosed that its password database had been hacked.

Some background on the attack. As you may know, an unknown attacker compromised the LinkedIn website and made off with nearly 6.5 million password hashes. These were low-sodium hashes, apparently, because they were not salted — making it easier for an attacker to perform what’s sometimes called a “rainbow table” attack. This means the attacker compares the hashes in the compromised database against a precomputed list of passwords that had been previously run through a hashing algorithm, in this case SHA-1. Because the hashes were not salted, successfully guessing a given password was relatively trivial for commonly used passwords. Most computer security experts, including no less than SANS Institute’s Johannes Ullrich, recommend that passwords should be “salted” in addition to hashed, which makes this type of attack harder.
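A minimal Python sketch makes the point. The tiny six-entry dictionary below stands in for a real precomputed lookup table (which would hold millions of entries); it shows why an unsalted SHA-1 hash falls instantly if the password is common, and how a per-user salt defeats table reuse:

```python
import hashlib
import os

# A toy stand-in for a precomputed rainbow/lookup table: SHA-1
# digests of common passwords, computed once and reused forever.
common = ["password", "123456", "princess", "link", "god", "job"]
lookup = {hashlib.sha1(p.encode()).hexdigest(): p for p in common}

def crack_unsalted(stolen_hash):
    # One dictionary lookup per stolen hash -- effectively free.
    return lookup.get(stolen_hash)

# Unsalted, as LinkedIn stored them: instantly reversed if common.
stolen = hashlib.sha1(b"princess").hexdigest()
assert crack_unsalted(stolen) == "princess"

# Salted: the same password hashes differently for every user, so
# a single precomputed table no longer matches anything.
salt = os.urandom(16)
salted = hashlib.sha1(salt + b"princess").hexdigest()
assert crack_unsalted(salted) is None
```

Salting forces the attacker to rebuild the table per user, which is real progress — though, as the discussion below argues, it is still not enough on its own.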

If you want to read an erudite, well-reasoned discussion about passwords and why naïve hashing strategies (like the one LinkedIn used) don’t work, go and read Brian Krebs’ interview with my friend Thomas Ptacek, founder of Matasano Security, charcuterie expert extraordinaire and all-around good guy. Thomas argues that as a general principle defenders need to make attackers work harder. He also argues that the typical “well, just make sure you salt your hashes” expert advice doesn’t work any more either. Salting your hash won’t work because it doesn’t add much computational time to the attempt. Here’s the key quote:

Let’s say you have a reasonably okay password and you’re using salted, randomized SHA-1 hash. I can, off-the-shelf, build a system that will try many, many tens of thousands of possible passwords per second. I can go into Best Buy or whatever and buy a card off the shelf that will do that, and the software to do it is open source.

If we were instead using something like Bcrypt, which would have just been the difference of using a different [software] library when I built my application, I can set Bcrypt up so that a single attempt — to test whether my password was “password” or “himom” — just that one test could take a hundred milliseconds. So all of a sudden you’re talking about tens of tries per second, instead of tens of thousands.

What you do with a password hash is you design it in the opposite way you would design a standard cryptographic hash. A cryptographic hash wants to do the minimum amount of work possible in order to arrive at a secure result. But a password hash wants to deliberately be designed to do the maximum amount of work.

Thomas is usually the smartest guy in any room he happens to be in, and I agree with his recommendations. What, then, should you be doing in your web applications? Using an algorithm like Bcrypt is a good idea. If you are using a reasonably modern programming language (.NET, Java) with good crypto libraries, you can also use PBKDF2, which stands for Password-Based Key Derivation Function #2. Both Bcrypt and PBKDF2 follow the same principle: they create an initial hash, and then (more or less) perform the hashing operation over and over an arbitrary number of times (the “iteration count”). The result is that the hash becomes comparatively expensive to compute, but that’s OK for the typical person who is just typing in a password. He or she won’t mind if it takes half a second. But if you are an attacker, even a half-second messes up your business model.
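The principle can be sketched with Python’s standard-library `hashlib.pbkdf2_hmac`; the password and the 10,000-iteration count here are illustrative choices, not a production recommendation:

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)  # per-user random salt, stored alongside the hash

# One fast hash versus 10,000 chained iterations.
fast = hashlib.sha1(salt + password).hexdigest()
slow = hashlib.pbkdf2_hmac("sha1", password, salt, 10_000)

# Verification simply re-derives with the stored salt and compares.
again = hashlib.pbkdf2_hmac("sha1", password, salt, 10_000)
assert slow == again

# Each guess now costs ~10,000 hash operations instead of one, which
# is what wrecks the attacker's guesses-per-second budget.
start = time.perf_counter()
hashlib.pbkdf2_hmac("sha1", b"guess", salt, 10_000)
print(f"one PBKDF2 guess took {time.perf_counter() - start:.4f}s")
```

The iteration count is a tunable knob: as hardware gets faster, you raise it, keeping the per-guess cost roughly constant for the attacker while staying imperceptible to the legitimate user.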

(In case you were wondering, if you protect your iPhone or iPad with a passcode, Apple’s iOS 4 and higher use PBKDF2 with 10,000 rounds of iteration to protect the pass codes. That makes me feel pretty good.)

Ok, smart guy. LinkedIn wasn’t doing any of that stuff. Why didn’t you change your LinkedIn password again?

Whoops. I got a little distracted. So, Bcrypt and PBKDF2, those algorithms I mentioned above — the ones  you should be using in your web applications? LinkedIn wasn’t using them. The company just hashed stuff. Attackers were able to run a simple rainbow-table/dictionary attack and recover a lot of the passwords. In fact, our friends at Rapid7 have created a nifty infographic showing what the most popular ones were. Passwords like “link,” “god,” “job,” and “princess” topped the list. (Princess?)

So, shouldn’t I have been worried that wily hackers cracked my password at some point in the last 2 weeks? Maybe a little. I confess, I slacked a bit in changing my password. But then again, I felt pretty sure mine hadn’t been cracked. In the spirit of full disclosure, this was my old password: 3d*f$elMZ0tK.

There is little chance an attacker could have brute-forced it — it is completely random, and fairly long (12 characters). To generate it, I had previously used a third-party password management tool called 1Password. The tool creates an encrypted vault of passwords, all protected by a master password. I use it to generate unique, long and complex passwords for every website I join or log into. As a result, none of my website passwords are shared. They are all unique. And they can’t be easily brute-forced. Some of my passwords are 36 characters long.
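A rough sketch of what such a generator does, using Python’s `secrets` module — this approximates the idea, and is not 1Password’s actual algorithm or character set:

```python
import secrets
import string

def make_password(length=12):
    """Generate a random password the way a vault tool might.

    Illustrative only: the alphabet below is a made-up example, not
    the character set any particular password manager uses.
    """
    alphabet = string.ascii_letters + string.digits + "*$#!%"
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique, random password per site; only the vault's master
# password ever needs to be memorized.
site_passwords = {site: make_password() for site in ("linkedin", "bank", "email")}
print(site_passwords["linkedin"])
```

The key property is that `secrets` draws from the operating system’s cryptographic randomness, so the result is uniformly random over the alphabet — there is no pattern for a dictionary or rule-based cracker to exploit.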

I don’t remember any of them, and I don’t care. I make it a rule not to remember my passwords, except for the master password.*

If you follow a strategy like this as well, when the next big website gets knocked over, you won’t have to care either.

PS. If you are interested in password-vault tools, read Elcomsoft’s paper on password managers first. Although I don’t have experience with the other password managers cited in the paper, Keeper and DataVault both seemed to score fairly highly in terms of resistance to brute-force attacks.  See also the 1Password team’s commentary.

*Actually, that’s not quite right. I have also memorized my iTunes Music Store password (16 characters, totally random). I had to memorize it because I type it in so often. But I digress.


How Vulnerable are Unprotected Servers in the Cloud? Part II

Written by Grace Zeng. Posted in Blog Post

By Grace Zeng and David Coffey

In Part I we described our experiment with 15 unprotected servers in the cloud under two configurations, and showed the elapsed times to the first port scan, vulnerability probe and exploit. In this post, we would like to share the observations and insights we gained from this experiment and discuss several ways to protect your machines, whether they run in the cloud or on-premises.


  1. Every machine on the Internet is scanned within minutes of connecting. It does not matter whether a machine connecting to the Internet opens ports or not — any machine will be scanned within several minutes. This is not surprising: attackers don’t know what a machine is running a priori; they need a scan to find out.
  2. More open ports means more vulnerability probes. The amount of elapsed time between the initial connection and the arrival of vulnerability probes depends on the specific services that are running. The more listening services a machine has, the sooner it will be probed, and the more risks it will be exposed to.
  3. More vulnerabilities means more exploits. It is rare that attackers send exploits blindly without first knowing that their targets are vulnerable. On the other hand, if unprotected machines have holes, chances are good that attackers will find them and attempt to exploit them. How long it takes depends on the vulnerabilities a machine has.
  4. Login attempts are more common than exploits. We observed that attempts to log in were much more frequent than vulnerability probes or exploits. On each machine, we captured dictionary attacks at ports 445 (SMB) and 3389 (RDP), attempting thousands of username/password combinations. Most attempts targeted accounts with administrator privileges. Weak or default passwords can be easily broken.
  5. Unknown exploits are rare. Even though our machines were completely unprotected, we saw few unknown exploits. This suggests that security products such as firewalls, IDS/IPS and AV tools, which rely heavily on signatures of known threats, are likely to be effective in protecting computers from the most common attacks.

Recommendations for customers

Our experiment showed that the Internet is perilous. Attackers are constantly searching for targets. Although the Windows operating systems have become more secure, machines are still vulnerable to attacks. We do not recommend that customers follow our example — that is, putting unprotected servers into the cloud or directly onto the Internet.

To protect machines from attacks, customers should employ a multi-layered defense strategy. Appropriate defensive tactics fall into two categories: network-based and host-based. Network-based mechanisms include firewalls, IDS/IPS and anti-virus tools. (Note that in-cloud customers are not able to customize and deploy all network-based mechanisms, but do as much as you can. For instance, on Amazon EC2, you can easily configure inbound traffic rules on the management console for each machine.) Host-based methods include anti-virus software, host firewalls and host-based IDS/IPS (HIDS/HIPS) as well as security log management agents. Network-based defenses can secure your entire network perimeter and prevent threats from reaching computers inside the network, while host-based defenses safeguard individual machines from attacks. Customers can also limit the risk by taking action to reduce attack surfaces.

Network-based tactics


Firewalls are an essential part of network-based defenses. Customers should:

  • Configure firewalls to deny all unwanted incoming traffic. As a result, attackers scanning the public-facing IP address will see most ports closed or filtered, and subsequent vulnerability probes and exploits will not be launched.
  • If some firewall ports must be open, restrict source and destination ranges. If your business requires you to accept incoming connections such as port 3389 (RDP), you should configure your firewall to restrict logins to users and IPs that are allowed to make such connections; connections from other sources will be denied.
  • If source restriction is not possible, consider rate limiting attack-prone traffic. For example, a common attack tactic is to attempt to brute-force logins on the target system. As a defensive tactic, you can implement rules that drop traffic from source IPs whenever the number of failed login attempts exceeds a threshold within a given time window. This allows externally facing machines to get a few failed logins but not an entire brute-force or dictionary attack. Similar rate-limiting rules can be set up to filter out targeted port scans and host sweeps as well.
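The sliding-window logic behind that rate-limiting rule can be sketched in a few lines. This is illustrative pseudologic rather than a real firewall module; the threshold and window values are made up for the example:

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Block source IPs whose failed-login count exceeds `max_failures`
    within a sliding `window` (in seconds) -- the tactic described above."""

    def __init__(self, max_failures=5, window=60.0):
        self.max_failures = max_failures
        self.window = window
        self.failures = defaultdict(deque)  # ip -> failure timestamps

    def _expire(self, ip, now):
        q = self.failures[ip]
        while q and now - q[0] > self.window:
            q.popleft()  # drop failures older than the window
        return q

    def record_failure(self, ip, now=None):
        now = time.time() if now is None else now
        self._expire(ip, now).append(now)

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        return len(self._expire(ip, now)) >= self.max_failures
```

A few failed logins pass through untouched, but a sustained dictionary attack trips the threshold; once the attacker goes quiet for longer than the window, the block naturally expires.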

Intrusion detection/prevention systems (IDS/IPS) and anti-virus (AV) tools

Although firewalls are generally effective in allowing legitimate traffic while blocking unwanted traffic, they work at a coarse level of granularity. They also have limited abilities to examine the content of the traffic. Customers should:

  • Install a network-based IDS/IPS. These provide more granular protection. IDS/IPS can inspect traffic to detect vulnerability probes and exploits by matching known attack patterns. These products also use heuristics to quickly pinpoint malicious communications and take necessary actions thereafter.
  • Incorporate network-based AV tools. IDS/IPS is communications centric, whereas AV tools provide visibility into files, programs and data transferred in the traffic in ways that IDS/IPS cannot. AV tools are especially capable of capturing packed and polymorphic malware in transmission.

Host-based tactics

In addition to network-based defenses, customers can add defenses on the host. Customers should:

  • Use AV software to prevent, detect and remove malware on host machines. These products are able to identify malware that propagates not only through network communications but also through channels such as USB drives.
  • Enable or install a host firewall to allow or deny traffic that is transmitted from, or received by, particular programs or applications.
  • Install a host-based IDS/IPS (HIDS/HIPS) product on critical servers to provide full visibility into what is happening on the host — all system events happening in the memory, file system and registry. HIDS/HIPS also uses behavior-based rules to defend against unknown threats.
  • Consider utilizing log management agents to collect and monitor important system logs. Especially when HIDS/HIPS is absent, log management agents provide audit trails for further analysis and can send alerts in real time as incidents occur.
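To illustrate the log-monitoring idea in the last bullet, here is a hedged sketch of an agent that scans security-log lines for failed logins and flags sources that exceed a threshold. The log format, field names and threshold are hypothetical; real agents parse Windows event logs or syslog:

```python
import re

# Hypothetical log format: "... FAILED LOGIN for <user> from <ip>"
FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (\d+\.\d+\.\d+\.\d+)")

def scan_log(lines, threshold=3):
    """Return the set of source IPs with at least `threshold`
    failed-login lines -- candidates for a real-time alert."""
    counts = {}
    for line in lines:
        m = FAILED_LOGIN.search(line)
        if m:
            ip = m.group(1)
            counts[ip] = counts.get(ip, 0) + 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

Even this trivial pass over an audit trail surfaces the dictionary attacks described in observation 4 above; a production agent would additionally stream results to a central collector and alert in real time.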

Attack surface reduction tactics

Besides the deployment of necessary defense mechanisms, customers can take additional, largely non-technical, steps to reduce the attack surfaces of Windows machines.

  • Always use strong passwords — either a combination of letters, numbers and symbols, or an easily remembered but hard-to-guess long phrase.
  • Disable and/or uninstall unused services.
  • Restrict anonymous SMB logons.
  • If possible, change default ports such as RDP (3389) to less well-known alternatives.
  • Always keep Windows and installed software up-to-date — ideally, automatically.

To sum up, no matter whether a machine is in the cloud or on-premises, no single defensive tactic is sufficient to protect it from attacks. Customers should combine host-based and network-based defensive mechanisms with attack surface reduction tactics to create a multi-layered defense for their machines.

Andrew Jaquith and Richard S. Westmoreland contributed to this post.


How Vulnerable are Unprotected Servers in the Cloud? Part I

Written by Grace Zeng. Posted in Blog Post

By Grace Zeng and David Coffey

The Internet is a playground for opportunistic attackers. Right now, there are thousands of malware threats circulating around the Internet. Most computers today are protected by firewalls, IDS/IPS and anti-virus (AV) tools. But what happens when they do not have any protection? Previous experiments on “Time-to-Live-on-the-Network” and “Survival Time” of Windows machines were conducted quite a few years ago with test machines running old Windows operating systems. The “Four-Minute Windows Survival Time” claim in 2008 was especially criticized for using a Windows XP RTM or SP1 version in the test.

Since the time of these initial time-to-live studies, the Internet threat environment has become deadlier. Meanwhile, the Windows operating systems have become more secure. Because the state-of-the-art changes so quickly, we wanted to know how well an unprotected machine with a current operating system does in today’s threat environment. Left to its own devices, how soon will it be probed and attacked? We are particularly interested in testing unprotected machines hosted in the cloud because enterprises are increasingly turning to the cloud for various business purposes.

Experiment design

We ran our experiment with 15 machines in Amazon’s Elastic Compute Cloud (EC2) environment with two configuration profiles: “wide-open” and “out-of-the-box”. In the wide-open scenario, a machine opens all ports and emulates all possible services. This way the machine can attract as many malicious attempts as possible. In the out-of-the-box scenario, a machine runs only with default open ports and services. This scenario gives us a baseline of how many malicious attempts an unprotected machine might encounter.

Windows is by far the most popular operating system on the Internet. Its Server versions are generally exposed to more risks than Home/Professional versions. Our tests were carried out on the latest Windows Server 2008 SP2 and Windows Server 2008 R2 SP1. We disabled all firewall and anti-virus programs and configured the security policies so that Amazon would allow all incoming connections (TCP, UDP, ICMP) to those machines. We used Wireshark to capture packet-level traffic in real time.

To create the wide-open scenario, we installed a low-interaction honeypot named HoneyBot and disabled or changed several services to avoid interference. After the configuration was complete, we took a snapshot of the instance and created an AMI (Amazon Machine Image) for later use. We launched ten instances on EC2 using the same AMI and made sure that they were hosted in different geographical zones and were allocated different IP addresses.
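HoneyBot itself is a packaged product, but the essence of a low-interaction honeypot is simple: listen on a port, record who connects, and offer no real service behind it. A toy single-port sketch (for illustration only; a real honeypot emulates protocol banners across many ports):

```python
import socket
import threading

def honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on one TCP port, record each peer address in `log`,
    and immediately drop the connection (low interaction)."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))          # port 0 = let the OS pick one
    srv.listen(5)
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr[0])     # record who knocked
            conn.close()            # no service behind the port
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port, log
```

Every entry appended to `log` corresponds to a scan or probe hitting the port, which is exactly the raw signal the elapsed-time measurements below are built from.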

For the out-of-the-box scenario, we made a clean install of Windows Server 2008 and didn’t install any programs other than Wireshark. By default, only ports 135 (RPC), 139 (NetBIOS), 445 (SMB) and 3389 (RDP) were open. We ran five such instances on EC2.

Scan, probe and exploit elapsed times

Malware infections follow a predictable pattern. Using a port scan, an attacker tests whether a port on a target machine is open. If it is, a vulnerability probe gathers more information about the listening service, such as its version, to identify vulnerabilities; finally, an exploit delivers a malicious payload to compromise the machine.
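The first stage of that pattern amounts to little more than a TCP connect attempt per port. A minimal sketch (a deliberately naive connect scan; real scanners such as nmap use far more sophisticated techniques like SYN scanning and timing evasion):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Return True if a full TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Return the subset of `ports` that accept connections."""
    return [p for p in ports if is_port_open(host, p)]
```

Any service that answers this trivial check is visible to the entire Internet, which is why the unprotected machines in our experiment started receiving scans within minutes.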

In the wide-open scenario, it took an average of about 23.4 minutes after launch to see the first port scan, and 56.4 minutes to see the first vulnerability probe (the times for each server are shown in Figure 1). Probes hit well-known ports such as 22 (SSH), 23 (Telnet), 25 (SMTP), 80 (HTTP), 445 (SMB), 1080 (SOCKS Proxy), 1433 (Microsoft SQL Server) and 3389 (RDP). With respect to exploit times, almost all first exploits arrived within 24 hours, with the average time being 18.6 hours (Figure 2). We captured exploits on ports 445 (SMB), 1434 (Microsoft SQL Monitor), 2967 (Symantec AV) and 12174 (Symantec Alert Management System 2). Almost all exploits during our month-long experiment were known threats. For example, the attack targeting port 12174 exploits a remote-code-execution vulnerability that was disclosed in 2009.

Figure 1. Scan and Probe Times for Wide-Open Servers (in minutes)

Figure 2. Exploit Times for Wide-Open Servers (in hours)

In the out-of-the-box scenario, it took an average of 13 minutes for the first port scan to arrive (Figure 3). Port scans hit ports such as 8080 (HTTP) and 1433 (MS SQL Server). The first vulnerability probe arrived within 3 hours on average (Figure 3); all probes were login attempts against SMB shares (port 445) or via RDP (3389). We monitored the servers for a few weeks but didn’t see any exploits, owing to the limited number of open ports.

Figure 3. Scan and Probe Times for Out-of-the-Box Servers (in minutes)

Wrap up

Back to the question we asked at the beginning: On today’s Internet, how long does it take for an unprotected machine in the cloud to be probed and attacked? The short answer is: not very long. What did we learn from the experiment? What can you do to beef up the defense of your machines? Go and check out Part II.

Andrew Jaquith and Richard S. Westmoreland contributed to this post.


Windows RDP exploit in the wild — patch your systems today

Written by Evan Keiser. Posted in Blog Post

During this month’s Patch Tuesday, Microsoft released six updates, including one critical patch. The critical patch fixes a serious flaw found in all versions of the Windows Remote Desktop Protocol (RDP). The flaw allows attackers to remotely execute code on machines running the service, or to perform denial-of-service (DoS) attacks.

This vulnerability is extremely dangerous, for three reasons:

  • RDP is widely used
  • RDP is commonly passed through firewalls due to its utility
  • No initial authentication is required to pull off these exploits

On Tuesday, Microsoft’s Security Research and Defense blog predicted that working exploit code would appear in the wild within 30 days: “During our investigation, we determined that this vulnerability is directly exploitable for code execution. Developing a working exploit will not be trivial – we would be surprised to see one developed in the next few days. However, we expect to see working exploit code developed within the next 30 days.”

Unfortunately, it seems Microsoft was a little too optimistic. It has only been three days, and there is already code in the wild, which our Security Operations Team observed this morning. The proof-of-concept (PoC) exploit code is able to successfully exploit the flaw with a specially crafted .dat file and a copy of netcat. Because the PoC code is now circulating, it is no longer a question of if but when botnet and worm programmers will incorporate it into their existing kits. Given how simple the flaw is to exploit, I personally expect RDP exploits to become prevalent in banking botnets such as ZeuS and SpyEye. It’s a “no brainer” prediction to say that this exploit will be added to any worms or trojans written this year.

We felt it was extremely important to get the word out so that you prioritize this patch for your Windows systems. To eliminate this risk, you should patch all of your RDP-enabled hosts immediately.

For those who cannot patch their RDP-enabled hosts immediately, Microsoft recommends enabling Network Level Authentication (NLA) as a temporary workaround. NLA substantially reduces the risk on Windows Vista and later systems by requiring authentication before a remote desktop session is established. Even with NLA enabled, the vulnerable code is still present and could potentially be exploited, but an attacker would first need to authenticate. Details on enabling NLA can be found at Technet.

NLA has some side effects. Enabling it will prevent Windows XP and Windows Server 2003 clients from connecting. If you need to connect to an NLA-enabled server from a Windows XP SP3 client, you will need to install support for the Credential Security Support Provider (CredSSP); installation instructions are available from Microsoft. Enabling NLA does not require a reboot.

In addition, links to the various Microsoft Fix it packages can be found on Microsoft’s blog.

Perimeter E-Security urges you to protect yourself by applying this patch, and/or implementing these workarounds. You should also:

  • harden your network by restricting RDP connections to known IP addresses
  • change the default RDP port
  • restrict users on RDP hosts to specific programs
  • create organizational units for terminal servers, and apply restrictive Group Policy settings to RDP-enabled desktops and terminal servers.

Harald Wilke and Andrew Jaquith contributed to this post.


Should you worry about your SSL certificates?

Written by Andrew Jaquith. Posted in Blog Post

In the succinctly titled research paper “Ron was wrong, Whit was right,” researchers Lenstra, Hughes and colleagues showed that weaknesses in real-world RSA key generation can produce weak keys in a small percentage of cases, making the affected RSA-based certificates insecure. As the paper put it,

We performed a sanity check of public keys collected on the web... We found that the vast majority of public keys work as intended. A more disconcerting finding is that two out of every one thousand RSA moduli that we collected offer no security. Our conclusion is that the validity of the assumption is questionable and that generating keys in the real world for “multiple-secrets” cryptosystems such as RSA is significantly riskier than for “single-secret” ones such as ElGamal or (EC)DSA which are based on Diffie-Hellman.

The report was widely reported in the press, including MSNBC, which ran the story under the sensationalistic headline “Hidden flaw jeopardizes millions of online transactions.” Here’s our take on the issue.


Evan’s Picks for February 10th, 2012

Written by Andrew Jaquith. Posted in Blog Post

STAR Team analyst Evan Keiser wrote up four hot security stories our customers should be aware of.

Tracking Koobface

Sophos put out a post about tracking and hunting down the Koobface botmasters. The article is extremely detailed and includes a lot of now-declassified diagrams showing just how complex their setup was. This is an excellent read.

New Internet Explorer hack: the Question Mark Parsing Flaw

Here’s a little info about the latest IE-only XSS attack, the Question Mark Parsing Flaw. (Sounds like a Sherlock Holmes mystery, doesn’t it?)

ThreatPost documents many sites currently experiencing XSS attacks stemming from this coding error, which affects only IE users. Imperva originally presented the information that appears in the ThreatPost article.

Google’s QR Codes

Google introduces two-factor authentication using QR codes. This is a pretty interesting and novel use of QR codes to increase security. Worth a read.

Phish thyself

Bot-chronicler extraordinaire Brian Krebs posted a great article on using the Simple Phishing Toolkit to test your employees’ security awareness. Phish thyself!