Posts Tagged ‘data breaches’

04
Dec

The Year in Review: 2012

Written by Andrew Jaquith. Posted in Blog Post

Greetings,

As the song goes, It’s The Most Wonderful Time of the Year. It’s the time of the year we write out our holiday cards, buy presents, think kind thoughts of our friends and family, and wax nostalgic.

Security is a big enough deal that it, too, warrants reflection and (dare I say it), a little bit of nostalgia. It’s the gift that keeps on giving. In that spirit, let’s dig up some of the tastiest chestnuts from the preceding 11 months, and gently roast them where appropriate. Given my sense of humor it’s going to be, shall we say, a dry roasting.

Here’s what got our attention in 2012. As is customary and appropriate, we spent a lot of time worrying about malware. The cloud — with all of its opportunities and challenges — was the second most important topic on our minds, along with mobile security. As you might expect, given our customer base of over 1,800 banks and credit unions, we analyzed financial services topics in depth. A variety of other topics got our attention, notably October’s National Cyber-Security Awareness Month and Mac security.

Each of these topics takes time to review. So, let’s get nostalgic.

Malware

In 2012, it was clear that malware continued to be a problem for many companies. Of all of the topics we wrote about in 2012, we wrote about malware the most. Malware concerns came in four categories: web malware, new attacks, legacy malware and administrator-targeting malware:

  • Web malware — because of the ubiquity and reach of ad networks, attackers have made it a priority to attempt to infiltrate and infect ad servers. My colleagues, analysts Evan Keizer and Grace Zeng, wrote extensively about a banner-ad infection campaign that caused MLB.com to inadvertently serve malware. Unfortunately, there are no easy fixes for banner infections; webmasters (and their colleagues in marketing) must be extremely vigilant.
  • New attacks — the Flame malware family, which some have called the most sophisticated malware ever seen, was discovered in May by our friends at Kaspersky and widely covered. We thought it was notable enough to write about, too. Just to show that I don’t have a monopoly on bad puns, my colleague Rick Westmoreland asked, “Flame: Is it getting hot in here?”
  • Legacy malware — we saw campaigns targeting old-school programs like Symantec’s venerable PCAnywhere. (If you are asking yourself, “do they still make that?” you aren’t alone.) Malware targeting Microsoft’s RDP protocol also spread rapidly; we felt it was dangerous enough to issue an advisory.
  • Administrator-targeting malware — the most insidious malware campaign we saw in 2012 was one targeting Plesk, an administrative console for website operators. This was a little scarier than most campaigns because it obviously targeted people who already have a high level of privileges — your IT guy. This is the kind of thing that presages an industrial espionage campaign, a topic I covered at length in my webinar “The Hype and Reality of APTs,” something you should watch. (Ed: I am not joking. Really, go watch this; it deflates the APT hype balloon.)
In addition, we gently ribbed the anti-virus industry in an amusing post (Ed: to me, anyway) called “The Best and Worst Data-Driven Security Reports of 2011,” where I made fun of the silliness that comes with the periodic rash of AV “threat reports,” while celebrating the genuine good stuff, such as the Verizon Data Breach Investigations Report.

Cloud security

In 2012, Cloud security topics were right up there with malware in our consciousness. Call me crazy, but to me “the cloud” is a fancy name for hosted services mashed up with virtualization, and juiced up with instant-on provisioning and elastic usage billing. It’s a new — and welcome — twist on an old concept. Companies want to use the cloud in areas where it makes sense — for hosted email, productivity, and sales automation — but they want to do it only when they can be assured that their data is secure.

My colleague Grace Zeng wrote about a key class of cloud risks: the security of servers in the cloud. She performed experiments in which she placed 12 unprotected servers in the Amazon cloud and watched what happened. The headline: on average, your new cloud servers will start seeing scans, probes and potential attacks within an hour! Scary stuff — if you haven’t already, you should read these posts.

On the positive side, Perimeter created a series of video blog posts called the Cloud Owners’ Manual that took strong points of view on how companies should think about the cloud, and what they should be asking their vendors. Looking spiffy in a suit, I spoke on camera about key customer concerns about the cloud, and gave prescriptive guidance on the cloud in general, customer fees, data protection, data privacy, contractual terms, and contract termination. As an analogy, I compared cloud security requirements to car safety belts. Did you know that, according to official US DOT statistics, people now drive faster and have fewer accidents than before the advent of car safety technology? It shows how safety gear is a precondition for faster, safer driving. To put it differently: confidence requires security. And by analogy: so it is with the cloud.

Mobile security

From iPhones to iPads to Galaxies, mobile devices continued to move to the top of IT security managers’ list of concerns. Beyond the sheer proliferation of devices, we observed four key trends:

  • Bring your own device. When I was an analyst at Forrester, my then-colleague Natalie Lambert coined the term BYOD and wrote quite a bit about it. That was four years ago. Now, it’s the hottest thing in IT. What do companies do about it? For our part, Perimeter answered the bell in September when we unveiled our Cloud MDM service in partnership with AirWatch. In the service, we included strong default policies and a unique BYOD Kit that provides prescriptive guidance for all of the areas employers need to worry about: data rights, support, confiscation, and many other topics. We think the right solution to BYOD is holistic, and encompasses the domains of policy, technology and law.
  • Developer ecosystem concerns. In September, developer Blue Toad had 12 million Apple unique identifiers (UDIDs) stolen. This shined a spotlight on a fragmented, shadowy part of IT: the thousands of smallish, contract mobile app developers, very few of whom are likely following mobile app security best practices. Watch for this topic to explode in 2013 as the Mobile Backend-as-a-Service (MBaaS) category heats up.
  • Data privacy. In the first quarter, we saw a controversy erupt over the Path app, which was uploading customer address book records to its servers unbeknownst to customers. I called Path an example of “nosy apps” and characterized data privacy as the “third rail of mobile.” These kinds of negative stories had an immediate impact on handset makers. Apple, for example, added significant opt-in controls to iOS 6 that require customers to explicitly authorize app access to address books, photos, calendars, tasks, Facebook account information and much more.
  • iOS has been a benefit to security. Speaking of Apple, did you know that iOS is now over 5 years old? In that time, customers have gotten used to the idea of vendor-controlled app marketplaces, digitally signed and trusted operating system runtimes, and locked-down devices. We have Apple to thank for popularizing the concept, building on the kinds of concepts RIM and Symbian had initiated. See my in-depth 5-year iOS security retrospective for details about why I think iOS is overall a huge net win for companies and consumers alike.

Financial services

Banks, credit unions, broker-dealers and other financial institutions continue to be a significant part of Perimeter’s customer base. We noted many, many threats to financial services customers in 2012. The rash of distributed denial-of-service (DDoS) attacks in September prompted us to issue a critical advisory to our customers. We followed up on the DDoS story in October; my colleague Rick Westmoreland called it “the new reality” for financial services firms.

In July, we published our first-ever Financial Services Threat Report, covering the first half of 2012, which described the most important threat trends our customers were facing in the year to date. We will be doing more of these reports, and our second-half report will be coming out after year-end. To help our credit union customers, I wrote a three-part series on credit union security topics.

Other

Beyond these four main themes, Perimeter noted several other trends. We weighed in on this newfangled concept called “cyber security,” which is what happens when government-type people get their hands on an otherwise perfectly acceptable phrase — that thing that most of us used to call “information security” — and dumb it down. I suppose cyber-security is, to paraphrase Deng Xiaoping, Security With Government Characteristics.

Whatever you choose to call it, we helped celebrate National Cyber-Security Awareness Month in October with four posts by my esteemed colleague Mr. Mike Flouton.

Midway through the year, Perimeter E-Security CEO Tim Harvey and actor/entrepreneur/restaurateur Robert De Niro hosted an exclusive New York event for 75 select partners and customers. The event featured an inspiring talk by two active duty Navy SEALs about building a high-performance, elite team capable of executing the most difficult missions. Tim’s summary of the event is here — in which he describes the key ingredients for success. For the record, I spoke at the event as well, but let’s face it: De Niro and the two Navy SEALs were hard acts to follow. It was a great event, though!

Lastly, Perimeter wrote about those devices your executives and developers are probably now carrying: Macs. In October, we released a survey showing that Mac usage is up, and that security concerns are increasing. Earlier in the year, we alerted customers to something rather rare but important: a real-life Mac Trojan outbreak in the wild, the Flashback Trojan.

Wrapping up

As I noted at the top of this post, security is the gift that keeps on giving. That’s good and bad. It’s bad for the obvious reason: the threats, concerns and challenges that got our (and the industry’s) attention affect companies and their customers everywhere. If security were a solved problem, we wouldn’t need to spend the time, attention and effort that we do.

I choose to be positive, though. Security threats and challenges are also good things. They remind us that, as professionals, we need to keep upping our game. New business frontiers such as mobile cause us to expand our horizons, become more involved with our colleagues and take the longer view.

As we look ahead to 2013, we are thankful for the continued support of our customers, colleagues and families. We at Perimeter wish you, dear reader, all the best this holiday season.

10
Sep

The BlueToad data breach shines a light on the mobile app developer ecosystem

Written by Andrew Jaquith. Posted in Blog Post

Earlier today, NBC News ran a story naming Florida-based mobile developer and publisher BlueToad as the source of a huge leak of 12 million unique identifying numbers (UDIDs), which were published a week ago by the anarchist hacker collective known as Anonymous. Anonymous named the FBI as the original source of the leak, but that turned out to be a red herring. When BlueToad compared its database of UDIDs to those that were stolen, it found a 98% correlation, suggesting that the data actually originated from BlueToad — meaning, somebody stole it from the company’s servers. And so BlueToad has come forward, to its great credit.

Some background: UDIDs are the unique device identifiers assigned to Apple iPhone and iPad devices. They are important because they are permanent and unique for every device. For this reason, many app developers have historically used UDIDs in the same way that web publishers use cookies: to track user behavior across applications. If you run a mobile ad network or have lots of apps in the App Store, you can collect UDIDs to understand how many of them are used by the same people. Although Apple banned the use of UDIDs in March of this year, the practice won’t be fully eradicated for a while. Old apps need to be purged from the App Store, and/or superseded by newer versions that don’t collect the UDID.
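To make the cookie analogy concrete: a developer who wanted cross-app tracking could simply log the raw UDID, but a privacy-friendlier alternative is to derive a per-app identifier so the raw value never needs to be stored. A minimal Python sketch, in which the function name, salt values, and example UDID are all hypothetical:

```python
import hashlib

def app_scoped_id(udid: str, app_salt: str) -> str:
    """Derive a per-app identifier so the raw UDID never leaves the device.

    Because each app mixes in its own salt, the derived identifiers cannot
    be joined across apps the way raw UDIDs can.
    """
    return hashlib.sha256((app_salt + udid).encode("utf-8")).hexdigest()

raw_udid = "2b6f0cc904d137be2e1730235f5664094b831186"  # example-format UDID

# Two different apps derive two unrelated identifiers from the same device.
print(app_scoped_id(raw_udid, "com.example.app-a"))
print(app_scoped_id(raw_udid, "com.example.app-b"))
```

Had the breached database held values like these instead of raw UDIDs, the leaked file would have been useless for correlating devices across other developers' apps.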

The disclosure of the UDIDs is less of a genuine security worry than a borderline privacy one. The file contains device UDIDs, device name, device type, and APNS certificate information. There does not appear to be any “personally identifying information,” at least with respect to the way that data breach statutes define it, i.e. the information cannot be used to uniquely identify or verify the identity of a natural person.

I could certainly speculate at greater length about whether or not the UDID breach should worry customers. In my view, it shouldn’t. But what is more interesting here is where the data was stolen from. Per BlueToad’s description of what it does, the company makes mobile apps for customers. Hundreds of apps. Many, if not all of these, were written under contract for publishing firms and other corporate customers. As NBC’s article states, “[BlueToad] provides private-label digital edition and app-building services to 6,000 different publishers, and serves 100 million page views each month.”

A huge ecosystem exists to provide custom mobile application development services. I know this first-hand: I get dozens of solicitations from these types of companies every month. As CTO of Perimeter, I am a natural magnet for every lead-generation campaign; my email address seems to be in every marketing database. Many of these mobile app development companies are offshore. They all promise the same thing: high-quality mobile apps, built to spec, and cheap! Or at least: much cheaper than I could possibly build them in-house. Here’s a sampling of pitches I’ve gotten in the last few months:

  • “Mr. Sam Alva is keen for a meeting with you to introduce our company and discuss the possibilities of ValueLabs supporting your software development, QA/testing and back office processes. Over the years, ValueLabs has provided enormous benefits to its clients by developing and supporting mission critical applications, enabling them to leverage the power of the web. Some of the key areas of our experience that may be relevant to your organization are mentioned below: Platform technologies (.NET, Java & Open source), Web services, application development & Strong UI design capabilities, Mobile Application technologies…”
  • “My name is Mahesh and I wanted to take this opportunity to introduce my company, Prime Technology Group. With over a decade of strong experience in Software Development and other IT allied services, Prime has been serving variety of industries including Healthcare, Financial, Insurance, Retail & e-commerce, Social Networking & Media, etc. both in the US and European markets. Prime has its corporate headquarters in Philadelphia, PA and a state-of-the-art offshore Software development center in Hyderabad, India. We have a proven track record with large and midsized clients including AstraZeneca, HSBC, Merck, PAMF, SUNRx, Gerson Lehrman Group, Harleysville Mutual Insurance Company, JP Morgan Chase and MedImpact Healthcare Systems, Inc. to name a few. Our Technical Practices include: Full lifecycle Software Application Development, Mobile Application Development, Web Application Development”
  • “I would like to request a meeting with you to discuss an opportunity of building a dedicated development team in Eastern Europe to support your IT needs. I represent TEAM International; a US owned and managed IT Professional Services Company with operations center in Ukraine. We specialize in custom Software Development on JAVA and Microsoft platforms, mobile Application Development, QA & Testing and SaaS / Cloud Computing.”

Now, I don’t mean to pick on these companies (other than to raise an eyebrow at how indiscreet one of them is regarding its clients). The broader point is that there are many, many such firms who are eager to sell mobile app development expertise to companies who don’t have the time or talent to make them themselves. These companies can’t all be geniuses at building security and privacy into the apps they make for their customers. Here are some questions that I’d ask each outsourcer:

  1. Given the focus on outsourcing and cost containment as the key value driver for your clients, how focused are you on secure development lifecycle practices?
  2. How aware are you of best practices for building mobile apps that are secure?
  3. How aware are you of best practices for building mobile apps that conform to the privacy laws that affect your clients?
  4. Are you collecting potentially privacy-invasive identifiers, such as Apple device UDIDs, in spite of the fact that Apple “bans” them?
  5. If you have stopped collecting device identifiers, have you gotten rid of the data?
  6. Are you transparent about your data collection practices, as recommended by the FTC? Or are you, as Graham Lee memorably put it, “Being a Dick?”

BlueToad gets credit for being forthright about what they knew and when they knew it. But then again, Apple advised iOS developers in August of 2011 that UDIDs would soon be banned. And here we are a year later. Millions of UDID records were just kicking around in BlueToad’s databases, and Anonymous stole them. That means BlueToad either wasn’t very quick about updating its apps (Apple started rejecting apps that collected UDIDs in March), or it never got rid of the data. I’d bet the latter. Either way, it’s not good.

But again, this isn’t really about BlueToad — it’s about the entire mobile app developer ecosystem. How many of these custom mobile developers are in the same boat? It’s hard to tell, but somehow I think we are about to find out.

Mark my words — if 2011 was the year that privacy issues became the Third Rail of mobile security, 2012 will be the year these concerns spread to the developer ecosystem.

27
Aug

Perimeter E-Security 1H 2012 Financial Institution Threat Report

Written by Grace Zeng. Posted in Blog Post

By Grace Zeng, with David Coffey and Andrew Jaquith

Summary: Perimeter E-Security provides comprehensive security services to financial institutions of all sizes. In this report for the first half of 2012, we summarize security incidents based on data from 861 financial institution customers. During that period, 1,619 likely and confirmed compromises were detected. Of these, 43% targeted small, 38% targeted mid-sized, and 19% targeted large institutions. In total, 483 financial institutions were affected by those incidents. A majority of our financial customers (56%) experienced at least one security incident in the last six months. Large institutions had the highest average number of incidents per institution: six, about one per month. Our security services blocked about one third of all incidents, preventing damage to customers’ assets. Based on our analysis, Trojan horses and the Blackhole exploit kit are the most common threats facing financial institution customers today.

Monthly incident trends

Perimeter processes about 1 billion raw security events per month. We distill these events down to approximately 120 thousand potential security incidents. Among those incidents, a majority are low-level — that is, they are informational or reconnaissance related. A smaller number are likely or confirmed successful system compromises — what we call medium- and high-level incidents. Throughout this report, a “security incident” refers to these two types. A Perimeter security analyst analyzes every one of these. When a customer suffers a security incident, it is likely that one of their computing assets such as a desktop, server or other resource has been — to put it plainly — 0wned.

The Perimeter security team analyzed over 1,600 incidents — likely and confirmed compromises — in the first six months of 2012. From the monthly trend graph, we can see that the number of security incidents increased steadily from January to May before declining slightly in June. Threats and attacks appear to be seasonal: more active in spring (March to May) than in winter (January and February).

Impact on financial institutions

Perimeter protects approximately 1,800 financial institutions. Our financial customers’ businesses range from banking and brokerage to credit unions, savings and loans and insurance. Our financial customers consist of 62% small institutions, 29% mid-sized institutions and 9% large institutions. We define small institutions as having assets less than $25 million; medium-sized between $25 million and $1 billion, and large institutions above $1 billion.

The chart below shows the distribution of incidents among our customer base. The plot shows percentages of financial institutions that had at least a certain number of incidents. In total, 56% of our financial customers experienced at least one incident. At one institution — the outlier at the right side of the chart — we detected 28 incidents over the past six months.


When analyzing the incidents by size of institution, we found additional patterns. In the past six months, 69% of our large financial customers experienced at least one incident. Midsize and small institutions, 63% and 51%, respectively, experienced one incident or more. On average, each institution suffered from about three incidents.

Of the top 10 customers that suffered the most security incidents, four are midsize (assets between $25 million and $1 billion) and six are large (assets greater than $1 billion) institutions. On average, each large institution had six incidents; each midsize had four; and each small had three. We believe large institutions are disproportionately targeted because they have large attack surfaces and offer attackers larger financial gains. Although small institutions are not usually primary targets of attackers, they can serve as stepping stones for larger-scale attacks. And crucially, small institutions are the most vulnerable to financial losses, and may not be able to survive even one attack.

Approximately one-third of all security incidents were successfully blocked by our in-cloud and on-premise security devices. The rest were detected after-the-fact by our security monitoring systems.

Attacker countries of origin


Although Perimeter is not as wildly enthusiastic about “top attacking country” metrics as some — we do not suffer from congenitally nervous urges to “name and shame” former colonies, for example — the country origins of attackers help confirm hunches and things we already know.

Of the security incidents we observed, attackers’ IP addresses were distributed across 50 countries around the globe. The heat map below plots these countries with respect to the number of offending sources. From a percentage perspective, more than 55% of attacks and threats originated from inside the United States. We expect that the main reason is that the financial institutions under scrutiny are almost all US-based. In addition, many of our customers commonly block traffic to and from non-US IP address ranges. We noticed that many users picked up malware from visiting legitimate US web sites.


Threat highlights

Financial institutions are particularly vulnerable to cyber crimes such as phishing and identity theft. We have seen numerous security incidents that have resulted in significant losses to the victim institutions.  A common propagation vector is targeted phishing emails addressed to employees with privileged account access. Once the recipient opens the link or the malicious attachment in the email, malware (in most cases, a Trojan) is installed. Sensitive account information is collected, which leads to unauthorized monetary transfers and customer data compromises. Based on the six-month incident data, Trojans turned out to be the major threat category facing financial institutions. As shown in the top 10 threat list, more than half of the incidents we observed were Trojan-related infections. Two threats on the list are particularly noteworthy: the Blackhole exploit kit and the ever-popular fake Anti-Virus. Details on each follow.

Blackhole exploit kit

The Blackhole exploit kit was the top threat plaguing our customers over the past six months. According to AVG Technologies, the Blackhole kit is the most popular toolkit in the cyber-underground. AVG’s Q2 threat report indicates that the Blackhole exploit kit makes up over half of detected malware; our figures agree broadly with AVG’s. The Blackhole kit is installed on a server controlled by a cyber-criminal. When an unsuspecting user visits a compromised page or clicks a malicious link in a spam message, the page or link redirects the user (usually via <iframe> tags) to the server. The server hosts obfuscated code that delivers various exploits targeting vulnerabilities in browsers and their popular plug-ins such as Adobe Flash, Adobe Reader and Java. Once an exploit is successful, the victim machine loads and executes malicious payloads, and downloads additional components if needed.
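The injected redirects are typically invisible iframes. As a rough illustration of what a scanner for compromised pages might look for, here is a Python sketch; the regular expression and URLs are illustrative only, not one of our production signatures:

```python
import re

# Iframes that render at zero size or are hidden via CSS are a common
# marker of injected exploit-kit redirects (illustrative pattern only).
HIDDEN_IFRAME = re.compile(
    r'<iframe[^>]+(?:width\s*=\s*["\']?0|height\s*=\s*["\']?0|'
    r'style\s*=\s*["\'][^"\']*(?:display\s*:\s*none|visibility\s*:\s*hidden))',
    re.IGNORECASE,
)

def flag_hidden_iframes(html: str) -> list:
    """Return the suspicious iframe fragments found in a page."""
    return [m.group(0) for m in HIDDEN_IFRAME.finditer(html)]

page = ('<p>Scores and highlights</p>'
        '<iframe src="http://evil.example/land.php" width="0" height="0">'
        '</iframe>')
print(flag_hidden_iframes(page))  # one suspicious iframe flagged
```

A real scanner would also have to handle obfuscated JavaScript that writes the iframe at runtime, which is exactly why static signatures alone are not enough against this kit.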

Perimeter has been closely following this exploit kit since its emergence. We observed that ease of upgrading helps make the kit prevalent; zero-day exploits are constantly added to it. For example, a Java vulnerability was disclosed in mid-June, and an exploit leveraging this flaw was made available in early July. The Blackhole kit also rapidly evolves the way it spreads to web servers. In its recent campaign in late June, web servers were compromised by exploiting the Plesk SQL injection vulnerability. Many web pages were infected with contaminated JavaScript files that loaded the Blackhole exploit code. To defend against this ever-evolving exploit kit, we have implemented several protection mechanisms for our customers:

  1. Network-based anti-virus is equipped with JavaScript/iframe signatures to offer client-side protection
  2. Web security (content filtering) can block domains that host Blackhole exploit kits
  3. Multiple correlation rules in our SIEM match patterns of related IP addresses, domains and file names.  Please refer to our recent blog post for details.
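Our actual SIEM rules are proprietary, but the idea behind the third mechanism, correlating multiple indicator types per host, can be sketched in a few lines of Python. The indicator values and event format below are made up for illustration:

```python
from collections import defaultdict

# Toy indicator sets. Real correlation rules draw on much larger,
# continuously updated lists of IP addresses, domains and file names.
BAD_DOMAINS = {"evil.example", "bh-landing.example"}
BAD_FILENAMES = {"land.php", "showthread.php"}

def correlate(events):
    """Flag hosts that touched both a known-bad domain and a known
    exploit-kit file name -- a stronger signal than either alone."""
    hits = defaultdict(set)
    for ev in events:
        if ev["domain"] in BAD_DOMAINS:
            hits[ev["host"]].add("domain")
        if any(name in ev["url"] for name in BAD_FILENAMES):
            hits[ev["host"]].add("filename")
    return [host for host, kinds in hits.items()
            if kinds == {"domain", "filename"}]

events = [
    {"host": "10.0.0.5", "domain": "evil.example", "url": "/land.php"},
    {"host": "10.0.0.9", "domain": "cdn.example", "url": "/jquery.js"},
]
print(correlate(events))  # -> ['10.0.0.5']
```

Requiring two independent indicator types before raising an alert is what keeps rules like this from drowning analysts in false positives.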

Fake Anti-Virus

Rogue anti-virus is a form of Internet fraud that tricks users into installing or purchasing fake AV programs to “help” remove non-existent threats from their computers. These malicious AV programs usually introduce Trojans to the victim computer to harvest personal information. Fake AV has been one of the most prominent online threats in recent years. Purveyors of fake AV push it through a variety of channels:

  • Spam emails with links or attachments
  • Malicious advertising and compromised ad networks
  • Web pages containing exploits
  • Search engine optimization (SEO) poisoning

We have been closely monitoring fake AV activities for our customers and observed a rash of campaigns that led to dozens of infections this May. In early June, we discovered that Major League Baseball and a few other legitimate websites fell victim to a compromised ad network and served up fake AV to their users. We managed to pinpoint a specific ad on MLB’s website that embedded an iframe redirection to a malicious server. This server then pushed fake AV from several Indian .in domains to users. We published detailed analyses of these campaigns here and here. To protect our customers, we immediately null-routed the IP addresses that the malware-hosting domains resolve to. We have also created several correlation rules, which we keep updating to detect new campaigns.

Protecting financial institutions

As our review of the first half of 2012 shows, financial institutions continue to be under attack. To protect our financial customers from attack, we provide multiple layers of defense: firewalls, web content filtering, IDS/IPS, AV tools and SIEM. Each plays an important role in defending against state-of-the-art threats.

Perimeter highly recommends that our financial institution customers take all necessary steps to safeguard their machines and follow security best practices. Customers should:

  • Never open unexpected email attachments or click on any links in suspected emails
  • Never supply any personal or account information as a result of an email
  • Always keep the operating system and software packages (browser and AV programs in particular) up-to-date
  • Always disable and/or uninstall unused services on endpoint machines, servers and network devices
  • If possible, block ads in the browser, or use web content filtering services

Dan Carter and Mike Flouton contributed to this report.

25
Jun

Why I was lazy about changing my LinkedIn password

Written by Andrew Jaquith. Posted in Blog Post

I have a confession to make. I did not change my LinkedIn password until today, more than two weeks after LinkedIn disclosed that its password database had been hacked.

Some background on the attack. As you may know, an unknown attacker compromised the LinkedIn website and made off with nearly 6.5 million password hashes. These were low-sodium hashes, apparently, because they were not salted — making it easier for an attacker to perform what’s sometimes called a “rainbow table” attack. This means the attacker compares the hashes in the compromised database against a precomputed list of passwords that have previously been run through a hashing algorithm, in this case SHA-1. Because the hashes were not salted, successfully guessing a given password was relatively trivial for commonly used passwords. Most computer security experts, including no less than the SANS Institute’s Johannes Ullrich, recommend that passwords should be “salted” in addition to hashed, which makes this type of attack harder.
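To see why unsalted hashes fall so quickly, consider this Python sketch of a precomputed-table lookup. The password list here is tiny for illustration; real attackers precompute hashes for millions of candidates:

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Plain, unsalted SHA-1 -- the scheme LinkedIn was using."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# Precompute hashes for common passwords once, up front.
common_passwords = ["password", "123456", "linkedin", "princess"]
lookup_table = {sha1_hex(pw): pw for pw in common_passwords}

# Reversing a leaked unsalted hash is then a single dictionary lookup.
leaked_hashes = [sha1_hex("princess"), sha1_hex("3d*f$elMZ0tK")]
for h in leaked_hashes:
    print(h, "->", lookup_table.get(h, "<not in table>"))
```

Note that the second hash, for a long random password, is not recovered: no precomputed table is big enough to contain it. A per-user salt defeats this whole approach by forcing the attacker to recompute the table for every salt value.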

If you want to read an erudite, well-reasoned discussion about passwords and why naïve hashing strategies (like the one LinkedIn used) don’t work, go and read Brian Krebs’ interview with my friend Thomas Ptacek, founder of Matasano Security, charcuterie expert extraordinaire and all-around good guy. Thomas argues that as a general principle defenders need to make attackers work harder. He also argues that the typical “well, just make sure you salt your hashes” expert advice doesn’t work any more either. Salting your hash won’t work because it doesn’t add much computational time to the attempt. Here’s the key quote:

Let’s say you have a reasonably okay password and you’re using salted, randomized SHA-1 hash. I can, off-the-shelf, build a system that will try many, many tens of thousands of possible passwords per second. I can go into Best Buy or whatever and buy a card off the shelf that will do that, and the software to do it is open source.

If we were instead using something like Bcrypt, which would have just been the difference of using a different [software] library when I built my application, I can set Bcrypt up so that a single attempt — to test whether my password was “password” or “himom” — just that one test could take a hundred milliseconds. So all of a sudden you’re talking about tens of tries per second, instead of tens of thousands.

What you do with a password hash is you design it in the opposite way you would design a standard cryptographic hash. A cryptographic hash wants to do the minimum amount of work possible in order to arrive at a secure result. But a password hash wants to deliberately be designed to do the maximum amount of work.

Thomas is usually the smartest guy in any room he happens to be in, and I agree with his recommendations. What, then, should you be doing in your web applications? Using an algorithm like Bcrypt is a good idea. If you are using a reasonably modern programming language (.NET, Java) with good crypto libraries, you can also use PBKDF2, which stands for Password-Based Key Derivation Function #2. Both Bcrypt and PBKDF2 follow the same principle: they create an initial hash, and then (more or less) perform the hashing operation over and over an arbitrary number of times (the “iteration count”). The result is that it is comparatively expensive to compute the hash, but that’s OK for the typical person who is just typing in a password. He or she won’t mind if it takes a half-second. But if you are an attacker, even a half-second messes up your business model.
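A quick way to feel the difference is Python’s standard-library `hashlib.pbkdf2_hmac`, which implements PBKDF2. This sketch compares one plain SHA-1 computation against a 100,000-iteration PBKDF2 derivation; the password and iteration count are illustrative:

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)  # random, per-user salt stored alongside the hash

# One fast, unsalted SHA-1: what a naive scheme costs per guess.
t0 = time.perf_counter()
hashlib.sha1(password).hexdigest()
fast = time.perf_counter() - t0

# PBKDF2 repeats the HMAC 100,000 times, deliberately slowing each guess.
t0 = time.perf_counter()
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
slow = time.perf_counter() - t0

print(f"plain SHA-1:              {fast * 1e6:8.1f} microseconds")
print(f"PBKDF2, 100k iterations:  {slow * 1e3:8.1f} milliseconds")
```

The iteration count is the knob: a legitimate login pays the cost once, while an attacker pays it for every single guess.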

(In case you were wondering, if you protect your iPhone or iPad with a passcode, Apple’s iOS 4 and higher use PBKDF2 with 10,000 rounds of iteration to protect the passcodes. That makes me feel pretty good.)
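To make this concrete, here is a minimal sketch of PBKDF2-based password storage using Python’s standard library. (The function names and the 16-byte salt are my own illustration; the 10,000-iteration count mirrors the iOS figure above, and production systems should tune it much higher on modern hardware and use a vetted library.)

```python
import hashlib
import hmac
import os

ITERATIONS = 10_000  # illustrative; matches the iOS 4 figure mentioned above


def hash_password(password, salt=None, iterations=ITERATIONS):
    """Derive a slow, salted hash with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest


def verify_password(password, salt, expected, iterations=ITERATIONS):
    """Re-derive the hash with the stored salt and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)
```

Note that the attacker must pay the full iteration cost for every single guess, which is exactly the asymmetry Thomas describes: the legitimate user pays it once per login and never notices.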

Ok, smart guy. LinkedIn wasn’t doing any of that stuff. Why didn’t you change your LinkedIn password again?

Whoops. I got a little distracted. So, Bcrypt and PBKDF2, those algorithms I mentioned above — the ones you should be using in your web applications? LinkedIn wasn’t using them. The company just hashed stuff. Attackers were able to run a simple rainbow-table/dictionary attack and recover a lot of the passwords. In fact, our friends at Rapid7 have created a nifty infographic showing what the most popular ones were. Passwords like “link,” “god,” “job,” and “princess” topped the list. (Princess?)
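To see why unsalted, fast hashes fall so quickly, here is a toy sketch of the dictionary attack in Python. (The wordlist is my own illustration; real attackers use dictionaries with millions of entries, plus precomputed rainbow tables.)

```python
import hashlib

# Toy wordlist; real attack dictionaries contain millions of candidates.
wordlist = ["123456", "password", "link", "god", "job", "princess"]

# Because the hashes are unsalted, each candidate password maps to one
# fixed digest, so a single precomputed table cracks every leaked hash.
lookup = {hashlib.sha1(word.encode()).hexdigest(): word for word in wordlist}


def crack(leaked_hash):
    """Return the password behind an unsalted SHA-1 hash, if it's in the wordlist."""
    return lookup.get(leaked_hash)
```

Against a salted scheme, the table would have to be rebuilt for every salt; against Bcrypt or PBKDF2, every entry would additionally cost a deliberate hundred milliseconds or so.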

So, shouldn’t I have been worried that wily hackers had cracked my password at some point in the last two weeks? Maybe a little. I confess, I slacked a bit in changing my password. But then again, I felt pretty sure mine hadn’t been cracked. In the spirit of full disclosure, this was my old password: 3d*f$elMZ0tK.

There is little chance an attacker could have brute-forced it — it is completely random, and fairly long (12 characters). To generate it, I had previously used a third-party password management tool called 1Password. The tool creates an encrypted vault of passwords, all protected by a master password. I use it to generate unique, long and complex passwords for every website I join or log into. As a result, none of my website passwords are shared. They are all unique. And they can’t be easily brute-forced. Some of my passwords are 36 characters long.

I don’t remember any of them, and I don’t care. I make it a rule not to remember my passwords, except for the master password.*
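For the curious, generating passwords like these requires nothing exotic. Here is a minimal sketch in Python (the function name and alphabet are my own illustration; a real vault tool like 1Password also encrypts and stores the results under the master password):

```python
import secrets
import string


def generate_password(length=12):
    """Generate a random password, the way a password-vault tool might."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A 12-character password drawn from a 94-symbol alphabet has roughly 78 bits of entropy (log₂ of 94¹²), which puts it far beyond any practical brute-force or dictionary attack.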

If you follow a strategy like this as well, when the next big website gets knocked over, you won’t have to care either.

PS. If you are interested in password-vault tools, read Elcomsoft’s paper on password managers first. Although I don’t have experience with the other password managers cited in the paper, Keeper and DataVault both seemed to score fairly highly in terms of resistance to brute-force attacks. See also the 1Password team’s commentary.

*Actually, that’s not quite right. I have also memorized my iTunes Music Store password (16 characters, totally random). I had to memorize it because I type it in so often. But I digress.

07
Dec

Heard on the Street — Predictions for 2012

Written by Andrew Jaquith. Posted in Blog Post

The Perimeter STAR Team holds its “Heard on the Street” call every week on Wednesdays. On these calls, the team discusses hot security trends, current events, and issues that our customers should be aware of. Below is an annotated summary of the topics we discussed this week, which we present as a service to our customers and to the public.

This week, in a special edition of HOTS, we asked the team to bring two ideas with them: (1) their favorite security, email or networking story of the year (either “best” or “worst”) and (2) one surefire prediction for 2012. Here’s what the team discussed, which we present for your entertainment.

Will Campbell, Senior Director, Network and Infrastructure Engineering

Will’s Evidence-of-Scarcity Story of the Year: IANA gave out the last IPv4 address blocks this year. In 2012, we will see many more constraints on giving out address blocks, which will push more companies to adopt IPv6. Note that this has already happened in countries outside the US, which weren’t given as much address space to begin with and so depleted their blocks more quickly. As a result of the increased uptake of IPv6, we expect to see more IPv6-related security weaknesses.

On a side note, Perimeter owns a Class B IPv4 address block. We’ve used about 1/4 of it. (ARJ asked, jokingly, whether we could put it on the company’s balance sheet as an asset.)

Will’s Reality-Distortion-Field Prediction: More companies will try to emulate Steve Jobs with their products: better focus on customer experience and product design. They will avoid putting “the sales guys” in charge.

Will’s Stick-Money-Under-The-Mattress Prediction: As a currency, the Euro will fail next year. Greece will essentially be “voted off the island.” As evidence, just look at the trouble Germany had selling its own bonds a few weeks ago.

Tom Neclerio, SVP Professional Services

Tom’s Advanced, Persistent Story of the Year: by far, it was the RSA hack. It shed a lot of light on a subject that wasn’t talked about much before: advanced, targeted attacks that go after a company’s trade secrets.

Tom’s Take-It-To-The-Bank Prediction: I predict mobile data leakage will become a major point of focus for banks in 2012. I’ve talked to many banks that are used to the idea of using data leak prevention (DLP) software to filter out violations in their email systems. They are very worried about data loss over mobile phones. A key problem is that on personal mobile devices, the owners typically use both personal and work email accounts on the same device. Without appropriate controls, it is too easy to forward emails from work to Gmail, for example.

[Note from ARJ: Perimeter/USA.NET's SaaS Secure Messaging suite provides channel DLP features for detecting credit cards, social security numbers, keywords and other patterns. We'd be remiss if we didn't tell you this, right?]

Ron Martin, QA Manager

Ron’s Story of the Year: I’d agree that the RSA story was it.

Ron’s Credit-Card-With-An-Antenna Prediction: We will see more personal data theft coming from smartphones. There are two problems. From the company’s perspective, the worry is that corporate information will be stolen or leaked. That’s the first problem. On the personal side, consumers and employees who carry these devices are at increased risk of theft of their personal financial information.

Ron’s Fear-The-Cloud Prediction: We will see at least one new class of vulnerabilities related to cloud services. Cloud platforms are relatively new, and while the attack methods are likely to be similar to those seen with other technologies, cloud has some unique properties. We will see at least one novel attack technique disclosed, and perhaps used against a major cloud infrastructure provider such as Amazon, Rackspace, GoDaddy or IBM.

Jeff Lathrop, Senior Exchange Developer

Jeff’s Trust-Is-For-Suckers Story of the Year: some of our supposed gatekeepers to the Internet — the SSL certificate authorities — were compromised this past year, in at least three cases. Comodo and DigiNotar were shown to have issued certificates to unauthorized parties. And in Malaysia, the Digicert Malaysia CA’s root was revoked by Mozilla and Microsoft after the CA was shown to be issuing weak certificates in violation of best practices.

Jeff’s Wearier-But-Wiser Prediction: In 2012, we will see more of the same. None of the problems we saw reported this year have been fixed: the CA issues, DNS problems, personal data leaks on smartphones, privacy issues with Facebook and Google, etc. With Facebook, for example, all they got was a slap on the wrist. Because none of the underlying root causes were fixed, 2012 will be a lot like 2011, but more of it.

Andrew Jaquith, Chief Technology Officer

Andy’s Wearier-But-Not-Wiser Story of the Year: The RSA breach was the biggest one by far, as measured by the amount of company resources it took to deal with it. We are an RSA reseller and thus a partner. We learned about the breach by reading a press release. Our customer support teams, operations staff, corporate communications teams and executives worked hard to understand the issue in depth, keep customers informed and create an action plan. That’s hard to do with a breaking story, especially when the vendor isn’t forthcoming about the risks. We wish RSA had handled the situation differently.

Andy’s Tipping-His-Hand-For-Next-Week Prediction: Because we will be hosting our annual “Five Predictions for the New Year” webinar next Wednesday, December 12th at 2PM Eastern time, I’d rather not tip my hand about all of our predictions in this post. In the meantime, here is one we will be talking about: I predict that in 2012, we will see legislation enacted that makes it a crime to mishandle location-based information stored on a mobile device. There will be generous carve-outs for the usual suspects: national security and cellular carriers. Come to our webinar next week and find out the other four!

05
Dec

Reviewing 2011 Prediction #1: The APT Meme Dies

Written by Andrew Jaquith. Posted in Blog Post

Last December I gave a well-attended webinar called “Five Data Security Predictions for 2011,” in which we predicted that five particular things would happen this year. Predictions are easy; everybody makes them. It’s less common to revisit your own predictions and grade them. Here’s how we grade ourselves:

  • A: We correctly identified the issue, and the available evidence suggests we got the prediction right. By “right”, I mean that we can cite multiple instances in the mainstream media that agree with the prediction.
  • C: Got the issue right, but the prediction didn’t play out to the degree expected. We saw some corroborating evidence, but caught whiffs of wishful thinking.
  • F: Got neither the issue nor the prediction right. Alternatively, the evidence suggested that the prediction went in exactly the opposite direction as expected.

In today’s short post, let’s revisit one of our 2011 predictions: “The Advanced Persistent Threat meme dies, and is replaced by the more accurate term ‘state-sponsored actors’.”

What did we mean?

Unless you have been living under a rock, you have probably heard of the quaint expression “Advanced Persistent Threat.” You probably have heard that APT is some type of horrible infection or somesuch affliction that makes you itch whenever you go for dim sum. Or maybe you know it as some kind of extra-special malware that infects companies that have secrets worth stealing. I kid, but the reality is that APT marketing hype has infected the marketing departments of nearly every security vendor. If you’ve got an APT infection, goes the pitch, you can buy our miracle cream — requiring a never-ending prescription, to be sure — and it will just go away.

Emblematic of the APT-as-marketing meme was McAfee’s hype-laden report on Operation Shady RAT, which breathlessly revealed how hundreds of organizations have been infected with “APT malware.” (I won’t link to it because it is a very silly report, and I mean that in a Monty Python sense.) In the report, McAfee provides no details about the identity of the attackers. But there is lots of malware infecting lots of companies everywhere, apparently.

The point of my prediction that “the APT meme will die” is that wiser observers would start to see through the marketing haze and call APT what it is: a particular who, not a what. Commentators like Richard Bejtlich have long been calling APT by its real name: the nation-state of China. That is, in fact, what APT originally meant when the Air Force defined it. It was meant as a politically correct euphemism for the PRC.

We predicted that the APT euphemism would start getting old, and that wiser heads in the press would figure it out and describe it more accurately. I picked “state-sponsored actors” as among the more palatable and accurate labels.

How’d we do?

Pretty well. APT == China is no longer even an open secret; people now say it outright. Some evidence:

(1) In August 2011, Ira Winkler, one of the more thoughtful minds in the information security field, wrote a longish post in Computerworld challenging the persistent use of APT to hawk products. “The McAfee report was more about marketing than it was about releasing information. McAfee provided few details about the attack, only saying that it was large and hinting at who the targets were.” Winkler also explicitly connects APT to China, citing a 2009 report from Northrop Grumman assessing the PRC’s capabilities: “there have been documented cases of state-sponsored hacking out of China for more than a decade, targeting every conceivable type of commercial and government organization.”

(2) National Public Radio, on its All Things Considered newscast, ran a story on November 3rd called “China, Russia Top List Of U.S. Economic Cyberspies.” In this report, NPR summarizes and expands on Congressional testimony by Robert Bryant from the Office of the National Counterintelligence Executive. Bryant states that “Chinese actors are the world’s most active and persistent perpetrators of economic espionage, while Russia’s intelligence services are conducting a range of activities to collect economic information and technology from U.S. targets.” (Hat-tip: Richard Bejtlich.)

(3) Even Symantec, previously one of the louder barkers under the APT carnival tent, has subtly changed their tune. Instead of describing APT solely as malware, Symantec’s Kevin Rowney now describes it as a “malware campaign.” It’s still wrong, but closer to the mark in the sense that it implies an actor — a Who.

(4) CSO Online’s Bob Bragdon, just today, wrote a very funny column called “Naming Names in APT.” He writes: “Let’s call a spade a spade: China is the greatest threat to international cyber­security on the planet. I’m tired of pussyfooting around this issue the way that I, and many others in security, industry and government have been for years. We talk about the ‘threat from Asia,’ the attacks perpetrated by ‘a certain eastern country with a red flag,’ network snooping by our ‘friends across the Pacific.’ I swear, this is like reading a Harry Potter book with my daughter. ‘He-Who-Must-Not-Be-Named’ just attacked our networks. Let me be absolutely, crystal clear here. In this scenario, China is Voldemort. Clear enough?”

Overall, it is “crystal clear” that we got this one right. If anything, we were too conservative. We expected that the APT euphemism would be replaced with a more accurate descriptor, “state-sponsored actors.” Little did we know that that descriptor would be too timid for some observers, who have no problems just flat-out saying “China.”

My grade: A-

In the next post, I’ll review another prediction, “The US Crawls Towards EU-Style Data Protection.”

15
Apr

FBI Takes Down Coreflood Botnet, But Many Companies Remain Vulnerable

Written by Andrew Jaquith. Posted in Blog Post

By Harald Wilke, Security Analyst, Perimeter E-Security
with Richard S. Westmoreland, Lead Security Analyst and Andrew Jaquith, Chief Technology Officer

On Wednesday, April 6th, the Federal Bureau of Investigation (FBI) seized control of five servers used to control as many as 2 million computers infected with Coreflood malware. This malware, also known as AFCore, quietly steals personal and financial information from the infected computer and forwards it to the criminal ringleaders. The attackers use the information collected by AFCore to conduct fraudulent wire transfers, emptying users’ bank accounts. The botnet is suspected to have existed since at least 2002, and has evolved over the years from IRC-based command and control and selling DDoS/anonymity services to HTTP-based command and control and outright fraud.

Using an approach similar to the one used to take down the Bredolab botnet, US federal investigators were granted special authorization by the Department of Justice to substitute their own command-and-control (C&C) server for the hosts operated by the criminal organization. When a bot on an infected machine checks into the new C&C, it is simply given a command to shut down. The DNS records used by the bots have also been pointed to Shadowserver’s sinkholes.

By seizing control of the C&C servers, law enforcement is now preventing the criminals from accessing any information already harvested from the infected computers. It also keeps them from covering their tracks by deleting files and terminating processes. However, the millions of Coreflood infections remain intact and still require intervention by a trained security analyst or an antivirus program with signatures to detect the malware. Investigators are also alerting the Internet service providers of the compromised machines and requesting that they inform their customers.

More information about the takedown can be found here:

Perimeter’s Security Operations Center is actively monitoring for outbound activity known to be associated with the Coreflood botnet. In one instance, minutes after adding inspection for the redirected C&C check-in, alerts indicated that a single customer network had 17 actively compromised hosts. Here’s a sample screenshot from our SOC’s Security Information and Event Management system:

Coreflood Botnet Traffic, from Perimeter SOC

Looking at the raw event logs, we can see that the compromised host is attempting direct HTTP connections to a sinkhole IP. The URI confirms the activity to be related to a bot C&C check-in:

Recommendations for Perimeter customers

Although the FBI has taken ownership of the command-and-control servers and is issuing shutdown commands to the active bots, the malware is still installed on the compromised machines and reactivates at every boot. Analysis of this Coreflood variant indicates that the C&C domains change monthly and have been pre-registered in countries outside of United States jurisdiction. There remains a possibility of the criminal ring regaining control of the botnet. Perimeter strongly recommends customers take the following actions to stay protected:

  • Use Web Content Filtering (WCF) to lock down Internet usage by enforcing user authentication and blocking categories not critical to business
  • In particular, customers are strongly advised to block access to unclassified sites, which commonly harbor malware and C&C servers
  • Use standard best practices such as Network IPS and Network/Desktop AV to help prevent infections
  • In cases where infections do occur, a strong WCF policy will help prevent theft of data, and will provide additional logging information used by Perimeter’s Security Operations Center

Thanks for your time and attention, and stay safe.

05
Apr

The Epsilon Mailing List Hack: Nothing to See Here, Move Along

Written by Andrew Jaquith. Posted in Blog Post

by Andrew Jaquith, Chief Technology Officer, Perimeter E-Security

Late last week, e-mail services firm Epsilon, which manages e-mail campaigns for hundreds of high-profile clients in retail, publishing, consulting and other sectors, revealed that it had been hacked. As a consequence, the attackers were able to obtain the names and e-mail addresses of millions of customers of companies like Citigroup, Walgreens, JP Morgan and many, many others.

Like me, you likely received a notice from a company you do business with informing you of the hack. I got mine from McKinsey Quarterly:

We have been informed by our e-mail service provider, Epsilon, that your e-mail address was exposed by unauthorized entry into their system. Epsilon sends e-mails on our behalf to McKinsey Quarterly users who have opted to receive e-mail communications from us.

We have been assured by Epsilon that the only information that was obtained was your first name, last name and e-mail address and that the files that were accessed did not include any other information. We are actively working to confirm this. We do not store any credit card numbers, social security numbers, or other personally identifiable information of our users, so we can assure you that no such information was accessed.

Please note, it is possible you may receive spam e-mail messages as a result. We want to urge you to be cautious when opening links or attachments from unknown third parties. Also know that McKinsey Quarterly will not send you e-mails asking for your credit card number, social security number or other personally identifiable information. So if you are ever asked for this information, you can be confident it is not from McKinsey.

We regret this has taken place and apologize for any inconvenience this may have caused you. We take your privacy very seriously, and we will continue to work diligently to protect your personal information.

Three quick observations about What It Means:

First, this is embarrassing for Epsilon. It suggests that they have some work to do on their defenses. We don’t know how the attackers got in — it could have been by exploiting a weakness in their web applications (likely), or from a social engineering attack of the type that hosed RSA (less likely).

Second, the attack will be of no consequence to most people. Yes, as many commentators have written, there is an “elevated risk of spear phishing attacks,” which in plain English means this: because the bad guys have your name and e-mail address, they might try to trick you by sending you an e-mail with a funny link. But to be honest, I don’t get much, if any, spam — thanks to Perimeter’s multi-stage e-mail filtering service. And if you use a premium spam filtering service, you probably don’t either. And even if the attackers manage to put together an e-mail that does get through your spam filters, how would you be able to tell that this particular break-in was the cause of it? Right.

Third, nice work McKinsey! The e-mail above is a great example of how to write an unambiguous and clear disclosure e-mail. You’ll note that they spell out exactly what Epsilon says has been disclosed (name and e-mail address, not enough to trigger a PCI or HIPAA violation). They also provide appropriate guidance on what to watch out for, and reinforce that McKinsey employees will never request sensitive information from their customers (which they shouldn’t). This is exactly what you should say in an e-mail like this.

The bottom line is this: spam happens. Just make sure that your employees and colleagues don’t blindly click on attachments they shouldn’t, or blindly click on links embedded in e-mail. Take this incident as an opportunity to reinforce your security policies. But don’t worry too much. Compared to the RSA compromise from a few weeks ago, this is very small beer.

21
Mar

RSA warns SecurID customers after company is hacked

Written by Perimeter. Posted in Blog Post

Attention Perimeter Customers: As you may be aware, RSA, The Security Division of EMC, disclosed yesterday that an unknown outside party successfully compromised RSA’s security systems. These attackers are believed to have stolen information related to the operation of RSA SecurID tokens. The identity, motivation and goals of the attackers are unknown. The exact methods they used to compromise RSA’s systems (malware, social engineering, or server exploit) are unknown.

It is not clear whether the theft of this information enables attackers to compromise customers’ own SecurID deployments. RSA claims that the information obtained by the attackers does not. As described in RSA’s advisory (http://www.rsa.com/node.aspx?id=3872):

“[RSA has] no evidence that customer security related to other RSA products has been similarly impacted. We are also confident that no other EMC products were impacted by this attack. It is important to note that we do not believe that either customer or employee personally identifiable information was compromised as a result of this incident.”

However, RSA’s news release also notes that while “the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”

We strongly urge customers to read the full advisory from RSA here: http://www.sec.gov/Archives/edgar/data/790070/000119312511070159/dex992.htm

There is no indication that Perimeter’s customers are at risk. We are continuing to monitor the situation and will send out additional updates as new information is made available.

17
Mar

Picking a Sensible Mobile Password Policy

Written by Andrew Jaquith. Posted in Blog Post

By Andrew Jaquith, Chief Technology Officer, Perimeter E-Security

Defining an enterprise mobile device passcode policy can be surprisingly difficult. Security managers must attempt to reconcile two opposing goals. They must:

  • Create a passcode policy that is strong enough to protect the device if it is lost or stolen, while:
  • Not annoying users with needless length or complexity

These goals are hard to reconcile because mobile devices like smartphones and tablets are personal, portable and convenient. Employees use their devices in places they wouldn’t use a PC: in the car, during their kids’ football game, and during (shall we say) otherwise unproductive periods of the day. It’s tempting to simply duplicate existing network security policies. The rationale goes something like this: smartphones and tablets are nothing more than small PCs with antennas, so the password policies should be the same as for PCs. It’s easy to think that, but it’s the wrong attitude.

In this post, I’m going to describe the passcode policy I recommend for mobile devices, one that complies with NIST’s e-authentication Level 1 guidelines as described in Special Publication 800-63, “Electronic Authentication Guidelines”. My policy is reasonable, employee-friendly and highly usable, but strong enough to protect your company’s data. To cut to the chase, here’s what it is:

  • 8-digit numeric PIN
  • Simple PINs disallowed
  • Automatic lock after 15 minutes
  • Grace period of 2 minutes
  • Automatic wipe/permanent lock after eight wrong tries
  • No expiration

For details, read on. Warning: a tiny bit of binary math lies ahead.

The right passcode length: 8 digits or 6 characters with an automatic wipe policy

Length and composition are the most important parts of any mobile device passcode policy. The longer the passcode, the better. The “best practice” that many security admins follow in the PC world is to require a “strong” password of at least eight characters, plus at least one special character. The goal of this policy is to make the password strong enough that an attacker wouldn’t be able to guess it within an allotted time period. But what is “strong enough?” As it happens, our friends at NIST have defined this fairly precisely: for an 800-63 “Level 1” password, “strong enough” means 10 bits of guessing entropy. “Guessing entropy” comes from Claude Shannon’s work on information theory. It is a probabilistic measure of whether an attacker will successfully guess a password over its lifetime, expressed as the number of chances the attacker would need. This number is measured in bits (that is, powers of two). For example, two bits of guessing entropy means that an attacker would need four tries (2²) to guess the password.

For a Level 1-compliant password, NIST defines the required strength as 10 bits of guessing entropy. In other words, an attacker who knew nothing more than the employee’s username would have at most 1,024 (2¹⁰) tries to guess the password for the entire time the password is active. For a NIST 800-63 “Level 2” password, an attacker would need an estimated 65,536 (2¹⁶) guesses to break the password. (Level 3, in case you were wondering, is a Level 2 strong password plus a soft cryptographic token or certificate; Level 4 is the same but requires a hard token.)

All righty then. If all we need to do is pick a strong password, how do we do that? It turns out the answer is “it depends”: on complexity rules and length, and on whether the employee chooses the password or the system generates it for them. In NIST SP 800-63 Appendix A, Table A.1, NIST estimates the guessing entropy of various combinations. For example, in order to achieve 10 bits of guessing entropy (a Level 1 passcode), assuming the attacker had just one chance to guess it, the following types of passwords would qualify:

  • a 3-digit numeric PIN that the system generated randomly. Entropy: 10 bits
  • a 5-digit numeric PIN that the employee picked themselves, disallowing simple PINs that repeat the same digit or use a sequence (e.g., 12345). Entropy: 10 bits
  • a 2-character password that the system generated randomly, using the entire 94-character keyboard (A-Z, a-z, 0-9 and the characters ~!@#$%^&*()_-+={}[]|\:;"'<,>.?/). Entropy: 13.2 bits
  • a 4-character passcode that the employee picked themselves, using the entire 94-character keyboard. Entropy: 10 bits

Two things jump out from these examples. First, note how much stronger the randomly-generated passcodes are than user-chosen ones. The 3-digit random PIN, for example, is as strong as a 5-digit user-generated one (both have 10 bits of guessing entropy). This is because humans aren’t very good at picking random numbers. The second thing that jumps out is that these passcode lengths probably seem unnaturally short to you! Why? That’s because, as I described, we assume the attacker has just one chance to guess the passcode.

But of course, the attacker never has just one chance to guess a password; they usually get many chances. Thus, most password strength policies contain the buried assumption that the attacker has thousands or millions of chances to guess the password over its lifetime. That’s why the typical password policy calls for eight characters, with (for example) at least one upper-case letter and a number, plus one special character. Per NIST, a password policy like that boosts entropy to 24 bits, which is 2¹⁴ times more than the 2¹⁰ single-guess entropy estimate that Level 1 actually requires. In other words, your typical desktop password policy essentially assumes that an attacker gets 2¹⁴, or 16,384, chances to guess the password. If anything, this is probably far too small a margin of safety: an attacker who has gotten access to your Windows domain controller’s SAM file, for example, can execute millions of guesses in just a few seconds. Regardless, you get the idea: that long password your IT department wants you to put on your PC assumes an attacker will have many, many opportunities to break in.

In the case of smartphones and tablets, though, the operating systems typically have a feature that allows administrators to control the number of guesses an attacker gets: the automatic wipe/permanent lock setting. Put simply, modern smartphone operating systems from RIM, Apple, Microsoft and Google can require devices to turn themselves into bricks if an attacker enters too many wrong passcodes. By implementing such a policy, we can effectively reduce the number of entropy bits we need for the smartphone compared to, say, a desktop PC that we can’t brick. What this means: smartphones and tablets with an automatic wipe policy do not need passcodes as long or complex as those for desktops, because the number of guesses an attacker gets is so much smaller. They can be much shorter and simpler and still provide the same level of protection. That’s what the math tells us.

For example, if we impose a policy of eight wrong guesses before a mobile device automatically wipes or permanently locks itself (a fairly reasonable restriction), we need a guessing entropy of only 13 bits — that is, just three bits (2³ = 8 guesses) higher than the single-guess entropy of 10 bits. To do that, the following types of user-chosen passcodes would qualify:

  • an 8-digit numeric PIN, disallowing simple PINs (13 bits), or:
  • a 6-character passcode, using the entire 94-character keyboard (14 bits)
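The arithmetic behind these numbers is easy to check directly. Here is a small sketch (the per-policy entropy values are rounded estimates from NIST SP 800-63 Table A.1, as in the bullets above):

```python
import math

# NIST 800-63 Level 1 requires 10 bits of guessing entropy against a
# single-guess attacker.
single_guess_bits = 10

# An automatic wipe after 8 wrong tries grants the attacker
# log2(8) = 3 extra bits' worth of chances.
allowed_guesses = 8
required_bits = single_guess_bits + math.log2(allowed_guesses)  # 13.0

# Approximate entropy estimates from NIST SP 800-63, Table A.1:
pin_8_digit = 13      # user-chosen 8-digit PIN, simple PINs disallowed
passcode_6_char = 14  # user-chosen 6 characters from the 94-symbol keyboard

assert pin_8_digit >= required_bits
assert passcode_6_char >= required_bits

# For comparison, a typical 24-bit desktop password policy implicitly
# grants the attacker 2**(24 - 10) = 16,384 guesses over its lifetime.
desktop_margin = 2 ** (24 - single_guess_bits)  # 16384
```

The same calculation generalizes: each doubling of the allowed guess count costs exactly one extra bit of required passcode entropy, which is why a brickable device gets away with so much shorter a passcode than a PC.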

Either one of these policies will serve our purposes nicely. Personally, I prefer the 8-digit PIN policy because it’s easier to key in on some smartphone operating systems. For example, Apple’s iOS (the operating system that the iPhone and iPad use) will automatically pop up a numeric keypad, instead of the full alphanumeric keyboard, if the owner initially specified a passcode that contained only numbers. It’s a nice usability touch that employees like because they don’t have to worry as much about fat-fingering the passcode. Even better, eight digits is still short enough to be easily remembered, and they can be tapped in quickly.

Automatically lock mobile devices after 15 minutes

After password length and composition, deciding how long the device can be inactive before it locks itself is the second key policy decision most firms wrestle with. Employees use their devices a lot throughout the day, but on an intermittent basis. NIST has very little to say about mobile locking policies, so use your common sense. Unless your employees carry the secret formula for Coke around on their mobile devices, I generally recommend that companies choose an inactivity timeout period that accommodates employee working styles as much as possible without opening a significant window of attack. Remember, there is less to protect on these devices than on a normal PC.

By “significant window of attack,” my rule of thumb is longer than a quick trip to the break room to get a coffee (5 minutes) but shorter than a lunch break (30 minutes). A sensible inactivity timeout period is probably about 15 minutes. You can go shorter than this, of course, although I personally feel 5 minutes is a fairly employee-hostile policy that will cause you to get a lot of e-mail stink-o-grams.

Many mobile devices also allow employers to implement a “grace policy” setting that will delay password-locking for a few minutes, even if the device has been put to sleep (normally, the passcode lock is switched on right away). For example, if you just checked the calendar on your mobile phone, then hit the sleep switch and put it in your pocket, you would still have a minute or two to check that other thing you just remembered without being hassled with the passcode. Again, as with the automatic lock policy, common sense should be your guide. A grace period of, say, two minutes gives your employees a little extra usability without detracting from security.

Don’t require employees to rotate their passcodes

In the PC world, most security administrators — and indeed most “best practices,” as enshrined in NIST guidance and the ISO 27000 series — advocate password “aging” policies that force employees to rotate their passwords regularly. The goal of this practice is to reduce the likelihood that an attacker can compromise an account over its lifetime. Despite the nearly universal acceptance of password aging, however, there have been surprisingly few empirical analyses showing that it actually increases security. Researchers from Microsoft, for example, suggest that, if anything, password aging policies actually detract from security. This is because employees usually resort to a variety of coping mechanisms to deal with being forced to change passwords so often. They write their passwords down on sticky notes, create easy-to-remember passwords that vary only by one digit between instances, and re-use passwords between services. Microsoft concludes that the typical password rules produce a “minor reduction of risk for a 3.9× magnification of password management effort.”

I am firmly opposed to password expiration policies for most employees in most contexts, although they do make sense in certain cases: for example, for highly privileged service, admin and server accounts, or in cases where you suspect a compromise. But on the whole, I’d rather encourage employees to create harder passwords that don’t expire rather than easier ones that do. This is nowhere more true than with mobile devices. Here are three reasons why you should never implement a password aging policy on mobile devices:

  • Guessing the device password doesn’t buy the attacker anything extra. The passcode protects the integrity of the device, not the data on your network. It’s not a Windows domain password. Guessing the passcode doesn’t get an attacker access to any resources other than the ones already provisioned on the device (for example, the e-mail account). A lucky guess of a device password — which means they beat 1:1024 odds, very impressive — might mean that an attacker can now send prank e-mails on the victim’s behalf. But they won’t be able to mount a new SMB share that contains your secrets, for example, or loot your payroll system.
  • Password aging policies are redundant. Recall that a key goal of expiring passwords is to shorten the lifetime over which an attacker can compromise an account. It’s a fine goal, but it is already taken care of by a sensible automatic wipe policy. Eight times to guess a password today is still just eight times, regardless of whether we’re talking about a passcode the employee is using this week, last quarter or next year.
  • Forced password changes are hostile to your employees. Trust me, your employees already hate your enterprise password aging policy, and they have the Post-Its to prove it. By plopping yet another unwanted usability obstacle onto the devices they take to birthday parties, use on the subway or show business partners in restaurants, you’ve just given them another thing to dislike, and another incentive to evade your well-intentioned controls.

So in closing: remember that these devices are often personally owned, and they contain much less sensitive data than a typical PC. Win friends and influence people: don’t put passcode expiration policies on mobile devices.

Implementing mobile security policies

The policies I’ve described in this article can be implemented on all iPhones and iPads running version 3 or later of the operating system, and on all BlackBerry devices. Any Windows Mobile or Windows Phone 7 device can support these policies too. Finally, Android devices running 2.2 and higher support most of these settings. In a future post, I’ll describe the exact settings you should use for ActiveSync-compliant devices and for Apple’s mobileconfig security policy files.
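As a preview, here is a sketch of what the passcode portion of an Apple configuration profile might look like with these recommendations applied. Treat it as an illustration, not a drop-in file: it is only the passcode payload dictionary (a complete .mobileconfig profile needs additional wrapper keys), and you should verify each key against Apple's profile documentation before deploying.

```xml
<!-- Passcode payload sketch (com.apple.mobiledevice.passwordpolicy).
     Values reflect the recommendations in this post. -->
<dict>
    <key>PayloadType</key>
    <string>com.apple.mobiledevice.passwordpolicy</string>
    <key>forcePIN</key>
    <true/>                   <!-- require a passcode -->
    <key>allowSimple</key>
    <false/>                  <!-- disallow simple PINs like 1111 -->
    <key>requireAlphanumeric</key>
    <false/>                  <!-- numeric-only, so iOS shows the keypad -->
    <key>minLength</key>
    <integer>8</integer>      <!-- 8-digit PIN -->
    <key>maxFailedAttempts</key>
    <integer>8</integer>      <!-- wipe after 8 wrong guesses -->
    <key>maxInactivity</key>
    <integer>15</integer>     <!-- minutes of inactivity before locking -->
    <key>maxGracePeriod</key>
    <integer>2</integer>      <!-- minutes before the passcode is demanded -->
    <!-- Deliberately omitted: maxPINAgeInDays and pinHistory,
         so passcodes never expire and never need rotating. -->
</dict>
```

The two omitted keys at the bottom are the point of the previous section: leaving out the aging and history settings is how “don’t rotate passcodes” is expressed in this format.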

In the meantime, happy texting!