

Mitigating BlackNurse Denial-of-Service Attacks

In 2016, every company, no matter the size, has an online presence. Whether that is a website, online store front, perhaps a mobile app, or a mail server, chances are your organization has some kind of system in production on the Internet.

It has long been common practice to secure these types of services or devices with a firewall. A firewall functions exactly as the name implies: it exposes to the outside world only the service ports absolutely necessary for the application to function. For example, you might configure the firewall to allow TCP ports 80 and 443, the common ports for serving web pages on the Internet, and block everything else.
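As a minimal illustration of this default-deny approach, a hypothetical Linux iptables policy fragment might look like the following (a sketch only, not a recommendation for any particular environment):

```shell
# Allow replies to traffic the server itself initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Expose only the web service ports
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Default-deny everything else inbound
iptables -P INPUT DROP
```

Dedicated firewall appliances express the same policy in their own configuration languages, but the principle is identical: permit the known-good ports, drop the rest.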

Unfortunately, this creates a single point of failure in that all of that network traffic is now funneled through that firewall. Under normal circumstances, this is usually not a problem. You would “right-size” your firewall appliance to be able to handle your normal traffic load, probably allow for some head room for traffic growth over time, and then throw in a little extra head room in case of burst traffic. From there, you’d probably purchase a second firewall and cluster the two devices together for high availability if the application required it. This way, if one firewall were to become unresponsive, the other firewall would assume the traffic flow and your service would not be disrupted.

Then along came the BlackNurse Denial-of-Service (DoS) attack…

What is the BlackNurse Denial-of-Service Attack?

Note 1: The name “BlackNurse” comes from the team that discovered this type of attack: one member was a blacksmith, and the other was a nurse.

Note 2: There have been a number of articles posted already that delve into the ICMP protocol and the different types of ICMP messages that can be generated and how these all work together to generate the BlackNurse Denial-of-Service attack. We won’t rehash detailed ICMP technical explanations here, but there are links for further reading at the end of this post if you’d like more information.

Before we delve into the BlackNurse attack, let’s quickly review a little bit of history, shall we?

At a high level, the BlackNurse DoS attack is a type of ping flood attack. Back in the days of grunge music in the 1990s, a common Denial-of-Service attack was simply to flood a target with the ping command (ICMP Echo packets - Type 8 Code 0 if you’re curious), maxing out a host’s Internet connection with excessive data. You might remember the “ping of death,” which used malformed, oversized pings (reassembling to more than the 65,535-byte IPv4 maximum) to crash devices.

In that era, many of us were on slow dial-up connections, and most servers didn’t have the 10 gigabit connections they do today. The Internet was a much smaller place and didn’t have nearly the amount of attack mitigation technology available that it does today. Consequently, it was very easy for a malicious user with access to a relatively fast connection to flood a target with an amount of traffic that is small by today’s standards, as most devices couldn’t handle high traffic volumes in those days.

As technology evolved, switches, routers, firewalls, servers, and network interface controllers became able to generate and accept far larger amounts of traffic. In the time since, we’ve also seen the advent of DDoS mitigation appliances, scrubbing services, traffic policers, and other technologies that have largely eliminated basic attacks like ping floods and the “ping of death” and have made other volumetric attacks easier to deal with.

Typically, volumetric attacks these days require several gigabits of traffic per second to bring a host offline and also usually involve multiple attack vectors. You’ve probably seen a few of these in the news over the past year or two. These usually involve a coordinated effort of multiple compromised hosts or botnets and generate huge amounts of traffic in order to take down a service. The most recent example in the news was the DDoS attack levied against Dyn.com. This attack was reported to be operating at 1.1 Tbps at its peak.

Why is BlackNurse unique?

So if volumetric attacks require all of these resources to bring down servers, and we have numerous tools at our disposal to mitigate them, then what makes BlackNurse so special?

What makes BlackNurse unique is that it is able to bring down firewalls and other network devices with a small amount of traffic: think 15-20 Mbit/s, or 40,000-50,000 packets per second. Today, a laptop or desktop computer on a typical broadband connection can easily generate this amount of traffic.

The way it works is that attackers identified that certain network devices, namely firewalls, burn a disproportionately large amount of CPU when processing a particular kind of specially crafted ICMP traffic. Specifically, the attack floods the target with ICMP Type 3 messages, known as “Destination Unreachable” messages, which normally indicate that a host or service is not available or responding (the BlackNurse variant uses Code 3, “Port Unreachable”). Unfortunately, it is not possible to simply turn off or block “Destination Unreachable” messages, as these are required for keeping hosts operating properly on an IPv4 network per RFC specifications—and for some vendors—required for IPSec and PPTP traffic to operate. For more information, please review the RFC specifications, specifically “RFC 1812 – Requirements for IP Version 4 Routers”.

Attackers discovered that a sustained 15-20 Mbit/s of this specially crafted traffic, at packet rates of 40,000 to 50,000 packets per second, is enough to bring these devices offline.

The attack is typically carried out by targeting the WAN or public-facing IP address of the firewall with a sustained stream of these packets until the firewall runs out of CPU cycles to process traffic. Devices behind the firewall are unable to communicate until the attack subsides.

As you can see, a single user can easily bring down a firewall with a modest amount of easily attainable resources at their disposal.

How is ServerCentral protecting customers against BlackNurse DoS attacks?

ServerCentral has taken several steps to prepare for and mitigate the BlackNurse DoS attack.

Our Network Engineering team has proactively implemented filters on all upstream connections to rate limit the maximum inbound rate of ICMP Type 3 Code 3 messages allowed towards our customers. These filters should not impact day-to-day operations. All Managed and Colocation customers are covered by these changes.

Our Managed Services team has worked with our firewall vendors to identify potentially affected models used with our Managed Firewall service.

Juniper SRX: At this time, Juniper SRX models appear not to be affected, but this is still pending additional research and verification with JTAC. However, we have worked with Juniper to proactively create a custom IDS filter that can mitigate the effects of BlackNurse as an added precaution. This filter can be applied on demand.

FortiGate: At this time, FortiGate firewalls running FortiOS v5.2.x appear not to be affected, while some models running FortiOS v5.4.x appear to be affected. ServerCentral has standardized on the v5.2.x code branch and should not be affected. This is still pending additional research and verification.

Should the situation change with our firewall providers, we will update this post accordingly.

For users who have purchased our Managed DDoS Mitigation Service, the built-in behavioral Denial-of-Service analysis engine will detect and mitigate BlackNurse traffic during an attack.

For our colocation customers, we strongly urge you to contact your firewall vendors and confirm whether your devices or configurations are susceptible to the BlackNurse DoS attack.

If you have any questions or if we can be of assistance, please contact us at your convenience.

Further Reading

Topics: Support Security

ServerCentral's 2016 SOC 2 audit is now available!

Throughout many years of managing audit tasks and compliance programs, the most arduous part has always been gathering the proper artifacts.

  • Did we get the screen shot of one system right?
  • Where did I put that report from our vendor?
  • Who’s seen the monthly vulnerability scan reports?

Well, today ServerCentral took a large step toward making that process easier for our customers by putting our SOC 2 report online in our customer portal! 

Our report alone, though, is not always enough to satisfy your auditors and vendors.

Fear not!

Not only is our own SOC 2 report available, but we’ve also uploaded the reports of our sub-service organizations and the various data center facilities where customer solutions have been deployed. You’ll find the reports for the facilities where you house equipment listed alongside the ServerCentral report. All of these reports will be updated regularly with the latest versions from each organization.

Also, we're very pleased to let you know that, beginning today, when ServerCentral issues our SOC 2 report, a copy will be uploaded and made available to approved customers and contacts who have a signed NDA on file with our legal department. This means:

  • You will no longer need to request the report from your account manager.
  • You will no longer have to wait for the email notifying you when it’s made available each summer.
  • You will no longer be digging through your email archives for it when you need it for your own audit.

Simply sign in to our customer portal and you’ll see the report available under Documents -> Compliance Reports. 

 As always, if you have any questions or concerns, please do not hesitate to let us know.

Topics: Compliance Security Audit

The End of Safe Harbor And What Comes Next

Under European law, service providers are legally obligated to maintain specific levels of security and privacy for personal, non-public information. Because of these protections, data from European users cannot be moved to jurisdictions where the same level of protection does not exist.

Think of it as setting a minimum water level for security and privacy:

You cannot take data from a place that offers higher levels of protection and move it to a place where lower levels exist. 

This established a baseline for all users inside the EU, ensuring their privacy was treated the same by every online company they dealt with, whether that company was in France, Germany, or Poland.

Of course, this left places like the US out in the cold; after all, our data privacy laws are nearly non-existent compared to those of the EU.

Major tech companies like Google, Facebook, and Apple were faced with a decision: build data centers in Europe just for Europeans or lobby for an alternative.

Enter Article 29 and the Safe Harbor regulations, a series of laws and treaties between several parties that establish a framework of self-certification and public audit for companies in less-regulated markets to certify as being a safe place to send European data despite being outside of the EU. That worked well until October 2015, when the European Court of Justice overturned Safe Harbor as being insufficient protection of end-user personal data. Left with three months to renegotiate the treaty and pass new laws, the US and EU went to work building a new consensus.

Here we are, well past the end of the three month grace period, and a new framework for Safe Harbor still eludes negotiators.

Privacy hawks are arguing for stringent rules regarding surveillance by government agencies, while security hawks are bemoaning the use of encryption, counter-surveillance techniques, and the potential national security implications of not just implied but explicit privacy.

Just this last week, the heads of the data-protection regulators of the EU member states met in Brussels as part of the Article 29 Working Party to determine what they would require in order to reinstate Safe Harbor. They have reached several initial proposals to send to their respective governments, but no final proposal has been drafted between the parties.

Critics are already attacking the new regulations proposed by the working group, pointing out that all EU leaders seek are letters of understanding signed by high-ranking US officials and entries in the Federal Register stating that "most surveillance" will be off-limits, leaving major loopholes for a repeat appearance before the European Court of Justice.

Without actual legislation on the US side and a firm commitment against mass surveillance and data collection, it seems a true agreement will never be reached.

Without agreement, Safe Harbor is unable to truly provide the free exchange and access of data across the Atlantic that we enjoyed up to last year, but there are alternatives. By including model clauses in contracts, ServerCentral can still meet the stringent legal guidelines of EU regulators for our customers, and they themselves can meet the requirements of their customers and end users. 

Still, a framework is needed soon, or we could face the nuclear option: total cut-off of trans-Atlantic data services.

While that is the last outcome anyone would want, in this climate of anti-surveillance protests, who knows where the chips will fall.

Topics: Security

5 Things I Learned about Cybersecurity at Chicago Ideas Week

This week I went to a talk on cybersecurity at Chicago Ideas Week. Here's what I learned from the former commissioner of the NYC police department, a Harvard Law professor, the global head of cybersecurity at Palantir, the cofounder and CTO of HackerOne, the founder and CEO of WISeKey, and the general counsel for Wikimedia:

1. Cyberterrorism hasn't actually happened yet.

But when it does, it may come in the form of a spider in your shower. No, really. 

While you're washing your face, a cyberterrorist can remotely instruct a spider-shaped drone to inject you with lethal poison, crawl out your window, and self-destruct—all before you open your eyes.

Because this isn't creepy enough. (Source)

2. Unauthorized access is usually gained by exploiting weaknesses in people, not software.

It's far more practical to socially engineer private information than it is to gain access to protected networks.

If I wanted to hack your email, I could talk to you about my mom having the weirdest maiden name ever, hoping you'll mention your mom's maiden name during the conversation. If you do, I'll be able to answer your secret question and force a password reset.

Soon I'll find out that you Uber across the street.

3. Your mom can buy a Denial of Service attack.

The going rate on the Deep Web is $150 (TrendMicro Research). The average damage to attacked businesses? $40,000/hour (Incapsula). 

This is why we do DDoS Mitigation. (Source)

4. Facebook has 1.3 billion products.

They're you, me, and everyone we know. Facebook sells what we "Like" to advertisers for more effective targeting.

Only post information if you don't mind sharing it with corporations.

5. Hackers: they're just like us!

Criminal hackers are not as sophisticated as you think. Most of them have bosses, budgets, and impending carpal tunnel, too.

While the security risk landscape is vast, it's knowable. We just have to be smart. Make it as annoying as possible for a hacker to access your information through measures like two-factor authentication.

They'll most likely move on to someone with the password "admin".

What, you don't look like this? (Source)

Topics: Security

2015 Technology Infrastructure Predictions

Each year begins with one of our favorite traditions: prediction season. These technology-trend forecasts are always insightful, helpful, and in some cases, humorous. Here are some of the predictions for 2015 that stood out:

Prediction: Adaptable Cloud Agendas

Analysts continue to highlight the fact that rapidly evolving cloud technologies have a real, material impact on businesses. This is far more important than most people realize. Be prepared. It really is critical that companies have adaptable technology agendas to help their business capitalize on key changes in core infrastructure. Something as simple as adopting high-performance cloud storage or deploying offsite backups of virtualized infrastructure could have a significant financial impact on an organization's performance.


Prediction: RESTful Interfaces

As developers increase their desire (and need) to work with services that communicate via RESTful interfaces, it’s worth investigating formal enterprise API management.

Forrester, a leading market research firm, suggests one way to stay flexible in the face of rapid enterprise application evolution is by adding RESTful interfaces for back-office applications.

The flexibility afforded by this approach will enable far more control over upgrade cycles than traditional back-office application release schedules.


Prediction: Security Protocol Enforcement

It’s easy to relax in this area. Don’t.

Most organizations already have security protocols in place. If you do, begin clearly communicating and enforcing them. If you don’t, you should begin working on them immediately.

“There are two types of companies: those who have been hacked, and those who don’t yet know they have been hacked.”

John Chambers, Chief Executive Officer of Cisco

Be prepared for a security breach this year. It will most likely be the result of a common process or governance failure, not due to an application or infrastructure component. Be sure your IT security team is on top of management and perimeter-based processes, as well as current training so everyone knows exactly what to do when something happens.


Prediction: Software Containers

If you don't already know about software containers (often referred to simply as Docker), you will. The benefits of well-designed, containerized applications are real. More and more companies are using Docker and other container technologies to improve the efficiency of app development, deployment, and management. 2014 saw a tremendous amount of momentum in this area, and it’s only going to accelerate in 2015.


Prediction: Hybrid Clouds

This one has been on the prediction lists for the past four years, and it will probably be there for the next few years, too. As enterprise application architectures evolve toward services and virtualized deployments, it’s never been easier to use off-premises private and public clouds for compute and storage. Don’t expect to see full-fledged cloud transitions where an entire enterprise is moved to the cloud in one step. Instead, watch for steady migrations on an application-by-application basis.


Prediction: Disaster Recovery

There are always lots of doomsday predictions. While we don’t subscribe to the doomsday theory, we do see, all too often, situations where infrastructure and applications were not architected to withstand a disaster event. Whether this is a loss of power, a system failure, or a natural event, the result is the same: critical applications and services are no longer available. There are many ways to address disaster recovery. In many instances, a well-designed backup strategy may be more than enough to withstand an outage. Every application has its own service expectations, so be sure to identify exactly how critical each application is and what type of plan is needed: backup, replication, or complete disaster recovery.


Prediction: Distributed Denial of Service (DDoS) Attack Mitigation

These days, all anyone needs to launch a DDoS attack is a credit card or prepaid debit card and a web browser. It really is that simple. As these “stress testing services” become more prevalent, the volume and sophistication of DDoS attacks will continue to increase, causing more damage than ever before.

It might also be worthwhile to ask your network and infrastructure partner(s) about the tools and processes they have in place to help you in the event of a DDoS attack. If they don’t give clear answers, you may want to consider protection via a third-party provider.

If you haven’t read 5 Key Questions for Selecting a DDoS Mitigation Service, now's your chance.

Remember: it’s not if you are attacked, it’s when.

Topics: Other Security Disaster Recovery Products and Services

GHOST Vulnerability Update

The recently announced Glibc GHOST vulnerability (CVE-2015-0235) has been the focus of conversation and action since it was announced on Tuesday, January 27, 2015.

Following is a detailed update about our work to address this issue.

ServerCentral administrators are currently working to patch all affected systems against the GHOST vulnerability. We are also working with appliance vendors to apply any needed patches.

We will directly contact any affected managed service customers to schedule the patch and subsequent device reboot.

GHOST Vulnerability Check

We strongly encourage all customers to check your version of Glibc, determine if it is vulnerable, and patch & reboot as needed. The easiest way to check whether your version of Glibc is vulnerable is with the following C code:

/* ghosttest.c: GHOST vulnerability tester */
/* Credit: http://www.openwall.com/lists/oss-security/2015/01/27/9 */
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

#define CANARY "in_the_coal_mine"

struct {
  char buffer[1024];
  char canary[sizeof(CANARY)];
} temp = { "buffer", CANARY };

int main(void) {
  struct hostent resbuf;
  struct hostent *result;
  int herrno;
  int retval;

  /*** strlen (name) = size_needed - sizeof (*host_addr) - sizeof (*h_addr_ptrs) - 1; ***/
  size_t len = sizeof(temp.buffer) - 16*sizeof(unsigned char) - 2*sizeof(char *) - 1;
  char name[sizeof(temp.buffer)];
  memset(name, '0', len);
  name[len] = '\0';

  retval = gethostbyname_r(name, &resbuf, temp.buffer, sizeof(temp.buffer), &result, &herrno);

  /* If the overflow occurred, the canary sitting past the buffer was clobbered. */
  if (strcmp(temp.canary, CANARY) != 0) {
    puts("vulnerable");
    exit(EXIT_SUCCESS);
  }
  if (retval == ERANGE) {
    puts("not vulnerable");
    exit(EXIT_SUCCESS);
  }
  puts("should not happen");
  exit(EXIT_FAILURE);
}
Save this C code to a file called ghosttest.c.

Compile and run it as follows:

$ gcc ghosttest.c -o ghosttest
$ ./ghosttest

Sample output from patched Debian v7.8 server:

not vulnerable

Sample output from unpatched Ubuntu 12.04 LTS server:

vulnerable

How do I list packages/applications that depend upon vulnerable Glibc?

Type the following lsof command:

lsof | grep libc | awk '{print $1}' | sort | uniq

This will produce a list of all packages/applications that use Glibc and will be potentially affected by the vulnerability until Glibc is patched.

We encourage all of our customers to perform additional reviews of their internal and external services and confirm they are secure against this vulnerability.

For more information about the GHOST vulnerability, please visit:

If you have any questions, or if we can be of assistance, please do not hesitate to contact us at your convenience. The best way to reach us regarding this issue is by opening a ticket.

Topics: Security

Oodles of POODLEs, Or How Not to Get Bit

Often the biggest concern when a security exploit comes out is the time that elapses from when the issue is reported to the time when the manufacturer issues a patch. Will you be targeted in that brief unprotected moment? Can you ensure your customers will be protected if you're the victim of an exploit?

With all of the security vulnerabilities reported lately, there’s no rest for the weary sysadmin.

The latest vulnerability, nicknamed POODLE, is an issue with SSL (CVE-2014-3566) that allows network attackers to calculate the plaintext of secure connections.

Bad dog.

Because it affects only the SSLv3 suite of security ciphers, our Security and Compliance Committee made the decision to disable SSLv3 on all of our public and private web properties. We’ve already completed the process to disable SSLv3 on our public websites, like our client portal or support interface, and will be assisting customers with disabling the cipher suite in their managed load balancers in the coming days.
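For customers managing their own web servers, disabling SSLv3 is typically a one-line configuration change. For example, in nginx (shown here as an illustrative fragment):

```
# Offer only TLS protocol versions; SSLv3 (and SSLv2) are never negotiated
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```

The equivalent Apache directive is `SSLProtocol all -SSLv3 -SSLv2`. After making the change, verify that an SSLv3-only handshake to your server fails.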

Due to the advanced age of SSLv3 and the lack of browsers that require such an old protocol, disabling SSLv3 ensures no customer data is subjected to the vulnerability. (The SSLv3 cipher suite was created in 1996 and has since been replaced with newer and better security protocols.)

Spending all of your time worrying about information security can be exhausting, but our team is here to assist with your security needs 24x7x365. Check out our managed services or contact us.

Topics: Security

Small Businesses Can't Afford to Ignore Information Security

If you’ve read the news lately, it seems like every corporate board across the country is adding Chief Information Security Officer to the ranks of their C-suite. The bailiwick of the CISO—Information Security—is now the de facto buzzword of the post-Heartbleed world. While that’s all well and good for the publicly-traded giants of Wall Street, where does that leave small business owners and startups with just a few servers to secure?

There are three major threats affecting small businesses today when it comes to their information security:

  1. Malware and viruses
  2. Intrusion attempts
  3. Denial of Service (DoS) attacks

Add eCommerce and online billing systems into the mix, and you bring in an entirely new set of requirements for combating these threats. Still, any company, large or small, that accepts credit cards over the Internet has a fiduciary responsibility to protect against all three attacks. Buried in merchant agreements is a line putting the small business—not Visa or MasterCard—on the hook for fraudulent charges and chargebacks caused by a breach of that responsibility.

The network configuration and monitoring screens of Chris Haun, ServerCentral Network Engineer

Quarterly scans, industry-standard equipment, and a watchful eye on your systems are required to meet the security requirements of your contract—and that’s where we can help.

Our offerings for small online businesses cover all three attacks, giving you the same peace of mind brought by a Chief Information Security Officer without the seven-figure salary. We can stop attacks before they can breach your perimeter with our Managed Network and stop Denial of Service attacks dead in their tracks with our DDoS Mitigation Service. We’ll even help you sell your security to your clients through Compliance Engagements, demonstrating that we really are the best at securing your data.

There’s a reason that the biggest and the best eCommerce companies like Shopify trust ServerCentral with their data, whether it's in the cloud or a dedicated server stack. Contact us to talk about ways to address your small business challenges.

Topics: Security