

Mitigating BlackNurse Denial-of-Service Attacks

In 2016, every company, no matter the size, has an online presence. Whether it's a website, an online storefront, a mobile app, or a mail server, chances are your organization has some kind of system in production on the Internet.

It has long been established common practice to secure these types of services or devices with a firewall. A firewall functions exactly as the name implies: it exposes to the outside world only the service ports an application needs in order to function. For example, you might configure the firewall to allow TCP ports 80 and 443, the common ports for serving web pages on the Internet, and block everything else.
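
If you want to sanity-check that kind of policy from outside the firewall, a few lines of Python are enough. This is only a sketch; the hostname and port list below are placeholders, not anything specific to this post:

    import socket

    HOST = "www.example.com"           # placeholder public-facing host
    PORTS = [22, 25, 80, 443, 3306]    # a few ports worth spot-checking

    for port in PORTS:
        try:
            # Attempt a plain TCP connection with a short timeout.
            with socket.create_connection((HOST, port), timeout=3):
                print(f"{port}/tcp open")
        except OSError:
            # Refused, timed out, or silently dropped by the firewall.
            print(f"{port}/tcp closed or filtered")

If only 80/tcp and 443/tcp come back open, the policy is doing what you intended.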

Unfortunately, this creates a single point of failure: all of that network traffic is now funneled through the firewall. Under normal circumstances, this is usually not a problem. You would "right-size" your firewall appliance to handle your normal traffic load, allow some headroom for traffic growth over time, and throw in a little extra headroom for burst traffic. From there, you'd probably purchase a second firewall and cluster the two devices together for high availability if the application required it. This way, if one firewall were to become unresponsive, the other would assume the traffic flow and your service would not be disrupted.

Then along came the BlackNurse Denial-of-Service (DoS) attack…

What is the BlackNurse Denial-of-Service Attack?

Note 1: "BlackNurse" is named after the team who discovered this type of attack: one was a blacksmith, and the other was a nurse.

Note 2: A number of articles have already been posted that delve into the ICMP protocol, the different types of ICMP messages that can be generated, and how these come together in the BlackNurse Denial-of-Service attack. We won't rehash detailed ICMP technical explanations here, but there are links for further reading at the end of this post if you'd like more information.

Before we delve into the BlackNurse attack, let’s quickly review a little bit of history, shall we?

At a high level, the BlackNurse DoS attack is a type of ping flood attack. Back in the days of grunge music in the 1990s, a common Denial-of-Service attack was to simply flood a target with the ping command (ICMP Echo packets - Type 8 Code 0, if you're curious), maxing out a host's Internet connection with excessive data. You might also remember the "ping of death," which used malformed, oversized pings (reassembling to more than the 65,535-byte IP maximum) to crash devices.

In that era, many of us were on slow dial-up connections, and most servers didn't have the 10-gigabit connections they do today. The Internet was a much smaller place and didn't have nearly the amount of attack mitigation technology available that it does today. Consequently, it was very easy for a malicious user with access to a relatively fast connection to flood a target with what is, by today's standards, a small amount of traffic, as most devices couldn't handle much traffic in those days.

As technology evolved, switches, routers, firewalls, servers, and network interface controllers became able to generate and accept much larger amounts of bandwidth. Since then, we've also seen the advent of DDoS mitigation appliances and services, traffic policers, and other such technologies that have largely eliminated basic attacks like ping floods and the "ping of death" and have made other volumetric attacks easier to deal with.

Typically, volumetric attacks these days require several gigabits of traffic per second to bring a host offline and also usually involve multiple attack vectors. You’ve probably seen a few of these in the news over the past year or two. These usually involve a coordinated effort of multiple compromised hosts or botnets and generate huge amounts of traffic in order to take down a service. The most recent example in the news was the DDoS attack levied against Dyn.com. This attack was reported to be operating at 1.1 Tbps at its peak.

Why is BlackNurse unique?

So if volumetric attacks require all of these resources to bring down servers, and we have numerous tools at our disposal to mitigate them, what makes BlackNurse so special?

What makes BlackNurse unique is that it can bring down firewalls and other network devices with a small amount of traffic: think 15-20 Mbit/s, or roughly 40,000-50,000 packets per second. Today, laptops and desktop computers can easily generate this amount of traffic on a typical broadband connection.

The way it works is that attackers have identified that a specially crafted ping command generates significantly higher CPU usage on certain network devices, namely firewalls, when those devices process it. This specially crafted ping exploits how certain devices respond to ping commands that generate ICMP messages of Type 3. ICMP Type 3 messages are known as "Destination Unreachable" messages, which basically means that a host (or a port on it) is not available or responding. Unfortunately, it is not possible to simply turn off or block "Destination Unreachable" responses, as these are required for keeping hosts operating properly on an IPv4 network per the RFC specifications and, for some vendors, required for IPsec and PPTP traffic to operate. For more information, please review the RFC specifications, specifically "RFC 1812 - Requirements for IP Version 4 Routers".
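
To make the packet types concrete, here is a small sketch using the scapy library (my choice of tooling, not something this post prescribes) that builds, without sending, a classic Echo Request and the Type 3 Code 3 "Destination Unreachable / Port Unreachable" message associated with BlackNurse. The destination address is a documentation-only placeholder.

    # Minimal sketch using scapy (pip install scapy); 192.0.2.1 is a
    # documentation-only address, not a real target.
    from scapy.all import IP, ICMP

    echo = IP(dst="192.0.2.1") / ICMP(type=8, code=0)      # classic ping (Echo Request)
    unreach = IP(dst="192.0.2.1") / ICMP(type=3, code=3)   # Destination Unreachable, Port Unreachable

    echo.show()      # print each packet's fields for comparison
    unreach.show()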

Attackers discovered that only 15-20 Mbit/s of this specially crafted traffic, at packet rates of 40,000 to 50,000 packets per second, is enough to bring these devices offline. (The two figures are consistent: at roughly 50 bytes per packet, 50,000 packets per second works out to about 2.5 MB/s, or 20 Mbit/s.)

The attack is typically carried out by targeting the WAN or public-facing IP address of the firewall with a sustained stream of these packets until the firewall runs out of CPU cycles to process traffic. Devices behind the firewall are unable to communicate until the attack subsides.

As you can see, a single user can easily bring down a firewall with a modest amount of easily attainable resources at their disposal.

How is ServerCentral protecting customers against BlackNurse DoS attacks?

ServerCentral has taken several steps to prepare for and mitigate the BlackNurse DoS attack.

Our Network Engineering team has proactively implemented filters on all upstream connections to limit the maximum inbound rate of ICMP Type 3 Code 3 messages allowed toward our customers. These filters should not impact day-to-day operations. All Managed and Colocation customers are covered by these changes.
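
If you run your own edge and want a rough idea of whether this traffic is reaching you, counting inbound ICMP Type 3 Code 3 packets per second is a reasonable start. The sketch below is purely illustrative; it is not the filter described above (those live on upstream routers), and the alert threshold is an assumption:

    # Illustrative sketch only: count inbound ICMP Type 3 Code 3 packets per second.
    # The threshold is an assumption, not a value taken from this post.
    import time
    from collections import deque
    from scapy.all import sniff, ICMP

    WINDOW = 1.0         # sliding window, in seconds
    THRESHOLD = 10000    # packets per second that would be cause for concern
    arrivals = deque()

    def track(pkt):
        if ICMP in pkt and pkt[ICMP].type == 3 and pkt[ICMP].code == 3:
            now = time.monotonic()
            arrivals.append(now)
            # Drop timestamps that have aged out of the window.
            while arrivals and now - arrivals[0] > WINDOW:
                arrivals.popleft()
            if len(arrivals) > THRESHOLD:
                print(f"High ICMP Type 3 Code 3 rate: {len(arrivals)} packets in the last second")

    sniff(filter="icmp", prn=track, store=False)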

Our Managed Services team has worked with our firewall vendors to identify potentially affected models used with our Managed Firewall service.

Juniper SRX: At this time, Juniper SRX models appear not to be affected, but this is still pending additional research and verification with JTAC. However, we have worked with Juniper to proactively create a custom IDS filter that can mitigate the effects of BlackNurse as an added precaution. This filter can be applied on demand.

FortiGate: At this time, FortiGate firewalls running FortiOS v5.2.x appear not to be affected. Some models running FortiOS v5.4.x appear to be affected. ServerCentral has standardized on the v5.2.x code branch and should not be affected. This is still pending additional research and verification.

Should the situation change with our firewall providers, we will update this post accordingly.

For customers who have purchased our Managed DDoS Mitigation Service, the built-in behavioral Denial-of-Service analysis engine will detect and mitigate a BlackNurse attack as it happens.

For our colocation customers, we strongly urge you to contact your firewall vendors and confirm whether your devices or configurations are susceptible to the BlackNurse DoS attack.

If you have any questions or if we can be of assistance, please contact us at your convenience.

Further Reading

Topics: Support, Security

ServerCentral's Onsite Parts Depot 

It's a huge pain to go to the data center when hardware fails. Especially when you live far away.

Servers don't break at convenient times, either. Often it's at 2 AM, rush hour, or the moment you sit down to watch a new episode of Rick & Morty.

So not only do you have to travel to the data center to find out what broke, you also have to replace it. That means going to the store—or worse, ordering parts online and waiting for them to arrive—before you can fix it. All the while, your server is down and your boss is upset.

This is a problem that ServerCentral addresses daily. We maintain a huge inventory of replacement parts in our Chicago-area data centers.


Whatever you need is always in stock. Even exotic optics and ancient RAM.

It doesn't matter if your servers are from 1999. Once you're a customer, we can stock your treasures from the Mesozoic Era steps from your cabinet. 

If you're too busy working on 1000 other projects to do a hardware swap, our in-house Remote Hands team can do it for you. They're at the data center all the time, including weekends and holidays.

Existing customers can put in a request at support@servercentral.com.

Topics: Support, Data Center

5 Ways to Get The Most Out of Remote Hands

Part One: Plan for A Fire Drill

Responding to trouble tickets manually submitted by our customers is one of the most prevalent and essential tasks we perform on a daily basis at ServerCentral. Sometimes I take for granted how long I've been sending and replying to trouble tickets. It's almost automatic.

That got me thinking...what makes a good trouble ticket?

In this three-part blog bonanza, I'll detail some major components of trouble tickets to ensure they’re responded to in the most expeditious and accurate manner possible. My first entry, "Plan for A Fire Drill," covers oft-overlooked housekeeping exercises that could potentially save you costly downtime and frustration later on.

Here are my top five, can't-miss tips to ensure your turf is ready for a fire drill:

1. Labels Are Cheap; Downtime Is Expensive

Many customers label the front of their gear, but not everyone knows to label the back. It’s important to label both sides so that Remote Hands technicians are able to service your equipment faster.

99% of the time, a technician begins triage at the business end of the rack—the rear.

If there are no labels in back, she’ll have to walk around to the front to identify which rack unit (RU) the specific machine occupies. This commute can be long, as modern data center rows often approach 100 feet. She’ll then have to walk back around (another ~100 feet) to locate the appropriate RU and machine.

By labeling both the front and back of your gear, you can save several minutes of gruesome downtime while abating the chance of miscounting RUs or troubleshooting the wrong machine. 


2. Don't Just Create An Install Base, Maintain It

An install base is a spreadsheet that details the location, identity, and connectivity of all gear in a rack.

Creating the document is easy, but updating it every time you make a change is not.

I suggest saving this document in a shared folder where all of your operations personnel can easily access it. Audit against your install base annually to keep your team honest. The format is irrelevant, so long as the data are complete, accurate, and easily understood. Here's a Single Cabinet Install Base Example Template that you can modify for your own environment.
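
As a purely hypothetical illustration (the device names and values below are invented, not from the template), a single row of an install base might look something like this:

    Cabinet  RU     Hostname    Serial        Power            Network
    EL042    22-23  web01-chi   ABC1234XYZ    A3 / B3 (PDU)    eth0 -> sw1 port 14

Whatever columns you choose, the point is that anyone reading a row can find the device, power it, and trace its cabling without guessing.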

3. A Picture Is Worth A Thousand Reboots

Install bases and Visios are great tools for keeping track of your environment, but sometimes you just want to reference a picture.

If you send your Remote Hands vendor 30 nodes to rack and cable, ask them to take pictures when they’re done.

That way you can make sure your rack and cabling standards are upheld and have an image to reference for your infrastructure scrapbook.

4. Keep Your Cabling Neat And Tidy

Messy, undocumented cabling poses significant challenges to any technician (including you!) working in a cabinet.

You've heard the horror stories: waterfalls of spaghetti cable monsters pouring out of cabinets, eventually coming to life and terrorizing the village. They all begin the same way: someone runs one, singular cable in a haphazard fashion, and without fail, the temptation to let standards slip on the next one becomes too great, and so the beast is born.

Clean cabling allows technicians to work with confidence, among other benefits covered in Chris’s post, Clean Up After Your Cables.

5. Say "Hi" Once In A While

I can't speak for other data center operations teams, but at ServerCentral we relish the opportunity to get to know the folks we support. Getting to know the team that’s responsible for maintaining your physical environment puts a face to a name, engendering a sense of familiarity and responsibility.

In Part Two of this series, I'll write about communicating effectively to optimize trouble ticket resolution.

Topics: Support

What to Put in Your Support Ticket for The Fastest Response

Once in a blue moon, we get a support request like this:

"My server isn't responding. Please reboot it."

Well, yes, we'd love to, except we have thousands of servers and don't know which one you'd like to reboot. In this case, we would need to write back asking for more information before we could start. Back-and-forth data hunts are an unnecessary delay.

Other times, a ticket comes in like this:

"My server SQLDB27-A in rack EL176 appears to be hung. Please visually inspect this machine for error lights, and if none are found, power-cycle it. Let me know what you find."

In this case, we have everything we need to jump right over to cabinet EL176, find the machine labeled SQLDB27-A, and start our visual inspection. Informative requests like this one are completed in minutes.

Here's some good information to include in a ticket:

  • The cabinet location: "EL176."
  • The physical (not logical) name labeled on the device, or the device's numbered location in the cabinet: "The device ID is SQLDB27-A."
  • The suspect problem or issue, with as much detail as possible: "The 1 TB SATA disk in bay 3 reports as failed."
  • The operation you want done and the results you expect: "Please take a spare 1 TB SATA disk from our stock and hot-swap the disk in bay 3. The diagnostic light for bay 3 is now blinking to help identify it. Please mark the old disk as failed and put it in our storage."

No matter what the request is for, spare no detail for the fastest turnaround.

Topics: Support, Tips

People Make All The Difference

When you buy technology for your business, you're not just buying systematized 1s and 0s and boxes with blinky lights. You're buying an extension of your workforce.

Information Technology is now the backbone of modern business. Without it, orders go unfulfilled, bills go unpaid, and workers go unemployed. It takes multiple teams of humans to run IT smoothly, no matter what size or market. Can you rely on each of your partners to quickly and proficiently resolve an issue you're having right this second?

Choosing competent, responsive, and stable technology vendors is just as important as choosing your own employees.

Increasingly, tech providers are supplying less (and worse) personalized support for their products in favor of knowledge bases, call-tree IVRs, document repositories, self-service widgets, and the like (the leading example of this phenomenon being the bookseller turned cloud services provider). Good luck getting immediate or personal support without having a few commas in your monthly tab. To be fair, they're not the only ones guilty of this. Many software and IT infrastructure outfits are following suit, burying limited support underneath cheap prices.

As a sales engineer and buyer of hardware, software, and related services, I have a fairly unique, cradle-to-grave view of support from both a sales and buyer perspective. I've learned to always consider the cost of receiving support quickly when issues arise—because they always do. I ask myself:

  • What's the impact to my business if X goes down?
  • How much does it cost my business if it takes 24 hours or longer to resolve the issue?
  • How irate will IT Johnny be if he's asked to work another 16-hour burner?
  • Did I really save money buying technology versus a technology provider with a rich support ecosystem?

It's not all gloom and doom, though. There are bastions of magnificent support across every branch of technology, and thankfully these providers are easily vetted. Does the company advertise its support email address? If so, send them a friendly note letting them know that you're evaluating the responsiveness of their support. Time their reply. Award bonus points for a non-template response. Again, do they advertise their support phone number? If so, ring them up and time how long it takes to talk to a real person. Gauge their liveliness and demeanor. Ask them a couple questions. Some of my favorites are:

  • "How long have you worked there?"
  • "What's your favorite pizza topping?"
  • "Do you like your job?"

You'd be surprised how many companies flub this simple test.

Perhaps the most powerful tool is asking your peers about their experience with the vendor you're considering. This is the true litmus test for any support organization.

This leads me to some obligatory horn-tooting. ServerCentral has been providing 24xforever support for well over 15 years. Our customers can reach a smart, onsite human being at any time of day, any day of the year by email or phone. And perhaps just as important, we all love pizza.

Topics: Support