

What's Top of Mind in IT?

In December 2016, we began a research effort by categorizing the conversations we have with customers and prospects. 

We wanted to answer a few questions:

Coming into 2017, what are people most interested in and focused on?

Are there common themes or project scopes that are top of mind and on to-do lists? If so, are there external catalysts driving these decisions and actions?

How much variance exists between the questions we're being asked and the projects being undertaken? 

Are the questions being asked and the topics/solutions being researched the actual objective, or are they on the edges, reflecting a different way of thinking about a risk or opportunity?

What, exactly, are their definitions of the terms, products and services they're interested in?

In other words, are we speaking the same language?


We then compared this information against our web traffic, including page visits and keyword triggers.

With 60 days of data in hand, we identified four interesting trends:

Cloud Migrations.

How do I migrate existing applications and data from where they are (wherever that may be) to the cloud? The cloud, in this case, is bookended by two definitions. On one end, the applications and data are no longer on-prem. On the other, the application has been completely optimized/modernized/transformed for a SaaS model and to take advantage of seamless integration with third-party APIs.

Availability.

Availability, in this case, is defined in three ways:

  • The application or data is always available. This is what we would traditionally term as High Availability or HA application architectures;
  • The application performance is improved to meet current end-user speed or usability expectations. This is very closely linked to application transformation or modernization (effectively taking advantage of advanced data, application and network architectures to improve performance); and
  • APIs. The application and data are available in new ways, as the business, end users and third-party partners may require.

DevOps.

More closely linked to Cloud Migration and Availability than we had expected, the DevOps questions all center on increasing the availability of, and access to, infrastructure management capabilities, either through automation (think Continuous Integration) or through outsourced management of these components. What surprised us most was that these conversations quickly turned to a need for someone to bring DevOps processes and mentality into the organization. Equally surprising was that this requirement extended beyond infrastructure and reached all the way up into the application development process. There is a clear desire for modern processes, and it's great to see.

Security.

When we dug into these questions, we quickly learned that security was really part of a much larger effort to meet rapidly evolving compliance requirements. Whenever someone mentioned fully managed security and we asked them to define what that meant to them, it quickly became a compliance conversation. The underlying need is to mitigate the risks associated with increasing requirements for data access (via APIs) and extremely fluid security and compliance regulations. The TL;DR version: best practices for managing an ever-changing landscape.


So, how much variance exists between the questions we're asked and the projects we undertake?

Not much, really. When we dug deeply into these conversations, we learned that while there was usually one key driver for the research or inquiry, it was ultimately part of one of the four trends mentioned above. People were simply coming to one of these opportunities from slightly different angles based upon their unique requirements. In essence, it is important to do more listening than talking to be sure the edges of the scope are clearly understood by everyone involved.

Finally, are we speaking the same language?

In many instances, we are. However, each organization's definition of a term or expectation of a service is inherently unique. In many cases, the expectation extends beyond what would be termed an industry standard (in a positive way). This corresponds directly with the point above about being an attentive listener and, in all instances, asking very direct questions. Doing so minimizes, if not eliminates, ambiguity in the conversation and helps everyone get where they're going as efficiently and effectively as possible.


If you have any interest in discussing these trends or the research we're currently undertaking in more detail, please put some time on my calendar. I'm happy to make time for the discussion and share what we're learning. 

Topics: Data Center Infrastructure

Mitigating BlackNurse Denial-of-Service Attacks

In 2016, every company, no matter its size, has an online presence. Whether that is a website, an online storefront, a mobile app, or a mail server, chances are your organization has some kind of system in production on the Internet.

It has long been common practice to secure these types of services or devices with a firewall. A firewall functions exactly as the name implies: it exposes only the service ports the application absolutely needs to the outside world and blocks everything else. For example, you might configure the firewall to allow TCP ports 80 and 443, the standard ports for serving web pages, and deny all other inbound traffic.
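To make that concrete, here is a minimal sketch using Windows' built-in firewall cmdlets as a stand-in for whatever edge firewall appliance you actually run; the rule names are ours, purely for illustration:

    # Allow only the ports the application needs
    New-NetFirewallRule -DisplayName "Allow HTTP"  -Direction Inbound -Protocol TCP -LocalPort 80  -Action Allow
    New-NetFirewallRule -DisplayName "Allow HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow

    # Drop everything not explicitly allowed above
    Set-NetFirewallProfile -Profile Domain,Public,Private -DefaultInboundAction Block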

Unfortunately, this creates a single point of failure: all of that network traffic is now funneled through the firewall. Under normal circumstances, this is usually not a problem. You would “right-size” your firewall appliance to handle your normal traffic load, probably allow some headroom for traffic growth over time, and then throw in a little extra headroom in case of burst traffic. From there, you’d probably purchase a second firewall and cluster the two devices together for high availability if the application required it. This way, if one firewall were to become unresponsive, the other firewall would assume the traffic flow and your service would not be disrupted.

Then along came the BlackNurse Denial-of-Service (DoS) attack…

What is the BlackNurse Denial-of-Service Attack?

Note 1: The term “BlackNurse” is named after the team who discovered this type of attack: one was a blacksmith, and the other was a nurse.

Note 2: There have been a number of articles posted already that delve into the ICMP protocol and the different types of ICMP messages that can be generated and how these all work together to generate the BlackNurse Denial-of-Service attack. We won’t rehash detailed ICMP technical explanations here, but there are links for further reading at the end of this post if you’d like more information.

Before we delve into the BlackNurse attack, let’s quickly review a little bit of history, shall we?

At a high level, the BlackNurse DoS attack is a type of ping flood attack. Back in the days of grunge music in the 1990s, a common Denial-of-Service attack was to simply flood a target with the ping command (ICMP Echo packets - Type 8 Code 0 if you’re curious), maxing out a host’s Internet connection with excessive data. You might remember the “ping of death,” which used malformed or fragmented pings that reassembled to more than the 65,535-byte IPv4 maximum to crash devices.

In that era, many of us were on slow dial-up connections, and most servers didn’t have the 10-gigabit connections they do today. The Internet was a much smaller place and didn’t have nearly the amount of attack mitigation technology available today. Consequently, it was very easy for a malicious user with access to a relatively fast connection to flood a target with what is, by today’s standards, a small amount of traffic, as most devices of the day simply couldn’t handle it.

As technology evolved, switches, routers, firewalls, servers, and network interface controllers became able to generate and accept far more traffic. In the time since, we’ve also seen the advent of DDoS mitigation appliances and services, traffic policers, and other technologies that have largely eliminated basic attacks such as ping floods and the “ping of death” and made other volumetric attacks easier to deal with.

Typically, volumetric attacks these days require several gigabits of traffic per second to bring a host offline and also usually involve multiple attack vectors. You’ve probably seen a few of these in the news over the past year or two. These usually involve a coordinated effort of multiple compromised hosts or botnets and generate huge amounts of traffic in order to take down a service. The most recent example in the news was the DDoS attack levied against Dyn.com. This attack was reported to be operating at 1.1 Tbps at its peak.

Why is BlackNurse unique?

So if volumetric attacks require all of these resources in order to bring down servers and we have numerous tools at our disposal to mitigate volumetric attacks, then what makes the BlackNurse so special?

What makes BlackNurse unique is that it is able to bring down firewalls or other network devices with a small amount of traffic (think 15-20 Mbit/s, or roughly 40,000-50,000 packets per second). Today, a single laptop or desktop computer can easily generate this amount of traffic on a typical broadband connection.
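The arithmetic behind those figures shows why: the individual packets are tiny, so the damage comes from packet rate rather than bandwidth. Back-of-the-envelope math, using PowerShell as a calculator:

    $bitsPerSecond    = 20e6      # ~20 Mbit/s, the upper end of the quoted range
    $packetsPerSecond = 50000     # ~50,000 packets per second
    $bytesPerPacket   = $bitsPerSecond / $packetsPerSecond / 8
    "Average packet size: $bytesPerPacket bytes"   # ~50 bytes: very small ICMP packets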

The attack works because certain network devices, firewalls in particular, burn a disproportionate amount of CPU when processing ICMP Type 3 messages, specifically Type 3 Code 3. ICMP Type 3 messages are known as “Destination Unreachable” messages, which basically means a host or port is not available or responding; a specially crafted flood of these messages drives the device’s CPU usage up dramatically. Unfortunately, it is not possible to simply turn off or block “Destination Unreachable” messages, as they are required for hosts to operate properly on an IPv4 network per the RFC specifications and, for some vendors, for IPsec and PPTP traffic to operate. For more information, please review the RFC specifications, specifically “RFC 1812 – Requirements for IP Version 4 Routers.”

Attackers discovered that just 15-20 Mbit/s of this specially crafted traffic, at packet rates of 40,000 to 50,000 packets per second, is enough to take these devices offline.

The attack is typically carried out by targeting the WAN or public-facing IP address of the firewall with a sustained stream of these packets until the firewall runs out of CPU cycles to process traffic. Devices behind the firewall are unable to communicate until the attack subsides.

As you can see, a single user can easily bring down a firewall with a modest amount of easily attainable resources at their disposal.

How is ServerCentral protecting customers against BlackNurse DoS attacks?

ServerCentral has taken several steps to prepare for and mitigate the BlackNurse DoS attack.

Our Network Engineering team has proactively implemented filters on all upstream connections to rate limit the maximum inbound rate of ICMP Type 3 Code 3 messages allowed towards our customers. These filters should not impact general day-to-day operations. All Managed and Colocation customers are covered by these changes.
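The filters themselves live on our upstream routers, so the snippet below is only a conceptual sketch of what “rate limiting” means here: a token-bucket check, written in PowerShell with made-up numbers rather than our production values.

    $ratePerSecond = 1000           # hypothetical allowed ICMP Type 3 Code 3 packets per second
    $burstSize     = 2000           # hypothetical burst allowance
    $tokens        = $burstSize
    $lastRefill    = Get-Date

    function Test-PacketAllowed {
        $now     = Get-Date
        $elapsed = ($now - $script:lastRefill).TotalSeconds
        $script:tokens     = [Math]::Min($script:burstSize, $script:tokens + $elapsed * $script:ratePerSecond)
        $script:lastRefill = $now
        if ($script:tokens -ge 1) {
            $script:tokens -= 1
            return $true             # under the limit: forward the packet
        }
        return $false                # over the limit: drop the packet
    }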

Our Managed Services team has worked with our firewall vendors to identify potentially affected models used with our Managed Firewall service.

Juniper SRX: At this time, Juniper SRX models appear not to be affected, but this is still pending additional research and verification with JTAC. However, we have worked with Juniper to proactively create a custom IDS filter that can mitigate the BlackNurse effects as an added precaution. This filter can be applied on demand.

Fortigate: At this time, Fortigate firewalls running FortiOS v5.2.x appear not to be affected. Some models running FortiOS v5.4.x appear to be affected. ServerCentral has standardized on the v5.2.x code branch and should not be affected. This is still pending additional research and verification.

Should the situation change with our firewall providers, we will update this post accordingly.

For customers who have purchased our Managed DDoS Mitigation Service, the built-in behavioral Denial-of-Service analysis engine will detect and mitigate a BlackNurse attack automatically.

For our colocation customers, we strongly urge you to contact your firewall vendors and confirm whether your devices or configurations are susceptible to the BlackNurse DoS attack.

If you have any questions or if we can be of assistance, please contact us at your convenience.

Further Reading

Topics: Support Security

High Availability and Redundancy Features in vSphere 6.5

VMware announced the new version of its core data center product, vSphere 6.5, at VMworld Barcelona last week. With it, as always, comes new ESXi and vCenter features—and make no mistake, there are some great ones!

One thing I personally liked is the attention to the core feature set that makes vSphere so great:

  • HA and DRS improvements
  • added security features
  • a new VMFS version

These are features that every vSphere environment, big or small, simple or complex, can utilize. Those on the cutting edge aren’t forgotten either, with a push towards containerization and a standardization of its RESTful API platform (with an increased focus on documentation to boot).

There are two topics that I would like to focus on: improvements to the availability of the VMs that run in your environment, and an easier way to provide redundancy to vCenter itself.

Let’s go over some of the important features that you will be able to use right off the bat.

Higher Availability: vCenter

One of the big issues that many customers face when deploying vCenter is related to availability. After all, vCenter itself can be a single point of failure in many environments. While technologies like HA are designed to function without vCenter being available, many deployments (including ServerCentral’s Enterprise Cloud) need very high uptime for vCenter just to carry out day-to-day activities. vCenter Heartbeat was the old way of doing this, but that product is no longer offered by VMware.

With 6.5, VMware offers a new way of ensuring high uptime for vCenter itself: VCSA High Availability.

This is a VCSA-only technology (not surprising given VMware’s desire to move away from the Windows vCenter Server). It essentially lets you cluster multiple vCenter appliance nodes serving the same environment in an Active/Passive configuration that uses a back-end network to replicate the vPostgres data between the Active, Passive, and Witness appliances. The Witness, as you can probably guess, is there to prevent split-brain scenarios (which sounds a lot cooler than it is in practice).

Configuring vCenter HA within the UI

vCenter HA will go a long way in ensuring even higher uptime for vCenter. Given that things are becoming faster and more automated as a whole (and such automation would use vCenter to complete tasks), this is more important than ever to implement in modern environments.

While all of this vCenter stuff is great, what about improvements to normal VMs and workloads? VMware has made several improvements to their oldest feature in the book: High Availability. 

Redundancy of the Future: Orchestrated HA, Proactive HA, and Admission Control

Orchestrated HA

One basic feature that is long overdue is the ability to set a restart order for an HA event. Consider this: Three servers (Database, Application, and Web) all service the same logical app. If you were to cold boot this app, you would probably have an order to the VMs to bring up, right? Usually it is something like Database first, then Application, and finally Web. Well, what if those three servers were all on the same physical host, and that host failed? Currently, HA will restart all three servers on different hosts. Unfortunately, while vSphere has traditionally had a very rudimentary VM Restart Priority, it doesn’t actually orchestrate any servers as they relate to each other. With 6.5, HA can bring up servers in a specific order (think vApps, SRM, or Zerto) after an HA event has occurred. This helps make failure events more predictable, and predictability is a great thing!

Setting VM Restart Order
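The dependency-aware ordering itself is configured through the vSphere Web Client, but for reference, the traditional per-VM restart priority has long been settable from PowerCLI. A minimal sketch, assuming PowerCLI is loaded, you are already connected via Connect-VIServer, and the three VM names are hypothetical:

    Set-VM -VM (Get-VM 'db01')  -HARestartPriority High   -Confirm:$false
    Set-VM -VM (Get-VM 'app01') -HARestartPriority Medium -Confirm:$false
    Set-VM -VM (Get-VM 'web01') -HARestartPriority Low    -Confirm:$false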

Proactive HA

Also in the HA realm of upgrades, vSphere now has a new checkbox called Proactive HA. This is a third-party-dependent feature that enables hardware vendors to inform vSphere of potential issues with a host and trigger an HA event before the host actually begins showing signs of an issue. I can see definite use cases with this when it comes to things like SMART data from hard drives, flaky DIMMs, or even something like an overheating chassis.

This event will actually use vMotion to migrate the VMs (similar to putting a host into Maintenance Mode), thus preventing any downtime at all. This can also utilize a new status called Quarantine Mode, which keeps VMs from starting on this host unless absolutely necessary (or you can use a more traditional Maintenance Mode, which won’t put any VMs on the host no matter what).

Proactive HA set to Quarantine Mode

Admission Control

The last of the big new HA features is a usability improvement focused on Admission Control. Admission Control has long been a misunderstood and often improperly used feature, and in my opinion, it was overdue for an overhaul. In previous versions of vSphere, it often felt like a feature that had been forgotten since the 3.x days and never improved upon (fixed slot sizes notwithstanding). With 6.5, the VMware dev team has taken a much simpler approach to Admission Control.

The long-time favorite of most vSphere admins has been Percentage based on resources used. In this setting, the admin sets a percentage of CPU and Memory that the cluster cannot go below, at which point it prevents the powering on of new VMs. This was done so that the existing VMs could still function should an HA event occur. Typically, this would be set to a percentage depending on the number of hosts in the cluster, designed for a single host failure. For example, if there were two hosts in the cluster, the percentage would be 50%, or ½. If you added a third host, the new ratio would be 1/3, or 33%, and so on. Prior to 6.5, this percentage would not update in the event of a host addition, so it was up to the admin team to ensure that the percentage was set properly (and let’s hope they could do fractions). That has all changed with 6.5: simply set the number of host failures you can tolerate, and it figures out the percentages for you.
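If you want to sanity-check that math against a live cluster, a minimal PowerCLI sketch looks like this (it assumes you are already connected with Connect-VIServer, and 'Prod-Cluster' is a hypothetical cluster name):

    $hostCount      = (Get-Cluster -Name 'Prod-Cluster' | Get-VMHost).Count
    $reservePercent = [Math]::Round(100 / $hostCount)   # 1/n of the cluster: 50% for 2 hosts, 33% for 3
    "Reserve $reservePercent% of CPU and memory to tolerate one host failure"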

The other huge issue with Admission Control was that it functioned strictly on reserved resources. If you didn’t set any reservations on a VM, Admission Control only counted the very small CPU and memory overhead each VM requires, so you could deploy far more machines than the cluster could actually support. For example, if you had a three-host cluster with 256 GB of memory on each host, you should really deploy at most 512 GB worth of VMs to stay at 100% with n-1 redundancy. Yet even with Admission Control on, if you didn’t reserve any resources, you could deploy many times that amount and the VMs would still power on. They would just be incredibly slow because you would be swapping to disk constantly.
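The numbers in that example work out as follows (plain arithmetic, shown in PowerShell for consistency):

    $hostMemoryGB = 256
    $hostCount    = 3
    $rawGB        = $hostMemoryGB * $hostCount          # 768 GB of memory across the cluster
    $usableGB     = $hostMemoryGB * ($hostCount - 1)    # 512 GB if one host must be able to fail
    "Raw: $rawGB GB / usable with n-1 redundancy: $usableGB GB"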

vSphere 6.5 has a new setting called Performance degradation VMs tolerate. This feature issues a warning when a host failure would cause a reduction in VM performance based on actual resource consumption, not just the configured reservations. By setting this to 0%, you can effectively change Admission Control to work for configured memory as opposed to reserved memory, and therefore the performance and not just the availability of the VMs can be preserved during an event.

A modernized approach to Admission Control

My Take

I think these features will become staples in both new deployments and upgrades, as they are very easy to implement, accessible directly from the UI, and non-invasive to the VMs and overall vSphere environment. The benefits are significant and long overdue, and the costs (mainly an additional vCenter Server) are negligible for modern environments.

Topics: VMware

ServerCentral Selects Ciena DCI Platform to Deliver Critical Business and Cloud Applications

To satisfy growing customer demands for cloud migration and high-performance applications, ServerCentral is deploying Ciena’s (NYSE: CIEN) Waveserver stackable data center interconnect (DCI) platform. This upgrade allows ServerCentral to quickly scale bandwidth and support customer needs for high-speed data transfer, virtual machine migration and disaster recovery/backup between data centers. Additionally, with the extremely compact design of the Waveserver platform, ServerCentral can make more efficient use of its data center footprint.

Key Facts:

  • ServerCentral is a managed IT services provider delivering infrastructure for startups and Fortune 500s worldwide. ServerCentral delivers comprehensive infrastructure services including private clouds, dedicated infrastructure, connectivity to third-party cloud providers, infrastructure links and more across multiple diverse fiber paths from the company’s global data centers.
  • This upgrade allows ServerCentral to provide highly reliable data connectivity at levels of 10, 40 and 100 GbE, which means its customers can confidently deploy bandwidth-intensive, web-scale applications. Additionally, with Waveserver, ServerCentral can offer new levels of flexibility by providing different connectivity speeds on short notice.
  • With Ciena’s Emulation Cloud™, an open application development environment, ServerCentral can create, test and fine-tune customized web-scale applications tailored to its specific customer needs. ServerCentral can also use Ciena’s Essentials App to manage Waveserver manually or remotely and quickly establish a connection, order, install or provision a service via any smart device.

Executive Comments:

Today’s organizations are faced with increasingly complex, cloud-based architectures. In order to help our customers architect, deploy, manage and scale the optimal solutions for their business, we need to continue to evolve our scalable, flexible and redundant network. Ciena’s Waveserver helps us ensure we have the optimal foundation for our customer’s business and applications.
Bill Lowry, Vice President, Products, ServerCentral

ServerCentral’s forward-looking vision and deployment of new technologies not only ensure that it meets present-day requirements, but can also adjust to changing customer needs. With our Waveserver platform, ServerCentral can provide a highly dynamic, scalable and low-latency interconnectivity experience to its customers.
Jason Phipps, Vice President & General Manager, North America, Ciena

About ServerCentral
ServerCentral is an IT infrastructure solutions provider. Since 2000, leading technology, finance, health care and e-commerce firms have put their trust in ServerCentral to design and manage their mission-critical infrastructure. With data centers in North America, Europe and Asia, ServerCentral works with customers to develop the right solution for their business. Whether it is colocation, managed services, Infrastructure as a Service (IaaS) or cloud, ServerCentral designs the optimal solution for each client. Learn more at https://www.servercentral.com.

About Ciena
Ciena (NYSE: CIEN) is a network strategy and technology company. We translate best-in-class technology into value through a high-touch, consultative business model – with a relentless drive to create exceptional experiences measured by outcomes. For updates on Ciena, follow us on Twitter @Ciena, LinkedIn, the Ciena Insights blog, or visit www.ciena.com.

Topics: Press Releases

From Infrastructure to Workloads. What We're Talking About Today

Three years ago, we were asked a relatively simple question:

"Can you get me out of the infrastructure business?"  

The answer to this question was a function of migrating applications and infrastructure from a customer-site, customer-managed environment to ServerCentral-managed infrastructure located in a ServerCentral-managed data center. Simple enough. Basic migrations that sometimes remained on bare metal and sometimes transitioned to fully virtualized environments. Private cloud at its finest, if you will.

Today, however, we are asked a very different question, one that isn't anywhere near as simple:

"Where should I put each workload to optimize performance, maximize flexibility / agility / ability to scale, minimize cost and be sure my security / compliance requirements are addressed?" 

The answer to this question is a function of significantly more analysis, assessment and strategy. 

Why do we bring this up? Another good question. 

We're bringing this topic up because it reflects the natural evolution of ServerCentral – helping companies by providing the right infrastructure for their applications, available in the right places at the right times in order to help them realize the right results for their business. 

Just a few years ago helping our customers was typically about a single migration from location A to location B. Today, helping our customers means having an ever increasing foundation upon which we can architect, deliver, manage and scale solutions on their behalf.  

What does this mean for you? 

This means we're going to be asking a lot more questions … but it is all part of making sure that you have exactly what you need, where you need it, to achieve your objectives. 

Since ServerCentral is a services company, we are continually working to be in the best possible position to enable your organization with all of the infrastructure elements needed, without making your organization reliant upon any one hardware, network or cloud platform.

Topics: Cloud Infrastructure Migration

7 PowerCLI Learning and Coding Tools I Learned about at VMworld 2016 US

With VMworld 2016 US coming to a close, there’s no time like the present to review one of the major tracks from the conference. Up first, one of the main pillars of VMware automation and scripting: PowerCLI.

A PowerCLI primer:

PowerCLI is a VMware-focused extension to Microsoft PowerShell. While on the surface it looks and feels very much like the Windows Command Prompt (cmd.exe), spend any time in PowerShell or PowerCLI and you'll notice it is an altogether different beast. PowerCLI builds on PowerShell's object-oriented scripting to take the difficulty and tedium out of your most commonly repeated tasks. It uses “cmdlets” to issue commands in a “verb-noun” syntax. For example, “Get-VM” returns all of the virtual machines in an environment, whereas “Set-VMHostNetworkAdapter” makes changes to the network adapters of a VMware ESXi host.
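As a quick taste of the verb-noun style, here is a minimal sketch. It assumes PowerCLI is installed and loaded, and 'vcenter.example.com' is a placeholder for your own vCenter:

    # Connect to vCenter (you will be prompted for credentials)
    Connect-VIServer -Server vcenter.example.com

    # List powered-on VMs with their CPU and memory allocations
    Get-VM |
        Where-Object { $_.PowerState -eq 'PoweredOn' } |
        Select-Object Name, NumCpu, MemoryGB |
        Sort-Object Name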

PowerCLI has been used for a multitude of different purposes, from simple scripts to entire deployments and configuration changes. It can even be used for running scheduled configuration checks and automatic remediation to make sure hosts, switches, VMs, and other objects are exactly where they should be. This is similar to Host Profiles, but is executed at a scripting level as opposed to being a vCenter feature. This means you won't need Enterprise Plus licensing, saving you some serious cash on your VMware deployment. 
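Here is one simple example of that kind of check-and-remediate script, offered only as a sketch: it verifies every host has a desired NTP server and adds it when missing. The NTP address is a placeholder; schedule it with Windows Task Scheduler if you want it to run regularly.

    $desiredNtp = 'pool.ntp.org'    # placeholder: use your own time source

    foreach ($vmHost in Get-VMHost) {
        $configured = Get-VMHostNtpServer -VMHost $vmHost
        if ($configured -notcontains $desiredNtp) {
            # Report and remediate the drift from the desired configuration
            Write-Warning "$($vmHost.Name) is missing $desiredNtp; adding it."
            Add-VMHostNtpServer -VMHost $vmHost -NtpServer $desiredNtp
        }
    }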


Whether you’re just getting started with PowerCLI or are an experienced veteran, I’d like to share some of the resources from VMworld sessions that I’ve compiled for our team here at ServerCentral. Hopefully they can help you succeed with this incredibly powerful tool, too. 


Session 1: Getting Started with PowerShell and PowerCLI for Your VMware Environment [INF8038R]

View Session Video | Session Info

Chris Wahl (wahlnetwork.com) and Kyle Ruddy (thatcouldbeaproblem.com) delivered an excellent introductory session on PowerCLI. It not only covered basic cmdlets and ways to get around PowerCLI, but also included a couple of gems among the add-ons that are common for PowerCLI admins.

Tip: If you’re just starting out with PowerCLI, head over to the excellent VMware Hands-On Labs to get your feet wet. Go to http://labs.hol.vmware.com/HOL/catalogs/catalog/123 and search for HOL-SDC-1607, From Beginner to Advanced Features with PowerCLI. (You'll need to create an account if you don't have one already, but it's worthwhile. Here's the lab documentation.)

Tools and resources from this session:

Session 2: Enforcing a vSphere Cluster Design with PowerCLI Automation [INF8036]

View Session Video | Session Info

If you’re past the basics and your environment is all set up, then a great practical application session was Enforcing a vSphere Cluster Design with PowerCLI Automation. Presented by Duncan Epping (yellow-bricks.com) and Chris Wahl (wahlnetwork.com), this session dives into a great example of automated cluster checking and remediation. Furthermore, they dig into the vSphere API, which can be called by PowerCLI (typically via the “get-view” cmdlet), for advanced functionality not found in the top-level PowerCLI cmdlets. 
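For a small taste of what they mean, here is a hedged sketch: Get-View hands you the raw vSphere API object behind a PowerCLI object, exposing properties the friendly cmdlets don't surface ('web01' is a hypothetical VM name, and you must already be connected to vCenter):

    $vmView = Get-VM -Name 'web01' | Get-View
    $vmView.Config.Version        # virtual hardware version, e.g. "vmx-11"
    $vmView.Runtime.PowerState    # poweredOn / poweredOff / suspended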

Tools and resources from this session:

But wait, there's more!

Example scripts and pre-built tools:

Session 3: The Power Hour, Deep Dive, DevOps, and New Features of PowerCLI [INF8092]

View Session Video | Session Info

If you’re a scripting veteran, or if you’re just curious and want to see what PowerCLI is really capable of, there was an incredible Deep Dive session by two masters of PowerCLI: Alan Renouf (http://www.virtu-al.net) and Luc Dekens (http://www.lucd.info). In 60 minutes, Alan and Luc go heavily into the API, covering topics like script optimization, faster report generation, and a lot more. Perhaps most importantly, Alan discusses a major shift in PowerShell itself: the transition from Windows-only to Linux and Mac OSs. This is a huge jump for VMware as a company, which is aggressively expanding beyond its traditional, Microsoft-only roots (as evidenced by its move away from the Windows-only C# client to the OS-agnostic Web Client and its move to the vCenter Server Appliance).

FYI, this session is going to require a few viewings. There's just that much good stuff in there. Still, it's a great resource to see some atypical examples of PowerCLI usage. 


Now get out and start scripting!

It's amazing how much time you'll save once common tasks can be automated. Stay tuned for more VMworld recaps, including a post on the future of the vSphere Client for those of you who hate CLIs and love your GUI (and if that’s the case, why are you still reading this article?!).

Topics: Tips

What We Learned @ DevOpsDays Chicago

DevOps supports the applications, infrastructure and, most importantly, the business ... but who is supporting DevOps?

Today, every organization, big and small, is scrambling to establish their DevOps strategy. The promise of smooth collaboration and communication between devs, IT and the business is simply too good to pass up – and that’s before we even begin talking about effective orchestration of technology and business processes. DevOps is showing tremendous promise to support – and transform – organizations.

However, with this much pressure being put on DevOps' shoulders, we have to ask:

"Who is supporting DevOps?"

That was far and away the biggest lesson from this year's DevOpsDays Chicago.

The infrastructure, tools and processes necessary to ensure your platform makes your DevOps team - and your company - successful don't happen by accident. DevOps teams need significant additional support to fulfill their mission. This is where your choice of infrastructure, tools and process design becomes critical ... this is where you need your own support structure ... this is why we're here.

If you feel we may be able to help, please let us know.

Topics: DevOps

ServerCentral's 2016 SOC 2 audit is now available!

Throughout many years of managing audit tasks and compliance programs, the most arduous part has always been gathering the proper artifacts.

  • Did we get the screen shot of one system right?
  • Where did I put that report from our vendor?
  • Who’s seen the monthly vulnerability scan reports?

Well, today ServerCentral took a large step toward making that process easier for our customers by putting our SOC 2 report online in our customer portal! 

Our report alone, though, is not always enough to satisfy your auditors and vendors.

Fear not!

Not only is our own SOC 2 available, but we’ve also uploaded the reports of our sub-service organizations and the various data center facilities where customer solutions have been deployed. You’ll find the reports for the facilities where you house equipment listed alongside the ServerCentral report. All of these reports will be updated regularly with the latest versions from each organization.

Also, we're very pleased to let you know that, beginning today, when ServerCentral issues our SOC 2 report, a copy will be uploaded and made available to approved customers and contacts who have a signed NDA on file with our legal department. This means:

  • You will no longer need to request the report from your account manager.
  • You will no longer have to wait for the email notifying you when it’s made available each summer.
  • You will no longer be digging through your email archives for it when you need it for your own audit.

Simply sign in to our customer portal and you’ll see the report available under Documents -> Compliance Reports. 

 As always, if you have any questions or concerns, please do not hesitate to let us know.

Topics: Compliance Security Audit

ServerCentral Partners with Nimble Storage to Expand Managed Cloud Storage Offering

Offering includes dedicated and shared storage options managed by ServerCentral's experts

 

ServerCentral partners with Nimble Storage [NYSE: NMBL] to expand its dedicated and shared managed storage offering. All storage options are fully redundant and managed by ServerCentral, which brings more than 16 years of storage and IT infrastructure experience to customers worldwide.

"We are constantly evolving our storage options based on our customers' requirements. We wanted a strategic partner to help us build a comprehensive offering," said Tom Kiblin, ServerCentral VP of Managed Services. "After evaluating numerous vendors and extensive technology testing, we chose the Nimble Predictive Flash platform for its enterprise-grade features that are cost-effective and scalable as well as its market-leading analytics that provide significant visibility into the entire IT stack."

ServerCentral offers dedicated and shared storage arrays, all of which are fully redundant, highly available and completely managed by ServerCentral's experts. SmartSecure Federal Information Processing Standard (FIPS) certified software-based encryption allows ServerCentral to better serve highly regulated industries such as finance and healthcare.

"ServerCentral's approach is building the right solution for each customer, which is the type of partner enterprises depend on when navigating today's complex IT infrastructures," said Leonard Iventosch, vice president of worldwide channels at Nimble Storage. "ServerCentral's expertise is a great fit for customers who are looking for a strategic partner to help them make intelligent infrastructure decisions. With the Nimble Storage Unified Flash Fabric, partners like ServerCentral are able to provide their customers with flash for all enterprise applications by unifying All Flash and Adaptive Flash arrays into a single consolidation architecture with common data services."

Nimble recently launched its Predictive All Flash Arrays that combine fast flash performance with InfoSight Predictive Analytics to deliver increased data velocity. The Nimble AF-Series All Flash arrays deliver absolute performance, superior scalability and non-stop availability, at a total cost of ownership (TCO) that is 33 to 66 percent lower than competitive arrays.

"Nimble's Predictive All Flash arrays are an innovative addition to its offering and will be a fit for many of our customers," said Kiblin. "Nimble has an impressive roadmap, and they deliver on their target release dates."

In addition to Managed Cloud Storage, ServerCentral's Managed Services portfolio includes Backup as a Service, Disaster Recovery as a Service, and managed network and data center migration.

 

 

ServerCentral Named One of ComputerWorld’s 2016 Best Places to Work

A company's culture is difficult to define. For some organizations, culture is an internal component of their success: an intangible element of collaboration and support. For other organizations, culture is the external presentation of their values to the broader community.

At ServerCentral, we are most proud of the fact that our culture is both. We pride ourselves on being a place where top talent likes to work.

Being recognized as the 16th Best Place to Work in IT in ComputerWorld's 2016 Small Employer segment is a tremendous validation of our culture and commitment to doing everything with excellence. With a growing employee base, we are relentlessly committed to collaboration and supporting each other, our customers, partners and the IT community as a whole.

However, we don’t rest on this – or any other – accolade. To that end, we continue to expand our efforts to support our employees with the recent introduction of a comprehensive employee sabbatical program and our ongoing support of each team member's continuing education. We're also continuing to grow our on-staff expertise to provide the one-on-one support and value our customers and partners deserve and expect from ServerCentral.

The more fully we stand behind the success of each and every ServerCentral team member, the more completely we can stand behind the success of each and every customer, partner and community with whom we work.

See ComputerWorld’s 2016 Best Places to Work list here.

Topics: Inside ServerCentral Marketing