

What Are Managed Services?

Every company has its own definition of a managed service. This is ServerCentral’s.


What do you manage?

Any hardware offered by ServerCentral, including the operating system.

What don’t you manage?

Applications, software, and customer-provided equipment.

Are service level agreements the same for customer-provided equipment?

Yes. We offer 100% uptime on network connectivity and power, provided the gear is configured with redundant power (it’s free!) and redundant network connectivity.

What managed services does ServerCentral provide?

We manage servers, network stacks, storage, Veeam and CommVault backups, Zerto disaster recovery as a service (DRaaS), private/public VMware clouds, Fortinet and Juniper firewalls, Juniper switches and routers, load balancers, hypervisors, and any other hardware colocated in one of our data centers.

How do managed services work at ServerCentral?

Colocation customers ask us to manage all or some of their gear in addition to or independently of their team. Together, we produce a runbook detailing a plan of action for every permutation of infrastructure management. Our team takes on the requested management responsibilities, and the customer can stay as informed and involved as they want.

Who manages my gear?

We do! System administrators, network engineers, and experienced data center technicians manage your gear 24/7. We can work closely with your existing team if you have one, or manage it all if you don’t.

What are your management hours?

24/7/365.

Anything else you can take off my plate?

  • We can monitor, troubleshoot, and fix your infrastructure.
  • We can move you into our data center.
  • We can provide onsite remote hands support 24/7.
  • We can work with your auditors to demonstrate compliance with your requirements.

Can you manage some of my stuff and let me manage the rest?

Yes.

Can my managed gear live in my cabinet?

Yes. It can also live in a ServerCentral Infrastructure as a Service (IaaS) cabinet.

Can I touch the managed equipment in my cabinet?

No.

How many technicians manage my gear?

It depends. Between one and ten.

Can I have root access to managed equipment?

No.

Please?

Sorry, no. In our experience, comanagement leads to finger-pointing. It’s just not scalable for production infrastructure.

How much are your managed services?

This is something we’ve struggled with, because it depends. For each service, cost is based on space, power, connectivity, hardware, and the required level of involvement. Our services—and therefore, our costs—vary from one project to the next.

What’s included in the cost?

Hardware, maintenance, 100% uptime SLAs, and responsive support.

If there’s anything else you want to know, ask away at jamie@servercentral.com.

Topics: Products and Services

12 Keys to Successful Private Cloud Migrations

After you answer the “why private cloud?” question, it’s time to look at how you get there.

Cloud migrations are fraught with points of frustration that, according to a Gartner piece by Thomas Bittman, result in the failure of 95% of private cloud projects.

The root of this frustration, and ultimately of project failure, is a false assumption of simplicity. When looking at any migration, the first and most important question to ask is:

“What are we looking at?”

When assessing a private cloud migration, it’s important to forget about the technology for a minute. The keys to your project's success lie in the operational details.

12 Steps to a Private Cloud Migration

Having managed and supported more cloud migrations than we can remember, we've distilled 12 critical success factors for these projects.

  1. Which applications will be moving to the cloud?
  2. Which servers are running these applications?
  3. What are these servers’ CPU utilization rates?
  4. Which groups, departments, and individuals use these applications?
  5. How critical are these applications to the organization?
  6. Who manages these applications?
  7. What is the impact to the organization if these applications are unavailable for an hour? For a day? For a week?
  8. Are these applications architected for high-availability and/or redundancy?
  9. Are there legal/compliance requirements for RTO and RPOs?
  10. What are the relationships between these applications?
  11. What are the security requirements for these applications?
  12. What are the data protection and retention requirements for these applications?

Are the answers to these questions the same for each application that is being migrated?

If not, what are the key differences?

When a project to "migrate everything to the cloud" needs to start yesterday, this is the perfect time to stop and make sure you have answers to these questions. In our experience, these questions often aren't asked, let alone answered. The result? Dissatisfaction, as there are no clear goals or an understanding of the migration implications.

All too often, organizations believe the cloud automatically makes applications highly available and redundant, and that a cloud-based infrastructure will automatically be cheaper. Unfortunately, this isn’t always the case.

The most successful cloud migrations we’ve been a part of begin with detailed planning. In the largest of these migrations, we spent almost a year working with a customer auditing equipment, applications, access, relationships, business requirements and SLAs, talking to app users and owners to develop a clear and complete picture of the migration. Yes—almost a year.

Once this planning was complete, the actual migration time to cloud/virtual took approximately 6 months.

This detailed planning and preparation resulted in a migration with minimal issues and documented real savings (hard dollars) of millions of dollars per year.

Where did the savings come from?

A significant component of the savings was the result of the audit which showed that less than 25% of the compute capacity across the organization was in use. This meant a 1:1 relationship on compute capacity wasn’t required for the migration.
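As a purely hypothetical illustration of that audit math (every figure below is invented for the example; the post cites only the sub-25% utilization):

```python
# Hypothetical sizing sketch: low measured utilization means the migrated
# environment doesn't need a 1:1 copy of the old compute footprint.

current_servers = 400      # invented fleet size
avg_utilization = 0.25     # from the audit: under 25% of capacity in use
target_headroom = 2.0      # invented: keep 2x peak headroom post-migration

needed_server_equivalents = current_servers * avg_utilization * target_headroom
# 400 * 0.25 * 2.0 = 200, i.e. roughly half the original footprint
```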

There's a lot of planning to be done to set yourself up for success.

Summary?

Once you’ve answered “Why private cloud?” and defined your end-goals (cost savings? agility? get out of the IT business?)—the more attention you pay to the details up front, the more likely your migration to cloud will be met with success.

Topics: High Availability Products and Services

Choose Your Cloud Partner Wisely

Over the past few weeks, I’ve been talking with dozens of technology leaders from startups and early-stage companies. I wanted to learn about the technical challenges they face today and understand what they feel they need to be successful.

Out of the hundreds of data points gathered in these conversations, the biggest issue I’ve seen is that these individuals (and their organizations) spend roughly 50% of their time just getting to the point where they can actually do work. In this case, “work” means writing code and supporting customers. 

This half of their time is spent on countless technology infrastructure tasks such as DNS, network architecture and management, security policies, DDoS mitigation, firewall management, load balancing, CDN configuration, etc. All of these are important to the company’s success, but do not involve writing code and supporting customers.

Startups are too busy maintaining their IT operations to start up much of anything.

The question, “Who can take care of my infrastructure for me so I have time to focus on my app?” became all too common.

As the provocateur, it was impossible not to ask, “Isn’t this why you architected and built everything in the cloud?”

Not a fair question, I’ll admit. I just had to be sure that I did, in fact, know their answers:

  • “I don’t have the time to spend learning someone else’s blackbox platform.”
  • “I need to know what’s under my apps so I know how to keep them streamlined and portable.”
  • “I need to know there’s a security expert watching over what I’m doing or simply there to do it for me.”
  • “My business runs 24/7 and I need to know I have real people supporting me 24/7.”

The moral of the story? Choose your cloud partner wisely.

Your cloud partner should be supporting you. You shouldn’t be supporting your cloud partner.

 
Topics: Tips Products and Services

The Levels of RAID

Last time we discussed the various benefits of RAID: protection against failure, larger volumes, and improved performance. As with anything in engineering, there are several ways to go about this, each with its own set of tradeoffs.

RAID-0

The most basic form of RAID is level 0 (these levels are typically expressed as RAID-#, so RAID-0, RAID-1, etc). RAID-0 stripes data across multiple drives. It writes the first block to the first drive, second to the next, and so on, until it hits all of the drives, then comes back to the first drive. This means you see the full performance of your drives since they're all working in parallel, with essentially no overhead. The downside is there is no redundancy; any drive failure will cause complete data loss across all drives. So while this allows expanding storage and extremely high performance, it's actually less reliable than a single drive.
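As a sketch, this round-robin placement can be expressed in a few lines of Python (a toy illustration of the layout, not how a real controller works):

```python
# Toy illustration of RAID-0 striping: logical blocks are laid out
# round-robin across the member drives, so all drives work in parallel.

def stripe_layout(num_blocks, num_drives):
    """Map each logical block to (drive index, block offset on that drive)."""
    return {block: (block % num_drives, block // num_drives)
            for block in range(num_blocks)}

# Six logical blocks across three drives:
layout = stripe_layout(num_blocks=6, num_drives=3)
# Block 0 lands on drive 0, block 1 on drive 1, block 2 on drive 2,
# then block 3 wraps back to drive 0, and so on.
```

Losing any one drive loses a slice of every file, which is why RAID-0 is less reliable than a single disk.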

 


RAID-1

The next-simplest RAID level uses mirroring. This takes all data written to one drive and writes it in parallel to a second drive. This provides the highest redundancy, since there is a 1-for-1 copy of all data written. It also provides very high read performance, as both disks can be read from in parallel. Write performance is roughly that of a single drive: although two disks write in parallel, they’re both writing the same data. The downside to RAID-1 is high cost, as one must build out twice the capacity that’s actually required. Traditional RAID-1 is also designed for exactly two drives, and as such is limited in how far it can expand storage.

 


RAID-10

Addressing the lack of expandability of RAID-1, RAID-10 combines the approaches of RAID-1 and RAID-0. First, disks are mirrored into pairs, providing the high redundancy and high read performance of RAID-1. Then all of these pairs are striped using RAID-0, allowing it to expand across more than two drives, and also improving write performance. This is considered the gold standard for high-performance, high-reliability, high-capacity systems, although like RAID-1 it's very expensive to implement as half of the capacity is used for the mirror. RAID-10 requires an even number of disks, and at least 4 disks in the RAID set (even since it's built from RAID-1 pairs of disks, and minimum 4 since two pairs are the minimum that can be striped).
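The pair-then-stripe layout can be sketched similarly (a toy illustration; the drive numbering scheme is an assumption, and real controllers vary):

```python
# Toy illustration of RAID-10: drives form mirror pairs (0,1), (2,3), ...,
# and logical blocks are striped round-robin across the pairs.

def raid10_targets(block, num_drives):
    """Return the two physical drives a logical block is written to."""
    if num_drives < 4 or num_drives % 2 != 0:
        raise ValueError("RAID-10 needs an even number of drives, minimum 4")
    num_pairs = num_drives // 2
    pair = block % num_pairs
    return (2 * pair, 2 * pair + 1)

# Six drives form three mirror pairs; each write fans out to both
# halves of its pair:
targets = [raid10_targets(b, 6) for b in range(4)]
# [(0, 1), (2, 3), (4, 5), (0, 1)]
```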

 


RAID-5

RAID-5 introduces the concept of parity to provide redundancy. Rather than write a complete duplicate of data to a second drive, it runs a fast algorithm across the same block on several disks, and mathematically creates a new block based on them. Much as you can look at "5 + 2 + x = 10" and determine that x is "3", when a drive fails a RAID controller can look at the remaining disks and reconstruct the missing data bit-by-bit. RAID-5 supports single parity, so any drive in the array can fail and it can still function and rebuild the data. Parity information is spread across the drive set to even out access patterns and improve performance. This provides high storage capacity since not as many drives are devoted to redundancy, decent robustness as any drive can fail, and middling performance. Performance is impacted by the calculations necessary to read and write the data, and reduces the parallelism that can be employed compared to RAID-0 and RAID-1.
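The XOR math behind single parity can be demonstrated directly (a toy sketch operating on byte strings, not real controller code):

```python
# Toy sketch of RAID-5-style single parity: XOR all data blocks in a stripe
# to produce the parity block; XOR the survivors with parity to rebuild.

def xor_parity(blocks):
    """Compute the parity block for a stripe of equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # three "data drives"
parity = xor_parity(stripe)            # the "parity drive"

# Simulate losing the second drive; XOR the survivors with the parity
# to reconstruct the missing block bit-by-bit:
rebuilt = xor_parity([stripe[0], stripe[2], parity])
assert rebuilt == b"BBBB"
```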

 


RAID-6

While RAID-5 protects against a single drive failing and can rebuild it, RAID-6 uses two different sets of parity calculations and can rebuild an array even with two simultaneous failures (what's known as N+2 redundancy). RAID-6 requires at least 4 drives - at least two for data, and two for the parity information. RAID-6 makes most sense when used with more disks - up to a point. While the efficiency rises with the number of disks, so does the chance of multiple failure and the complexity of rebuilding the disk set. RAID-6 has a good balance of capacity, redundancy, and performance, which makes it the workhorse of high-capacity storage.
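Why do two independent parity calculations let you survive two failures? A rough analogy with ordinary integers (real RAID-6 uses Galois-field arithmetic, not plain sums; this only shows the two-equations-two-unknowns idea):

```python
# Toy analogy for dual parity: two independent "parity" equations over the
# data let us solve for two unknowns when two drives are lost at once.

data = [5, 2, 7]                                   # three "data drives"
P = sum(data)                                      # first parity
Q = sum((i + 1) * d for i, d in enumerate(data))   # second, weighted parity

# Drives 0 and 2 fail; only drive 1 and the two parities survive.
known = data[1]
#   x + known + z     == P   (from the first parity)
#   x + 2*known + 3*z == Q   (from the second parity)
z = (Q - 2 * known - (P - known)) // 2
x = P - known - z
assert (x, z) == (5, 7)   # both lost values recovered
```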

 


Note: Missing something?

We skipped over RAID levels 2 through 4 - these were variants on striping and parity-based redundancy, but were largely superseded by RAID-5. They aren't supported by modern RAID controllers, so it's highly unlikely you'll see them in practice.

Rebuilding

When any redundant RAID system has a drive fail, it has to reconstruct the data once the failed drive is replaced. As drives get larger, this takes longer. While the drive is rebuilding, the array isn't at full redundancy; it's described as being degraded. While an array is in a degraded state, it typically has reduced or no redundancy, performance drops sharply, and the remaining drives are under stress as they provide the data for the rebuild. Mirroring systems aren't affected as badly, since only a single disk is required to rebuild the mirrored drive, but on parity-based systems like RAID-5 and RAID-6, performance typically drops significantly as data must be reconstructed on the fly via parity calculations.

As drive sizes increased, RAID-5 rebuild times followed, taking hours or even days to reconstruct the data from a lost drive. During this time, disks run full-tilt as the RAID controller recreates the missing drive (compare to RAID-1 or RAID-10, which only need to read a single drive to rebuild). This led to an issue of similar failure rates between drives in the same manufacturing lot. Typically the drives for a RAID set are bought together and come from the same manufacturing lot, which means they have similar failure characteristics. Since all of the drives in the array operate as a set, once one drive fails it's not uncommon for another marginal drive in the same set to follow. RAID-6 was created to address the shortcomings of RAID-5 as drive capacity grew, and protects against this by allowing for two simultaneous failures.

Wrapping Up

  • RAID-10 has the best performance and redundancy characteristics, but halves the usable capacity, which can make it expensive to deploy at scale. Sometimes this will be referred to as RAID-1, even though technically RAID-1 refers to only two disks. Provides 2N redundancy, wherein up to half the disks could fail (although you'd have to be lucky as to precisely which disks). RAID-1 and 10 are useful when you need very high performance and reliability, and are commonly seen on OS/boot drives and high-performance application servers.
  • RAID-6 is typically used when a large amount of storage is required and there are a large number of disks in play. Provides N+2 redundancy. RAID-6 is commonly used inside large-scale storage products from EMC, IBM, and others for its high capacity and fault tolerance, although it is frequently supplemented by caching or SSD to mask performance issues. RAID-5 and 6 are useful when you have a large amount of data that needs to be redundant. Commonly seen on databases and large storage shelves.
  • RAID-0 is seldom seen in the enterprise due to complete lack of redundancy (a single drive failure will lose the whole array), but in specific cases you may want to consider it. For instance, it can be useful for caching servers, where the stored data is unimportant, trivially replaceable, and high performance is critical.
RAID Level   Redundancy   Capacity             Read Performance   Write Performance
0            None         All drives           Excellent          Excellent
1 / 10       2N           50% of all drives    Excellent          Decent
5            N+1          All but one drive    Decent             Variable
6            N+2          All but two drives   Decent             Variable
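The capacity column reduces to simple arithmetic; here it is sketched as a helper, assuming identical drives:

```python
# Usable capacity per RAID level, assuming num_drives identical drives.

def usable_capacity(level, num_drives, drive_tb):
    if level == 0:
        return num_drives * drive_tb        # all drives, no redundancy
    if level in (1, 10):
        return num_drives * drive_tb / 2    # half lost to mirroring
    if level == 5:
        return (num_drives - 1) * drive_tb  # one drive's worth of parity
    if level == 6:
        return (num_drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 4 TB drives: RAID-6 yields 24 TB usable with N+2 redundancy,
# while RAID-10 yields only 16 TB for the same hardware.
```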

 

If you have any questions or want to discuss this further, contact us.


Topics: Products and Services

Preventing Data Loss with RAID

The concepts of scalability and redundancy go hand-in-hand. Building an environment that is capable of scaling out offers the ability to fine-tune how much failure you can withstand. There is a dizzying array of approaches to redundancy—power, network, storage, server, data, backup and replication, disaster recovery, load balancing, site redundancy—but for today we're going to hit the basics of one of the most fundamental: storage. More specifically, RAID—a Redundant Array of Independent Disks.

Why RAID?

RAID provides a lot of bang for the buck. For what is in most cases a small investment compared to other options, you can provide significant protection against one of the most common forms of failure.

If your power or network fails without redundancy, your site is down. Outages like this can be extremely expensive, but they're also typically fast to recover from. Entire servers and sites are important to consider, but require significant planning to address fully. Backup and replication are important, but again bring complexity and can have extended restoration times.

However, if you have a disk that fails without RAID, you've just lost data.

Replacing that drive won't bring the data back; you'll need backups (you do have tested, up-to-date backups, right?), you'll need a plan to rebuild the OS and restore those backups, and you'll face extended downtime during the rebuild. A non-redundant drive loss in turn likely means a server outage, as a server without its data or OS is a very expensive brick. RAID is one of the most cost-effective improvements you can make to an environment: for the cost of disks and an adapter (typically a fraction of the cost of a server), you can protect against failure, add capacity, and improve performance.

RAID isn't a panacea - there's still a lot that can go wrong that it won't protect you from. Even just within the realm of storage, human error, application bugs, or filesystem corruption can still make your disk array useless (RAID is not a replacement for backups). Much like power conditioning, it won't replace the need for backup systems, but it can lessen your reliance on them and exposure in case of failure. There are also some applications that don't lend themselves to RAID by their nature - Hadoop, ZFS, and other self-managing systems typically need direct disk access and provide their own redundancy and scalability features.

Capacity, Redundancy, Performance

Single disks suffer from many problems:

  • They're low in capacity, maxing out at a "mere" few terabytes.
  • They're failure-prone, especially traditional spinning hard disks with their tight tolerances and moving parts, but even solid state drives fail over time.
  • Single disks can also only provide so much performance—hard drives have been a known bottleneck for many years, but SSDs can be constrained by their interface and internal design, and still have upper limits that are easy to hit with modern applications.

All of these limitations are unacceptable for critical business infrastructure.

RAID allows us to address these concerns by spreading storage across a number of disks, harnessing their combined capacity and performance while enabling redundancy. Multiple disks are teamed together, providing features greater than the sum of their parts, though with usable storage less than the sum of their raw capacity, since some is devoted to redundancy. Redundancy is provided via a number of algorithms and methods, but the ultimate goal is to write additional data that allows reconstructing any data lost due to a drive failure. Different layouts, or RAID levels, allow one to optimize storage for a specific purpose. There are several different RAID levels in use today, the most common being 0, 1, 5, 6, and 10.

Summary

In closing, RAID provides a lot of benefits for a relatively small investment. You're protected against the most common type of failure, and one that has the worst consequences—loss of data. Not only does it protect data, it can enhance the scalability and performance of your underlying storage. There are many different ways to deploy RAID, each with its own set of tradeoffs, which we'll examine next time. Stay tuned!

Topics: Redundancy Products and Services

Which Cloud Is Right For You?

The cloud is easy when you know what you need.

Select how much CPU, memory, storage, and bandwidth you want. Click. Order. Compute. Something like that.

If you’re not sure what you need, however, the ease with which you can purchase and get locked into a cloud can quickly become a nightmare. Things get especially complicated when you engineer your applications and infrastructure for a particular framework that you’ll one day outgrow.

You don’t need me to tell you that choosing a cloud is a big decision.

So, what does this mean for you?

Let’s start with a look at Private Clouds.

Private Cloud Uses

When we look out across our customer base, there are five conditions that tend to drive a private cloud decision:

  1. Applications requiring high SLAs. In these situations, organizations require systems that are completely self-contained, capable of achieving 100% uptime.
  2. Applications with clearly-defined compliance requirements. In these situations, organizations have very specific compliance requirements around data and/or business processes that must be met. A self-contained environment provides complete control over the variables that impact these objectives.
  3. Back-office/legacy applications. In these situations, an organization has an application, or set of applications, that are optimized for very specific infrastructure stacks. In many cases these are heavy applications with significant dependencies among multiple services and databases.
  4. Customer-facing applications. In these situations, the availability of the application takes precedence over everything else. Similar to applications requiring high SLAs, the difference here is that the requirement is purely external-facing and typically has a very high revenue per minute of uptime dependency.
  5. Noise. Many organizations simply don’t want noisy neighbors. They want 100% control over their environment and all critical dependencies within their purview.

Now let’s look at shared clouds.

Shared Cloud Uses

We see organizations using shared clouds for a different, slightly narrower, set of reasons:

  1. Testing and development. In these situations, organizations are looking for environments that can be quickly and easily defined, altered, and executed to meet rapidly changing criteria.
  2. Short-term applications. In these situations, seasonality or other factors such as an ad campaign, etc. require an application to be available for only a short period of time. Shared clouds enable the rapid deployment, scale, and tear-down of infrastructure to support these requirements.
  3. Applications with light compliance requirements. The exact opposite of the high compliance requirements noted above.
  4. Cost vs. Performance. This is as simple as it sounds. Applications where the cost of providing them is more important than a mission-critical level of system performance are ideal for multi-tenant environments.

What's Your Cloud Use Case?

For some organizations, there will be one right answer. We have a number of customers who are very comfortable with their private cloud’s performance, compliance, and control.

Conversely, we have a number of customers who are comfortable in a shared cloud environment. They take advantage of our multi-tenant Enterprise Cloud’s cost-effectiveness to meet their application’s requirements.

For other organizations, there isn’t one right answer. Certain applications may require private configurations, while other applications will be perfectly fine in shared environments.

If you’re not sure which cloud is right for you, ask. That’s what we’re here for. We'll help you make the right decision no matter which cloud platform or provider you choose.

Topics: Products and Services

Ask 10 People What The Cloud Is, Get 11 Answers

I asked a number of people:

“What is the cloud?”

There were three trends present in all of their responses:

  • Location: The cloud is not confined to my office. It is not confined to my premises.
  • Scaling Up: It gives me more of everything when I need it.
  • Scaling Down: It gives me less of everything when I don’t need it.

 

Even when talking about something as mainstream as the cloud, there is a tendency for people, especially providers, to skip over their own definition of the cloud. Instead of starting at the beginning, they make assumptions and begin the conversation immediately with the capabilities of a solution. For this reason, we make it a point to explain our definition of cloud at the beginning of every conversation. This may seem like overkill, but we believe it is important. We want to be sure everyone is on the same page.

With that out of the way, let’s get something accomplished here.

Defining Clouds at ServerCentral

When we speak about clouds at ServerCentral, we’re speaking about a collection of virtualized compute, memory, storage, network and security resources that can be applied to any type of application. These are distinct and secure resources that are provided from one of six (6) availability zones across the U.S., Asia and Europe.

What Is A Private Cloud?

When we speak about a Private Cloud, we are speaking about distinct and secure compute, memory, storage, network and security resources that are utilized by one (and only one) organization. These resources are configured to meet the specific needs of this one organization. For Private Clouds, ServerCentral manages the physical infrastructure and the virtualization layer empowering our customers to begin their involvement at the OS and application levels.

We have one goal for our Private Clouds:

Deliver an unshared, virtualized infrastructure that is tuned and managed to the customer’s exact specifications.

What Is ServerCentral’s Enterprise Cloud?

ServerCentral’s Enterprise Cloud is a set of distinct and secure compute, memory, storage, network, and security resources utilized by multiple organizations.

We have three goals for our Enterprise Cloud:

  1. Transparency: We accomplish this by offering pre-defined resource pools that help you quickly and cost-effectively deploy the right-sized cloud for your business requirements. You’ll always know what you’re paying.
  2. Performance: We strive to provide our customers with the best performing, easiest to use multi-tenant cloud infrastructure available by delivering fully managed VMware vSphere environments on unmatched physical infrastructure.
  3. Support: All Enterprise Cloud customers have access to our outstanding support organization. You’ve heard us say this time and time again, but it’s true - there are no phone trees or customer service reps. Should you have a question or need help, you’ll always speak with a real person with system-administration-level experience.

 

What Powers ServerCentral’s Clouds?

  • Carrier-grade networking hardware
  • Top-of-the-line compute and memory
  • High-performance, multi-site redundant SANs

Customers receive the same resource quality and attention to detail with both Private and Enterprise Clouds.

What’s The Difference?

You will not receive lower-quality service by selecting a shared Enterprise Cloud over a Private Cloud. Both solutions are built from the ground up to support enterprise-class applications. Sure, the SLAs are different and top-end performance will vary, but they should.

Our job is to provide a cloud that can meet or exceed YOUR operational/financial/compliance requirements.

Which Cloud Is Right for You?

As we say time and time again, the right solution is your solution. Share what you want to accomplish with us and we’ll make it happen.

If you’d like to discuss how to apply these solutions to your business or applications, let us know.

We’ll dive into more detail on this topic in our next post.

Topics: Products and Services

What Is Object Storage?

Object storage uses inexpensive commodity hardware to provide petabytes of resilient space.

To understand object storage, we first need to understand how it differs from traditional block and file storage.

Block storage is the base for all storage.

This storage can be a single disk found in a laptop, a RAID array in a database server, or an iSCSI volume stored on a central SAN. Block storage devices are accessed by the operating system itself and allow granular data control through a filesystem at the byte level.

Your database server utilizes block storage to allow it to make small but rapid changes to large files each time data is inserted into your database.

File storage builds on block storage by exposing data to network computers.

Multiple computers can read and write files to a common location. NFS and CIFS are the widely used standards for this type of storage. Both allow for file locking and access control lists.

In both block and file storage, your operating system (Windows, Mac OS X, Linux) handles interfacing with the storage. Applications and other software do not need any special modifications and can access the storage transparently. This allows you to create directories and to arrange your data in a hierarchical fashion.

Object storage abstracts away the building blocks of storage.

When you send an object to an object store, a number of differences between block/file storage and object storage are highlighted:

  1. Object storage uses a globally unique identifier for each object, rather than a filesystem path. This allows for access to the data without knowing which server the object is on, or even which data center the server is in.
  2. Metadata for the object can be stored. Since this metadata is arbitrary, it can be used to store access control lists, tags for pictures, or any other text used to define the object.

Access to object storage is done at the application level through APIs. These APIs allow a specific program to interact with an object storage platform through creating and deleting an object (file), updating the object with a new copy, and accessing the file (downloading it).
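That create/update/access/delete flow can be sketched as a minimal in-memory store. The class and method names below are illustrative only and don't correspond to any particular provider's API:

```python
import uuid

class ToyObjectStore:
    """Minimal in-memory sketch of an object store: objects are addressed
    by a globally unique identifier and carry arbitrary metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, data, metadata=None):
        """Create an object and return its globally unique identifier."""
        object_id = str(uuid.uuid4())
        self._objects[object_id] = {"data": data, "metadata": metadata or {}}
        return object_id

    def get(self, object_id):
        """Download the object's contents."""
        return self._objects[object_id]["data"]

    def head(self, object_id):
        """Fetch only the metadata, without transferring the data."""
        return self._objects[object_id]["metadata"]

    def delete(self, object_id):
        del self._objects[object_id]

store = ToyObjectStore()
oid = store.put(b"...image bytes...", metadata={"tag": "vacation"})
assert store.head(oid)["tag"] == "vacation"
```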

A major draw to object storage is durability. Data ingested by an object store is typically replicated across multiple physical disks, servers, and even data centers. This allows for huge resiliency in the face of drive failures and server failures because your data exists in multiple physical locations.

In addition to durability, object storage enables easy storage pool growth as your needs change. You simply add more disks and servers to the pools. This is unlike traditional filesystems within a single namespace, which can prove more difficult to scale under similar situations.

Use Cases

Object storage is ideal for unstructured data. Typical use cases include:

  • Backups
  • Log files
  • Media (pictures, music, videos)
  • Static web content

Block storage is better suited to structured data. Typical use cases include:

  • Databases
  • Files requiring frequent small, random changes

File storage is best for locally shared files. Typical use cases include:

  • Archival
  • Content repositories

Conclusion

While file and block storage are best for performance, meticulous metadata and limitless scalability make object storage useful all on its own.

Feel free to email sales@servercentral.com if you have questions.

Topics: Products and Services

Fault Tolerance in vSphere 6

In previous versions of VMware vSphere, real-time fault tolerance (FT) was only possible for single-vCPU VMs. The problem was that most modern apps need more than one CPU. That’s why it's no surprise that virtualization admins everywhere are thrilled about vSphere 6 supporting up to 4 virtual CPUs in a fault-tolerant configuration.

Here's what's changing in vSphere 6:

A comparison of vSphere 6.0

SMP Fault Tolerance

With vSphere 6, the VMware team has redesigned FT from the ground up. Symmetric Multiprocessing Fault Tolerance (SMP-FT) is built on a new underlying feature called Fast Checkpointing. Instead of the traditional vLockstep technology, where every command executed on the primary VM is copied over the logging network to the secondary VM, Fast Checkpointing uses XvMotion to send a continuous stream of snapshots that are applied to the secondary VM. This is a major improvement.
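The difference can be illustrated with a conceptual toy: instead of replaying every instruction on the secondary (vLockstep), the primary periodically ships a delta of changed state (Fast Checkpointing). All class and method names here are illustrative, not VMware internals.

```python
# Conceptual toy of checkpoint-based replication: the primary tracks
# what changed since the last checkpoint and ships only that delta.

class Primary:
    def __init__(self):
        self.state = {}
        self.dirty = set()  # pages touched since the last checkpoint

    def write(self, page, value):
        self.state[page] = value
        self.dirty.add(page)

    def checkpoint(self):
        """Produce a delta of pages changed since the last checkpoint."""
        delta = {p: self.state[p] for p in self.dirty}
        self.dirty.clear()
        return delta

class Secondary:
    def __init__(self):
        self.state = {}

    def apply(self, delta):
        self.state.update(delta)

primary, secondary = Primary(), Secondary()
primary.write("page1", "A")
primary.write("page2", "B")
secondary.apply(primary.checkpoint())  # a snapshot stream, not an instruction replay
print(secondary.state == primary.state)  # True: secondary is failover-ready
```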

Fast Checkpointing

The Advantages

  • SMP-FT supports up to 4 vCPUs per VM. This means that all but the most intensive applications can be deployed in an FT environment.
  • Primary and secondary VMs no longer need to share storage. In vSphere 6, the secondary VM does not use the same VMDK as the primary. Not only does this give administrators more flexibility in placing secondary VMs (for example, on hosts that don’t share storage), it also provides VMDK redundancy for fault-tolerant storage.
  • VMDKs can now be thin provisioned or lazy-zeroed thick provisioned. While this may seem like a small detail, the important takeaway is that you can now enable FT on a VM that wasn’t originally built for an FT environment. This covers situations where administrators need FT in a pinch to solve a specific problem, or where a previously low-priority VM suddenly becomes much higher priority.
  • In the same vein, FT can now be enabled on a running VM (hot-add). This lets you quickly make a VM fault tolerant without first requiring downtime (which would be the exact opposite of what you’re trying to achieve with FT in the first place!).
  • Finally, FT-enabled VMs can now be snapshotted non-disruptively with the vStorage APIs for Data Protection (VADP). While FT is an excellent availability and disaster avoidance technology, it only protects at the hardware level. Software crashes, data corruption, and other OS-level issues are not protected by FT, as the “problem” would simply be copied over to the secondary VM. With vSphere 6, you no longer need to rely on in-guest backup solutions for your FT-enabled VMs.

The “Gotchas”

  • 4-vCPU FT VMs generate a significant amount of network traffic. Make sure you have a 10-Gbps network.
  • The new per-host maximum is 4 FT VMs or 8 FT-protected vCPUs, whichever limit is reached first. This count includes both primary and secondary VMs, regardless of host performance and size.
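The per-host limits above are easy to encode as a placement pre-check. This is our own illustrative helper, not a VMware API:

```python
# Encodes the per-host FT limits quoted above: at most 4 FT VMs or
# 8 FT-protected vCPUs, with primaries and secondaries both counting.

def can_add_ft_vm(host_ft_vcpu_counts, new_vcpus, max_vms=4, max_vcpus=8):
    """host_ft_vcpu_counts: vCPU counts of FT VMs (primary or secondary)
    already on the host. Returns True if one more FT VM fits."""
    if len(host_ft_vcpu_counts) + 1 > max_vms:
        return False  # VM-count limit reached first
    return sum(host_ft_vcpu_counts) + new_vcpus <= max_vcpus

print(can_add_ft_vm([2, 2], 4))  # True: 3 VMs, 8 vCPUs total
print(can_add_ft_vm([4, 4], 1))  # False: would exceed 8 FT vCPUs
```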

At ServerCentral, we’re incredibly excited to begin using this new version of FT in current and future vSphere environments. Almost any application with high performance requirements, especially those that are difficult or impossible to cluster, can now be protected at the virtualization layer. The added benefit of redundant VMDKs also resolves one of the main issues that even single-vCPU FT VMs still faced at the hardware level.

We're closer to running 100% virtualized environments than ever before.

If you’re interested in learning more about vSphere 6 or how we’ll use the new capabilities, please let us know.

 

Topics: Products and Services

vSphere 6.0 Management – What’s Different?

VMware vCenter Server

VMware vCenter Server provides a centralized platform for managing VMware vSphere environments. With the new release of vSphere 6, managing vSphere becomes a whole lot easier.

vCenter Server Appliance (VCSA)

The most significant change in the previous release of vSphere was the introduction of the production-ready vCenter Server Appliance (VCSA), a lightweight, all-in-one deployment model. Unlike the traditional Windows-based setup, it did not require installing four or five separate services or, for large deployments, a separate MSSQL database.

Instead, the VCSA used SUSE Linux and an embedded database to deliver vCenter as a quick, simple installation, with the option of an external Oracle database for scalability. Unfortunately, it came with several restrictions, including the inability to provide a single pane of glass via Linked Mode in vCenter, so it was not the optimal platform for service providers that needed such a configuration. With vSphere 6, VMware set out to improve vCenter and change the way we manage our vSphere environments.

The Good

vSphere 6 introduces some significant changes to vCenter Server and vSphere management:

  • Simplified deployment model. All services are installed as one “Platform Services Controller.” The PSC can be installed on the same server as the vCenter Server instance or it can be installed on a separate box.
  • PSCs are now designed to replicate natively between each other.
  • VCSA and traditional Windows-based deployments now have the same feature set and are interoperable in Linked Mode.
  • VCSA can now scale to the same size and numbers as Windows-based deployments, even with the embedded database.
  • vSphere Web Client has been optimized to perform logins up to 13x faster and menu selections/changes up to 50% faster.
  • Long-distance vMotion is now supported at up to 100 ms of latency between endpoints, including migrations between different vCenter deployments.
  • A multi-site content library has been introduced so resources (templates, etc.) can be shared across multiple vCenter instances.

The Bad

  • VMware Update Manager is still a separate installation that requires a Windows server.
  • This is the last iteration of vSphere that will include the desktop-based vSphere Client.
  • The desktop client can manage VMs at Hardware Version 10 and 11, but has read-only access to their new features.

What This Means for Us

The changes made with vSphere 6 and vCenter Server offer a more scalable vSphere management solution. Previous generations required special services such as vCenter Server Heartbeat to provide a highly available management system. With the simplified deployment, native replication, and the ability to use Linked Mode between VCSA and traditional vCenter Server instances, we will be able to offer a more robust and centralized management platform for VMware while maintaining a high level of efficiency and scale. This increased efficiency and scale translates directly into improved system performance and availability.

Follow-up

If you’re interested in learning how we’re working with the new capabilities in this release, please let us know.

 

Topics: Products and Services