<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1078844192138377&amp;ev=PageView&amp;noscript=1">


7 PowerCLI Learning and Coding Tools I Learned about at VMworld 2016 US

With VMworld 2016 US coming to a close, there’s no time like the present to review one of the major tracks from the conference. Up first, one of the main pillars of VMware automation and scripting: PowerCLI.

A PowerCLI primer:

PowerCLI is a VMware-focused extension to Microsoft PowerShell. While on the surface it looks and feels very much like Windows Command Prompt (cmd.exe), spend any time in PowerShell or PowerCLI and you'll notice that it is an altogether different beast. PowerCLI is an object-oriented scripting environment designed to take the difficulty and tedium out of your most commonly repeated tasks. It uses “cmdlets” to issue commands in the “verb-noun” syntax. For example, “Get-VM” returns all of the virtual machines in an environment, whereas “Set-VMHostNetworkAdapter” makes changes to the NIC of a VMware ESXi host. 

PowerCLI has been used for a multitude of different purposes, from simple scripts to entire deployments and configuration changes. It can even be used for running scheduled configuration checks and automatic remediation to make sure hosts, switches, VMs, and other objects are exactly where they should be. This is similar to Host Profiles, but is executed at a scripting level as opposed to being a vCenter feature. This means you won't need Enterprise Plus licensing, saving you some serious cash on your VMware deployment. 
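The scheduled check-and-remediate pattern described above can be sketched in a few lines. This is illustrative Python rather than PowerCLI, and the host data and settings below are invented purely for the example:

```python
# Sketch of a scheduled check-and-remediate loop (illustrative Python,
# not PowerCLI; the hosts and settings below are made up).

DESIRED = {"ntp_server": "pool.ntp.org", "ssh_enabled": False}

def find_drift(host_config, desired=DESIRED):
    """Return the settings that differ from the desired baseline."""
    return {k: v for k, v in desired.items() if host_config.get(k) != v}

def remediate(host_config, drift):
    """Push the desired values back onto the host's configuration."""
    host_config.update(drift)

hosts = {
    "esx01": {"ntp_server": "pool.ntp.org", "ssh_enabled": False},
    "esx02": {"ntp_server": "10.0.0.1", "ssh_enabled": True},
}

for name, config in hosts.items():
    drift = find_drift(config)
    if drift:
        print(f"{name}: remediating {sorted(drift)}")
        remediate(config, drift)
```

A real PowerCLI script would gather the actual values with cmdlets and apply fixes the same way, but the compare-then-correct loop is the heart of it.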

Whether you’re just getting started with PowerCLI or are an experienced veteran, I’d like to share some of the resources from VMworld sessions that I’ve compiled for our team here at ServerCentral. Hopefully they can help you succeed with this incredibly powerful tool, too. 

Session 1: Getting Started with PowerShell and PowerCLI for Your VMware Environment [INF8038R]

View Session Video | Session Info

Chris Wahl (wahlnetwork.com) and Kyle Ruddy (thatcouldbeaproblem.com) delivered an excellent introductory session on PowerCLI. It was great in that it not only covered basic cmdlets and ways to get around in PowerCLI, but also included a couple of gems among the add-ons that are common for PowerCLI admins.

Tip: If you’re just starting out with PowerCLI, head over to the excellent VMware Hands-On Labs to get your feet wet. Go to http://labs.hol.vmware.com/HOL/catalogs/catalog/123 and search for HOL-SDC-1607, From Beginner to Advanced Features with PowerCLI. You'll need to create an account if you don't have one already, but it's worthwhile. Here's the lab documentation. 

Tools and resources from this session:

Session 2: Enforcing a vSphere Cluster Design with PowerCLI Automation [INF8036]

View Session Video | Session Info

If you’re past the basics and your environment is all set up, then a great practical application session was Enforcing a vSphere Cluster Design with PowerCLI Automation. Presented by Duncan Epping (yellow-bricks.com) and Chris Wahl (wahlnetwork.com), this session dives into a great example of automated cluster checking and remediation. Furthermore, they dig into the vSphere API, which can be called by PowerCLI (typically via the “get-view” cmdlet), for advanced functionality not found in the top-level PowerCLI cmdlets. 

Tools and resources from this session:

But wait, there's more!

Example scripts and pre-built tools:

Session 3: The Power Hour, Deep Dive, DevOps, and New Features of PowerCLI [INF8092]

View Session Video | Session Info

If you’re a scripting veteran, or if you’re just curious and want to see what PowerCLI is really capable of, there was an incredible deep-dive session by two masters of PowerCLI: Alan Renouf (http://www.virtu-al.net) and Luc Dekens (http://www.lucd.info). In 60 minutes, Alan and Luc go heavily into the API, covering topics like script optimization, speeding up report generation, and a lot more. Perhaps most importantly, Alan discusses a major shift in PowerShell itself: the transition from Windows-only to Linux and Mac OSs. This is a huge jump for VMware as a company, which is aggressively expanding beyond its traditional, Microsoft-only roots (as evidenced by its move away from the Windows-only C# client to the OS-agnostic Web Client and its move to the vCenter Server Appliance).

FYI, this session is going to require a few viewings. There's just that much good stuff in there. Still, it's a great resource to see some atypical examples of PowerCLI usage. 

Now get out and start scripting!

It's amazing how much time you'll save once common tasks can be automated. Stay tuned for more VMworld recaps, including a post on the future of the vSphere Client for those of you who hate CLIs and love your GUI (and if that’s the case, why are you still reading this article?!).

Topics: Tips

Get an in-browser remote desktop with Mojolicious and noVNC

Note: This is an excerpt from my blog post originally published on PerlTricks.com

While SSH is a staple of remote system administration, sometimes only a GUI will do. Perhaps the remote system doesn’t have a terminal environment to connect to; perhaps the target application doesn’t present an adequate command line interface; perhaps there is an existing GUI session you need to interact with. There can be all kinds of reasons.

For this purpose, a generic type of remote desktop service called VNC is commonly used. The servers are easy to install and run on seemingly any platform, and lots of hardware has a VNC server embedded for remote administration. Clients are similarly easy to use, but when building a management console for the web, wouldn’t it be nice to have the console view right in your browser?

Luckily, there is a pure JavaScript VNC client called noVNC.


noVNC speaks the VNC protocol over WebSockets, which is convenient for browsers but isn’t supported by most VNC servers. To overcome this problem, the noVNC project provides a command-line application called Websockify.

Websockify is a relay that connects to a TCP endpoint (the VNC server) and exposes the traffic as a WebSocket stream that a browser client can listen on. While this does fix the problem, it isn’t an elegant solution: each VNC server needs its own instance of Websockify, each requiring a separate port. Further, you either need to leave these relays running at all times in case a web client connects, or else spawn them on demand and clean them up later.
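Conceptually, the relay is just a bidirectional byte pump between two connections. Here's a minimal sketch of that plumbing, in Python with asyncio purely for illustration (real Websockify also translates the WebSocket framing on the browser side, which is omitted here):

```python
import asyncio

# Minimal bidirectional TCP relay, the core of what Websockify does.
# Illustrative only: both sides here are plain TCP.

async def pump(reader, writer):
    # Copy bytes in one direction until the peer hangs up.
    while data := await reader.read(4096):
        writer.write(data)
        await writer.drain()
    writer.close()

async def relay(client_reader, client_writer, target_host, target_port):
    # Connect to the backend (the VNC server, in Websockify's case)
    # and shuttle traffic both ways until either side closes.
    backend_reader, backend_writer = await asyncio.open_connection(
        target_host, target_port)
    await asyncio.gather(
        pump(client_reader, backend_writer),
        pump(backend_reader, client_writer),
    )
```

Each connected browser needs one of these relays running, which is exactly the management burden described above.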

Mojolicious to the Rescue

Mojolicious has a built-in event-based TCP client and native WebSocket handling. If you are already serving your site with Mojolicious, why not let it do the TCP/WebSocket relay work too? Even if you aren’t, the on-demand nature of the solution I’m going to show would make it useful as a stand-alone app for this single purpose in place of the websockify application.

Here is a Mojolicious::Lite application which serves the noVNC client when you request a URL like /host:port. When the page loads, the client requests the WebSocket route at /proxy?target=host:port, which establishes the bridge. This example is bundled with my forthcoming wrapper module with a working name of Mojo::Websockify. The code is remarkably simple:

use Mojolicious::Lite;

use Mojo::IOLoop;

websocket '/proxy' => sub {
  my $c = shift;
  $c->render_later->on(finish => sub { warn 'websocket closing' });

  my $tx = $c->tx;

  my $host = $c->param('target') || '';
  my $port = $host =~ s{:(\d+)$}{} ? $1 : 5901;

  Mojo::IOLoop->client(address => $host, port => $port, sub {
    my ($loop, $err, $tcp) = @_;

    return $tx->finish(4500, "TCP connection error: $err") if $err;
    $tcp->on(error => sub { $tx->finish(4500, "TCP error: $_[1]") });

    # relay bytes from the VNC server to the websocket
    $tcp->on(read => sub {
      my ($tcp, $bytes) = @_;
      $tx->send({binary => $bytes});
    });

    # relay bytes from the websocket to the VNC server
    $tx->on(binary => sub {
      my ($tx, $bytes) = @_;
      $tcp->write($bytes);
    });

    # tear down the TCP connection when the websocket closes
    $tx->on(finish => sub {
      $tcp->close;
      undef $tcp;
      undef $tx;
    });
  });
};

get '/*target' => sub {
  my $c = shift;
  my $target = $c->stash('target');
  my $url = $c->url_for('proxy')->query(target => $target);
  $url->path->leading_slash(0); # novnc assumes no leading slash :(
  $c->render(
    vnc  =>
    base => $c->tx->req->url->to_abs,
    path => $url,
  );
};

app->start;

Read the rest on PerlTricks.com.

Topics: Tips

The 7 Biggest Data Center Migration Mistakes (And How to Avoid Them)


Slow. Pain. Ouch. Nope. (Not the words you were thinking?) This is for good reason.

In helping hundreds of companies migrate everything from single applications to full data centers, we’ve identified seven common mistakes people make during data center migrations, and more importantly, how to avoid them. 

Read all seven mistakes on datacenterknowledge.com.

Topics: Data Center Tips

How VMware Virtual SAN 6.1 Can Support Your Remote Applications And PoPs

With the ecommerce industry growing each year, international business is no longer an enterprise-only sport. With small and midsize companies entering the global footprint game, their IT infrastructure needs to follow suit as they seek to engage and keep customers around the world.

How Small and Midsize Companies Can Expand Globally

The issue many companies face, however, is in providing a redundant, reliable solution to house servers in their secondary locations. These locations are often considerably smaller than a main Point of Presence (PoP) and bring with them unwanted latency. Add to this the need for redundancy at the storage layer, which often revolves around NAS or SAN devices, and you’re looking at a potentially large upfront cost.

With VMware’s latest release of its VSAN platform, businesses now have a solid foundation to support their production-level remote applications, without the large CapEx cost of multiple servers and a SAN backend. 

VSAN is a hyper-converged infrastructure platform that allows professionals to use the storage inside ESXi hosts as shared storage across the cluster. As a quick refresher, it works by presenting all storage inside two or more hypervisors as a single datastore that all hypervisors can mount. VSAN also stores copies of all data in multiple locations, providing redundancy in the event of a total hypervisor failure. In more complex setups, VSAN can also be used with multiple fault domains, which can support the failure of entire cabinets or even entire sites with no loss of data availability. In short, many of the benefits that have traditionally been the realm of dedicated SANs are now available for a much lower cost (especially when deployed as a managed service).

While VSAN has always been able to scale up into large clusters as primary storage for central data centers, VSAN 6.1 offers a couple of new features that allow it to also scale down to support a remote branch office or a small/emerging market PoP.

What You Couldn’t Do Before Virtual SAN 6.1

2-Node VSAN
Perhaps the biggest addition making this functionality possible is the option to deploy a 2-node VSAN cluster. With this new feature, VSAN can now scale down in parity with other important VMware technologies such as vMotion, HA, and DRS. In older versions of VSAN, 3-node clusters were the absolute minimum. This added size, complexity, and (most importantly) cost to a remote solution, which typically prevented VSAN’s use in these scenarios.

While a 2-node VSAN requires a third virtual appliance to act as a witness in another data center (which prevents the possibility of a split-brain scenario should networking be cut between the two hosts), this remote site would most likely be connected to an existing, larger vCenter environment. It’s important to note that this virtual appliance is free, unlike an extra, unneeded hypervisor.
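The reason the witness prevents split-brain can be shown with a toy quorum calculation (illustrative Python, not VMware's actual implementation): three voters exist, and after a partition only the side that still reaches a strict majority may keep serving data.

```python
# Toy model of witness-based quorum (illustrative only). Three voters:
# the two data nodes plus the witness appliance in another site.

VOTERS = {"node_a", "node_b", "witness"}

def may_continue(reachable_voters):
    """True if this partition side holds a strict majority of votes."""
    return len(reachable_voters & VOTERS) > len(VOTERS) / 2

# The link between the two nodes fails; node A still reaches the witness:
assert may_continue({"node_a", "witness"})   # A's side keeps serving
assert not may_continue({"node_b"})          # B's side stops cleanly
```

With only two voters, a partition would leave each node holding exactly half the votes, so neither side (or, dangerously, both) could claim authority; the third vote breaks that tie.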

SMP-FT Support
With vSphere 6.0, VMware overhauled the abilities of VMware Fault Tolerance, making it possible to deploy a multi-vCPU fault-tolerant VM (a VM running on two hypervisors simultaneously), resulting in zero downtime during a hardware failure on either hypervisor. With VSAN 6.1, VMware extended this feature, known as SMP-FT, to hypervisors running VSAN. Even with just two nodes, remote sites often find themselves overbuilt; utilizing SMP-FT is an easy way to take advantage of these extra resources and increase uptime.

Windows Server Failover Clustering Support
WSFC has become a core tenet of any highly available Windows Server environment. With technologies such as Exchange, SQL Server, and DFS all utilizing aspects of failover clustering, many organizations found VSAN lacking in support for their primary applications. VSAN 6.1 adds WSFC support, so a remote SQL cluster, for example, can now have redundancy at the storage, hypervisor, service, and application levels.

All-Flash Support
While technically a VSAN 6.0 feature, this bears mentioning when discussing remote PoP VSAN environments. Traditionally, one major issue with storage in remote locations is the lack of performance. To get high-performance arrays, dozens of spinning disks would need to be deployed in a SAN and carefully maintained. This increases not only cost, but complexity and failure rates as well. With ever-faster flash-based disks arriving on the market at a blistering pace, VSAN can use all-flash arrays to get a very high level of performance out of a very small number of drives. For an even higher level of performance, VSAN 6.1 supports cutting-edge technologies such as ULLtraDIMM and NVMe, reducing or eliminating traditional SSD issues such as connectivity, controller, and bus bottlenecks to allow even lower latencies for critical applications.

The Verdict

With its support for a wide variety of applications, very high IOPS, low-latency performance, and a small, highly redundant environment that can grow as you need, VMware’s VSAN 6.1 platform is proving to be an excellent choice when requirements dictate an enterprise-grade solution without an enterprise-grade cost.

Topics: Networking Tips

5 Key Questions: Replication as a Service

In a previous post, we highlighted 5 Key Questions to Ask Your BaaS Provider. In this post, we’ll take a look at the solution one step further on the Business Continuity/Disaster Recovery continuum: Replication as a Service.

Defining Replication as a Service

It’s important to have a common understanding of a service so that everyone shares the same expectations. Nothing takes a conversation sideways faster than everyone working with different definitions of the same product.

At ServerCentral, Replication as a Service is ServerCentral-operated hardware and software that replicate and recover applications and data from a customer’s premises to one or more of our data centers.

Replication differs from backups in that it involves the frequent updating of data between multiple systems, whereas backups save a copy of data that remains unchanged for a period of time.
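That distinction can be made concrete with a toy sketch (illustrative Python; the data and names are invented): a backup is a point-in-time copy that stays frozen, while replication mirrors every subsequent write.

```python
import copy

# Toy illustration (invented data): a backup is a frozen point-in-time
# copy, while replication mirrors each write as it happens.

primary = {"orders": 100}
backup = copy.deepcopy(primary)   # taken once, then left untouched

replica = copy.deepcopy(primary)

def write(key, value):
    primary[key] = value
    replica[key] = value          # replication: every write is mirrored

write("orders", 150)
write("invoices", 7)

assert replica == primary         # replica tracks the live system
assert backup == {"orders": 100}  # backup still shows the old state
```

The practical consequence is the RPO discussed below: a replica can be seconds behind the primary, while a backup can only restore to the moment it was taken.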

For the past year, we’ve studied the questions our customers, prospects, and partners have asked about replication solutions. We compared our findings with that of leading industry analysts to compile this list of 5 key questions you should ask about any RaaS or managed replication solution:

1. What Recovery Point Objective (RPO)/Recovery Time Objective (RTO) windows are supported?

When looking at replication solutions, it’s important to know how granular you can get with your RTO/RPO windows on an application-by-application basis. In most instances, there isn’t a need for immediate RTO or RPO, and the costs associated with providing an immediate level of service can become prohibitive. Flexibility in RPO/RTO windows per application enables you to tune the replication solution to meet specific requirements vs. a one-size-fits-all approach.

2. Are failover compute resources available on demand?

What you want to know here is whether or not you have to preorder or pre-purchase cold resources should there be a disaster. Many times, replication solutions do not include these resources (or they’re not even mentioned until the minute they’re needed). Plan ahead. Costs can become very real very quickly.

3. Can I replicate physical and virtual environments?

This is the money question. Are you limited to one type of environment? Many replication solutions only support virtual environments. In instances where replication solutions support physical and virtual environments, they may have wildly differing RPO/RTO windows. By asking “can” and “how” in advance, you can save yourself from headaches down the road.

4. Is replication possible at the data and application level?

Similar to the previous question, it’s important to know whether or not you can work at the data and app levels—or if this is only available on a case-by-case basis. The last thing you want here is a surprise. The difference between data- and app-level replication will have a material impact on your RPO/RTO windows, too.

5. Can I regularly execute failover tests?

There are two parts to this question:

  1. Are you able to perform this task on your own? In many instances, it will only be offered as a service.
  2. Are there licensing implications for the failover environment? You may incur additional costs for each failover test you perform because the test activates your licenses.
Know in advance what you can (and can’t) do—and what the associated costs will be.

If you’re interested in discussing any of these questions in more detail or have questions of your own, don’t hesitate to contact us!

Topics: Tips

Scaling ZFS on Linux to many CPUs

In February 2015, I posted the first pull request for a port of Prakash Surya's multilist and ARC re-work to ZFS on Linux. The goal was to reduce the lock contention on arcs_mtx, a single mutex embedded within each of the per-state ARC lists. The new multilist facility provided as part of this pull request includes an almost drop-in replacement for the standard linked list type, list_t. Rather than maintaining a single lock for a single linked list (per ARC state), the lists were split up into a number of sub-lists, each with its own mutex.
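The idea can be sketched outside the kernel. This is illustrative Python, not the actual ZoL implementation, and the real multilist selects a sub-list by CPU id rather than by hash:

```python
import threading

class MultiList:
    """Toy version of the multilist idea: N sub-lists, each guarded by
    its own lock, instead of one list behind a single hot mutex."""

    def __init__(self, num_sublists=8):
        self.locks = [threading.Lock() for _ in range(num_sublists)]
        self.sublists = [[] for _ in range(num_sublists)]

    def _index(self, item):
        # The real multilist picks a sub-list by CPU id; hashing the
        # item is a stand-in that spreads work the same way.
        return hash(item) % len(self.sublists)

    def insert(self, item):
        i = self._index(item)
        with self.locks[i]:      # contends with only 1/N of other inserts
            self.sublists[i].append(item)

    def remove(self, item):
        i = self._index(item)
        with self.locks[i]:
            self.sublists[i].remove(item)

    def __len__(self):
        return sum(len(s) for s in self.sublists)
```

Spreading operations across independently locked sub-lists is what keeps concurrent tasks from serializing on one mutex, which is exactly the arcs_mtx contention the port set out to reduce.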

The benchmark used for testing this work consists of numerous concurrent 4K reads of 100% cached data.  In the original OpenZFS ARC implementation, with a single arcs_mtx lock, the benchmark didn't scale well as additional reader tasks were added. There was a great deal of contention on the single mutex.  (The before-and-after results for illumos are described here.)

Given ZoL's divergent development with respect to the "upstream" OpenZFS code (from illumos), porting this patch required dealing with a number of conflicts which developed over time. Some of the issues are documented in the final commit.

Once the code was ported and in working condition, my next step was to try to duplicate the benchmark results under Linux. My initial results were not encouraging: The performance wasn't improved much at all and in some cases, was even worse. My benchmarking was also handicapped by the lack of access to sufficiently "big" hardware. The largest system which I had direct access to was a 2x6-core Opteron (2-node NUMA system) with only 64GiB RAM. I began using large spot instances on Amazon EC2 to run the tests but it wasn't very convenient. It also brought to light the differences in the locking primitives under a virtualized (Xen) environment as opposed to running on bare metal.

I was eventually put in touch with the good people at ServerCentral, who, in the name of furthering the ZoL development effort, gave me access to a dedicated server with 4 E7-4850 CPUs, each of which has 10 cores and 2 threads per core. The system has 80 threads available. Backing it is 512GiB of RAM and a bunch of hard drives in several JBODs.  In short, it's a perfect system on which to perform this type of testing.

Using this 4xE7 system, not only was I able to find some (rather trivial) bottlenecks whose removal greatly improved the performance of the benchmark mentioned above, but I also found several other similar bottlenecks, some of which have been fixed and some of which have not yet been addressed.

In subsequent postings, I'll outline some of the specific bottlenecks I encountered and their fixes, if any. Pretty much every scaling-related fix or issue I posted or commented on regarding ZoL (the zfs or spl repositories) was discovered through testing on the E7 system.

- Tim

This guest post was published with permission from Tim Chase. It originally appeared here.

Topics: Tips

5 Key Questions: Backup as a Service

Whether they’re a stand-alone solution or part of a larger Business Continuity/Disaster Recovery strategy, backups are a critical component of every IT strategy.

At ServerCentral, Backup as a Service (BaaS)/Managed Backup is defined as ServerCentral-operated hardware and software that enable the backup, storage, and in some cases, the recovery of applications and data from a customer’s premises to one or more of our data centers.

It’s important to have a common understanding of a service so that everyone shares the same expectations. Nothing takes a conversation sideways faster than everyone working with different definitions of the same product.

For the past year, we’ve studied the questions our customers, prospects, and partners have asked about BaaS/Managed Backup solutions. We compared our findings with that of leading industry analysts to compile this list of 5 key questions you should ask about your next BaaS or Managed Backup solution:

1. How can I connect to the backup environment? 

First, this question helps determine whether you can access the backup environment via 10 GbE (or faster) connections. Second, it determines whether those connections are redundant. It’s also important to know whether these connections are made via HTTPS, VPN, dedicated circuits, etc., as this will have a direct impact on deployment, speed, and recovery times.

2. What access protocols are supported?

You’d be surprised how many times we hear, “We needed block support for this application and it wasn’t available.” A proper backup platform should support both file (NFS) and block (iSCSI) access. This support is critical because it enables you to use whichever protocol is best for each of your applications and data sets. 

3. Are physical or virtual appliances/agents deployed? 

You have to know what you’re walking into with any as-a-service offering. All too often, the need for a virtual appliance or agent to be deployed gets hung up in security or compliance review and prolongs the project implementation. These unexpected delays unnecessarily increase costs and risk.

4. Is the backup environment production-ready?

In the event of a disaster, can you add compute resources to quickly deploy your applications or spin up VMs from the backup environment? At ServerCentral, these are called production-ready backups because we’ve architected all of our backup infrastructure to support this requirement. You never know when this type of support will be needed, but it’s far better to be ready than not. Know whether your backup environment supports adding compute resources for restores well before they’re needed.

5. Do you store multiple copies of my data?

It’s worthwhile to ask whether or not your provider’s backup environment is actually backed up. You’ll be surprised by the answer. I guarantee it.

If you’re interested in discussing any of these questions in more detail or have questions of your own, don’t hesitate to contact us!

Topics: Tips

Choose Your Cloud Partner Wisely

Over the past few weeks, I’ve been talking with dozens of technology leaders from startups and early-stage companies. I wanted to learn about the technical challenges they face today and understand what they feel they need to be successful.

Out of the hundreds of data points gathered in these conversations, the biggest issue I’ve seen is that these individuals (and their organizations) spend roughly 50% of their time just getting to the point where they can actually do work. In this case, “work” means writing code and supporting customers. 

This half of their time is spent on countless technology infrastructure tasks such as DNS, network architecture and management, security policies, DDoS mitigation, firewall management, load balancing, CDN configuration, etc. All of these are important to the company’s success, but do not involve writing code and supporting customers.

Startups are too busy maintaining their IT operations to start up much of anything.

The question, “Who can take care of my infrastructure for me so I have time to focus on my app?” became all too common.

As the provocateur, it was impossible not to ask, “Isn’t this why you architected and built everything in the cloud?”

Not a fair question, I’ll admit. I just had to be sure that I did, in fact, know their answers:

I don’t have the time to spend learning someone else’s blackbox platform.

I need to know what’s under my apps so I know how to keep them streamlined and portable.

I need to know there’s a security expert watching over what I’m doing or simply there to do it for me.

My business runs 24/7 and I need to know I have real people supporting me 24/7.

The moral of the story? Choose your cloud partner wisely.

Your cloud partner should be supporting you. You shouldn’t be supporting your cloud partner.

Topics: Tips Products and Services

5 Keys for Successful Startup Partnerships

How many times have you started an email, text, or phone call with this statement:

Sorry I’m so behind in getting back to you…

More times than you can count, right?

We’ve all been there.

Don’t worry. Having worked with more than a handful of startups (some successful, some not), I’ve been there.

As an entrepreneur or startup, you’re short on resources and long on challenges. The best advice I’ve heard for managing this balance comes from a very well respected entrepreneur:

“The most successful companies are the ones that leverage all of the support infrastructure available to them.”
Joe Gits, Founder, Social Market Analytics

With partnerships representing the largest part of your professional support infrastructure, it’s important to remember these five keys to successful startup partnerships:

1. Partnerships matter more than you think.

Your partners may be the largest component of your support infrastructure. Strategic partnerships can help you overcome business challenges by pairing you with companies that share the same goals.

2. Partnerships are a two-way street.

Whether it’s technology, support, whatever—a partnership must be mutually beneficial. The best partnership delivers what your organization needs and returns what your partner needs.

Your partner’s technology, solutions, and support need to bend to meet your business, and your technology, solutions, and support need to bend to meet theirs. Quid pro quo.

3. Partnerships must be forgiving.

The phrase, “It’s not personal, it’s business” is really important here. When a partnership is no longer useful, it’s okay to move on.

There will be times when a partnership that was once outstanding for your growth no longer returns value. Don’t hesitate to make a change—the success of your organization is at stake.

It’s important to know that you are able to take your ball and go home. If it has been a true partnership, the transition will be smooth because both organizations will recognize the changes in the other’s business.

4. Partners must share their experience with you.

When discussing how you can work with and support other businesses, ask what resources are available to help you with your tactical and strategic planning.

If your partners aren’t willing to go out of their way to impart their knowledge to you and your organization so both parties can grow more efficiently, why invest your time in them?

5. Partnerships are also about the network.

Everyone’s social network is visible. Whether they prefer Twitter, LinkedIn, Instagram, or Facebook, it doesn’t matter. Everyone’s reach and influence are visible.

Partners offer open access to their personal and professional networks to help you succeed.

If you don’t think this is a big deal, ask around. See how quickly someone will (or won’t) make an introduction for you. You’ll be extremely surprised at the response you get.

Obviously I’m biased here (and so is ServerCentral) because we are incredibly open to helping people…but this really matters.

You need to know your partners are ready/willing/able to help you make important connections whenever/however you need.

What do all of these things have in common? 

The people.

As the core of your support infrastructure, your partners have your and your customers’ best interests in mind at every interaction. Partners need to get the big picture of what you’re trying to do.

Now think about how much more pleasant building and managing a startup will be when you know you have a network of people there to help you.

Are you leveraging your support infrastructure? Your startup shouldn’t be killing you.

Topics: Tips

Making An IT Budget Checklist? Check It Twice!

We are at year end and the holiday season is nipping at our noses. For many of us, it’s budget time. Are you making your infrastructure wish list? If so, do you think you’ll get what you asked for?

Many times, our customers share with us fantastic ideas about how to enhance the services they provide to their internal and external customers, only to be shut down for one reason (budget) or another (budget). It appears that, for many organizations, the needs of IT are treated as the red-headed stepchild of the organization (no offense to our red-headed friends). How do we change that?

Perhaps we can help:

#1: Know your audience.

Who will champion your plan? Tailor your wish list to the person who will ultimately approve it and provide you with the support and budget needed to make your wish list a reality.

Board members, CEO, CFO:

Present high level—refrain from using techy, geeky language and getting into the weeds with the technology. You can still be excited about your proposal, but remember:

The instant you use a three-letter acronym with a non-technical leader, you’ve lost them entirely.

Instead, provide information on how the technology will benefit customers, the company, and ultimately revenue.

For example:

“With this new system in place, online shoppers won’t lose their shopping carts when they navigate away from our website. Items are saved in their carts the next time they return, meaning they can conduct their online transactions faster, which results in more completed transactions in a day, adding up to an estimated $xxx,xxx in sales per month.”


CIO, CTO, and other technical leaders:

Equip them with the information to share your vision and be your messenger, as they will understand the technology. Again, refrain from going too technical.

Do most of the work for them so that all they need to do is paste your slides into their presentation.

If you’re the decision maker:

Skip to step #4.

#2: Think metrics.

Make sure to present numerical data in your proposal. Your focus should be on the impact to the business. What would the human resources and revenue results look like after implementing new IT infrastructure and/or applications?

Would there be 2 hours/day of improved productivity per employee? Sales transaction times reduced from 20 minutes to 10 minutes?

Know your facts. Data is king!

#3: Select your words wisely.

The thesaurus is your new best friend. Replace “buy,” “cost,” and “spend” with “contribute,” “value,” and “investment.”

Let’s work together to change the paradigm of IT being a cost center. We all know if it wasn’t for you, your IT team, and the infrastructure you have in place, there would be no business.

Bounce your script and/or presentation off of a non-technical counterpart. Any words that may seem negative to them will most likely sound negative to your audience.

#4: Be incremental.

Although we say change is good, we all know that it’s a challenge met with hesitation, anxiety, and risk. A phased approach to your plan will most likely be easier for upper and executive management to swallow than an all-at-once, complete overhaul.

This isn’t a pie eating contest.

Prepare a step-by-step plan to achieving your ultimate goal. Communicate in blocks of time, investments, and positive impacts to the business.

Typically, the IT department is invisible until someone can’t get into their email. Let’s change that.

If you do these steps already, bravo! I hope that your plans are in play. If not, I hope that this has been informative. If you’d like us to assist you with developing your plan, please let us know. We’ll be happy to help you fulfill your wish list.

Topics: Data Center Tips