Lately, we’ve written a few blogs about how data center infrastructure management (DCIM) software helps data centers achieve greater efficiency, which got me thinking: it would be interesting to look at where we started over a decade ago and where we are today. Did you work in data centers ten years ago? What changes have you seen? Have things become more efficient?

I wanted to take stock of where we were ten years ago, before DCIM software was widely adopted. I put on my detective hat and found an article, archived by the Wayback Machine and published by Dell, about the state of data center efficiency in 2010. Since this was before DCIM went mainstream, the data in the article more or less reflects the state of data centers before DCIM.

What were the key takeaways?

Server Utilization in 2010 vs. 2021

In 2010, the author, Alan Redding, wrote that most organizations were running x86 servers at less than 20% utilization; many were running at 10% or even less.
A 2009 report from the OMB found that server utilization rates were as low as 5% across federal government data centers.
Though exact data isn’t in yet, in 2019 the Department of Commerce’s Office of the CIO (along with other agencies) determined that any server operating below 50% utilization should be considered “underutilized”. That’s a big leap from 2010, when most organizations reported utilization rates of less than 20%.

Storage Utilization in 2010 vs. 2021

In 2010, the author of the Dell article wrote that many managers were forced to over-buy capacity because budgeting processes were too convoluted. What did that mean? Wasted money. At the time, a 40% storage utilization rate was considered very good.

How’s storage utilization faring in 2021? Well, when it comes to how much storage is actually being used versus how much is being bought, here’s some good news: Google recently reported that, compared to five years ago, its data centers deliver seven times more computing power with the same amount of electrical power.

How Has DCIM Software Helped?

DCIM software helps transform storage utilization rates and makes capacity planning simpler. It also helps identify zombie and underutilized servers in the data center. With DCIM, you can put those servers back to work or stop them from drawing power altogether.

With a good DCIM solution, you can discover your servers’ utilization rates through either agent-based or agentless discovery, such as IPMI. You gain visibility down to the granular subcomponents of each server and obtain real-time usage data for CPU, disk, and memory.
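To make the agentless approach concrete, here’s a minimal sketch of polling a server’s baseboard management controller (BMC) over IPMI with the open-source pyghmi library. The BMC address and credentials are placeholders, and a DCIM product like netTerrain handles this kind of discovery for you behind the scenes; this is just an illustration of the protocol at work.

```python
# Minimal sketch: agentless discovery over IPMI using the open-source
# pyghmi library (pip install pyghmi). The BMC address and credentials
# below are placeholders, not netTerrain specifics.
from pyghmi.ipmi import command

ipmi = command.Command(bmc='10.0.0.42',      # BMC IP of the managed server
                       userid='admin',       # placeholder credentials
                       password='changeme')

# Walk every sensor the BMC exposes (temperatures, fan speeds, power
# supplies, etc.) and print the ones that returned a numeric reading.
for reading in ipmi.get_sensor_data():
    if reading.value is not None:
        print(f'{reading.name}: {reading.value} {reading.units}')
```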

Zombie servers sit in your data center consuming rack space and drawing power while performing no useful work. Once you’ve found them, you can decide whether to remove them from the rack or put them back into service. DCIM software also lets you identify under-provisioned racks and maximize the server count per rack, increasing rack density and overall data center utilization.

The same applies to managing your storage utilization. Monitor your storage through SNMP discovery (using device MIB files) and SSH. As mentioned above, you can also use IPMI to manage and monitor server-based storage. For devices that cannot be discovered, you can manually enter static values. For SSDs, you can monitor statistics such as power-on hours, power cycles, temperature, and usage, and even configure the drives.
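To illustrate the SNMP side, here’s a minimal sketch that reads the used and total size of a storage entry from the standard HOST-RESOURCES-MIB using the open-source pysnmp library. The target host, community string, and storage index are placeholder assumptions, not netTerrain specifics.

```python
# Minimal sketch: reading storage utilization over SNMP with the
# open-source pysnmp library (pip install pysnmp). The host, community
# string, and storage table index below are placeholders.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

index = 1  # hrStorageTable row to query (placeholder)

error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public', mpModel=1),  # SNMPv2c community (placeholder)
    UdpTransportTarget(('storage-array.example.com', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('HOST-RESOURCES-MIB', 'hrStorageUsed', index)),
    ObjectType(ObjectIdentity('HOST-RESOURCES-MIB', 'hrStorageSize', index)),
))

if error_indication or error_status:
    print(error_indication or error_status.prettyPrint())
else:
    # hrStorageUsed and hrStorageSize share the same allocation units,
    # so their ratio is the utilization percentage directly.
    used, size = (int(vb[1]) for vb in var_binds)
    print(f'Storage utilization: {used / size:.0%}')
```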

Of course, there’s far more to DCIM than managing rack capacity: you can also manage server and storage utilization in real time. If you’d like to test drive netTerrain DCIM (the free trial is simple, with no credit card required and no strings attached), click here now.

About Fred Koh

A seasoned sales executive, Fred Koh serves as Director of Sales and is responsible for Graphical Networks’ sales and channel partner program, marketing strategy, and operations.