
At the time of writing, we have just started 2017. As is tradition, we could spend some time reminiscing about what we accomplished last year. Or, if we give in to deeper thoughts about how we got here, we could rethink our life decisions.

It seems the best idea would be to gaze ahead… far ahead.

Instead, we will be heading in the opposite direction from the past. Having marveled at the many innovations that have changed our lives, why not have some fun wondering what the future, no matter how distant, holds for us and our trade?

When looking at DCIM, we should take stock of the current situation and assess what is working and what has not yet reached its full potential. Presently, several challenges are addressed by the current crop of DCIM offerings, namely by answering questions like: What do I have and where is it? (Asset Management). How is it configured and where is it connected? (Connectivity Management). Am I running out of stuff, and do I have enough power for it? If I get more stuff, where do I put it? (Capacity Management). Has somebody moved my stuff? How can I explain to somebody where to put it? (Change Management).
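
To make these questions concrete, here is a minimal sketch of the kind of capacity check a DCIM tool performs behind the scenes; the Asset and Rack classes and the power figures are purely illustrative assumptions, not a reference to any particular product.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    rack_units: int        # height in U
    power_draw_w: float    # nominal draw in watts

@dataclass
class Rack:
    name: str
    total_units: int = 42
    power_budget_w: float = 5000.0
    assets: list = field(default_factory=list)

    def free_units(self) -> int:
        return self.total_units - sum(a.rack_units for a in self.assets)

    def free_power_w(self) -> float:
        return self.power_budget_w - sum(a.power_draw_w for a in self.assets)

    def can_host(self, asset: Asset) -> bool:
        # Capacity Management in one line: is there enough space and enough power?
        return (asset.rack_units <= self.free_units()
                and asset.power_draw_w <= self.free_power_w())

rack = Rack("R01")
rack.assets.append(Asset("db-server-01", rack_units=2, power_draw_w=450))
print(rack.can_host(Asset("new-storage-array", rack_units=4, power_draw_w=900)))  # True
```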

Other tasks like Power and Environmental Monitoring and Energy Management are also fairly well covered.

Even so, the current wish list of what DCIM should provide is still quite long and getting longer (see http://searchdatacenter.techtarget.com/tip/Evolving-DCIM-market-shows-automation-convergence-top-ITs-wish-list). And those wishes usually shape the offerings. If we examine these wishful demands, we can perhaps extrapolate what future versions of DCIM platforms might bring about, essentially propelling ourselves into the realm of wild speculation.

For example, would it not be great if we had automated asset localisation? Currently, some short-term solutions use RFID tags, while others use tags that physically connect to a cabinet management controller. One could dream that one day a tiny, oh-so-precise GPS tag could be embedded in any type of asset and continuously broadcast its position. Perhaps the most popular dream among IT or data center managers is asset auto-discovery. In the future, as soon as a server is connected, it would report for duty by transmitting its properties (CPU, RAM, disks and any other pertinent information) and of course its location and its operating temperature.
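
As a thought experiment, such an auto-discovery agent might look something like the sketch below; the inventory fields and the DCIM registration endpoint are hypothetical assumptions, not an existing API.

```python
import json
import platform
import urllib.request

# Hypothetical DCIM registration endpoint; no such standard API exists today.
DCIM_ENDPOINT = "https://dcim.example.com/api/v1/register"

def collect_inventory() -> dict:
    """Gather the properties a newly connected server might report for duty with."""
    return {
        "hostname": platform.node(),
        "cpu": platform.processor(),
        "architecture": platform.machine(),
        "os": platform.platform(),
        # Location and inlet temperature would come from rack sensors or the BMC;
        # left as placeholders purely for illustration.
        "location": "unknown",
        "inlet_temp_c": None,
    }

def report_for_duty() -> None:
    payload = json.dumps(collect_inventory()).encode("utf-8")
    request = urllib.request.Request(
        DCIM_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        print("DCIM replied:", response.status)

if __name__ == "__main__":
    report_for_duty()
```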

Such functionalities would finally align the data center with the dynamic nature of IT management.

We have been drawing parallels between cloud services and power utilities, from which we draw energy on demand. From the cloud, we now draw computing on demand. Maybe in due course, just as we can inject our energy surplus back into the power grid, we will one day be able to put the compute cycles we don't use back into the cloud and lease them to others. That would certainly complicate things by creating temporary ownership of assets.

Predictive analytics, anyone? (http://searchbusinessanalytics.techtarget.com/definition/predictive-analytics) By foreseeing potential issues, a DCIM platform could even auto-correct problems and outages through dynamic, automated reconfiguration of the network and the assets connected to it, and of course report on it. The data center could inform us of its current level of reliability, essentially telling us how it “feels” today.
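
Purely as an illustration of the idea, a predictive layer could start with something as simple as flagging readings that drift away from their recent baseline; the temperature samples and thresholds below are invented for the example, and a real engine would use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag readings that deviate strongly from the rolling baseline (a simple z-score)."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append((i, readings[i]))
    return alerts

# Invented inlet-temperature samples (degrees C); the spike should be flagged.
temps = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 21.1, 29.5, 21.2]
print(flag_anomalies(temps))  # [(7, 29.5)]
```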

We are speaking of feelings on purpose. DCIM solutions are implementing “expert systems” techniques that will inevitably evolve into Artificial Intelligences one day. How else can we hope to manage the coming complexity? One often-cited request from clients is integration with other data and asset management tools, such as order management systems, financial software, automated chargebacks and/or trouble ticket systems, possibly equipped with root cause analysis algorithms. Keeping track of and extracting useful reports from such heterogeneous integrations will be quite the challenge and may need the help of an A.I. (https://en.wikipedia.org/wiki/Artificial_intelligence). Yes, in the future we may shorten “Data Center” to just “Data”, like the Star Trek character, and we will be able to speak to it (although it is unlikely it will ever understand a joke).
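
To give a flavor of what “expert systems” techniques mean in this context, here is a toy rule-based diagnosis sketch; the rules and symptoms are made up for illustration and do not describe any shipping product.

```python
# Toy forward-chaining rule base: each rule maps observed symptoms to a probable cause.
RULES = [
    ({"high_inlet_temp", "crac_unit_offline"}, "Cooling failure in the affected zone"),
    ({"pdu_breaker_tripped", "rack_power_loss"}, "Overloaded PDU branch circuit"),
    ({"switch_port_down", "server_unreachable"}, "Failed network uplink"),
]

def diagnose(symptoms: set) -> list:
    """Return the probable root causes whose symptom sets are fully observed."""
    return [cause for required, cause in RULES if required <= symptoms]

print(diagnose({"high_inlet_temp", "crac_unit_offline", "server_unreachable"}))
# ['Cooling failure in the affected zone']
```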

There is some agreement that the data centers of the future may have a dual nature, both concentrated and dispersed. As of now, dispersion is used mostly for disaster recovery, but it could also be used to take advantage of power rates and availability. For example, this could mean activating and disengaging resources wherever it is most advantageous (assuming, of course, that the communications infrastructure of the future will be high-performance and ubiquitous). A protocol may develop so that data centers can exchange resources with external and dynamic systems. The wearable technology being developed today may become not only a client for reporting and storing data, but also an extension of the data center itself.
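
Purely as a back-of-the-envelope illustration of “activating resources where it is most advantageous”, a scheduler might pick the site with the lowest effective energy cost; the sites, rates, and PUE figures below are invented.

```python
# Invented example sites with spot power rates ($/kWh) and PUE figures.
SITES = {
    "quebec":    {"power_rate": 0.05, "pue": 1.2, "has_capacity": True},
    "frankfurt": {"power_rate": 0.18, "pue": 1.4, "has_capacity": True},
    "singapore": {"power_rate": 0.15, "pue": 1.6, "has_capacity": False},
}

def cheapest_site(sites: dict, load_kw: float, hours: float) -> str:
    """Pick the site where running the workload costs least, skipping sites without capacity."""
    def cost(site):
        return load_kw * hours * site["power_rate"] * site["pue"]
    candidates = {name: s for name, s in sites.items() if s["has_capacity"]}
    return min(candidates, key=lambda name: cost(candidates[name]))

print(cheapest_site(SITES, load_kw=200, hours=24))  # quebec
```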

With the Software-Defined Data Center approach (SDDC, http://www.webopedia.com/TERM/S/software_defined_data_center_SDDC.html), the future data center will acquire extreme virtualization capabilities that will support the dynamic character we are seeking. EMC has already started simulating data center infrastructure (http://www.theregister.co.uk/2016/02/05/emc_can_simulate_a_data_center/).

We can continue to speculate and make conjectures about the makeup of the data center in a far-off future: how it will evolve and what it will transform into. Eventually, it would make sense for a planetary integration to take place, with optimal resource utilization and energy management implemented on a global scale. Whether or not humans will allow it remains uncertain, but we are in the domain of wild imagination right now.

Whether or not these sci-fi scenarios come to fruition in one form or another, it is indubitable that a management platform will have to be in place; it will be difficult to survive said future otherwise. Will it still be called DCIM? Well, we do know that the Dark Side of the Force will have to use one for the Evil Empire to succeed…