I am guessing your first reaction to the headline is "that's crazy - why would I ever need to do that?" Information technology experts are forward thinkers: CIOs and their teams have to anticipate compute demands well into the future to make sure the technology choices they make support the organization's growth objectives. Growth calls for agility, and agility requires a very different architecture than the one most enterprise software solutions have been built around.
For example, Oracle Database and SQL Server were designed with deployment in a single data center in mind. Likewise, most VDI deployments have been scaled vertically in one data center. That's not by choice; it's because the complexity and expense of implementing VDI across multiple data centers is prohibitive. That inhibits agility and limits growth. Let's walk through why the architecture for VDI solutions had to change (we did that part already!), what it looks like now, and what the future holds.
When you think about it, the need to put compute power closer to users is only going to grow - by leaps and bounds - so why wouldn't you need a virtual desktop solution with that kind of horizontal scalability?
Everyone Gets 100+ "Data Centers"
Right now! You have access to 100+ data centers at this very moment. What could you do with all that compute power? Could you put IT resources to better use if you didn't have to manage any of that infrastructure? What opportunities arise with that kind of global presence? The possibilities that just came to mind are real now. When you combine the cloud regions that the largest public clouds - Amazon, Microsoft, Google, Alibaba, and IBM - offer today, there are more than 150 "data centers" around the globe, ready and waiting for you! What you should get ready for is the day, in the not-too-distant future, when every business will have 1000 data centers by accessing 1000+ cloud regions.
More Horsepower Please!
Of course, demand for compute power just keeps rising, and data centers have always had more raw compute horsepower than anything on a user's desk at the office. The reason is simple: putting all that horsepower in an office setting makes the office uninhabitable by humans. To get the horsepower needed, you'd have to chill the office down to around 68-72 degrees, and then employees would have to shout at each other over the din of all the fancy cooling equipment. A cold, noisy office with lots of yelling is not a happy place, so it's better that all that compute power be corralled into a data center. Data centers became packed with ever-larger amounts of compute power in smaller and smaller spaces. What's the trade-off? Well, you can have a cold, noisy, unhappy office with lots of compute power in it, or you can have... latency!
Latency Killed the On-Premises Data Center
Latency. It's the consequence of how expensive and management-intensive on-premises data centers are; most organizations can barely afford one data center, let alone two or three. Having a single data center is fine if your users are close to it, but when users are remote, they experience latency. Until recently, the raw horsepower of the data center was of limited use for interactive computing because of latency's impact on application performance. If the data center is 150ms away from the user, latency kills any performance advantage for interactive workloads. The good news is that this is old news. The death knell is ringing for your on-prem data center; once your financial commitments for that infrastructure are over, it's time to put it to rest and make the move to a public cloud - here's why.
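The math behind that claim is simple. A single interactive action - a click, a keystroke, a scroll - often needs several sequential network round trips before the user sees the result. Here's a back-of-the-envelope sketch; the round-trip count and processing time are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope: user-perceived delay for one interactive action
# (e.g., click -> server processes -> updated screen) that requires a few
# sequential network round trips. Assumed values, for illustration only.

def perceived_delay_ms(rtt_ms, round_trips=3, processing_ms=20):
    """Rough user-perceived delay: sequential network time plus server work."""
    return rtt_ms * round_trips + processing_ms

for rtt in (5, 50, 150):
    print(f"{rtt:>3} ms RTT -> ~{perceived_delay_ms(rtt)} ms per interaction")
```

At a 150ms round trip, even three sequential round trips push the response far past the roughly 100ms threshold people perceive as "instant"; at 5ms, the network all but disappears.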
The Public Cloud Obliterates Latency
Let's use Microsoft Azure as our example. We built a tool that allows you to determine how far your target user is from an Azure cloud region - a "data center". We find that almost every user on the planet is now within 50ms of an Azure data center. That's a formula for some seriously good computing performance!
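Our tool isn't reproduced here, but you can approximate the same measurement yourself. A minimal sketch in Python times a TCP handshake to candidate endpoints and picks the closest one; any region names and hostnames you feed it are up to you, and the ones in the usage note below are placeholders, not real Azure endpoints:

```python
import socket
import time

def tcp_connect_ms(host, port=443, timeout=3.0):
    """Approximate round-trip latency (ms) by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def nearest_region(endpoints):
    """Given {region: (host, port)}, return (region, latency_ms) for the
    lowest-latency reachable endpoint, or None if none respond."""
    results = {}
    for region, (host, port) in endpoints.items():
        try:
            results[region] = tcp_connect_ms(host, port)
        except OSError:
            continue  # endpoint unreachable; skip it
    if not results:
        return None
    return min(results.items(), key=lambda kv: kv[1])
```

For example, `nearest_region({"east-us": ("example-east.contoso.com", 443), "west-eu": ("example-eu.contoso.com", 443)})` would report which of those hypothetical endpoints answers fastest. A TCP handshake slightly overstates raw network latency, but it's more than accurate enough to tell 5ms from 50ms from 150ms.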
That's the situation today. It's not as if Microsoft, Amazon, Google, and others are going to stop expanding, building new data centers and deploying new cloud regions. Will there collectively be 1000+ cloud regions in the near future? We believe it's not a question of if, but when. When that happens, users will be even closer to the nearest data center. What happens when latency goes to zero?
A Globally Distributed Architecture for Planet-Scale Solutions
Most enterprise solutions built over the previous three decades were designed around the thesis of single data center deployment. With some difficulty, and quite a bit of expense, IT could sometimes operate these solutions in multiple data centers. These solutions - databases, application servers, file servers, server virtualization, desktop virtualization, and others - were never intended for a world in which there are 1000 data centers; they aren't designed to function in that world. Most of these solutions will need to be re-architected from the ground up for organizations to benefit from all the compute power 1000 data centers deliver.
Today, there are solutions designed on a globally distributed architecture, intended from the get-go to run across hundreds of cloud regions (your new data centers!). By definition, a planet-scale solution is manageable from a single pane of glass: IT should be able to easily provision, operate, and monitor the solution across the planet.
Unlike Oracle Database, SQL Server, or legacy VDI solutions that were designed for single data center deployment, solutions such as Microsoft's database service - Azure Cosmos DB - and Workspot Desktop Cloud are globally distributed and planet-scale. New solutions will emerge for core infrastructure services that enable organizations to take advantage of planet-scale architectures. That's a whole new world: businesses of all sizes benefit from the agility planet-scale solutions bring with them and the possibilities they present.