I just stumbled upon a curious headline “Sun Plans To Close Its Data Centers” describing a post by Brian Cinque. Brian’s post was about Sun’s aggressive work within SunIT to reduce data center costs, including power and cooling requirements as well as the continuing vision of utility computing as seen by SunIT. The program has a target of zero Sun data centers by 2015.
The blog post was a healthy perspective on the future of SunIT from the viewpoint of the writer, an architect in SunIT operations, not of Sun Microsystems in its entirety. That is, there will surely always be a need for Sun to maintain infrastructure internally to develop the machines and products that Sun sells to its customers to service their businesses. These businesses include Sun’s own business operations through SunIT.
Does SunIT need to own and manage its own data centers for all production systems, all desktop infrastructure and all other Sun business operations? Sun needs this capability no more than any other business does for most standard business operations, but Sun certainly does need a data center footprint insofar as it assists Sun in developing and perfecting Sun technology for sale.
For example, my home directory is managed on a very busy SunIT-managed server running the bleeding edge of Nevada. This is good for Sun’s customers and good for Sun’s strategic ability to understand and develop solutions for our customers. We drink our own Kool-Aid before we sell it to our customers. Does everyone at Sun have their home directory on a server at or close to the tip of Nevada? Certainly not. Most of Sun’s home directory infrastructure runs standard Solaris, the current generally available release.
Will it always make cost-effective sense for SunIT to manage typical, generally available, home directory infrastructure on captive hardware in SunIT-managed data centers? Perhaps not. Sun certainly has a right, and an obligation to its stockholders, to leverage the same cost-effective utility and cloud computing models that Sun is helping to develop and foster. Again, to be clear, I’m not referring to SunIT participation in the management and maintenance of bleeding-edge Nevada or whatever happens to be the next generation of Sun infrastructure and Sun software. That sort of activity is, and likely always will be, a valuable SunIT contribution to Sun’s ability to serve its customers.
Somehow, the healthy reduction of Sun’s compute footprint, with its increased utilization, lower costs and fewer data centers, achieved in part through vendor-managed Sun servers, is by some erroneous extension taken as an indictment of Sun products and services. I don’t get that at all. If computing with Sun products and services is simpler, easier and less expensive, and enterprise services and applications can be provisioned more rapidly and managed and operated more cheaply than with competitive products, won’t the world need a lot more Sun servers? Does it really matter where those servers are housed? Have the laws of supply and demand been repealed? What am I missing?
Somehow it seems that housing and managing servers has a value proposition that can only be met by captive IT operations. I don’t get that either. The truth is we are facing a future where fully populated racks will draw 25kW or more, roughly five times the current, typical rack power densities and data center power designs. True, these densities come with beneficial order-of-magnitude increases in compute capacity, but is a 5kW-per-rack data center with a few servers per rack sensible? This reality challenges both the physics and the financial viability of in-house data centers. As computing demands continue to increase, the laws of physics and economics will tend to compel more and more data center consolidation, probably as close to power plants as possible to increase efficiency and reduce line loss. In the future, power will cost more than the computer hardware, utilization rates will have to improve, and new application deployments will have to take minutes, not months. None of this should be news to anyone.
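The rack-power arithmetic above can be sketched in a few lines. The electricity price and PUE figures below are illustrative assumptions of mine, not numbers from the post:

```python
# Back-of-the-envelope rack power comparison (illustrative figures only).

legacy_rack_kw = 5    # typical rack power density, per the text
future_rack_kw = 25   # fully populated high-density rack, per the text

density_increase = future_rack_kw / legacy_rack_kw
print(f"Power density increase: {density_increase:.0f}x")  # 5x

# Hypothetical annual power cost per rack, assuming $0.10/kWh, 24x7 operation,
# and a PUE of 2.0 (each watt of compute needs another watt of cooling/overhead).
price_per_kwh = 0.10
pue = 2.0
hours_per_year = 24 * 365

annual_cost = future_rack_kw * pue * hours_per_year * price_per_kwh
print(f"Annual power and cooling cost per 25kW rack: ${annual_cost:,.0f}")
```

Even with these rough assumptions, a single high-density rack runs to tens of thousands of dollars a year in electricity alone, which is why power, not hardware, drives the consolidation argument.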
Knowing this, the preponderance of negative and emotional reactions to Brian’s post has all at once puzzled, surprised and alarmed me.
Some of the more telling reactions are “Is this half-baked plan/news something you should post in a public blog. Moron!” and “you’re confusing some people ….” or “Look how foolish parts of Sun’s management behave…” These are fearful reactions. These seem like reactions from people who find their cheese is moving.
If so, I agree, your cheese is moving.
Perhaps the wording in the blog itself, or the preponderance of other trade press and blogs covering industry trends, has been unclear. But whether it happens by 2015, or a bit earlier or later, the world view through the looking glass of captive enterprise data center computing, just like the now defunct view of the world through mainframe glass-house computing, is approaching its end. We will still obviously need general purpose computing infrastructure and data centers somewhere, but what is it about the way we currently provision and manage computing infrastructure that is immutable or unassailable? What is it about the likely end-games and outcomes of the industry’s current direction, and the trends toward utility and cloud computing, SOA and SaaS, that is confusing, hard to understand, inscrutable or surprising?
If all the business and technical mechanisms were in place today to enable businesses to use utility and cloud computing securely, cheaply, quickly and efficiently, eliminating the need to design, build, staff and maintain costly and complicated captive data centers, why would the data center as we know it today exist? What is so insanely great about owning and managing a data center that, in the future, provides no strategic advantage to the business that owns it? Are all of the IT departments, mechanisms, operations and infrastructure patterns of today somehow more entitled to survival than the glass-house operations they replaced? I think not.
Admittedly, not all of the mature mechanisms needed to provision enterprise computing externally on a compute grid are here yet. We face plenty of technical and physical challenges, but judging by the sort of reactions to Brian’s post, the political challenges will be the greatest.
One example given as an objection to “zero Sun data centers” in the blog post reactions was a question about how Sun would host its own Sun Ray desktop infrastructure on some vendor-managed compute cloud or utility facility. The challenges here are more about data security than about network performance and latency, since much of Sun’s work-at-home staff already use Sun Rays connected through VPN and high-latency broadband or DSL connections to the Sun network. Also, as a corollary trend, I can assure you that as high-speed wired and wireless networking becomes more ubiquitous, I’ll be eyeing a Sun Ray laptop. Lugging around gigabytes of disk will become silly, needlessly backbreaking and dangerous. I’m not alone: several of my colleagues in various Sun groups have been playing with this technology. In any event, my Sun Ray needs an IP address that provides my desktop services, not a data center. How the service is provided and where it lives on the network is irrelevant to the Sun Ray, or to me as a desktop user.
As the business and technical solutions for data and network security in virtualized, multi-tenant environments mature, nothing stands in the way of hosting Sun Ray sessions on vendor managed infrastructure. In fact, because of Sun’s ruthless elimination of heavy-weight desktop infrastructure in favor of Sun Ray, Sun is in much better shape to leverage and execute on utility or cloud computing for desktops than most businesses.
I’d put a much different spin on the problem of secure cloud computing or, more correctly, the opportunity. What if we could dynamically create networked computer environments that we could share securely with selected customers, vendors and partners? What if we could create virtual data centers, just for individual projects, programs or any business interaction, in a few minutes? The problems going forward revolve around dynamic management of secure, persistent interactions and virtual computing facilities for any imaginable grouping of individuals, groups and businesses. Do you really think this is a pipe dream? How far are the current social networking technologies from this reality?
I imagine the biggest problem people have visualizing this not-too-distant future state comes from comparisons to the currently prevalent enterprise outsourcing models, which are, in my humble opinion, largely wrong-headed, expensive disasters. Simply turning your data center staff and equipment over to a vendor doesn’t add a lot of value, or at least not very interesting value to me.
A better sense of the future can be gleaned from visualizing a developer writing a Ruby on Rails or PHP application for deployment to a full-service Solaris Zone or “kernel in user space” provider on the Internet, or to Facebook, Amazon EC2 or some other “social networking” framework. How great is the leap in technology and vendor capability to an enterprise developer deploying an application to an external host for use by an internal department or selected partners? Not very far, I think. In fact, surprise of surprises, it is already happening.
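As a hypothetical sketch of how lightweight that hosting unit already is, a provider could carve out a Solaris Zone for such an application with a handful of commands. The zone name, zonepath and service name below are illustrative placeholders, not a real Sun offering:

```shell
# Hypothetical provisioning of a Solaris Zone for a hosted web application.
# Zone name, zonepath and service FMRI are illustrative placeholders.
zonecfg -z webapp01 'create; set zonepath=/zones/webapp01; set autoboot=true'
zoneadm -z webapp01 install   # install the zone from the global zone
zoneadm -z webapp01 boot      # boot it; it now behaves like its own Solaris host
zlogin webapp01 svcadm enable svc:/network/http:apache2   # start the web stack inside
```

From the developer’s perspective the zone is just another Solaris host to deploy to, which is exactly what makes the leap to an external provider so small.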
On more than one occasion, my group has “turned up” small but important applications that our customers’ business departments are hosting externally, discovered during the application inventory efforts that support our consolidation initiatives. How much easier, cheaper and more secure does the application outsourcing avenue need to be before this trickle of occasions becomes an avalanche? The IT departments where this has happened are almost always dismayed and surprised, but then again, they maintain inventories of servers, not applications, right? How would they know what is going on with their customers?
The best, albeit weak, argument I’ve heard against this trend is the lack of control and security presented by externally hosted applications that is required by regulatory drivers like Sarbanes-Oxley. This is true, but not compelling, and sounds more like an excuse than an argument. Do you think the right, bright hosting vendor in the not-too-distant future might enable better, more cost-effective, configuration management, security and control than many IT operations?
Here is the bit that surprises me most about the surprise. There is very little discussion of why this happens and how to provide the types of IT services the business needs to avoid these situations. Is enterprise IT entitled to provide all computing services, for all time, solely in its captive or traditional outsourced data centers? We live in a competitive, capitalist economy, where innovation rules, and the pieces are coming together for disruptive changes in the way enterprises purchase and manage computing.
To me, the visceral reactions to Brian’s post are far more alarming than the prediction itself. The reactions indicate we have a long way to go in this industry to embrace the inevitable changes that are coming. If you don’t want your cheese to be moved for you, you had better move it yourself. Enterprise IT needs to find more innovative ways to provide secure, rapidly provisioned, highly-available, cost-effective solutions to its business customers before the world overtakes it. Sun is working furiously to support all enterprises in this effort with products and services designed to address these needs. SunIT is working furiously to deploy and use Sun’s products and tools first, both to reduce Sun’s internal IT costs and to support Sun in developing and delivering field-proven products to its external customers.
Perhaps the current trends toward virtualization will soften the blow for many, but those of us who don’t make the inevitable shift from managing servers to managing services may not find our cheese.