Canonical Guest Post: IT Directors Must Understand the Economics of OpenStack

by Ostatic Staff - Dec. 26, 2016

In the big software era, understanding the economics of OpenStack is essential. That's a big message coming from the folks at Canonical, which is increasingly focused on OpenStack and cloud computing.

Mark Baker is OpenStack Product Manager at the company, and we caught up with him for a guest post on the topic. Here are Mark's thoughts.

 IT Directors Must Understand the Economics of OpenStack

By Mark Baker, OpenStack Product Manager at Canonical

The world of business technology is under tremendous pressure, and most organizations are ill-equipped to deal with the challenges and opportunities that are arising. Software as a Service, big data, cloud, scale-out, containers, OpenStack, and microservices are not just buzzwords; they are disrupting traditional business models. And while these terms and technologies represent a new world of opportunity, they also bring complexity that most IT departments are not prepared to manage. This has become the Big Software era.

To address the realities of Big Software, companies need an entirely new way of thinking. Where applications were once simple to manage and deploy, with a couple of solutions across a couple of machines, companies must now roll out many applications, components, and integration points spread across tens of thousands of on-premises and hosted physical and virtual machines.

Organizations must have the right mix of products, services, and tools to match the requirements of the business, yet many IT departments are taking on these challenges with approaches and tools developed over a decade ago.

Over the past decade, IT Directors turned to public cloud providers like AWS (Amazon Web Services), Microsoft Azure, and GCP (Google Cloud Platform) as a way to offset much of the CAPEX (capital expense) of deploying hardware and software by moving it to the cloud. They wanted to consume applications as services and shift most of the costs to OPEX (operating expense). Initially, public cloud delivered on the CAPEX-to-OPEX promise; Moor Insights & Strategy analysts cite capital reductions upwards of 45% in some cases. But organizations needing to deploy solutions at scale found themselves locked into a single cloud provider, fluctuating pricing models, and a rigid scale-up model that inhibits their ability to get the most out of legacy hardware and software investments. Forward-thinking IT directors realized they must disaggregate their current data center environments, breaking them into smaller independent units, to support scale-out. Consequently, OpenStack was introduced as a public cloud alternative for enterprises wishing to manage their IT operations as a cost-effective private or hybrid cloud environment.

Further, telecom operators have been losing enterprise market share to public cloud providers due to an inability to respond quickly to the needs of their enterprise customers. OpenStack represents an opportunity to offer competitive services faster and at a lower cost with standards-based, open source cloud infrastructure. To date, operators have had few options when deploying core network functions and services. They typically bought expensive proprietary boxes from NEPs (network equipment providers) to manage their data and communications infrastructure. OpenStack provides a framework for operators to virtualize core network functions; implemented alongside SDN (Software Defined Networking), it allows telecoms to disaggregate expensive hardware from software, lowering costs while accelerating their ability to launch new services. The use of virtualization, containers, and snaps is revolutionizing other areas within carriers too, as devices such as TOR (Top of Rack) switches or CPE (Customer Premises Equipment) can be rapidly updated to deliver new capabilities that previously would have required new hardware to be purchased and installed. The combination of scalable hybrid cloud with innovation in edge-of-network devices means telcos are now able to create more attractive and cost-effective solutions, improving operational efficiency with a lower TCO (Total Cost of Ownership).

The Economics: Challenges & Opportunities with OpenStack

OpenStack is a way for organizations to deploy IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) solutions in an open source environment on commodity hardware. Many customers look at OpenStack as an opportunity to reduce the cost of application deployment and management. While it is true that the cost to deploy OpenStack is relatively low, the ongoing investments in maintenance, labor, and operations are not. In fact, labor is one of the most expensive budget items and will undoubtedly continue to rise over time.

One of the main challenges with OpenStack is determining where the year-over-year operating costs and benefits of managing the solution reach parity, not just with public cloud but also with software licensing and other critical infrastructure investments. In a typical multi-year OpenStack deployment, labor makes up more than 25% of the overall costs, hardware maintenance and software license fees combined are around 20%, and hardware depreciation, networking, storage, and engineering make up the remainder, according to HDS. That said, the main advantage of moving to the public cloud is still the short-term reduction in cost per head count, but year-over-year public cloud expenses are aligning more closely with on-premises or hybrid cloud. The only way to fully benefit from OpenStack is by adopting a new model for deploying and managing IT operations.
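To make the parity question concrete, here is a minimal back-of-the-envelope sketch in Python. All figures in it are hypothetical placeholders, not the HDS numbers above; it simply illustrates how a private cloud's higher up-front cost can cross over with flat public cloud spend after a few years.

    # Hypothetical figures for illustration only; real costs vary widely by organization.
    public_cloud_per_year = 500_000   # assumed flat annual public cloud spend (USD)
    private_upfront = 400_000         # assumed year-one OpenStack build-out (hardware, engineering)
    private_per_year = 350_000        # assumed recurring OpenStack costs (labor, maintenance, licenses)

    public_total, private_total = 0, private_upfront
    for year in range(1, 6):
        public_total += public_cloud_per_year
        private_total += private_per_year
        cheaper = "private" if private_total <= public_total else "public"
        print(f"Year {year}: public ${public_total:,} vs private ${private_total:,} ({cheaper} is cheaper)")

With these placeholder numbers the cumulative curves cross in year three; the real crossover point depends entirely on an organization's own labor, licensing, and hardware profile.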

OpenStack is Big Software: A New Deployment Model is Needed

Building a private cloud infrastructure on OpenStack is an example of the Big Software challenge. Significant complexity exists in the design, configuration, and deployment of any production-ready OpenStack private cloud project. While the upfront costs are negligible, the true costs are in the ongoing operations; upgrading and patching a deployment can be expensive. This is where Canonical’s Big Software approach addresses these challenges with a new breed of tools that includes Juju. Juju helps customers build and deploy proofs of concept faster, integrate solutions more efficiently, and expand their organization’s capabilities more broadly. Imagine using a solution that enables the deployment of revenue-generating cloud services by dragging and dropping a few components or running a handful of commands.
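As a rough sketch of what this looks like in practice, the Python snippet below drives the Juju command-line client. It assumes the juju client is installed and a controller has already been bootstrapped against a cloud; openstack-base is a community reference bundle, used here purely as an example.

    import subprocess

    def juju(*args):
        """Run a Juju CLI command; assumes the juju client is installed and a
        controller has already been bootstrapped against a cloud."""
        subprocess.run(["juju", *args], check=True)

    # Deploy a reference OpenStack bundle: a curated set of charms plus the
    # relations between them, instead of hand-configuring each component.
    juju("deploy", "openstack-base")

    # Watch the model converge as machines are provisioned and services come up.
    juju("status")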

Through Canonical’s Juju Charms (sets of scripts that simplify the deployment and management of specific services), organizations can connect, integrate, and deploy new services automatically, without the need for consultants, integrators, or additional costs and resources. Companies can choose from hundreds of microservices that enable everything from cloud communications via WebRTC, IoT enablement, big data, web services, and mobile applications to security and data management tools. With the rise of open source, enterprises, telcos, and programmers can leverage the power of a vast library and a community of developers to design, develop, and deploy solutions at scale, and much faster than with other OpenStack solutions.
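To illustrate how charms compose, here is a small sketch, again driving the Juju CLI from Python, that deploys two long-standing demo charms (mysql and mediawiki, stand-ins for any pair of services) and integrates them with a single relation.

    import subprocess

    def juju(*args):
        """Run a Juju CLI command (assumes a bootstrapped controller, as above)."""
        subprocess.run(["juju", *args], check=True)

    # Deploy two charms from the charm store.
    juju("deploy", "mysql")
    juju("deploy", "mediawiki")

    # One relation lets the charms exchange configuration and integrate
    # themselves; no manual database setup is required.
    juju("add-relation", "mediawiki:db", "mysql")

    # Make the wiki reachable from outside the model.
    juju("expose", "mediawiki")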

What the Future Holds for OpenStack

It is important to keep in mind that OpenStack is not a destination, but rather part of the scale-out journey to becoming cloud native. CIOs know they must have cloud as part of their overall strategy. From a long-term perspective, OpenStack will remain the key driver and enabler of hybrid cloud adoption. However, IT organizations will continue to struggle with service and application integration while working to keep their operational costs from rising too much. The good news is that companies like Canonical are developing the insight, solutions, and leadership organizations need to engage in the Big Software era. For more information about OpenStack or Canonical, please download our e-book, OpenStack Made Easy.