As technology changes rapidly within the industry, companies and organizations are trying to balance CAPEX and OPEX; it's probably safer to say they are looking to lower both CAPEX and OPEX, which, in turn, lowers their TCO [total cost of ownership].
As this becomes a common theme, if it isn't one already, Enterprises are entertaining the idea of a 'Cloud-like' experience internally rather than exclusively with a Cloud Provider.
If we step back and look at what 'Cloud' is from a high level: Cloud is Cloud Computing – an on-demand pool of resources [compute, storage, applications, and others] at one's disposal for a finite period of time [depending on the SLAs in place], offered by Cloud Service Providers over the Internet.
We can look at this from another point of view: everything you would normally purchase for your Datacentre, only it's offsite and you're paying for its use, monthly.
There are three main models for cloud computing, and each model represents a different part of the cloud computing stack. The diagram below shows what is managed by the Enterprise and what is managed by the vendor.
Amazon Web Services did an excellent job of explaining these models.
Each of these models can apply to an Enterprise, depending of course on the business drivers and how products are delivered to customers, but each comes with an attached cost.
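To make the split between Enterprise-managed and vendor-managed layers concrete, the commonly cited responsibility boundaries of the three models can be sketched as a simple lookup. The layer names below follow the usual textbook stack, not any specific vendor diagram:

```python
# Illustrative sketch of the cloud service models: which layers of the stack
# the Enterprise manages versus which the vendor manages. Layer names and
# split points are the commonly cited ones, not taken from any vendor diagram.

STACK = [
    "applications", "data", "runtime", "middleware",
    "os", "virtualization", "servers", "storage", "networking",
]

# For each model, everything before the split index is managed by the
# Enterprise; everything from the split index onward is managed by the vendor.
MODEL_SPLIT = {
    "on-prem": 9,   # Enterprise manages the whole stack
    "iaas": 5,      # vendor manages virtualization and below
    "paas": 2,      # vendor manages runtime and below
    "saas": 0,      # vendor manages everything
}

def managed_by(model: str) -> dict:
    """Return who manages what for a given service model."""
    split = MODEL_SPLIT[model]
    return {"enterprise": STACK[:split], "vendor": STACK[split:]}
```

For example, `managed_by("iaas")` shows the Enterprise keeping everything from the OS up, while the vendor owns virtualization, servers, storage, and networking.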
The graph below from Amazon Web Services illustrates capacity versus time for workload demand.
Does it make sense to have every component sitting with a Cloud Provider and paying a monthly cost (which can get outrageous if workloads are low and compute sits idle), or does it make sense to have everything On-Prem, waiting for workload demand (high CAPEX/OPEX costs)? One thing's for certain: there's a need for Software Defined…
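The trade-off above is ultimately a break-even calculation: linear pay-per-use cost versus a large up-front CAPEX plus ongoing OPEX. A back-of-the-envelope sketch, using entirely hypothetical placeholder prices rather than any vendor's actual rates:

```python
# Back-of-the-envelope sketch of the cloud vs On-Prem trade-off.
# All prices are hypothetical placeholders, not vendor quotes.

def cloud_cost(months: int, monthly_fee: float) -> float:
    """Pay-per-use: cost grows linearly with time."""
    return months * monthly_fee

def onprem_cost(months: int, capex: float, monthly_opex: float) -> float:
    """Up-front CAPEX plus ongoing OPEX."""
    return capex + months * monthly_opex

def breakeven_month(monthly_fee, capex, monthly_opex, horizon=120):
    """First month at which On-Prem becomes no more expensive than cloud,
    or None if that never happens within the horizon."""
    for month in range(1, horizon + 1):
        if onprem_cost(month, capex, monthly_opex) <= cloud_cost(month, monthly_fee):
            return month
    return None
```

With a hypothetical $5,000/month cloud bill versus $60,000 CAPEX plus $2,000/month OPEX, On-Prem breaks even at month 20; at low utilization the cloud's idle-compute premium shifts that point earlier, which is exactly the tension the paragraph above describes.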
It's that SD-'X' [Software Defined 'X'] factor that is becoming more compelling for Enterprises to have On-Prem. Because price-per-port is decreasing and 'pay as you grow' models have been introduced in the last few years, the large CAPEX cost shown in the graph above begins to shrink – which is very attractive to Finance teams.
With Virtualization, SD-WAN, SD-Storage, and SDN Overlays already bringing a 'Cloud-like' experience to the Enterprise, it's obvious one major piece is still missing… Can you guess?
The paradigm is shifting
As SD begins to ramp up, there's a paradigm shift in how we view the 'network'… again, this is a legacy term we need to remove from our vernacular; instead, we should talk about the Infrastructure Fabric. This Fabric needs to adapt to changes governed by the applications – the compute and storage they reside on, as well as what they need access to.
It's the Fabric that binds endpoints together. It's the Fabric that provides the reliable path to and fro between endpoints. Ultimately, it's the Fabric that drives business results. So how do we make it more adaptable to the needs of applications and their endpoints?
With its acquisition of Plexxi, HPE is taking the technology of a leading software-defined networking provider and transforming it into a Composable Fabric.
As Ric Lewis, HPE SVP & GM of the Software-Defined and Cloud Group, states:
‘With Plexxi’s technology, HPE will deliver a true Cloud-Like experience in the DataCenter’
Plexxi, now called HPE Composable Fabric [HPE-CF], can address the IaaS services companies and organizations are looking for by providing a platform that fully supports the needs of compute, storage, and their applications.
Another common theme in today’s IT world has been simplification with an eye toward achieving better agility, lowering costs, and optimizing performance.
You can see the evolution of IT infrastructure: from traditional data center architectures, which are complex and labour intensive… to hyperconverged compute and storage – more efficient, yet still a complex, albeit more manageable, spine/leaf architecture… to hyperconverged compute, storage, and fabric, where the Fabric is now part of the overall Software Defined solution, working in conjunction with compute, storage, and applications.
When looking at the Composable Cloud holistically, the Fabric is an integral part of the Composable Infrastructure…
HPE's Composable Fabric is a meshed, event-based architecture driven by workload intent, meaning application and/or infrastructure 'behaviour' dictates how endpoints interact, what the optimal path is, and so on. Without going into great detail, the underpinning is StackStorm, which follows the principle of IF This Then That [IFTTT].
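The IFTTT pattern itself is simple: rules bind a trigger and a condition (the "IF") to an action (the "THEN"), and an engine dispatches incoming events against those rules. The sketch below is a minimal, generic illustration of that pattern; the event names, conditions, and actions are hypothetical, and this is not StackStorm's actual API:

```python
# Minimal illustration of the IF-This-Then-That pattern that event-driven
# automation engines such as StackStorm are built around. Event names,
# conditions, and actions are hypothetical; this is not StackStorm's API.

class Rule:
    def __init__(self, trigger, condition, action):
        self.trigger = trigger      # event type this rule listens for
        self.condition = condition  # predicate over the event payload (the "IF")
        self.action = action        # callable to run when it matches (the "THEN")

class RuleEngine:
    def __init__(self):
        self.rules = []

    def register(self, rule):
        self.rules.append(rule)

    def dispatch(self, event_type, payload):
        """Run the action of every matching rule; return the results fired."""
        fired = []
        for rule in self.rules:
            if rule.trigger == event_type and rule.condition(payload):
                fired.append(rule.action(payload))
        return fired

# Hypothetical rule: IF a link's utilization crosses 80%,
# THEN rebalance traffic away from that link.
engine = RuleEngine()
engine.register(Rule(
    trigger="link.utilization",
    condition=lambda p: p["percent"] > 80,
    action=lambda p: f"rebalance traffic away from {p['link']}",
))
```

In a fabric context, the same shape lets infrastructure 'behaviour' (a congested link, a new workload appearing) automatically drive path and policy changes, which is the workload-intent idea described above.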
With HPE-CF there is integration with many of HPE's ecosystem partners, as well as API integration with some of the popular automation tools.
As the paradigm continues to shift towards adopting a general DevOps approach, Enterprises can become more flexible and more agile, and can increase and drive business value from a Composable Fabric that adapts to application needs.
With the intent of providing a dynamic, software-defined cloud architecture, HPE's Composable Fabric is poised as a solution that lowers CAPEX and OPEX costs while providing the highest level of agility possible…
Whether it's integration with SimpliVity [HCI], ProLiant DL servers [Composable Rack/Cloud], or direct integration with Synergy…
HPE's compute, storage, ecosystem, and now Fabric define a complete Software Defined solution for customers wanting that 'Cloud-like' experience for their NextGen Datacentres.
For more information check out:
Hope you enjoyed reading this blog!