Simplifying IT infrastructure

A solid foundation is a virtual one

As flexible working becomes more prevalent, virtual infrastructure can help IT teams to support and enable remote employees

Even before the outbreak of the COVID-19 coronavirus, the gig economy was populated not just by self-employed workers, but also by employees who chose to work remotely on the fly.

However, working from home or elsewhere can place an intense burden on an organisation’s IT infrastructure, which needs to both predict and handle user load and capacity, or risk downtime that could hurt productivity and the bottom line.

The long-term effects of COVID-19 are far from clear at this stage, but many argue that life may never be quite the same. Certainly, the stipulation to stay at home under lockdown in nations around the world has driven people to use an increasing number of online connectivity and collaboration applications, such as Microsoft Teams, Zoom, WhatsApp, Skype, Slack, Webex and so on.

Quite apart from the connectivity challenge this places on internet service providers, there is a corresponding profusion of data created by workers accessing data streams and subsequently sending information to each other. Where “normal” work activity is centralised close to company headquarters, the new normal is altogether more distributed and more virtual. Consequently, we must now manage increasingly fragmented and unpredictable data workloads. 

Paradoxically, for such virtual business to run on a solid basis, the foundations need to be virtualised through techniques including hyperconvergence and software-defined architecture design.

Virtual, modular, composable

One of the core challenges with the recent drive to remote working is the data displacement it has caused. For the most part, people are doing the same jobs, but they are connecting in different ways, at different times and they are doing so across different application channels. All of which leads to data bottlenecks, which have a direct knock-on effect on the performance of applications and services in use.

At the start of Europe’s COVID-19 lockdown, we saw Microsoft’s flagship collaboration software tool go down. Microsoft Teams experienced what the company called “messaging-related functionality problems” in Europe. In other words, the messaging software itself was not at fault; the underlying IT infrastructure substrate it ran on couldn’t cope with the influx and throughput of information it had to process.

The antidote, or at least part of the cure, to these IT operational challenges lies in reflecting virtualised working practices, such as video conferencing and collaborative document editing, with an underlying IT infrastructure that is itself virtualised.

Using software-defined infrastructure controls, including hyperconvergence, enterprises can create a more modular computing platform capable of balancing the new workloads that stem from current pressures. Businesses need to realise that they don’t just “buy a chunk of IT” and power it up anymore.

The new world of work demands more changeable, composable and individually configurable portions of processing, data storage, networking control and more. Such scalability and versatility provide the foundation for more contemporary, post-millennial IT advancements such as big data analytics, artificial intelligence, with the machine-learning that drives it, and ultimately quantum computing functions when they finally arrive.

[Infographic: modernising IT infrastructure is critical to ensuring business success]

[Infographic: IT directors perceive multiple benefits of infrastructure modernisation]

As a service

Users tap into these cloud services, quite literally on an as-a-service basis. But being able to deliver the requisite service levels requires a more flexible and intelligent virtualised back end. Taking a modular software-defined approach to delivering the infrastructure on which these services are built allows organisations to scale them upwards when needed and to reduce their capacity when demand is lower.
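
To make that elasticity concrete, here is a minimal sketch of the sort of scaling decision such a platform might automate; the thresholds, bounds and instance counts are illustrative assumptions, not any particular vendor’s defaults.

```python
from statistics import mean

# Illustrative autoscaling sketch: grow or shrink a pool of virtual
# instances based on recent utilisation, within fixed bounds.
SCALE_UP_THRESHOLD = 0.75    # sustained utilisation above this adds capacity
SCALE_DOWN_THRESHOLD = 0.30  # sustained utilisation below this removes capacity
MIN_INSTANCES, MAX_INSTANCES = 2, 32

def desired_capacity(current: int, recent_utilisation: list) -> int:
    """Return the instance count the pool should converge towards."""
    load = mean(recent_utilisation)
    if load > SCALE_UP_THRESHOLD:
        return min(current + max(1, current // 2), MAX_INSTANCES)
    if load < SCALE_DOWN_THRESHOLD:
        return max(current - 1, MIN_INSTANCES)
    return current

# Example: a morning spike in collaboration traffic, then a quiet evening
print(desired_capacity(4, [0.82, 0.79, 0.88]))  # -> 6
print(desired_capacity(6, [0.12, 0.20, 0.18]))  # -> 5
```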

Going deeper, virtualising IT infrastructures means you are more easily able to perform tasks like patching applications and databases while they continue to deliver live access for users. Because it’s virtual, workloads can be moved from one area to another and essential maintenance can be carried out.
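
The sketch below illustrates the principle with a toy in-memory cluster model; the Host class and the live_migrate and rolling_patch helpers are hypothetical stand-ins for illustration, not a real hypervisor API.

```python
from dataclasses import dataclass, field

# Hypothetical rolling-maintenance sketch: a toy in-memory model of a
# cluster, standing in for whatever the virtualisation platform provides.
@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)
    in_maintenance: bool = False

def live_migrate(vm: str, source: Host, destination: Host) -> None:
    """Move a workload between hosts while it stays available to users."""
    source.vms.remove(vm)
    destination.vms.append(vm)

def rolling_patch(cluster: list) -> None:
    """Patch hosts one at a time; workloads keep running throughout."""
    for host in cluster:
        spares = [h for h in cluster if h is not host and not h.in_maintenance]
        for vm in list(host.vms):
            target = min(spares, key=lambda h: len(h.vms))  # least-loaded spare
            live_migrate(vm, host, target)
        host.in_maintenance = True
        # apply firmware, hypervisor and OS patches here, then return to service
        host.in_maintenance = False

cluster = [Host("node-a", ["teams-gw", "db-01"]), Host("node-b", ["vdi-07"]), Host("node-c")]
rolling_patch(cluster)
print([(h.name, h.vms) for h in cluster])
```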

The very nature of software-defined networking, or SDN, means the IT department can make sure user requests, and increasingly those made by smart devices and machines, are not all hitting the same entry point on the network and leading towards a potential degradation of service performance.
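
A simplified illustration of that load-spreading idea follows; the entry-point names and the least-loaded heuristic are assumptions for the sake of example, and a real SDN controller would programme equivalent behaviour into the network itself.

```python
import heapq

# Simplified illustration of spreading requests across network entry points.
def assign_requests(entry_points: dict, requests: list) -> dict:
    """Send each request to whichever entry point currently has the least load."""
    heap = [(load, name) for name, load in entry_points.items()]
    heapq.heapify(heap)
    placement = {}
    for request in requests:
        load, name = heapq.heappop(heap)        # least-loaded entry point
        placement[request] = name
        heapq.heappush(heap, (load + 1, name))  # account for the new connection
    return placement

print(assign_requests({"edge-lon": 3, "edge-ams": 1, "edge-fra": 1},
                      ["user-42", "sensor-17", "user-43"]))
```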

Given COVID-19’s storyline so far, many of us will be thinking of the time when normal life, or something close to it, returns. This would be the point at which some of these virtualised infrastructure blocks could be scaled back. More likely, and more efficient, is a scenario where these compute resources are repurposed to provide us with the ability to perform the same levels of work, but now in a more intelligently defined way.

Keep it simple

At a time of rapidly increasing technological change, it is important to address IT complexity by keeping infrastructure simple

Technological innovation continues to be pivotal in the growth of organisations worldwide. However, alongside innovation often sits increasingly sophisticated and complex IT.

Research published in 2019 noted that digital transformation, migration to the enterprise cloud and increasing customer demands are creating a surge in IT complexity, and in the associated costs of managing it.

Indeed, IT departments are finding it hard enough to keep the lights on, let alone respond to the needs of the line of business. As this complexity continues to grow, almost three quarters of chief information officers (CIOs) say it could soon become extremely difficult to manage performance efficiently.

“Simply put, managing complex IT requires more people, more investment in technical training and more IT infrastructure,” says Lee Wilkinson, technology lead, datacentre and edge practice, at IT services provider Insight.

“This in turn means complex environments cost more, in both time and money, to power, cool, operate, maintain and monitor. The more complex the IT environment, and the bigger the mix of individual products and software solutions in use in a typical customer infrastructure, the higher the overall cost of ownership becomes.”

For many organisations, IT complexity is compounded by their use of multi-tier, multi-vendor solutions. Such approaches stifle their ability to be agile, a necessity in a fast-changing and competitive business landscape, as they often require long-term leases to be signed, which makes it harder to upgrade and change systems on the go in line with the needs of the business.

“Multi-tier, multi-vendor solutions require individual design, management, updates and security patching to ensure they stay available and secure. Multiple solutions are not always designed to run together straight out of the box. While some do integrate, there is always a management overhead and cost associated with making this happen,” says Wilkinson.

Simplify provisioning, support and upgrades

He cites a typical multi-tier architecture of network, server and storage – the backbone of the IT virtualisation infrastructure most organisations use – as an example. If the network, server and storage elements are all separate products, a typical IT team will have to perform three times as much work in deployment, monitoring, upgrading and management. 

“Each product will demand its own skillset and need multiple people to cover operational shifts and ensure the organisation possesses all the skills required. As organisations add more and more complexity to an infrastructure, this scenario extrapolates outwards to the point where only having to manage three separate products would seem an unattainable dream.” 

This type of classic three-tier datacentre architecture has historically dominated in the enterprise, with IT vendors becoming dependent on three-year investments, support costs, and a renewal and upgrade process. This has seen innovation become the victim of profit, says Dominic Maidment, technology architect at business energy provider Total Gas & Power.

“The sensible thing to do is simplify provisioning, support and upgrades through one software-defined datacentre abstraction layer, standardise on hardware and spread the talent of the teams so you can react when you have a genuine crisis or unexpected challenge,” he says.

As technology becomes more advanced, organisations need to ditch IT complexity and embrace simplicity to become more agile. This will mean streamlining processes and working with smaller teams. As the environment becomes simpler, teams can begin to pull back from resource-intensive, repetitive tasks and instead refocus on business needs and priority projects.

It also means using the right technology that can help you drive success. Thankfully, IT is moving away from siloed servers, storage, data and processes and is embracing converged, virtualised technologies. Rather than having to go through a lengthy procurement process, organisations can instead use software-defined solutions that have been designed to add simplicity.

“Moving away from dependencies on siloed, complicated technology tiers makes total sense if you can provide a unified view for configuration, maintenance and support, and you can empower the infrastructure teams to provision through the same abstraction layer and reduce the overall support exposure,” says Maidment.

Address IT complexity

Barrie Desmond, senior vice president of marketing and communications at IT distributor Exclusive Networks, says the outlook is optimistic, given the number of digital natives and millennials now making up an increasing proportion of the workforce. They are more inclined to automate and trust previously untouchable functions within the IT department, and organisations are now more willing to buy “outcomes”.

For even greater simplicity, he argues, there is a need for organisations to embrace more managed services or as-a-service elastic consumption models, which are not only right-sized dynamically, but offer evergreen solutions that can change and adapt as business needs evolve.

“Moving to these more dynamic, virtualised elastic-based consumption models will happen, but will take a change in mindset. Buyers will need to subtly change their buying approach from ‘buy and adopt’ to ‘subscribe and adapt’. However, it’s nothing we haven’t seen before and this change will come in time,” says Desmond.

“Cloud subscription models with no or low commit – virtualised infrastructure, VDI [virtual desktop infrastructure], storage and so on – that are acquired through intelligent platforms offer everything from self-service to automated deployment, licence management and renewals and even customer support credits. 

“This is what we term ‘platform economy’ and will transform the way in which technology and services are consumed by users less interested in ownership and know-how, and more interested in renting, results and frictionless experiences.”

Bobby Cameron, vice president and principal analyst serving CIOs, at Forrester, adds: “Technology complexity has different meanings for different CIOs, depending on the level of business and technology maturity of their firms. IT simplification, then, means the CIO must continuously reset their organisation’s approach to parallel the pace of business change.” 

There will be challenges ahead for enterprises, undoubtedly. Just one example is the emergence of multi-cloud: the options that now exist for organisations to store and manage their applications and workloads mean there is an urgent need to cut through some of the complexity surrounding cloud.

“If simplicity isn’t near the top of the agenda, it should be,” says Wilkinson. “It costs money and time that organisations cannot afford, especially as they struggle to adapt to rapid changes in the world outside. Addressing IT complexity and realising the benefits of simplicity has to be a priority.”

Cloud Nine: The importance of cloud infrastructure

The use of the cloud to support organisational IT needs is growing, with many businesses now utilising a public or private cloud

[Infographic: how businesses are currently using the cloud]

And there are a number of reasons driving this adoption...

[Infographic: the drivers for cloud adoption]

Whether they use a public or private cloud (or a hybrid mix of both), businesses see real benefits from cloud adoption:

[Infographic: a hybrid cloud infrastructure has enabled Deutsche Bank to run ...]

However, cloud adoption does not come without its challenges...

[Infographic: top six challenges of cloud adoption]

But this isn't deterring organisations from pressing on with cloud adoption, as spend on Infrastructure-as-a-Service (IaaS) grows year-on-year.

The cloud infrastructure market segment looks set to keep growing.

[Chart: worldwide public cloud system infrastructure services (IaaS) revenue forecast, 2018 to 2022, in billions of US dollars]

Commercial feature

No bumps on the hyperconvergence highway

With the right advice and hyperconverged IT infrastructure, businesses can reap the full benefits of cloud computing

As organisations move their IT infrastructure and data to the cloud, they can keep growing while reducing both their datacentre footprint and the resources required to maintain and run the infrastructure they need to operate.

From cost-savings to environmental benefits that reflect well on corporate social responsibility, embracing hyperconvergence within the realms of cloud computing makes complete business sense.

But let’s take a step back and remind ourselves what hyperconvergence is and why it came about in the first place. In the simplest terms, we can describe hyperconvergence as a means of building IT infrastructure from software-centric first principles. 

The public cloud backbone is supplied by cloud service providers, and many companies will also want to opt for a degree of on-premises private cloud. Many will also straddle the middle ground and build hybrid clouds.

Hyperconvergence gives us a means of defining IT infrastructure across and throughout various cloud-computing domains. It means the individual components of processing, data storage, networking and virtualisation are pre-engineered as modular blocks that can be brought together to suit individual use-case requirements.
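
As a rough sketch of what “modular blocks” can mean in practice, the example below composes a use case from identical building blocks; the Block fields and node sizes are purely illustrative assumptions, not a product specification.

```python
from dataclasses import dataclass

# Illustrative only: hyperconverged building blocks expressed as data, so a
# use case is composed from standard modules rather than bespoke hardware tiers.
@dataclass(frozen=True)
class Block:
    vcpus: int          # processing
    storage_tb: int     # data storage
    network_gbps: int   # networking
    hypervisor: str     # virtualisation layer

def compose(block: Block, count: int) -> dict:
    """Scale out by adding identical blocks instead of redesigning each tier."""
    return {
        "vcpus": block.vcpus * count,
        "storage_tb": block.storage_tb * count,
        "network_gbps": block.network_gbps * count,
        "hypervisor": block.hypervisor,
        "nodes": count,
    }

standard_node = Block(vcpus=64, storage_tb=20, network_gbps=25, hypervisor="bundled")
print(compose(standard_node, count=4))   # e.g. a VDI cluster for remote workers
```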

How did we get hyperconverged?

The reason we started innovating towards hyperconverged infrastructures (HCIs) in the first place is that computers in the so-termed “white-box” era (think pre-millennial, IBM-standard desktops) simply didn’t have the grunt to run the types of modern workloads we were building as cloud computing started to emerge.

So, the shift to cloud started, but there was, and still is, a selection box of vendor products and tools available. This provides choice of course, but it also presents a challenge. When you build everything from software, you still need a set of rock-solid underlying standards upon which to build your computing foundations.

When you buy storage from one vendor, networking from another and server infrastructure from yet another source, the result is a multiplicity of permutations that need to be managed. Organisations end up spending more time on interoperability, maintenance, upgrades, patching and so on. Consequently, they spend less time building the application functionality they need, the part that really adds value to the business. Hyperconvergence provides the antidote to these disconnects.

Getting your weekends back

Now that we can look at building applications that run on a more powerful infrastructure backbone, we can move to operating with software that can predict which data sets any application will need at any given time from different nodes around the system. It’s at this point that we can start to do some smart things.

If organisations standardise on a distributed cluster of cloud-computing resources, the engineers who look after these systems can finally start to get their weekends back. Typically, systems administrators and other operations staff must spend hours maintaining, tuning and upgrading every individual element of the stack. But through the use of HCI, all these aspects can be managed by software-defined controls. 

The benefits stretch wider still. From a skills perspective, you don’t need to be a storage expert, or any other tech domain expert, because you can collapse those functions into one single operational team. This is not always a popular advancement initially; in a bank, for example, the storage engineering team carries a lot of weight. But these staff can, and should, be repurposed in roles where they can add more value.

[Infographic: reasons why CIOs are pursuing infrastructure modernisation]

Better performance

A further benefit of a hyperconverged platform is better performance through faster data access. The vast majority of storage advancements over the past few years have concentrated on reducing latency in the storage function, that is, how fast data can be moved into, and indeed out of, the data-storage layer.

But despite these efforts, there is still an inherent bottleneck if the data is on a remote storage device, some distance from the application trying to access it, however fast your network and storage devices are.

Placing the data on a storage device that resides on the same “system bus”, the computer’s internal transport system, as the application consuming it, whether that application runs as a virtual machine or a container, takes full advantage of the decreased latencies of those new storage devices and so drives the most value from the hardware investments. A software-defined platform uses machine-learning to predict where the data should be located to minimise latency, based on patterns of data access from a given application.
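
The flavour of that approach can be shown with a deliberately simple stand-in: instead of real machine-learning, the sketch below just counts which node’s applications read a dataset most often and suggests keeping the data local to that node. Node and dataset names are hypothetical.

```python
from collections import Counter, defaultdict

# Simplified stand-in for access-pattern-driven data placement: record which
# node each read comes from and keep the data local to its heaviest consumer.
class PlacementAdvisor:
    def __init__(self):
        self.reads = defaultdict(Counter)   # dataset -> {node: read count}

    def record_read(self, dataset, node):
        self.reads[dataset][node] += 1

    def preferred_node(self, dataset):
        """The node whose applications read this dataset most often."""
        counts = self.reads.get(dataset)
        return counts.most_common(1)[0][0] if counts else None

advisor = PlacementAdvisor()
for node in ["node-2", "node-2", "node-2", "node-1"]:
    advisor.record_read("orders-db", node)
print(advisor.preferred_node("orders-db"))  # -> node-2: keep the data on its local bus
```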

Hybrid cloud 

As hybrid multi-cloud environments become a de facto global industry standard, organisations are realising they need a combination of clouds in the mix if they want to compute with maximum efficiency. Hyperconvergence gives organisations the opportunity to navigate this information superhighway from a higher plane, so IT operators don’t need to worry about which workload goes where on which cloud cluster.

Companies setting out on cloud-migration programmes often go through a process of examining where they should run any particular workload for any individual virtual-machine instance. But application and database workloads rarely fit an off-the-peg size for any single cloud configuration, so a degree of load balancing is needed and the customer needs to have choice. 

In the longer term, applications will typically mature from needing one set of cloud resources to another. So a move from private to public, and vice versa, may happen several times throughout the application’s life cycle, depending on how static and predictable the workload any given application has to shoulder turns out to be. Once again, hyperconvergence enables companies to steer business interests without having to worry about these internal mechanics at a granular level.
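
One way to picture that lifecycle decision is the rough heuristic below: a steady, predictable demand profile suits reserved private capacity, while a spiky one benefits from public-cloud elasticity. The threshold and the use of the coefficient of variation are illustrative assumptions, not a published algorithm.

```python
from statistics import mean, pstdev

# Illustrative placement heuristic, not a vendor algorithm: decide whether a
# workload's recent demand profile suits private (steady) or public (bursty) cloud.
def suggest_placement(hourly_load: list, burstiness_threshold: float = 0.4) -> str:
    average = mean(hourly_load)
    if average == 0:
        return "public"                               # idle: pay only when it runs
    variability = pstdev(hourly_load) / average       # coefficient of variation
    return "private" if variability < burstiness_threshold else "public"

print(suggest_placement([70, 72, 68, 71, 69]))   # steady -> private
print(suggest_placement([5, 90, 3, 120, 8]))     # spiky  -> public
```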

Hyperconvergence in the boardroom

With this new state of the cloud nation, should we ask whether we will finally get to a point where the C-suite starts to discuss whether the organisation has an appropriate level of hyperconvergence applied to its IT stack? 

Actually, we absolutely hope not. Hyperconvergence offers a highly efficient lower substrate level of IT infrastructure utility that the boardroom never needs to discuss. Just as the board doesn’t worry about its facilities management function, with its gas, electricity and water supplies, a wholly efficient, fully abstracted hyperconverged backbone should just work, so the enterprise can get on with business. Navigated correctly, there are no bumps on the hyperconvergence superhighway.

Dom Poloniecki is the general manager for Nutanix in Western Europe and Sub-Saharan Africa

Data security and the hybrid cloud

Can a hybrid cloud approach enable businesses to combine the benefits of on-site data security with the advantages of flexibility and agility?

Organisations are inundated with choice when it comes to where they store and manage their critical workloads and applications. 

The public cloud in particular has seen enormous growth, with the worldwide public cloud services market forecast to grow 17 per cent in 2020 to $266.4 billion, according to Gartner.

Using the public cloud frees up internal resources, which means organisations don’t have the cost or burden of managing their IT infrastructure in-house. However, this convenience can come at the price of heightened security concern and risk.

“Independent research shows us that the top cloud security concern among two thirds of cybersecurity pros is an increased risk of data loss and leakage, while nearly half are worried about maintaining access controls and keeping the misuse of employee login credentials for cloud services in check,” says Chris Green, head of communications, Europe, Middle East and Africa, at (ISC)2, a non-profit organisation that specialises in training and certifications for cybersecurity professionals.

Exposed data

The same research shows that a third are struggling with issues of compliance, while a similar number are concerned about the lack of visibility they have into off-site infrastructure security. This includes concerns about exposing data via the cloud, or having data in the cloud exposed by the platform operator, and with it opening the organisation up to possible breaches of the European Union General Data Protection Regulation (GDPR) and similar legislation.

In 2019, 540 million records of Facebook users on an improperly secured AWS server were exposed. These records contained data that could be used to profile the users in great detail and included user IDs, account names, likes and comments. 

[Infographic: percentage of organisational workload running on public- and private-cloud platforms]

Organisations not only face the strict penalties now imposed for data breaches since the introduction of GDPR, but also huge additional financial damage associated with loss of business, customer loyalty and reputation.

“It becomes all too easy for an ‘out of sight, out of mind’ attitude to manifest among users and departments that do not see the wider organisational risk picture. Not to mention the cost considerations of allowing unchecked use of on-demand cloud application, storage and virtual machine services that bill by uptime or deployment, even if they are not being used,” says Green.

Hybrid cloud

One potential solution is to maintain a hybrid cloud model. According to Forrester, 74 per cent of enterprises describe their strategy as hybrid or multi-cloud. It is common for enterprises to generate and store data across a combination of on-premises, private-cloud and public-cloud resources.

“Where complexity is the enemy of security, it can make sense to keep the most critical workloads and data stores in-house or on a set of collocated services, while outsourcing edge or scale-intensive business processes and infrastructure to the cloud,” says Thomas Owen, head of security at cloud hosting provider Memset.

This, he says, allows organisations to take advantage of the benefits of cloud deployment at scale, while also maintaining a small-scoped, well-understood and highly secured estate internally.
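
A minimal sketch of that split might look like the policy below, where workloads are routed by data sensitivity and how spiky their demand is; the classification labels, threshold and example workloads are all hypothetical.

```python
# Illustrative hybrid-placement policy: keep the most sensitive data on the
# small, well-understood internal estate and push scale-intensive work to
# public cloud. Labels and workloads are hypothetical examples.
def hybrid_placement(workload: dict) -> str:
    if workload["data_classification"] in {"regulated", "secret"}:
        return "on-premises"          # tight control, small attack surface
    if workload["peak_to_average_ratio"] >= 3:
        return "public-cloud"         # elastic capacity for spiky demand
    return "private-cloud"            # everything else stays on the private side

workloads = [
    {"name": "payments-ledger", "data_classification": "regulated", "peak_to_average_ratio": 1.2},
    {"name": "image-rendering", "data_classification": "internal", "peak_to_average_ratio": 8.0},
    {"name": "intranet-portal", "data_classification": "internal", "peak_to_average_ratio": 1.5},
]
for w in workloads:
    print(w["name"], "->", hybrid_placement(w))
```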

“Even the most secure, high-tech, digitally protected bank vault still has a safe, a big lump of steel, somewhere at its heart. The cloud is a fantastic resource and an opportunity for businesses to access infrastructure and capabilities that would have been unimaginable a decade ago. The point is one size of cloud doesn’t fit all use-cases and sometimes simplicity brings its own kind of security and performance,” says Owen.

A hybrid approach that mixes private and public-cloud environments, including the benefits of virtualised infrastructure, enables organisations to combine the benefits associated with keeping data on-site, such as internal control of security and transparency, with all the flexibility and agility associated with the cloud.