High-Speed Ethernet

New Performance Pressure for Latest App-Driven Data Center Deals


Every network and application has its own characteristics and vulnerabilities to network impairments. Having a deep understanding of how apps behave across the range of expected scenarios is a critical step in assuring performance. Read how pre-deployment testing and verification will help relieve doubts and reveal the unknown.

A perfect storm of tech readiness, big data demands and an unstoppable trend toward cloud-hosted apps and services has been a boon for data centers.

Suddenly, these network operators are serving unprecedented network IT needs. They’re capitalizing on mass IoT deployments that are seeing physical devices, vehicles and home appliances sending a flurry of data for real-time processing.

As mobile network operators seek to strategically offload certain network management elements in the shift to 5G, a bevy of virtual network functions are being spun up constantly. Data centers have prepared to meet this demand. Ubiquitous 100G Ethernet technology has helped instill confidence in wide-scale high-speed availability, providing flexible breakout options and a high-density, cost-effective path to 400G and early 800G deployments.

And just in time.

The sizeable increase in the volume of data being constantly stored and accessed makes 100G not a nice-to-have, but a necessity.

Data center switches migrating to 25/50/100G Ethernet. Source: Dell’Oro July 2021 - Long Term Ethernet Switch Forecast

100G just a piece of the performance puzzle

Data center operators understand that 100G on its own isn’t adequate for meeting the exacting performance requirements that are accompanying these new opportunities.

After all, these data centers are not just offering the latest network speeds. They are fundamentally entering new areas of business. They’re having to become experts in network function virtualization (NFV), edge computing, webscale networks and network slicing architectures.

The revenue possibilities are limitless. But so are, potentially, the headaches that accompany them. That’s because these new lines of business also bring responsibility for maintaining strict quality of service (QoS).

Suddenly, the pressure is on to assure performance in line with contracted service level agreements (SLAs). The challenge here is that data center operators don’t have an efficient, accurate way to verify app performance under real network conditions. This stokes potential app failure scenarios, threatening lucrative new lines of business.

Why pre-deployment insights are key to a high-performing future

While performance has traditionally been a post-deployment concern, the high stakes for emerging apps being spun up in data centers demand better insight into the issues that may occur and the risk mitigation strategies needed to contain them.

Every network and application has its own characteristics and vulnerabilities to network impairments. Therefore, having a deep understanding of how apps behave across the range of expected scenarios is a critical step in assuring performance.

In our recent work with data center operators, we’ve begun to identify pre-deployment verification use cases in the areas of data center migration and data center interconnect:

  • Emulating customer networks to demonstrate risk mitigation. For enterprises conducting migrations, cost efficiencies must be weighed against the potential impact on mission-critical applications. By emulating customer networks ahead of deployment, the effects of common impairments such as latency and packet loss can be examined. When customers understand how real-world challenges arising from complex interactions between network components will be addressed, a plan can be developed for mitigating risk and demonstrating how SLAs will be met.

  • Assuring 100G performance. In data center interconnect scenarios, apps will be hosted remotely, with the high-speed network connectivity between locations representing a potential weak link. These deployment scenarios can require both east-west and north-south traffic flows, potentially introducing latency-driven performance issues. By emulating specific network environments and introducing latency and packet loss, application performance boundaries can be identified before deployment; a minimal scripted sketch of this approach follows this list. This helps reveal where implementing load balancing and WAN optimization technologies will support improved QoS.
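As one concrete illustration of the emulation approach described above, the sketch below shows how a basic impairment run might be scripted on a Linux test host using the standard iproute2 `tc`/netem facility and Python 3. It applies a fixed delay and random loss to an interface, times simple HTTP requests against an application endpoint before and during the impairment, then restores the interface. The interface name, endpoint URL, and impairment values are hypothetical placeholders, and this stands in for the purpose-built test tooling a production verification program would use.

```python
import subprocess
import time
import urllib.request

# Illustrative placeholders -- adjust for the environment under test.
IFACE = "eth0"                       # interface facing the emulated WAN link
TARGET = "http://app.example.test/"  # hypothetical application endpoint
DELAY_MS = 50                        # added one-way latency, in milliseconds
LOSS_PCT = 1                         # random packet loss, in percent
SAMPLES = 20                         # number of request timings per run


def apply_impairment():
    """Add a netem qdisc introducing fixed delay and random loss (requires root)."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
         "delay", f"{DELAY_MS}ms", "loss", f"{LOSS_PCT}%"],
        check=True,
    )


def clear_impairment():
    """Remove the netem qdisc, restoring the default queueing discipline."""
    subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)


def measure_response_times(samples):
    """Time simple HTTP GETs against the application under test."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
            timings.append(time.monotonic() - start)
        except OSError:
            timings.append(None)  # request failed or timed out under impairment
    return timings


def summarize(label, timings):
    """Print average response time and failure count for one run."""
    ok = [t for t in timings if t is not None]
    avg = sum(ok) / len(ok) if ok else float("nan")
    print(f"{label}: avg {avg:.3f}s, {len(ok)} successes, "
          f"{len(timings) - len(ok)} failures")


if __name__ == "__main__":
    summarize("baseline", measure_response_times(SAMPLES))
    apply_impairment()
    try:
        summarize(f"netem {DELAY_MS}ms / {LOSS_PCT}% loss",
                  measure_response_times(SAMPLES))
    finally:
        clear_impairment()  # always restore the interface
```

Sweeping the delay and loss values in a script like this is one simple way to map the application performance boundaries referenced above before any customer traffic is at stake.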

Removing the guesswork, instilling confidence

Data center operators stand at the threshold of massive revenue and service evolution opportunities. As they traverse this new terrain, pre-deployment testing and verification will help relieve doubts and reveal the unknown. Armed with the right insight, they will be able to make confident decisions about network architectures and the SLAs they’ll support.

Learn about the latest test solutions for ensuring predictable infrastructure performance.

Guest contributor: David Robertson, Product Manager, Calnex Solutions


Malathi Malla

Malathi Malla heads Spirent's Cloud, Data Center and Virtualization segment. She is responsible for product marketing, technical marketing and product management, driving go-to-market strategy for the company's cloud and IP solutions. She has more than 14 years of high-tech experience across Silicon Valley startups and large enterprises, including Citrix, IBM, Sterling Commerce (the software division of AT&T) and Comergent Technologies. Malathi also serves as Spirent's lead marketing representative in several open source communities, including the Open Networking Foundation and OpenDaylight. Connect with Malathi on LinkedIn or follow @malathimalla on Twitter.