Introduction
For decades, network architecture was designed around a simple exchange between a client and a server. As applications become more complex and distributed, however, the majority of data movement no longer involves the public internet at all.
Historically, the priority for any network administrator was the North-South path, which represents the data entering from a user and returning to them. But in today’s digital climate, the East-West path, or traffic moving laterally between servers, has become the dominant force. This shift represents a massive change in how modern facilities are built and managed.
This blog will answer the following questions:
- What exactly is east-west (internal) data centre traffic?
- How does internal east-west traffic differ from external north-south traffic?
- Why has the volume of internal data surpassed external requests?
- What infrastructure changes are required to manage this growth?
Understanding the Compass: Internal vs External Network Traffic
To understand the modern data centre, one must first look at the North-South and East-West terminology. These terms are derived from network diagrams, where external connections are typically drawn vertically (north to south) and internal connections are drawn horizontally (east to west).
North-South Traffic (External)
This refers to data that enters or leaves the data centre. When you open a browser in London and request a webpage hosted in a Mumbai facility, that request is North-South. It passes through the main gateway, firewalls, and edge routers to reach the internal network. While this was once the primary focus, it now accounts for a much smaller portion of total data centre activity compared to the internal environment.
East-West Traffic (Internal)
This is the communication between devices, virtual machines (VMs), or containers within the same facility or cluster. It never crosses the external gateway. Examples include a web server talking to a database, or data being replicated across storage nodes for redundancy. In modern cloud environments, east-west (E-W) traffic now accounts for the vast majority of all data centre traffic. This explosion of east-west traffic is driven by applications that are distributed across many different servers.
Why Internal Traffic is Taking Over
The reversal of traffic volumes is not accidental. Several technological shifts have fundamentally changed how applications are built and deployed.
1. The Death of the Monolith
In the past, applications were monolithic, meaning all functions lived on one large server. Today, applications are broken into microservices. A single action, like clicking the purchase button on an e-commerce site, might require the web server to talk to the inventory server, the payment server, and the shipping server, while simultaneously updating a logging database. Each of these interactions stays within the data centre, creating a massive amount of horizontal data flow.
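The fan-out described above can be sketched in a few lines. This is a purely illustrative model, not real service code: the service names and per-call latencies below are invented, and it assumes the internal calls happen sequentially so their delays simply add up.

```python
# Hypothetical sketch: one north-south "purchase" request fans out into
# several east-west calls. Names and latencies are illustrative only.

INTERNAL_CALLS_MS = {
    "inventory-service": 4,
    "payment-service": 12,
    "shipping-service": 6,
    "logging-db": 2,
}

def purchase_latency_ms(calls: dict) -> int:
    """Total added latency if the internal calls run one after another."""
    return sum(calls.values())

total = purchase_latency_ms(INTERNAL_CALLS_MS)
print(f"1 external request -> {len(INTERNAL_CALLS_MS)} internal calls, "
      f"{total} ms of east-west latency")
```

Even in this toy model, one external request generates four internal flows, which is why shaving milliseconds off each east-west link matters so much.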
2. Virtualisation and Hyper-convergence
With the rise of software-defined networking, physical servers now host dozens of virtual machines. These VMs frequently migrate between physical hosts for load balancing or maintenance. This migration, along with the constant chatter between virtualised components, stays entirely within the internal network. Modern data centres must be built to support these high-density virtual environments.
3. AI and Big Data Analytics
Artificial intelligence is perhaps the biggest driver of internal traffic today. Training a large language model involves thousands of GPUs (Graphics Processing Units) sharing massive datasets in real-time. This requires ultra-high bandwidth and extremely low latency between servers. This internal processing happens far away from the public internet.
The Infrastructure Challenge: Moving Away from Three-Tier Networks
Traditional data centre designs relied on a Three-Tier model consisting of the Core, Aggregation, and Access layers. In this hierarchy, the Access layer connects to the servers, the Aggregation layer (or Distribution layer) links various Access switches, and the Core provides the high-speed packet switching backbone for the entire network.
This structure was ideal for North-South traffic because data followed a clear vertical path. Requests from a user would flow through the Core, down to the Aggregation layer, and finally to the specific server at the Access layer. However, for internal East-West traffic, this design is inefficient. If Server A needs to communicate with Server B in a different rack, the data must often travel “up” through the Aggregation layer to the Core and then back “down” again. This creates unnecessary bottlenecks, increases latency, and puts a heavy burden on the Core switches.
To solve this, modern facilities are adopting the Leaf-Spine architecture. In this setup, every Leaf switch (connected to servers) is connected to every Spine switch. This ensures that any two servers are always only two hops away from each other, significantly improving the speed of east-west communication within the data centre. This flatter architecture is essential for reducing the delays that plague older network designs.
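The hop-count difference between the two designs can be made concrete with a small sketch. This is a simplified model under stated assumptions: "hop" means an inter-switch link traversed, and the three-tier figure assumes the worst case where inter-rack traffic must climb all the way to the Core.

```python
# Illustrative comparison of inter-rack switch hops in the two designs.

def three_tier_hops(same_rack: bool) -> int:
    """Worst case: access -> aggregation -> core -> aggregation -> access."""
    return 0 if same_rack else 4

def leaf_spine_hops(same_rack: bool) -> int:
    """Any two leaves are two hops apart: leaf -> spine -> leaf."""
    return 0 if same_rack else 2

print("three-tier, different racks:", three_tier_hops(False))
print("leaf-spine, different racks:", leaf_spine_hops(False))
```

Halving the hop count for every cross-rack flow is exactly where the latency win of the flatter fabric comes from.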
About Us
Invenia provides essential infrastructure services to support growing internal data needs. Through high-performance connectivity, modern network design, and scalable data centre solutions, we help organisations build fast, secure, and reliable internal networks across regions. Explore our wide range of services and connect with our team today!
Analytical Perspective: Security in a Horizontal World
The shift to internal traffic has significant security implications. Traditional security focused on the perimeter, which is the gateway where North-South traffic enters. However, if an attacker gets past the front door, they can move laterally (East-West) across the network with little resistance.
Understanding the unique characteristics of this traffic is vital for performance and security. Internal traffic patterns are often bursty and unpredictable, making them harder to monitor than steady external streams. To mitigate risks, modern data centres use micro-segmentation to treat every server as its own secure zone.
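The core idea of micro-segmentation, deny lateral traffic by default and only allow explicitly approved flows, can be sketched as a simple allow-list check. The zone names and rules below are invented for illustration; real implementations enforce this in the hypervisor, SmartNIC, or network policy layer.

```python
# Illustrative micro-segmentation sketch: east-west flows are denied by
# default, and only explicitly allowed zone pairs may communicate.

ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),   # web servers may call application servers
    ("app-tier", "db-tier"),    # application servers may call the database
}

def is_flow_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a lateral flow passes only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

print(is_flow_allowed("web-tier", "app-tier"))  # an expected internal path
print(is_flow_allowed("web-tier", "db-tier"))   # lateral movement, blocked
```

Under this model, an attacker who compromises a web server cannot reach the database directly, because that east-west path was never whitelisted.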
Conclusion
The data centre is no longer just a post office for external requests. It has become a bustling city where the vast majority of movement happens within the city walls. As east-west traffic volumes continue to outpace north-south traffic, the focus must shift toward internal bandwidth and low-latency switching. By understanding these patterns and investing in modern infrastructure through partners like Invenia, businesses can ensure their applications remain responsive. Optimising these internal routes is the key to efficient data centre networking in the modern age.
FAQs
- What is the difference between a Leaf switch and a Spine switch?
In a Leaf-Spine architecture, Leaf switches connect directly to servers and storage. Spine switches act as the backbone, connecting all Leaf switches together. This creates a non-blocking fabric where all servers can communicate with equal speed.
- Why is latency more critical for East-West traffic?
Because microservices involve many interactions to complete a single user task, even a few milliseconds of delay between internal servers can add up, causing a slow experience for the end-user.