Thinness and thickness are architectural concepts, to the point that both types of routers might even be virtualized. Strictly speaking, thin router and thick router are SDN (Software-Defined Networking) jargon rather than industry-standard language. Still, these terms are becoming more and more widely used, and they are so meaningful and evocative that they deserve an article of their own.
The first time I heard the terms “thin” or “thick” router was barely a few months ago. By that time, the recent O’Reilly book MPLS in the SDN Era: Interoperable Scenarios to Make Networks Scale to New Services had already been published. Luckily, Krzysztof Szarkowicz and I discuss thin and thick routers for pages and pages in the book, just with different words: we simply didn’t use that terminology!
OK, let’s lay some foundations before explaining what thin and thick routers are. When we talk about SDN, it is essential to fully master the underlay and overlay concepts. If you don’t, you may want to have a look at the book’s Chapter 10, whose title is precisely “Underlays and Overlays”. After that, you should find it much easier to go through this article.
Imagine that you need to build a modern greenfield data centre. Your colleagues are very proactive and have already deployed an IP fabric underlay, composed of silicon-based IP switches connected in a leaf-and-spine topology. Each leaf or spine IP switch belongs to its own private Autonomous System and speaks single-hop eBGP with its neighbouring IP switches. Granted, this is not the only way to build an IP fabric, but it is a frequently used and robust one.
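To make that numbering scheme a bit more concrete, here is a minimal Python sketch of such a fabric. The switch names and private ASNs are invented for illustration; they are not taken from the book or from any particular deployment.

```python
from itertools import product

# Hypothetical numbering: one private ASN per switch (all values are made up).
SPINES = {"spine1": 64512, "spine2": 64513}
LEAVES = {"leaf1": 64601, "leaf2": 64602, "leaf3": 64603, "leaf4": 64604}

def ebgp_sessions(spines, leaves):
    """Every leaf peers with every spine over a directly connected link (single-hop eBGP)."""
    for (leaf, leaf_as), (spine, spine_as) in product(leaves.items(), spines.items()):
        yield f"{leaf} (AS{leaf_as}) <-> {spine} (AS{spine_as})"

for session in ebgp_sessions(SPINES, LEAVES):
    print(session)
```

Because every switch sits in its own AS, plain eBGP path selection and readvertisement are enough to spread reachability across the fabric, with no IGP required.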
This fabric provides resilient and scalable IP connectivity between any edge IP hosts connected to leaf or spine switches. What are these edge IP hosts? Typically, hundreds or thousands of servers (with or without a hypervisor) and a couple of DC gateways that interconnect the data centre to the outside world.
These edge hosts exchange two types of IP traffic:
- plain IP control packets (for example, multiprotocol iBGP, or API-related traffic).
- IP packets that contain a tunnel header whose payload is a full end user/application packet or frame. For example, packets that are sourced from or destined to VMs running on one of the edge IP hosts.
These tunnelled IP packets have a rich structure. Think of Russian dolls: we start with the outermost header and work our way inwards, in order (a short sketch after the list shows the same nesting in code).
- The outermost header is an IP header whose “global” source and destination IP addresses are reachable within the IP fabric. We call this an underlay or transport header.
- Next comes an IP tunnelling header like GRE or UDP. This is one of the two overlay headers.
- Next comes another overlay header that carries a virtual identifier such as a service MPLS label or a VXLAN network identifier (VNI). The role of this identifier is somewhat similar to a VLAN in legacy networks, except that service MPLS labels and VNIs are only exposed to edge hosts: the IP fabric is totally unaware of them.
- Finally, the inner payload contains an L2 (Layer 2) or L3 (Layer 3) user frame/packet. Depending on whether the tunnelled payload is an L2 frame (with its Ethernet header) or a “naked” L3 packet, we have an L2 or an L3 overlay. In any case, the underlay is L3, because even a tunnelled L2 frame gets routed hop by hop through the underlay according to the outer IP header.
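As a concrete illustration of this nesting, here is a minimal sketch built with Scapy. The addresses and the VNI are made-up example values, and the VXLAN-over-UDP combination is just one of the possible encapsulations mentioned above.

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Outermost doll: underlay (transport) header, routed hop by hop by the IP fabric.
outer = IP(src="10.0.0.11", dst="10.0.0.22") / UDP(sport=49152, dport=4789)
# Overlay headers: the UDP tunnel above plus the virtual network identifier below.
vxlan = VXLAN(vni=5001)
# Innermost doll: the tenant's own L2 frame (so this is an L2 overlay).
inner = Ether(src="00:aa:bb:cc:dd:01", dst="00:aa:bb:cc:dd:02") / \
        IP(src="192.168.1.10", dst="192.168.1.20")

packet = outer / vxlan / inner
packet.show()  # prints the nested headers, outermost first
```

The IP fabric only ever looks at the outer 10.0.0.x header; the VNI and the tenant addresses remain invisible to it.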
By the way, do you need L2 connectivity between VMs? Well, the overlay takes care of that by simply tunnelling the original L2 frame. If this reminds you of L2VPN, you are ready for the first analogy.
We can legitimately compare a modern data centre to an ISP IP/MPLS core as follows:
- An IP switch is to a data centre like a P-router is to an ISP core.
- The set of edge hosts is to a data centre like the set of PE-routers is to an ISP core.
- A user VM or container running in an edge host is to a data centre like a CE is to an ISP network.
- L2 and L3 overlays are equivalent to L2VPN and L3VPN. To the point that very often the control plane is the same: multiprotocol BGP.
- Underlay IP headers are to a data centre like an MPLS transport label is to an ISP core.
- eBGP running between leaf and spine switches is equivalent to the IGP in an ISP core.
What is a Thick Router?
The concept applies to the edge hosts. Consider the hypervisors acting as PEs and hosting many CE-like compute entities (VMs or containers). If these hosts expose an API to orchestrators and implement a complete control plane (for example, multiprotocol iBGP), then they are thick routers. One example is Juniper’s MX, whether physical or virtual. Thick routers make sense at the data centre border (where the data centre connects to the WAN) or for virtual CPE applications. However, inside the data centre, thin routers are typically a better fit.
What is a Thin Router?
Modern data centres and cloud architectures embrace virtualization in the true sense of the word. Virtualization is not just about “running on an x86 platform” (which is a really narrow definition). Instead, virtualization is about having a pool of physical resources and using an orchestrator to decouple them from the physical layer. What physical resources? Compute resources (CPU, memory), storage resources and… network resources: and this is where SDN comes in.
Scalable cloud designs dedicate specific hosts to different functions:
- Overlay controllers implement the API and control plane functions. For example, they can speak multiprotocol iBGP and in this way signal overlay routing or bridging information (by setting the BGP next-hop attribute to the appropriate compute host IP address). Controllers send the result of their route computations, as well as security policies, to the compute hosts (a minimal sketch of this idea follows the list). There are typically a handful of controllers forming a cluster in each data centre.
- Compute hosts implement the forwarding plane functions. They host the tenant VMs (or containers) and are capable of pushing and popping overlay encapsulation headers. They have a slave control plane, depending on the controller for route and policy programming. There are typically hundreds or thousands of compute hosts in each data centre. These compute hosts are what we call thin routers.
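To make the controller/compute split concrete, here is a minimal, hypothetical Python sketch of the kind of state a controller might push to a thin router: overlay prefixes whose next hop is the underlay address of the compute host where the destination VM lives. The class name, fields and values are invented for illustration and do not correspond to any particular controller’s API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OverlayRoute:
    vni: int            # virtual network identifier (or a service MPLS label)
    prefix: str         # tenant destination, as seen in the overlay
    next_hop: str       # underlay IP of the compute host hosting the destination VM
    encapsulation: str  # e.g. "vxlan" or "mpls-over-udp"

# Routes a controller might program into every compute host (thin router).
routes = [
    OverlayRoute(vni=5001, prefix="192.168.1.20/32", next_hop="10.0.0.22", encapsulation="vxlan"),
    OverlayRoute(vni=5001, prefix="192.168.1.30/32", next_hop="10.0.0.23", encapsulation="vxlan"),
]

def lookup(dest_ip: str, vni: int, table: List[OverlayRoute]) -> Optional[OverlayRoute]:
    """Host-route lookup only, for brevity: the thin router just applies what it was given."""
    return next((r for r in table if r.vni == vni and r.prefix == dest_ip + "/32"), None)

print(lookup("192.168.1.30", 5001, routes))
```

The point of the sketch is the division of labour: the thin router does no route computation of its own; it simply encapsulates towards whatever next hop the controller programmed.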
We are ready for Analogy #2! In this case, we compare the overlay of a data centre (based on the thin router model) to a multi-component high-end router chassis like a Juniper MX Series or a Cisco ASR.
- Controllers are to a data centre overlay like controller cards (for example, Routing Engines in MX series) are to a multi-component router chassis.
- Compute hosts are to a data centre overlay like line cards are to a multi-component router chassis.
- The IP fabric is to a data centre overlay like the switching fabric is to a multi-component router chassis.
Curious to see all this in action? Have a look at MPLS in the SDN Era: Interoperable Scenarios to Make Networks Scale to New Services!