BLOG: Modernizing Multiple Datacenter Interconnectivity: Implementing EVPN/VXLAN as the Datacenter Interconnect in a Cisco Nexus 9K Environment

December 14th, 2022
Robert Kinney
Sr. Network Systems Engineer

A while back, we had a customer that needed a datacenter network refresh. Their network consisted of two datacenters, about a mile apart, connected by a pair of Nexus 7010s in vPC. This provided the necessary layer 2 connectivity between the two datacenters, with HSRP as the first-hop redundancy protocol. While this gave them the needed layer 2 interconnect and redundancy between the datacenters, they did not have full redundancy within each datacenter.

Cisco ACI was of interest, but the customer wasn’t quite ready to bite the bullet of full automation. Instead, we provided them with a fully redundant pair of ACI-ready datacenters: in each datacenter, a pair of 100G N9K “Spine” switches in vPC connecting to three or four pairs of 1/10/25G N9K “Leaf” switches (each pair in vPC), with a full mesh of L3 DCI interconnects between the four “Spine” switches. This gave them full redundancy both within and between the datacenters while maintaining the look and feel of a traditional network (NX-OS, vPC, etc.), and it saved them the cost of acquiring and deploying APICs and of learning to manage an automated network.

EVPN/VXLAN was chosen as the overlay technology to provide layer 2 connectivity between the datacenters, along with Anycast Gateway as the FHRP. EIGRP was chosen as the routing protocol for the underlay; OSPF and IS-IS were the other candidates. EIGRP won out because it was already running on other parts of the network, and there was no reason to introduce another IGP and all the nuances that go along with it.

What is VXLAN?

VXLAN, or Virtual Extensible LAN, is an overlay technology providing layer 2 connectivity over a layer 3 underlay network. It uses MAC-in-IP/UDP tunneling encapsulation to extend layer 2 across the layer 3 datacenter interconnect (DCI) between VXLAN Tunnel Endpoints (VTEPs). The VTEPs in this case reside on each of the N9K Spine switches, which connect to each other over the DCI.

The VTEP is the interface between the local LAN and the IP network. It discovers remote VTEPs, learns remote MAC/IP-to-VTEP mappings, and performs VXLAN encapsulation and decapsulation of the traffic sent between hosts in the two datacenters.
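
For reference, each original Ethernet frame is wrapped in a new set of headers before crossing the DCI, adding roughly 50 bytes of overhead (so the underlay MTU must be sized accordingly):

Outer Ethernet | Outer IP (VTEP to VTEP) | Outer UDP (destination port 4789) | VXLAN header (8 bytes, including the 24-bit VNI) | Original Ethernet frame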

 

How to Enable the Underlay Network

The underlay network is the physical network that connects everything together. Its only purpose is to provide IP connectivity between the VTEPs. An IGP, such as EIGRP, OSPF, or IS-IS, advertises the VTEP source addresses and provides load balancing across redundant links. The underlay network is also called the transport network.

The following are the steps necessary to enable the underlay network.

Enable the underlay IGP feature:

The IGP chosen for the underlay was EIGRP. The first step is to enable the EIGRP feature on all 4 N9K Spines:

feature eigrp

 

Define the EIGRP process on all 4 N9K Spines:

router eigrp 1

Don’t forget to add the L3 DCI interfaces to the process, as in the sketch below.
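
As a minimal sketch (the interface number and addressing below are placeholders, not the customer’s actual values), each point-to-point L3 DCI interface would look something like this:

interface Ethernet1/1
! example values only
description L3 DCI link to DC2 N9K-1
no switchport
ip address 192.0.2.1/31
ip router eigrp 1
no shutdown
!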

 

Define the VTEP loopback:

VTEPs in vPC not only need a unique IP address to identify themselves; the vPC pair also needs a shared IP address, configured as a secondary address, to be used as the Anycast VTEP address:

DC1 N9K-1:

int loopback0
description VTEP IP Address
ip address 1.1.1.1/32
ip address 10.10.10.10/32 secondary
ip router eigrp 1
ip passive-interface eigrp 1
!

DC1 N9K-2:

int loopback0
description VTEP IP Address
ip address 1.1.1.2/32
ip address 10.10.10.10/32 secondary
ip router eigrp 1
ip passive-interface eigrp 1
!

DC2 N9K-1:

int loopback0
description VTEP IP Address
ip address 2.2.2.1/32
ip address 20.20.20.20/32 secondary
ip router eigrp 1
ip passive-interface eigrp 1
!

DC2 N9K-2:

int loopback0
description VTEP IP Address
ip address 2.2.2.2/32
ip address 20.20.20.20/32 secondary
ip router eigrp 1
ip passive-interface eigrp 1
!
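
Before moving on to the overlay, it’s worth confirming that the underlay is healthy. On each Spine, verify the EIGRP adjacencies and reachability to the remote Anycast VTEP address, for example (from a DC1 Spine; swap the addresses on the DC2 side):

show ip eigrp neighbors
show ip route 20.20.20.20
ping 20.20.20.20 source 10.10.10.10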

Configuring the Overlay Network

The overlay is the virtual network that runs on top of the underlay. EVPN is the overlay control plane protocol used to exchange host information. It uses BGP extensions to exchange end host MAC and IP address reachability information between the VTEPs.

The following outlines the necessary steps for configuring the overlay network.

BGP is used as part of the overlay control plane.

Enable the BGP feature:

feature bgp

The VN-Segment feature configures the switch for VXLAN domains. It allows for VLAN to L2 VNI mapping.

Enable the VN-Segment feature:

feature vn-segment-vlan-based

Enable the overlay VXLAN feature:

feature nv overlay

Enable the EVPN Control Plane on all 4 Spine switches:

nv overlay evpn

If not enabled on the Spine switches already, enable the L3 SVI feature:

feature interface-vlan
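
A quick way to confirm that all of the required features are active (the exact feature names in the output can vary slightly by NX-OS release):

show feature | egrep "bgp|eigrp|vnseg|nve|interface-vlan"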

How to Enable the Anycast Gateway Feature

VXLAN EVPN Distributed Anycast Gateway is a default-gateway addressing feature that allows the same gateway IP and MAC address to be used across all switches in the VXLAN network. With this feature, regardless of where an end host sits, it always sends its traffic to the closest next-hop gateway. This feature replaces FHRPs like HSRP or VRRP. The following command, configured on all 4 Spine switches, enables the Anycast Gateway feature and defines the shared virtual MAC address (the value itself is arbitrary).

Enable the Anycast Gateway Feature:

fabric forwarding anycast-gateway-mac 0000.2222.3333


Configure the BGP process.

iBGP is used, and the adjacencies form a full mesh between the loopback addresses of all 4 Spine switches. BGP extended communities are sent to support EVPN:

DC1-N9K-1:

router bgp 65535
router-id 1.1.1.1
neighbor 1.1.1.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 2.2.2.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 2.2.2.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
!

DC1-N9K-2:

router bgp 65535
router-id 1.1.1.2
neighbor 1.1.1.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 2.2.2.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 2.2.2.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
!

DC2-N9K-1:

router bgp 65535
router-id 2.2.2.1
neighbor 2.2.2.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 1.1.1.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 1.1.1.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
!

DC2-N9K-2:

router bgp 65535
router-id 2.2.2.2
neighbor 2.2.2.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 1.1.1.1
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
neighbor 1.1.1.2
remote-as 65535
update-source loopback0
address-family l2vpn evpn
send-community
send-community extended
!
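
Once all 4 Spines are configured, each switch should show three established iBGP sessions under the L2VPN EVPN address family:

show bgp l2vpn evpn summary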

Creating the VRF Overlay VLAN

EVPN/VXLAN supports multi-tenancy: each tenant has its own VRF, with a VLAN and SVI associated with it. Since this is not a multi-tenant environment, only one of each is needed. A Layer 3 VNI is associated with this VLAN, and its SVI is associated with the VRF. This is done on all 4 Spine switches:

Create the Overlay VLAN and associate it to a VNI:

vlan 3000
name Overlay_Vlan
vn-segment 3000
!

Define the overlay SVI and associate it with the VXLAN VRF:

interface vlan 3000
description overlay SVI
no shutdown
vrf member vxlan-vrf
no ip redirects
ip forward
no ipv6 redirects
!

Create the VXLAN VRF Context and associate the L3 VNI.

All SVIs and the uplinks are associated with this VRF. This is done on all 4 Spine switches:

vrf context vxlan-vrf
vni 3000
rd auto
address-family ipv4 unicast
route-target both auto
route-target both auto evpn
!

Define the NVE interface

This is the VTEP interface. It is a virtual interface that takes its /32 source IP address from the loopback, and it is how the VTEP reaches the overlay network LAN segments. On this interface, BGP is defined as the mechanism for host reachability advertisement, and the Layer 3 VNI (the VNI associated with the overlay VLAN) is bound to the VXLAN VRF. This is configured on all 4 Spine switches:

interface nve1
no shut
source-interface loopback 0
host-reachability protocol bgp
member vni 3000 associate-vrf
!
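
With the NVE interface up on all 4 Spines, the VTEPs will discover each other once VNIs are active and EVPN routes are being exchanged. Useful checks at this stage:

show nve interface nve1
show nve peers
show nve vni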

Final Step of Pre-Configuration

The final step of the pre-configuration is to add the VXLAN VRF to the BGP process and enable the advertisement of EVPN routes. This is done on all 4 Spine switches.

Add the VXLAN VRF to the BGP Process:

router bgp 65535
vrf vxlan-vrf
address-family ipv4 unicast
advertise l2vpn evpn
!

Adding VLANs and VNIs and Enabling the Control Plane

Now that the EVPN/VXLAN pre-configuration is done, it’s time to start adding VLANs to be stretched across the DCI. Each VLAN is associated with a corresponding VNI. Each SVI is configured under the VXLAN VRF, the next-hop gateway IP address (the same on all Spines) is configured, and the SVI is associated with the Anycast Gateway.

Stretch VLAN configuration and VNI Association:

vlan 201
name VXLAN_Test1
vn-segment 201
!

SVI configuration:

interface vlan 201
description VXLAN_Test1
no shutdown
vrf member vxlan-vrf
no ip redirects
ip address 10.10.201.1/24
no ipv6 redirects
fabric forwarding mode anycast-gateway
!

Next, the VNI needs to be added to the NVE interface. This is done for each VLAN that needs to be stretched across the VXLAN fabric. Under the VNI, ARP suppression is enabled, and BGP is set as the ingress-replication protocol for BUM traffic. This is done on all Spine switches.

Add the VNI to the NVE interface:

interface nve1
member vni 201
suppress-arp
ingress-replication protocol bgp
!

The final step is to enable the EVPN control plane for layer 2 services on the VNIs. Under each VNI, the RD and the RT import and export policies are configured; these are auto-generated. This is done on all 4 Spine switches.

Enable EVPN on the VNIs:

evpn
vni 201 l2
rd auto
route-target import auto
route-target export auto
!

For each remaining VLAN to be stretched, simply configure the VLAN, associate it with a VNI, configure its SVI under the VXLAN VRF, add the VNI to the NVE interface, and add it under the EVPN Control Plane. That’s it. A few verification commands follow below.
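
To verify a stretched VLAN end to end, a sketch of useful commands (using the example VNI 201 from above; the vni-id option is available on recent NX-OS releases):

show nve vni
show l2route evpn mac all
show bgp l2vpn evpn vni-id 201

MAC addresses of remote hosts learned over the fabric should show the remote Anycast VTEP address as the next hop.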

Additional Notes

For our implementation we had the luxury of adding the new N9K VXLAN environment in parallel with the existing N7K/5K environment and connecting it directly to the rest of the network. I added all external L3 interfaces to the vxlan-vrf VRF, so there was no need to worry about route leaking or routing in general. All I had to do was add the VXLAN VRF to the EIGRP process and add all external L3 interfaces (excluding the DCI interfaces) to the EIGRP process, roughly as sketched below. Easy-peasy. The physical migration from the existing environment to the new one is another story.
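
As a rough sketch of that last part (the uplink interface and addressing below are placeholders), the VXLAN VRF is added under the existing EIGRP process, and each external L3 interface is placed into the VRF and the process:

router eigrp 1
vrf vxlan-vrf
!
interface Ethernet1/48
! example values only
description External L3 uplink
no switchport
vrf member vxlan-vrf
ip address 198.51.100.1/30
ip router eigrp 1
no shutdown
!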

More Information:

Mainline’s partnership with Cisco delivers business solutions with unprecedented value to our customers and helps companies seize the opportunities for the future of work. Some of our Cisco business partner certifications and accolades include:

» Premier Certified Partner
» Cloud Partner
» Cisco Channel Customer Satisfaction Excellence

For more information on Cisco networking, collaboration, and hybrid work solutions, contact your Mainline Account Executive directly or reach out to us here with any questions.

 

You may be interested in:

BLOG: Planning, Designing and Implementing a Cisco Catalyst Campus LAN Infrastructure
