
DC 15. Segment-routing/MPLS on the data centre white box switch and VNF/PNF networking (Nokia, Cisco and Mellanox/Cumulus).

Hello my friend,

the article today is special for three reasons. First of all, we'll talk about Segment Routing, which is the leading technology today for building service provider networks and an emerging one for the DC. Second, you will learn how to connect VNFs to real network devices. Third, we will fork Cumulus Linux with a modified FRR. Thrilled? Let's go!


No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

Thanks

Special thanks to Avi Alkobi from Mellanox and to Pete Crocker and Attilla de Groot from Cumulus for providing me with the Mellanox switch and the Cumulus license for the tests. Additional thanks to Anton Degtyarev from Cumulus for consulting me on the FRR details.

Disclaimer

This blogpost is the continuation of the previous one, where we brought the Mellanox SN 2010 to an operational state with Cumulus Linux 3.7.9 on board. If you want to learn the details of that process, you are welcome to read that article.

Brief description

The introduction mentioned many details, so we feel it necessary to explain them more thoroughly. In my blog you can find many articles about Segment Routing in a multivendor environment, including automated Segment Routing configuration. In the Service Provider world, Segment Routing is actively conquering first place for building the data plane, as it provides possibilities both for simple best-effort routing/ECMP and for traffic engineering. I've worked with this technology both in lab and in real live environments, and I even delivered a talk at Cisco Live 2019 on dynamic Segment Routing Traffic Engineering; I really like this technology. On the other hand, if we take a look at the data centre business, we typically build an EVPN overlay using the VXLAN data plane. But we can build EVPN with an MPLS data plane as well, so it is absolutely possible to replace VXLAN with Segment Routing *IF*, and we need to emphasize this fact, the hardware supports the Segment Routing data plane. Working with VNFs we don't have these hardware dependencies; therefore, such limitations often stay undiscovered until deployment. In today's article we'll test the Segment Routing data plane for the data centre. The good news is that it is possible to run Segment Routing on the Mellanox switches with Cumulus Linux.

Segment Routing on Cumulus Linux

The second point highlighted in the intro is no less important than Segment Routing itself. All the discussions so far were about connecting VNFs to each other, including using various VM applications (link). But we have never discussed how to connect VNFs to PNFs (physical network functions), which is itself a very interesting and challenging topic.

The third point is about modifying the standard Cumulus Linux, whose latest version is 3.7.9 as of the date of this article. One of the key components of Cumulus Linux is FRR, which performs all the control-plane functions of the switch. The bundled version is quite outdated: Cumulus ships FRR 4.0.*, whereas the latest stable release is 7.1.*. A lot of functionality has been added since 4.0, and Segment Routing is one such addition. To be able to work properly with this technology we need to upgrade FRR. When you update one of the core components of the tested system, you need to be prepared that something might go wrong. That's why you need to be very careful and spend a lot of time on various tests to make sure your upgrade didn't break anything.

Don't do it in a production network unless you have thoroughly tested everything in a lab environment and discussed it with Cumulus support, if you have their support.

What are we going to test?

As you can see, there are three major building blocks in this article, which need to be put in the proper sequence to get the desired result. Therefore, we perform these steps:

  1. Upgrade FRR on the Mellanox SN 2010 running Cumulus Linux.
  2. Deploy the network between the PNF (Mellanox/Cumulus) and the VNFs (Nokia VSR 19.5.R1 and Cisco IOS XRv 6.5.1).
  3. Configure Segment Routing in a multivendor data centre environment (Cisco, Nokia, Mellanox/Cumulus). 

Software version

The following software components are used in this lab. 

Management host:

Enabler and monitoring infrastructure:

The Data Centre Fabric:

You can find more details about the Data Centre Fabric in the previous articles.

Topology

We continue to use the physical topology from the previous lab: 


+-----------------------------------------------------------------------+
|                                                                       |
|                                   /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\    |
|                                  /                   Docker cloud \   |
|    (c)karneliuk.com             /                        +------+  \  |
|                                 \                    +---+ DHCP |  /  |
|     Mellanox/Cumulus lab        /                    | .2+------+  \  |
|                                 \                    |             /  |
|                                  \   172.17.0.0/16   |   +------+  \  |
|  +-----------+                    \+------------+    +---+ DNS  |  /  |
|  |           |                     |            +----+ .3+------+  \  |
|  | Mellanox  |  169.254.255.0/24   | Management |.1  |             /  |
|  |  SN2010   | fc00:de:1:ffff::/64 |    host    |    |   +------+  \  |
|  |           |                     |            |    +---+ FTP  |  /  |
|  |  mlx+cl   | eth0       enp2s0f1 |  carrier   |\   | .4+------+  \  |
|  |           | .21              .1 |            | \  |             /  |
|  |           | :21              :1 |  +------+  | /  |   +------+  \  |
|  |           +------------------------+ br0  |  | \  +---+ HTTP |  /  |
|  |           |                     |  +------+  |  \   .5+------+  \  |
|  |           |                     |            |   \              /  |
|  |           | swp1         ens2f0 |  +------+  +--+ \/\/\/\/\/\/\/   |
|  |           +------------------------+ br1  +-+   |                  |
|  |           |                     |  +------+ |   +-------------+    |
|  |           |                     |           |                 |    |
|  |           | swp7         ens2f1 |  +------+ +--------------+  |    |
|  |           +------------------------+ br2  |     +--+       |  |    |
|  |           |                     |  +------+--+     |  SR1  |  |    |
|  |           |                     |            |  +--+       |  |    |
|  +-----------+                     |  +------+  |  |  +-------+  |    |
|                                    |  | br3  +-----+             |    |
|                                    |  +------+  |     +-------+  |    |
|                                    |            +-----+       |  |    |
|                                    |  +------+        |  XR1  |  |    |
|                                    |  | br4  +--------+       |  |    |
|                                    |  +------+        +-------+  |    |
|                                    |                             |    |
|                                    +-----------------------------+    |
|                                                                       |
+-----------------------------------------------------------------------+

You can find the details of the connection of the Mellanox SN 2010 to the HP lab server in the previous lab.

Compared to the previous blogpost, the topology is extended with two VNFs, whose connectivity will be described later in the article in greater detail.

You can use any hypervisor of your choice (KVM, VMware Player/ESXi, etc.) to run the guest VNFs. For KVM you can use the corresponding cheat sheet for VM creation.

The logical topology for the lab is the following: 


+-----------------------------------------------------------------------------+
|                                                                             |
|   +---------+ 169.254.0.0/31 +----------+ 169.254.0.2/31 +---------+        |
|   |         |                |          |                |         |        |
|   |   SR1   +----------------+  mlx-cl  +----------------+   XR1   |        |
|   |AS:65001 | .0          .1 | AS:65002 | .2          .3 |AS:65003 |        |
|   +----+----+ <---eBGP-LU--> +----+-----+ <---eBGP-LU--> +----+----+        |
|        |                          |                           |             |
|       +++                        +++                         +++            |
|     system                     lo                          Lo0              |
|     IPv4:10.0.0.1/32           IPv4:10.0.0.2/32            IPv4:10.0.0.3/32 |
|                                                                             |
|        <----------------------eBGP-VPNV4---------------------->             |
|                                                                             |
|                         Segment Routing in DC                               |
|                         (c) karneliuk.com                                   |
|                                                                             |
+-----------------------------------------------------------------------------+

Actually, the topology is very close to the Service Provider Fabric (link), with the following deviations:

  1. Today we have 2 PE routers and 1 P router.
  2. As we are speaking about the data centre, we run eBGP with Segment Routing to build the fabric instead of ISIS.

Further details of the configuration will be shown later in the corresponding section.

You can find the initial configuration files for the lab on my GitHub page.

Updating FRR on the Mellanox/Cumulus

Some time ago we discussed the structure of the Cumulus files, especially the ones necessary for FRR. Before we upgrade FRR to the newer version, we highly recommend that you save the existing configuration files, as their structure is different after the upgrade and we will need to rebuild it. More precisely, we need to save the daemon configuration (as there is no FRR routing configuration done yet).


$ mkdir frr_backup
$ cd frr_backup/
$ scp cumulus@169.254.255.21:/etc/frr/* .
cumulus@169.254.255.21's password:
daemons                      100% 1025    48.0KB/s   00:00    
daemons.conf                 100% 1246   231.6KB/s   00:00    
frr.conf                     100%  120    21.9KB/s   00:00    
vtysh.conf                   100%   60     5.2KB/s   00:00
$ scp cumulus@169.254.255.21:/etc/default/frr .
cumulus@169.254.255.21's password:
frr                          100%  261     9.9KB/s   00:00

If you already have an FRR configuration, you also need to save the /etc/frr/frr.conf file.

The last check before the upgrade (surprise, surprise) is the network connectivity towards the internet (default gateway, DNS, NAT, everything the host needs in order to download packages): 


cumulus@mlx-cl:mgmt-vrf:~$ ping -I mgmt 8.8.8.8
ping: Warning: source address might be selected on device other than mgmt.
PING 8.8.8.8 (8.8.8.8) from 169.254.255.21 mgmt: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=53 time=7.21 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=53 time=7.14 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 7.149/7.181/7.213/0.032 ms

Once the backup and the verification are done, we can proceed with the upgrade. Cumulus Linux doesn't have an official newer FRR version (yet), so you need to find another way. As the name Cumulus Linux suggests, we check which distribution it is based on:


cumulus@mlx-cl:mgmt-vrf:~$ cat /etc/os-release
NAME="Cumulus Linux"
VERSION_ID=3.7.9
VERSION="Cumulus Linux 3.7.9"
PRETTY_NAME="Cumulus Linux"
ID=cumulus-linux
ID_LIKE=debian
CPE_NAME=cpe:/o:cumulusnetworks:cumulus_linux:3.7.9
HOME_URL="http://www.cumulusnetworks.com/"
SUPPORT_URL="http://support.cumulusnetworks.com/"


cumulus@mlx-cl:mgmt-vrf:~$ lsb_release -s -c
jessie

Now you need to find a way to upgrade FRR on Debian. There is a certain list of commands for that, so you just follow these instructions (with a slight modification of the last command, which you should pay attention to): 


cumulus@mlx-cl:mgmt-vrf:~$ curl -s https://deb.frrouting.org/frr/keys.asc | sudo apt-key add -
OK


cumulus@mlx-cl:mgmt-vrf:~$ FRRVER="frr-stable"
cumulus@mlx-cl:mgmt-vrf:~$ echo deb https://deb.frrouting.org/frr $(lsb_release -s -c) $FRRVER | sudo tee -a /etc/apt/sources.list.d/frr.list
deb https://deb.frrouting.org/frr jessie frr-stable


cumulus@mlx-cl:mgmt-vrf:~$ sudo apt -y update && sudo apt install -y frr=7.1-1~deb8u1 --fix-missing
Get:1 http://repo3.cumulusnetworks.com CumulusLinux-3 InRelease [7,645 B]
!
! THE OUTPUT IS TRUNCATED
!
Get:10 https://deb.frrouting.org jessie/frr-stable Translation-en_US [320 B]
Get:11 https://deb.frrouting.org jessie/frr-stable Translation-en [317 B]
Get:12 https://deb.frrouting.org jessie/frr-stable Translation-en_US [320 B]
Get:13 https://deb.frrouting.org jessie/frr-stable Translation-en [317 B]
Get:14 https://deb.frrouting.org jessie/frr-stable Translation-en_US [320 B]
Get:15 https://deb.frrouting.org jessie/frr-stable Translation-en [317 B]
Get:16 https://deb.frrouting.org jessie/frr-stable Translation-en_US [320 B]
Ign https://deb.frrouting.org jessie/frr-stable Translation-en_US                  
Get:17 https://deb.frrouting.org jessie/frr-stable Translation-en [317 B]          
Ign https://deb.frrouting.org jessie/frr-stable Translation-e
!
! THE OUTPUT IS TRUNCATED
!

Once the installation is done, you can verify that the correct version is installed: 


cumulus@mlx-cl:mgmt-vrf:~$ sudo apt-cache policy frr
frr:
  Installed: 7.1-1~deb8u1
  Candidate: 7.1-1~deb8u1
  Version table:
 *** 7.1-1~deb8u1 0
        500 https://deb.frrouting.org/frr/ jessie/frr-stable amd64 Packages
        100 /var/lib/dpkg/status
     4.0+cl3u15 0
        991 http://repo3.cumulusnetworks.com/repo/ CumulusLinux-3-updates/cumulus amd64 Packages

FRR is installed, and now we need to restore its configuration, as the logs show a mess with the daemon files (we hope you saved them as we proposed): 


cumulus@mlx-cl:mgmt-vrf:~$ sudo cat /var/log/syslog | grep 'watchfrr'
2019-09-21T11:48:12.781171+00:00 cumulus frrinit.sh[13310]: watchfrr_options contains a bash array value. The configured value is intentionally ignored since it is likely wrong. Please remove or fix the setting. ... (warning).
2019-09-21T11:48:12.857795+00:00 cumulus frrinit.sh[13324]: watchfrr_options contains a bash array value. The configured value is intentionally ignored since it is likely wrong. Please remove or fix the setting. ... (warning).
2019-09-21T11:48:12.903893+00:00 cumulus watchfrr.sh: Reading deprecated /etc/frr/daemons.conf.  Please move its settings to /etc/frr/daemons and remove it.
2019-09-21T11:48:12.913864+00:00 cumulus watchfrr.sh: Reading deprecated /etc/default/frr.  Please move its settings to /etc/frr/daemons and remove it.
2019-09-21T11:48:12.923492+00:00 cumulus watchfrr.sh: watchfrr_options contains a bash array value. The configured value is intentionally ignored since it is likely wrong. Please remove or fix the setting.
2019-09-21T11:48:12.933364+00:00 cumulus watchfrr.sh: Cannot stop staticd: pid file not found
2019-09-21T11:48:12.933747+00:00 cumulus watchfrr.sh: Cannot stop zebra: pid file not found
2019-09-21T11:48:12.948807+00:00 cumulus watchfrr.sh: Failed to start zebra!

Now we need to compose the configuration from the different files into a single one, /etc/frr/daemons. We copy that information over and get the following resulting file: 


cumulus@mlx-cl:mgmt-vrf:~$ cat /etc/frr/daemons
# This file tells the frr package which daemons to start.
#
# Sample configurations for these daemons can be found in
# /usr/share/doc/frr/examples/.
#
# ATTENTION:
#
# When activating a daemon for the first time, a config file, even if it is
# empty, has to be present *and* be owned by the user and group "frr", else
# the daemon will not be started by /etc/init.d/frr. The permissions should
# be u=rw,g=r,o=.
# When using "vtysh" such a config file is also needed. It should be owned by
# group "frrvty" and set to ug=rw,o= though. Check /etc/pam.d/frr, too.
#
# The watchfrr and zebra daemons are always started.
#
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no
ldpd=no
nhrpd=no
eigrpd=no
babeld=no
sharpd=no
pbrd=no
bfdd=no
fabricd=no
zebra=yes
#
# If this option is set the /etc/init.d/frr script automatically loads
# the config via "vtysh -b" when the servers are started.
# Check /etc/pam.d/frr if you intend to use "vtysh"!
#
vtysh_enable=yes
zebra_options="  -A 127.0.0.1 -s 90000000"
bgpd_options="   -A 127.0.0.1"
ospfd_options="  -A 127.0.0.1"
ospf6d_options=" -A ::1"
ripd_options="   -A 127.0.0.1"
ripngd_options=" -A ::1"
isisd_options="  -A 127.0.0.1"
pimd_options="   -A 127.0.0.1"
ldpd_options="   -A 127.0.0.1"
nhrpd_options="  -A 127.0.0.1"
eigrpd_options=" -A 127.0.0.1"
babeld_options=" -A 127.0.0.1"
sharpd_options=" -A 127.0.0.1"
pbrd_options="   -A 127.0.0.1"
staticd_options="-A 127.0.0.1"
bfdd_options="   -A 127.0.0.1"
fabricd_options="-A 127.0.0.1"

# The list of daemons to watch is automatically generated by the init script.
#watchfrr_options=""

# If valgrind_enable is 'yes' the frr daemons will be started via valgrind.
# The use case for doing so is tracking down memory leaks, etc in frr.
valgrind_enable=no
valgrind=/usr/bin/valgrind

# for debugging purposes, you can specify a "wrap" command to start instead
# of starting the daemon directly, e.g. to use valgrind on ospfd:
#   ospfd_wrap="/usr/bin/valgrind"
# or you can use "all_wrap" for all daemons, e.g. to use perf record:
#   all_wrap="/usr/bin/perf record --call-graph -"
# the normal daemon command is added to this at the end.

Once the file is composed, we remove these two files: 


cumulus@mlx-cl:mgmt-vrf:~$ sudo rm /etc/frr/daemons.conf
cumulus@mlx-cl:mgmt-vrf:~$ sudo rm /etc/default/frr

Afterwards you should restart the frr daemon to get it back into proper operation: 
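
A straightforward way to do this (FRR is managed by systemd on Cumulus Linux, as the frr.service unit in the output below confirms):

cumulus@mlx-cl:mgmt-vrf:~$ sudo systemctl restart frr.service

And then we check its status: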


cumulus@mlx-cl:mgmt-vrf:~$ sudo systemctl status frr.service
● frr.service - FRRouting
   Loaded: loaded (/lib/systemd/system/frr.service; enabled)
   Active: active (running) since Sat 2019-09-21 12:01:57 UTC; 6s ago
     Docs: https://frrouting.readthedocs.io/en/latest/setup.html
  Process: 13887 ExecStop=/usr/lib/frr/frrinit.sh stop (code=exited, status=0/SUCCESS)
  Process: 13907 ExecStart=/usr/lib/frr/frrinit.sh start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/frr.service
           ├─13914 /usr/lib/frr/watchfrr -d zebra bgpd staticd
           ├─13930 /usr/lib/frr/zebra -d -A 127.0.0.1 -s 90000000
           ├─13934 /usr/lib/frr/bgpd -d -A 127.0.0.1
           └─13941 /usr/lib/frr/staticd -d -A 127.0.0.1

Sep 21 12:01:56 mlx-cl watchfrr[13914]: [EC 100663303] Forked background command [pid 13915]: /usr/lib/frr/watchfrr.sh restart all
Sep 21 12:01:57 mlx-cl zebra[13930]: client 15 says hello and bids fair to announce only bgp routes vrf=0
Sep 21 12:01:57 mlx-cl zebra[13930]: client 25 says hello and bids fair to announce only vnc routes vrf=0
Sep 21 12:01:57 mlx-cl zebra[13930]: client 32 says hello and bids fair to announce only static routes vrf=0
Sep 21 12:01:57 mlx-cl watchfrr[13914]: zebra state -> up : connect succeeded
Sep 21 12:01:57 mlx-cl watchfrr[13914]: bgpd state -> up : connect succeeded
Sep 21 12:01:57 mlx-cl watchfrr[13914]: staticd state -> up : connect succeeded
Sep 21 12:01:57 mlx-cl watchfrr[13914]: all daemons up, doing startup-complete notify
Sep 21 12:01:57 mlx-cl frrinit.sh[13907]: Started watchfrr.
Sep 21 12:01:57 mlx-cl systemd[1]: Started FRRouting.
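
As an additional sanity check, you can also confirm that the running daemons really are the new release; vtysh ships with the FRR package, and a simple query (output omitted here) looks like this:

cumulus@mlx-cl:mgmt-vrf:~$ sudo vtysh -c "show version"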

In such a way, we have just forked Cumulus Linux :) Now we have the newest version of FRR on our switch, and we can proceed with connecting the virtual routers (Nokia VSR and Cisco IOS XRv) to the physical network switch, the Mellanox SN 2010 running Cumulus Linux 3.7.9 and FRR 7.1.

Connecting PNF to VNF

There are three major ways a PNF can be connected to a VNF: PCI-passthrough, SR-IOV and paravirtualization. In a nutshell, they can be described as follows:

  1. PCI-passthrough gives the VNF full control over the network interface card (NIC). This typically provides the highest possible throughput for a particular VNF, but locks the NIC solely to that single VNF. For highly intensive data plane applications (virtual FW/NAT, router, BNG) this can be very useful, provided no other VNFs run on the same server.
  2. SR-IOV is the possibility to share the NIC between different VNFs (typically a limited number of them) with quite good performance. It is also possible to use the NIC as a network switch, which can interconnect VNFs on the same host. In this case the hypervisor can be one of the NIC users, and the traffic from the different VNFs is sent to the network tagged with different VLANs (a quick host-side check for SR-IOV support is sketched right after this list).
  3. Paravirtualization is where the hypervisor controls all the network connectivity to the VNFs and, in a certain sense, processes all the traffic. This is typically the least performant type of network connectivity, though it provides the highest flexibility in terms of VNF migration between hosts and so on.
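
As a side note, whether a NIC supports SR-IOV and how many virtual functions it exposes can be checked on a plain Linux host via sysfs. This is only an illustrative sketch (the interface name ens2f0 is taken from our lab host, the VF count is arbitrary) and is not used further in this lab:

cat /sys/class/net/ens2f0/device/sriov_totalvfs
echo 4 | sudo tee /sys/class/net/ens2f0/device/sriov_numvfs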

In our lab we are mainly focused on the flexibility of building connectivity between the VNFs and not so much on the performance, hence we use paravirtualization. In the Linux world, one of the most popular virtual switches is OVS (Open vSwitch), as it provides the highest flexibility, the opportunity to start VXLAN encapsulation towards the data centre fabric directly from the host, and so on. We don't use it :) We might start using it in the future, but for our purposes the default Linux bridge tool brctl is enough. 

You can think of brctl as a tool to create micro-switches, which simply switch the traffic between the connected physical or virtual ports. It is possible to assign an IP address to such a bridge, which turns it into an SVI in Cisco terms (a VLAN interface with an IP address). We don't need that in our lab, so we just create the bridges and assign the physical ports to them. We did that in the previous lab (link), but we'll recall it here as well: 


brctl addif br0 enp2s0f1
brctl addbr br1
brctl addif br1 ens2f0
ifconfig br1 up
ifconfig ens2f0 up
brctl addbr br2
brctl addif br2 ens2f1
ifconfig br2 up
ifconfig ens2f1 up
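
Just for completeness: turning such a bridge into an SVI-like interface is simply a matter of assigning an address to it. A hypothetical example (the address is purely illustrative, we don't configure this in the lab):

sudo ip address add 192.0.2.1/24 dev br1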

At the beginning of this article we provided the full topology, so here we show just its subset to facilitate the discussion on the VNF/PNF networking: 


+-----------------------------------------------------------------------+
|                                                                       |
|                                                                       |
|                                                                       |
|    (c)karneliuk.com                                                   |
|                                                                       |
|     Mellanox/Cumulus lab                                              |
|                                                                       |
|                                                                       |
|  +-----------+                     +------------+                     |
|  |           |                     |            |                     |
|  | Mellanox  |  169.254.255.0/24   | Management |                     |
|  |  SN2010   | fc00:de:1:ffff::/64 |    host    |                     |
|  |           |                     |            |                     |
|  |  mlx+cl   | eth0       enp2s0f1 |  carrier   |                     |
|  |           | .21              .1 |            |                     |
|  |           | :21              :1 |  +------+  |                     |
|  |           +------------------------+ br0  |  |                     |
|  |           |                     |  +------+  |                     |
|  |           |                     |            |                     |
|  |           | swp1         ens2f0 |  +------+  +--+                  |
|  |           +------------------------+ br1  +-+   |                  |
|  |           |                     |  +------+ |   +-------------+    |
|  |           |                     |           |                 |    |
|  |           | swp7         ens2f1 |  +------+ +---+  +-------+  |    |
|  |           +------------------------+ br2  |     +--+       |  |    |
|  |           |                     |  +------+--+     |  SR1  |  |    |
|  |           |                     |            |  +--+       |  |    |
|  +-----------+                     |  +------+  |  |  +-------+  |    |
|                                    |  | br3  +-----+             |    |
|                                    |  +------+  |     +-------+  |    |
|                                    |            +-----+       |  |    |
|                                    |  +------+        |  XR1  |  |    |
|                                    |  | br4  +--------+       |  |    |
|                                    |  +------+        +-------+  |    |
|                                    |                             |    |
|                                    +-----------------------------+    |
|                                                                       |
+-----------------------------------------------------------------------+

When we launch a VNF in Linux, its ports are automatically connected to the proper network bridges, so we don't need to do it manually. 
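
Once the VMs are up, you can verify this attachment with the standard bridge tooling; the vnetX member interfaces are created and named by libvirt, so they will differ per run:

brctl show br1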

In the topology above, the bridges br3 and br4 are created for future use; they will terminate the second connection from each VNF.

Therefore, we just create the two additional bridges and then launch the VNFs using the following bash script: 


$ cat sr_lab.sh
#! /usr/bin/bash

# Creating bridges
brctl addbr br3
ifconfig br3 up
brctl addbr br4
ifconfig br4 up

# Starting VMs
virsh start SR1
virsh start XR1

# Updating FW rules
iptables -I FORWARD 1 -s 10.0.0.0/8 -d 10.0.0.0/8 -j ACCEPT
iptables -I FORWARD 1 -s 169.254.0.0/16 -d 169.254.0.0/16 -j ACCEPT


$ sudo ./sr_lab.sh
[sudo] password for aaa:
Domain SR1 started

Domain XR1 started

Don't forget to permit the traffic between the VNFs/PNF in iptables, otherwise the connectivity won't work.
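
A quick way to confirm that the permit rules really landed at the top of the FORWARD chain (standard iptables syntax, output omitted):

sudo iptables -L FORWARD -n --line-numbers | head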

You will find the details of the VNF creation in a dedicated article; here is a sample for the Nokia VSR running SR OS 19.5.R1 for your reference: 


$ sudo virt-install \
  --name=SR1 \
  --description "SR1 VM" \
  --os-type=Linux \
  --sysinfo type='smbios',system_product='TIMOS:address=fc00:de:1:ffff::A1/112@active address=192.168.1.101/24@active static-route=fc00:ffff:1::/64@fc00:de:1:ffff::1 license-file=ftp://*:*@169.254.255.1/sros19.lic slot=A chassis=SR-1 card=iom-1 mda/1=me6-100gb-qsfp28'\
  --ram=4096 \
  --vcpus=2 \
  --boot hd \
  --disk path=/var/lib/libvirt/images/SR1.qcow2,bus=virtio,size=4 \
  --import \
  --graphics vnc \
  --serial tcp,host=0.0.0.0:3301,mode=bind,protocol=telnet \
  --network=bridge:br0,mac=52:54:00:02:02:00,model=virtio \
  --network=bridge:br1,mac=52:54:00:02:02:01,model=virtio \
  --network=bridge:br3,mac=52:54:00:02:02:02,model=virtio

Check that the VMs have booted: 


$ telnet 0.0.0.0 3301
Trying 0.0.0.0...
Connected to 0.0.0.0.
Escape character is '^]'.

Login: admin
Password:

 SR OS Software
 Copyright (c) Nokia 2019.  All Rights Reserved.


$ telnet 0.0.0.0 2251
Trying 0.0.0.0...
Connected to 0.0.0.0.
Escape character is '^]'.


IMPORTANT:  READ CAREFULLY
Welcome to the Demo Version of Cisco IOS XRv (the "Software").
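
Alternatively, the same can be confirmed from the host itself, without logging into the consoles (virsh, as used in the launch script above, lists the running domains):

$ sudo virsh list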

You can find the basic configuration of the devices in the corresponding article.

And we are approaching our last key point, which is Segment Routing.

Building eBGP – Segment Routing underlay fabric for Nokia SR OS, Cisco IOS XR and Mellanox/Cumulus

For building the data centre fabric we use eBGP, following this RFC. As we just want to change the data plane to Segment Routing / MPLS, and not the overlay itself, we need to enable the MPLS data plane and reconfigure BGP properly. 

As Cisco IOS XRv has a limitation with EVPN, we will show the operation of the overlay using L3 VPN.

As we did with the physical topology, it makes sense to repeat the logical topology here to save you scrolling back and forth: 


+-----------------------------------------------------------------------------+
|                                                                             |
|   +---------+ 169.254.0.0/31 +----------+ 169.254.0.2/31 +---------+        |
|   |         |                |          |                |         |        |
|   |   SR1   +----------------+  mlx-cl  +----------------+   XR1   |        |
|   |AS:65001 | .0          .1 | AS:65002 | .2          .3 |AS:65003 |        |
|   +----+----+ <---eBGP-LU--> +----+-----+ <---eBGP-LU--> +----+----+        |
|        |                          |                           |             |
|       +++                        +++                         +++            |
|     system                     lo                          Lo0              |
|     IPv4:10.0.0.1/32           IPv4:10.0.0.2/32            IPv4:10.0.0.3/32 |
|                                                                             |
|        <----------------------eBGP-VPNV4---------------------->             |
|                                                                             |
|                         Segment Routing in DC                               |
|                         (c) karneliuk.com                                   |
|                                                                             |
+-----------------------------------------------------------------------------+

#1. Configuration of the network interfaces

The first essential task in building the data centre fabric is to configure the connectivity between the devices. 

We start with the leaf running Nokia SR OS 19.5.R1:


A:admin@SR1# configure global
INFO: CLI #2054: Entering global configuration mode

[gl:configure]
A:admin@SR1#
    port 1/1/c1 {
        admin-state enable
        connector {
            breakout c4-10g
        }
    }
    port 1/1/c1/1 {
        admin-state enable
        ethernet {
            mode network
            mtu 1514
        }
    }
    router "Base" {
        interface "system" {
            ipv4 {
                primary {
                    address 10.0.0.1
                    prefix-length 32
                }
            }
        }
        interface "to_mlx-cl" {
            admin-state enable
            port 1/1/c1/1
            ipv4 {
                primary {
                    address 169.254.0.0
                    prefix-length 31
                }
            }
        }
    }
A:admin@SR1# commit

Then our Mellanox SN 2010 running Cumulus Linux 3.7.9 with updated FRR 7.1: 


cumulus@mlx-cl:mgmt-vrf:~$ net add interface swp1 ip address 169.254.0.1/31
cumulus@mlx-cl:mgmt-vrf:~$ net add interface swp1 mtu 1514
cumulus@mlx-cl:mgmt-vrf:~$ net add interface swp7 ip address 169.254.0.2/31
cumulus@mlx-cl:mgmt-vrf:~$ net add interface swp7 mtu 1514
cumulus@mlx-cl:mgmt-vrf:~$ net add loopback lo ip address 10.0.0.2/32
cumulus@mlx-cl:mgmt-vrf:~$ net commit
--- /etc/network/interfaces 2019-09-21 13:05:12.173749398 +0000
+++ /run/nclu/ifupdown2/interfaces.tmp  2019-09-21 13:06:01.770751763 +0000
@@ -1,19 +1,31 @@
 # This file describes the network interfaces available on your system
 # and how to activate them. For more information, see interfaces(5).
 
 source /etc/network/interfaces.d/*.intf
 
 # The loopback network interface
 auto lo
 iface lo inet loopback
+    # The primary network interface
+    address 10.0.0.2/32
 
 # The primary network interface
 auto eth0
 iface eth0 inet dhcp
     vrf mgmt
 
+auto swp1
+iface swp1
+    address 169.254.0.1/31
+    mtu 1514
+
+auto swp7
+iface swp7
+    address 169.254.0.2/31
+    mtu 1514
+
 auto mgmt
 iface mgmt
     address 127.0.0.1/8
     vrf-table auto
 



net add/del commands since the last "net commit"
================================================

User     Timestamp                   Command
-------  --------------------------  ------------------------------------------------
cumulus  2019-09-21 13:05:16.031761  net add interface swp1 ip address 169.254.0.1/31
cumulus  2019-09-21 13:05:19.822497  net add interface swp1 mtu 1514
cumulus  2019-09-21 13:05:21.685968  net add interface swp7 ip address 169.254.0.2/31
cumulus  2019-09-21 13:05:24.317840  net add interface swp7 mtu 1514
cumulus  2019-09-21 13:05:57.325045  net add loopback lo ip address 10.0.0.2/32

And the final configuration is on the VNF leaf running Cisco IOS XR 6.5.1: 


RP/0/0/CPU0:XR1(config)#show conf
Sat Sep 21 13:06:52.997 UTC
Building configuration...
!! IOS XR Configuration 6.5.1.34I
!
interface Loopback0
 ipv4 address 10.0.0.3 255.255.255.255
!  
interface GigabitEthernet0/0/0/0
 ipv4 address 169.254.0.3 255.255.255.254
!
end

The only point where we can verify that both links are up is the Mellanox/Cumulus spine in this topology: 


cumulus@mlx-cl:mgmt-vrf:~$ ping 169.254.0.0 -c 1
PING 169.254.0.0 (169.254.0.0) 56(84) bytes of data.
64 bytes from 169.254.0.0: icmp_seq=1 ttl=64 time=1.93 ms

--- 169.254.0.0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.934/1.934/1.934/0.000 ms


cumulus@mlx-cl:mgmt-vrf:~$ ping 169.254.0.3 -c 1
PING 169.254.0.3 (169.254.0.3) 56(84) bytes of data.
64 bytes from 169.254.0.3: icmp_seq=1 ttl=255 time=2.69 ms

--- 169.254.0.3 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.694/2.694/2.694/0.000 ms

#2. Configuration of the eBGP and MPLS for underlay

The next step (and the most important one for Segment Routing) is to enable the MPLS data plane and to configure BGP-LU, as this flavour of BGP allows us to build the MPLS data plane without any IGP.

The configuration of the Cisco IOS XR based VNF leaf is the following: 


RP/0/0/CPU0:XR1(config)#show conf
Sat Sep 21 13:43:21.384 UTC
Building configuration...
!! IOS XR Configuration 6.5.1.34I
!
prefix-set PL_LO
  10.0.0.0/8 eq 32
end-set
!
route-policy SID($SID)
  set label-index $SID
end-policy
!        
route-policy RP_LO
  if destination in PL_LO then
    pass
  else
    drop
  endif
end-policy
!
route-policy RP_PASS_ALL
  pass
end-policy
!
router static
 address-family ipv4 unicast
  169.254.0.2/32 GigabitEthernet0/0/0/0
 !
!
router bgp 65003
 bgp router-id 10.0.0.3
 mpls activate
  interface GigabitEthernet0/0/0/0
 !
 bgp log neighbor changes detail
 address-family ipv4 unicast
  network 10.0.0.3/32 route-policy SID(3)
  allocate-label route-policy RP_LO
 !
 neighbor 169.254.0.2
  remote-as 65002
  address-family ipv4 labeled-unicast
   send-community-ebgp
   route-policy RP_PASS_ALL in
   route-policy RP_PASS_ALL out
   send-extended-community-ebgp
  !
 !
!
segment-routing
 global-block 16000 23999
!
end

You can find the detailed explanation of BGP-LU on Cisco IOS XR and Nokia SR OS in the dedicated article.

In a nutshell, we enable BGP-LU. What turns it into Segment Routing is the route-policy SID, which attaches the label-index, together with the segment-routing global-block.
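
As a quick worked example: with the global block starting at 16000 and the label-index 3 attached to 10.0.0.3/32, the resulting SR label is 16000 + 3 = 16003, which is exactly the label we will see later in the MPLS tables.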

The configuration of the Nokia SR OS based VNF leaf is the following: 


A:admin@SR1# configure global
INFO: CLI #2054: Entering global configuration mode

[gl:configure]
A:admin@SR1#
    policy-options {
        prefix-list "PL_IPV4_LO" {
            prefix 10.0.0.1/32 type exact {
            }
        }
        policy-statement "RP_PASS_ALL" {
            default-action {
                action-type accept
            }
        }
        policy-statement "RP_PASS_LO" {
            entry 10 {
                from {
                    prefix-list ["PL_IPV4_LO"]
                }
                action {
                    action-type accept
                }
            }
            default-action {
                action-type reject
            }
        }
    }
    router "Base" {
        autonomous-system 65001
        mpls-labels {
            static-label-range 15000
            sr-labels {
                start 16000
                end 23999
            }
        }
        bgp {
            next-hop-resolution {
                labeled-routes {
                    allow-static true
                }
            }
            group "FABRIC" {
                peer-as 65002
                local-address 169.254.0.0
                family {
                    ipv4 false
                    label-ipv4 true
                }
                import {
                    policy ["RP_PASS_ALL"]
                }
                export {
                    policy ["RP_PASS_LO"]
                }
            }
            neighbor "169.254.0.1" {
                group "FABRIC"
            }
        }
    }
A:admin@SR1# commit


To be frank, as of today Nokia SR OS doesn't support Segment Routing for BGP, but that should be available quite soon. That's why ordinary BGP-LU does the job here: the MPLS label is distributed over the BGP-LU session in any case, though not derived from a SID index as it would be with Segment Routing.

Now let's take a look at the Mellanox/Cumulus host. There is an article which explains the Segment Routing configuration, but it is a bit outdated and the configuration (especially for FRR) is a bit strange. So here is the configuration we use for this lab: 


cumulus@mlx-cl:mgmt-vrf:~$ net pending
--- /run/nclu/frr/frr.conf.scratchpad.baseline  2019-09-21 13:54:19.181889923 +0000
+++ /run/nclu/frr/frr.conf.scratchpad   2019-09-21 13:56:10.891895249 +0000
@@ -1,8 +1,24 @@
 frr version 7.1
 frr defaults traditional
 hostname mlx-cl
 log syslog informational
 service integrated-vtysh-config
 line vty
 
 end
+router bgp 65002
+ bgp router-id 10.0.0.2
+ neighbor 169.254.0.0 remote-as 65001
+ neighbor 169.254.0.3 remote-as 65003
+ address-family ipv4 unicast
+  network 10.0.0.2/32 label-index 2
+  no neighbor 169.254.0.0 activate
+  no neighbor 169.254.0.3 activate
+ exit-address-family
+ address-family ipv4 labeled-unicast
+  neighbor 169.254.0.0 activate
+  neighbor 169.254.0.3 activate
+ exit-address-family
+end
+mpls label global-block 16000 23999
+end
--- /etc/network/interfaces 2019-09-21 13:37:10.133840854 +0000
+++ /run/nclu/ifupdown2/interfaces.tmp  2019-09-21 13:56:30.216896171 +0000
@@ -9,22 +9,24 @@
     address 10.0.0.2/32
 
 # The primary network interface
 auto eth0
 iface eth0 inet dhcp
     vrf mgmt
 
 auto swp1
 iface swp1
     address 169.254.0.1/31
+    mpls-enable yes
     mtu 1514
 
 auto swp7
 iface swp7
     address 169.254.0.2/31
+    mpls-enable yes
     mtu 1514
 
 auto mgmt
 iface mgmt
     address 127.0.0.1/8
     vrf-table auto
 



net add/del commands since the last "net commit"
================================================

User     Timestamp                   Command
-------  --------------------------  --------------------------------------------------------------
cumulus  2019-09-21 13:54:19.185421  net add bgp autonomous-system 65002
cumulus  2019-09-21 13:54:28.911902  net add bgp router-id 10.0.0.2
cumulus  2019-09-21 13:54:42.919900  net add bgp neighbor 169.254.0.0 remote-as 65001
cumulus  2019-09-21 13:54:49.334038  net add bgp neighbor 169.254.0.3 remote-as 65003
cumulus  2019-09-21 13:55:02.850414  net add bgp ipv4 unicast network 10.0.0.2/32 label-index 2
cumulus  2019-09-21 13:55:36.562592  net del bgp ipv4 unicast neighbor 169.254.0.0 activate
cumulus  2019-09-21 13:55:40.710395  net del bgp ipv4 unicast neighbor 169.254.0.3 activate
cumulus  2019-09-21 13:55:55.097135  net add bgp ipv4 labeled-unicast neighbor 169.254.0.0 activate
cumulus  2019-09-21 13:55:58.088066  net add bgp ipv4 labeled-unicast neighbor 169.254.0.3 activate
cumulus  2019-09-21 13:56:10.892428  net add mpls label global-block 16000 23999
cumulus  2019-09-21 13:56:24.416955  net add interface swp1 mpls-enable
cumulus  2019-09-21 13:56:29.024184  net add interface swp7 mpls-enable


cumulus@mlx-cl:mgmt-vrf:~$ net commit

Once the configuration is applied (if the commit doesn't work, just copy the proposed changes directly into /etc/network/interfaces and /etc/frr/frr.conf), the BGP-LU peering on the Mellanox/Cumulus is up: 


cumulus@mlx-cl:mgmt-vrf:~$ net show bgp ipv4 labeled-unicast summary
BGP router identifier 10.0.0.2, local AS number 65002 vrf-id 0
BGP table version 0
RIB entries 0, using 0 bytes of memory
Peers 2, using 41 KiB of memory

Neighbor        V         AS MsgRcvd MsgSent   TblVer  InQ OutQ  Up/Down State/PfxRcd
169.254.0.0     4      65001      15      17        0    0    0 00:05:39            1
169.254.0.3     4      65003       9      11        0    0    0 00:05:39            1

Total number of neighbors 2

This node is the central one, as it runs both the Segment Routing enabled BGP-LU and the ordinary one. The SR-enabled BGP-LU carries an additional path attribute called Label Index: 


cumulus@mlx-cl:mgmt-vrf:~$ net show bgp ipv4 unicast 10.0.0.3/32
BGP routing table entry for 10.0.0.3/32
Local label: 16003
Paths: (1 available, best #1, table default)
  Advertised to non peer-group peers:
  169.254.0.0 169.254.0.3
  65003
    169.254.0.3 from 169.254.0.3 (10.0.0.3)
      Origin IGP, metric 0, valid, external, best
      Remote label: 3
      Label Index: 3
      Last update: Sat Sep 21 14:01:33 2019

Whereas for ordinary BGP-LU this attribute is missing: 


cumulus@mlx-cl:mgmt-vrf:~$ net show bgp ipv4 unicast 10.0.0.1/32
BGP routing table entry for 10.0.0.1/32
Local label: 16
Paths: (1 available, best #1, table default)
  Advertised to non peer-group peers:
  169.254.0.0 169.254.0.3
  65001
    169.254.0.0 from 169.254.0.0 (10.0.0.1)
      Origin IGP, valid, external, best
      Remote label: 524287
      Last update: Sat Sep 21 14:01:59 2019

You can find the verification of BGP-LU for Cisco IOS XR and Nokia SR OS in a dedicated article.

The last point is to verify the MPLS data plane, which we do on all the network elements.

Nokia SR OS: 


A:admin@SR1# show router tunnel-table detail

===============================================================================
Tunnel Table (Router: Base)
===============================================================================
Destination      : 10.0.0.2/32
NextHop          : 169.254.0.1
Tunnel Flags     : (Not Specified)
Age              : 00h04m09s
CBF Classes      : (Not Specified)
Owner            : bgp                  Encap            : MPLS
Tunnel ID        : 262145               Preference       : 12
Tunnel Label     : 3                    Tunnel Metric    : 1000
Tunnel MTU       :  -                   Max Label Stack  : 1
-------------------------------------------------------------------------------
Destination      : 10.0.0.3/32
NextHop          : 169.254.0.1
Tunnel Flags     : (Not Specified)
Age              : 00h03m56s
CBF Classes      : (Not Specified)
Owner            : bgp                  Encap            : MPLS
Tunnel ID        : 262146               Preference       : 12
Tunnel Label     : 16003                Tunnel Metric    : 1000
Tunnel MTU       :  -                   Max Label Stack  : 1
-------------------------------------------------------------------------------
Number of tunnel-table entries          : 2
Number of tunnel-table entries with LFA : 0
===============================================================================

Mellanox/Cumulus: 


cumulus@mlx-cl:mgmt-vrf:~$ net show mpls table
Inbound                            Outbound
   Label     Type          Nexthop     Label
--------  -------  ---------------  --------
      16      BGP      169.254.0.0    524287
   16003      BGP      169.254.0.3  implicit-null

As you can see, it is clearly visible that the switch swaps the label (16 to 524287) for the non-SR labelled route, whereas the SR label 16003 is simply popped towards XR1.
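
Since FRR installs these entries into the Linux kernel as well, you can also cross-check the same table directly with iproute2 (a generic Linux command, output omitted):

cumulus@mlx-cl:mgmt-vrf:~$ ip -f mpls route show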

Cisco IOS XR: 


RP/0/0/CPU0:XR1#show mpls forwarding
Sat Sep 21 14:05:12.485 UTC
Local  Outgoing    Prefix             Outgoing     Next Hop        Bytes      
Label  Label       or ID              Interface                    Switched    
------ ----------- ------------------ ------------ --------------- ------------
16002  Pop         SR Pfx (idx 2)     Gi0/0/0/0    169.254.0.2     352        
24000  Pop         169.254.0.2/32     Gi0/0/0/0    169.254.0.2     5182        
24003  16          10.0.0.1/32        Gi0/0/0/0    169.254.0.2     1040

And Cisco installs that non-SR label (16) towards 10.0.0.1/32.

We can check that the connectivity between the Cisco and Nokia VNFs is established:


RP/0/0/CPU0:XR1#ping 10.0.0.1 so 10.0.0.3
Sat Sep 21 14:03:04.353 UTC
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms

#3. DEMO – configuration of IP VPN overlay between Leafs/PEs

The last step is to show you that the Segment Routing / MPLS data plane is working. To do that, we just deploy a basic IP VPN between the VNFs:

Cisco IOS XR side: 


RP/0/0/CPU0:XR1(config)#show conf
Sat Sep 21 14:20:51.420 UTC
Building configuration...
!! IOS XR Configuration 6.5.1.34I
!
vrf TEST
 address-family ipv4 unicast
  import route-target
   65000:1
  !
  export route-target
   65000:1
  !
 !
!
interface Loopback100
 vrf TEST
 ipv4 address 172.16.0.3 255.255.255.255
!
router bgp 65003
 address-family vpnv4 unicast
 !
 neighbor 10.0.0.1
  remote-as 65001
  ebgp-multihop 255
  update-source Loopback0
  address-family vpnv4 unicast
   route-policy RP_PASS_ALL in
   route-policy RP_PASS_ALL out
  !
 !
 vrf TEST
  rd 10.0.0.3:1
  address-family ipv4 unicast
   redistribute connected
  !
 !
!
end

Nokia SR OS side: 


*(gl)[]
A:admin@SR1# compare
    configure {
+       service {
+           vprn "TEST" {
+               admin-state enable
+               auto-bind-tunnel {
+                   resolution any
+               }
+               service-id 10
+               customer "1"
+               route-distinguisher "10.0.0.1:1"
+               vrf-target {
+                   community "target:65000:1"
+               }
+               interface "TEST_LO" {
+                   admin-state enable
+                   loopback true
+                   ipv4 {
+                       primary {
+                           address 172.16.0.1
+                           prefix-length 32
+                       }
+                   }
+               }
+           }
+       }
    }
        bgp {
            next-hop-resolution {
                use-bgp-routes true
                labeled-routes {
                    allow-static true
                    transport-tunnel {
                        family vpn {
                            resolution any
                        }
                    }
                }
            }
            group "VPN" {
                multihop 255
                peer-as 65003
                local-address 10.0.0.1
                family {
                    vpn-ipv4 true
                }
                import {
                    policy ["RP_PASS_ALL"]
                }
                export {
                    policy ["RP_PASS_ALL"]
                }
            }
            neighbor "10.0.0.3" {
                group "VPN"
            }
            }
*(gl)[]
A:admin@SR1# commit

You can find a detailed explanation of the IP VPN configuration in a separate article.

We briefly verify the status of the routing table within the VRF on Cisco IOS XRv: 


RP/0/0/CPU0:XR1#show bgp vpnv4 unicast
Sat Sep 21 20:54:26.892 UTC
BGP router identifier 10.0.0.3, local AS number 65003
BGP generic scan interval 60 secs
Non-stop routing is enabled
BGP table state: Active
Table ID: 0x0   RD version: 0
BGP main routing table version 7
BGP NSR Initial initsync version 5 (Reached)
BGP NSR/ISSU Sync-Group versions 0/0
BGP scan interval 60 secs

Status codes: s suppressed, d damped, h history, * valid, > best
              i - internal, r RIB-failure, S stale, N Nexthop-discard
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network            Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 10.0.0.3:1 (default for vrf TEST)
*> 172.16.0.1/32      10.0.0.1                 0             0 65001 i
*> 172.16.0.3/32      0.0.0.0                  0         32768 ?

Processed 2 prefixes, 2 paths

And we do a ping within the overlay service:


RP/0/0/CPU0:XR1#ping 172.16.0.1 vrf TEST
Sat Sep 21 20:58:35.475 UTC
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
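
If you want to additionally see the transport label in action, a VRF traceroute from XR1 towards the Nokia loopback should show the MPLS label on the transit hop (output omitted here; standard IOS XR syntax):

RP/0/0/CPU0:XR1#traceroute vrf TEST 172.16.0.1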

Segment Routing MPLS with BGP-LU is working fine with this complex multivendor setup! Hurray, hurray!

You can find the final configuration files on the corresponding GitHub page.

Lessons learned

As you can imagine, I didn't intend to fork Cumulus Linux, but it was a good experience. Moreover, it shows the advantage of Linux compared to a traditional vendor OS, where it is not possible to simply download a new component of the software and apply it.

The second lesson learned is a tougher one. While I was writing the article, the power supply in my server burned out, so I had to urgently search for a new one and buy it:

new power supply for lab

Luckily I found one with next-day delivery, which allowed me to stay on track with my blogpost plan. Long story short, you never know when the hardware might fail.

Conclusion

I hope you find this article as fascinating as I do. Building new emerging technologies in the data centre world is fun. And if we take a look from an end-to-end perspective, we can deploy a single data plane technology across the data centres and the service provider's WAN/backhaul, removing the necessity of any stitching. Additionally, now you know what the options are to connect a PNF to a VNF. There will be at least one more article about Mellanox/Cumulus in this series. Take care and goodbye! 

Support us





P.S.

If you have further questions or you need help with your networks, I'm happy to assist you: just send me a message. Also, don't forget to share the article on your social media if you like it.

BR,

Anton Karneliuk 
