
MVPN for IPv4/IPv6 in Nokia SR OS and Cisco IOS XR. Part 2 – mLDP transport

Hello my friend,

We continue the discussion about MVPN that we started in the previous article, so you should read that one first to get the full picture of what's going on.

No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

There is no "brief overview" part here, as it was provided in the previous article.

What are we going to test?

In this article we focus on mLDP-based MVPN profiles (again, one uses PIM and the other BGP for customer signalling):

At the end we'll try to deploy an S-PMSI-only profile, which is called partitioned MDT in Cisco terms.

Software version

For the tests in this lab I use the following software versions on the routers:

No changes in software versions are done since the previous article.

Topology

As usual, there are two Nokia (Alcatel-Lucent) VSRs (SR 7750) and two Cisco IOS XRv (ASR 9000) routers in my sandbox:

Nothing has changed in the logical topology since the previous lab:

Therefore, you should use the same initial configuration files to get your topology ready for NG-MVPN configuration: 092_config_initial_XR3 092_config_initial_SR2 092_config_initial_SR1 092_config_initial_linux 092_config_initial_XR4

Configuration of MVPN profile 1 (mLDP MP2MP transport for the P core, PIM signalling for clients) for IPv4/IPv6

It won't work: this scenario isn't interoperable, unfortunately. When I started writing the article, I hadn't cleared up all the details yet and, like you, I was learning while writing. That's a good thing, because it makes life more interesting. As I'll show later, mLDP in the MP2MP flavour isn't supported by Nokia (Alcatel-Lucent) SR OS, so we can't configure this MVPN profile there.

Configuration of MVPN profile 12 (mLDP transport, BGP MVPN signalling for the P core, for clients and for auto-discovery) for IPv4/IPv6

To provide the multicast service for the customer we need to do the following steps:

To accomplish this task, we need to apply the following configuration:

Nokia (Alcatel-Lucent) SR OS Cisco IOS XR
SR1 XR3

A:SR1>edit-cfg# candidate view
=========================
configure
router
bgp
rapid-withdrawal
rapid-update mvpn-ipv4 mvpn-ipv6
mvpn-vrf-import-subtype-new
group "IBGP_PEERS"
family vpn-ipv4 vpn-ipv6 mvpn-ipv4 mvpn-ipv6
exit
exit
exit
service
vprn 10 customer 1 create
igmp
interface "toCLIENT"
no shutdown
exit
no shutdown
exit
mld
interface "toCLIENT"
no shutdown
exit
no shutdown
exit
pim
no ipv6-multicast-disable
interface "toCLIENT"
priority 100
exit
apply-to all
rp
static
exit
bsr-candidate
shutdown
exit
rp-candidate
shutdown
exit
exit
no shutdown
exit
mvpn
auto-discovery default
c-mcast-signaling bgp
provider-tunnel
inclusive
mldp
no shutdown
exit
exit
selective
mldp
no shutdown
exit
no auto-discovery-disable
data-threshold 224.0.0.0/8 1
exit
exit
vrf-target unicast
exit
exit
exit
exit
exit
=========================

RP/0/0/CPU0:XR3#show run
!
vrf CUST
vpn id 1:10
address-family ipv4 multicast
import route-target
65000:10
!
export route-target
65000:10
!
!
address-family ipv6 multicast
import route-target
65000:10
!
export route-target
65000:10
!
!
!
route-policy RP_MCAST_CORE
set core-tree mldp-default
end-policy
!
router bgp 65000
address-family ipv4 mvpn
!
address-family ipv6 mvpn
!
af-group AF_IPV4_MVPN address-family ipv4 mvpn
next-hop-self
!
af-group AF_IPV6_MVPN address-family ipv6 mvpn
next-hop-self
!
neighbor-group IBGP_PEERS
address-family ipv4 mvpn
use af-group AF_IPV4_MVPN
!
address-family ipv6 mvpn
use af-group AF_IPV6_MVPN
!
!
neighbor 10.0.0.11
use neighbor-group IBGP_PEERS
!
neighbor 10.0.0.22
use neighbor-group IBGP_PEERS
!
vrf CUST
address-family ipv4 mvpn
!
address-family ipv6 mvpn
!
!
!
mpls ldp
mldp
logging notifications
!
!
multicast-routing
address-family ipv4
interface Loopback0
enable
!
mdt source Loopback0
accounting per-prefix
!
address-family ipv6
interface Loopback0
enable
!
mdt source Loopback0
accounting per-prefix
!
vrf CUST
address-family ipv4
mdt source Loopback0
log-traps
interface all enable
accounting per-prefix
bgp auto-discovery mldp
!
mdt default mldp p2mp
!
address-family ipv6
mdt source Loopback0
log-traps
interface all enable
accounting per-prefix
bgp auto-discovery mldp
!
mdt default mldp p2mp
!
!
!
router pim
address-family ipv4
log neighbor changes
!
vrf CUST
address-family ipv4
rpf topology route-policy RP_MCAST_CORE
mdt c-multicast-routing bgp
!
log neighbor changes
interface GigabitEthernet0/0/0/0.134
dr-priority 100
!
!
address-family ipv6
rpf topology route-policy RP_MCAST_CORE
mdt c-multicast-routing bgp
announce-pim-join-tlv
!
log neighbor changes
interface GigabitEthernet0/0/0/0.134
dr-priority 100
!
!
!
!
end

SR2 XR4

A:SR2>edit-cfg# candidate view
=========================
configure
router
bgp
rapid-withdrawal
rapid-update mvpn-ipv4 mvpn-ipv6
mvpn-vrf-import-subtype-new
group "IBGP_PEERS"
family vpn-ipv4 vpn-ipv6 mvpn-ipv4 mvpn-ipv6
exit
exit
exit
service
vprn 10 customer 1 create
igmp
interface "toCLIENT"
no shutdown
exit
no shutdown
exit
mld
interface "toCLIENT"
no shutdown
exit
no shutdown
exit
pim
no ipv6-multicast-disable
interface "toCLIENT"
priority 100
exit
apply-to all
rp
static
exit
bsr-candidate
shutdown
exit
rp-candidate
shutdown
exit
exit
no shutdown
exit
mvpn
auto-discovery default
c-mcast-signaling bgp
provider-tunnel
inclusive
mldp
no shutdown
exit
exit
selective
mldp
no shutdown
exit
no auto-discovery-disable
data-threshold 224.0.0.0/8 1
exit
exit
vrf-target unicast
exit
exit
exit
exit
exit
=========================

RP/0/0/CPU0:XR4#show run
!
mpls ldp
mldp
logging notifications
!
!
end

An interesting point in the Nokia (Alcatel-Lucent) SR OS configuration is that you don't configure mLDP itself in the global routing context; you just reference it in the VPRN. This means the mLDP capability is negotiated by default in Nokia (Alcatel-Lucent) SR OS, whereas in Cisco IOS XR you need to turn it on explicitly. As LDP is preconfigured in our lab, refer to the initial configuration files above for details.

Let's check the LDP sessions with respect to mLDP. Here is the output from the Nokia (Alcatel-Lucent) SR OS router SR1:

A:SR1# show router ldp session 10.0.0.33 detail
===============================================================================
LDP IPv4 Sessions (Detail)
===============================================================================
Legend: DoD – Downstream on Demand (for address FEC’s only)
DU – Downstream Unsolicited
R – Capability value received from peer
===============================================================================
——————————————————————————-
Session with Peer 10.0.0.33:0, Local 10.0.0.11:0
——————————————————————————-
Adjacency Type : Link State : Established
Up Time : 0d 00:07:58
Max PDU Length : 4096 KA/Hold Time Remaining : 28
Link Adjacencies : 1 Targeted Adjacencies : 0
Local Address : 10.0.0.11 Peer Address : 10.0.0.33
Local UDP Port : 646 Peer UDP Port : 646
Local TCP Port : 646 Peer TCP Port : 646
Local KA Timeout : 30 Peer KA Timeout : 180
Mesg Sent : 183 Mesg Recv : 167
IPv4 Pfx FEC Sent : 3 IPv4 Pfx FEC Recv : 4
IPv6 Pfx FEC Sent : 0 IPv6 Pfx FEC Recv : 0
IPv4 P2MP FEC Sent : 2 IPv4 P2MP FEC Recv : 2
IPv6 P2MP FEC Sent : 0 IPv6 P2MP FEC Recv : 0
Svc Fec128 Sent : 0 Svc Fec128 Recv : 0
Svc Fec129 Sent : 0 Svc Fec129 Recv : 0
IPv4 Addrs Sent : 4 IPv4 Addrs Recv : 3
IPv6 Addrs Sent : 0 IPv6 Addrs Recv : 0
Local GR State : Not Capable Peer GR State : Not Capable
Local Nbr Liveness Time: 0 Peer Nbr Liveness Time : 0
Local Recovery Time : 0 Peer Recovery Time : 0
Number of Restart : 0 Last Restart Time : Never
Label Distribution : DU
Oper Fec Limit Thresho*: 0
Capabilities
Local P2MP : Capable Peer P2MP : Capable
Local MP MBB : Capable Peer MP MBB : Not Capable
Local Dynamic : Capable Peer Dynamic : Not Capable
Local LSR Overload : Capable Peer LSR Overload : Not Capable
Local IPv4 Pfx : Capable Peer IPv4 Pfx : Capable
Local IPv6 Pfx : Capable Peer IPv6 Pfx : Not Capable
Local SvcFec128 : Capable Peer SvcFec128 : Capable
Local SvcFec129 : Capable Peer SvcFec129 : Capable
Local UnregNoti : Capable Peer UnregNoti : Not Capable
Advertise : Address
IPv4 PfxFecOLoad Sent : No IPv4 PfxFecOLoad Recv : No
IPv6 PfxFecOLoad Sent : No IPv6 PfxFecOLoad Recv : No
IPv4 P2MPFecOLoad Sent : No IPv4 P2MPFecOLoad Recv : No
IPv6 P2MPFecOLoad Sent : No IPv6 P2MPFecOLoad Recv : No
Svc Fec128 OLoad Sent : No Svc Fec128 OLoad Recv : No
Svc Fec129 OLoad Sent : No Svc Fec129 OLoad Recv : No
IPv4 PfxFec EOL Sent : No IPv4 PfxFec EOL Recv : No
IPv6 PfxFec EOL Sent : No IPv6 PfxFec EOL Recv : No
IPv4 P2MPFec EOL Sent : No IPv4 P2MPFec EOL Recv : No
IPv6 P2MPFec EOL Sent : No IPv6 P2MPFec EOL Recv : No
Svc Fec128 EOL Sent : No Svc Fec128 EOL Recv : No
Svc Fec129 EOL Sent : No Svc Fec129 EOL Recv : No
===============================================================================
* indicates that the corresponding row element may have been truncated.
===============================================================================


To my eye, the output from Cisco IOS XR is a bit more informative and interesting:

RP/0/0/CPU0:XR3#show mpls mldp neighbors
Mon Nov 6 16:44:05.863 UTC
mLDP neighbor database
MLDP peer ID : 10.0.0.11:0, uptime 00:09:25 Up,
Capabilities : P2MP
Target Adj : No
Upstream count : 2
Branch count : 2
Label map timer : never
Policy filter in :
Path count : 1
Path(s) : 10.11.33.11 GigabitEthernet0/0/0/0.13 LDP
Adj list : 10.11.33.11 GigabitEthernet0/0/0/0.13
Peer addr list : 10.0.0.11
: 10.11.22.11
: 10.11.33.11
: 10.11.44.11
.
MLDP peer ID : 10.0.0.44:0, uptime 00:10:41 Up,
Capabilities : Typed Wildcard FEC, P2MP, MP2MP
Target Adj : No
Upstream count : 0
Branch count : 0
Label map timer : never
Policy filter in :
Path count : 1
Path(s) : 10.33.44.44 GigabitEthernet0/0/0/0.34 LDP
Adj list : 10.33.44.44 GigabitEthernet0/0/0/0.34
Peer addr list : 10.11.44.44
: 10.22.44.44
: 10.33.44.44
: 10.0.0.44

You remember, we said that it's impossible to configure MVPN profile 1 (based on MP2MP mLDP) between Nokia (Alcatel-Lucent) SR OS and Cisco IOS XR, because mLDP in the MP2MP fashion isn't supported in SR OS. The output above clearly shows that only the P2MP capability is negotiated between the Nokia (Alcatel-Lucent) VSR (SR 7750) and Cisco IOS XRv, whereas between the Cisco IOS XR routers P2MP, MP2MP and Typed Wildcard FEC are all negotiated.

The next useful output comes from the BGP MVPN RIB (we take IPv4 for simplicity) at the Cisco IOS XR router XR3:

RP/0/0/CPU0:XR3#show bgp ipv4 mvpn vrf CUST
Status codes: s suppressed, d damped, h history, * valid, > best
i – internal, r RIB-failure, S stale, N Nexthop-discard
Origin codes: i – IGP, e – EGP, ? – incomplete
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 65000:10 (default for vrf CUST)
*>i[1][10.0.0.11]/40 10.0.0.11 0 100 0 i
*>i[1][10.0.0.22]/40 10.0.0.22 0 100 0 i
*> [1][10.0.0.33]/40 0.0.0.0 0 i
.
Processed 3 prefixes, 3 paths
!
!
RP/0/0/CPU0:XR3#show bgp ipv4 mvpn vrf CUST [1][10.0.0.11]/40
BGP routing table entry for [1][10.0.0.11]/40, Route Distinguisher: 65000:10
Versions:
Process bRIB/RIB SendTblVer
Speaker 12 12
Last Modified: Nov 6 16:34:51.178 for 00:02:39
Paths: (1 available, best #1, not advertised to EBGP peer)
Not advertised to any peer
Path #1: Received by speaker 0
Not advertised to any peer
Local
10.0.0.11 (metric 10) from 10.0.0.11 (10.0.0.11)
Origin IGP, metric 0, localpref 100, valid, internal, best, group-best, import-candidate, imported
Received Path ID 0, Local Path ID 0, version 12
Community: no-export
Extended community: RT:65000:10
PMSI: flags 0x00, type 2, label 0, ID 0x060001040a00000b000701000400002001
Source AFI: IPv4 MVPN, Source VRF: CUST, Source Route Distinguisher: 65000:10

For route-type 1, which is the inclusive PMSI (I-PMSI) auto-discovery route, we can see a very long string that describes how the tree is built. It encodes the type of the tree (PIM, mLDP or mRSVP-TE), the address of the root and other information; for mLDP it also encodes the opaque value, which is used for signalling the mLDP tree through the core, where routers don't speak BGP. For instance, the value 0x2001 at the end of the PMSI ID is 8193 in decimal, and we can see exactly that value in the mLDP database at the same Cisco IOS XR router XR3:

RP/0/0/CPU0:XR3#show mpls mldp database brief
LSM ID Type Root Up Down Decoded Opaque Value
0x00002 P2MP 10.0.0.33 0 2 [global-id 1]
0x00003 P2MP 10.0.0.11 1 1 [global-id 8193]
0x00004 P2MP 10.0.0.22 1 1 [global-id 8193]
0x00001 P2MP 10.0.0.33 0 2 [global-id 262146]
!
!
RP/0/0/CPU0:XR3#show mpls mldp database opaquetype global-id 8193
mLDP database
LSM-ID: 0x00003 Type: P2MP Uptime: 00:04:18
FEC Root : 10.0.0.11
Opaque decoded : [global-id 8193]
Upstream neighbor(s) :
10.0.0.11:0 [Active] Uptime: 00:04:18
Local Label (D) : 24007
Downstream client(s):
PIM MDT Uptime: 00:04:18
Egress intf : LmdtCUST
Table ID : IPv4: 0xe0000011 IPv6: 0xe0800011
RPF ID : 262147
RD : 65000:10
.
LSM-ID: 0x00004 Type: P2MP Uptime: 00:03:00
FEC Root : 10.0.0.22
Opaque decoded : [global-id 8193]
Upstream neighbor(s) :
10.0.0.11:0 [Active] Uptime: 00:02:32
Local Label (D) : 24009
Downstream client(s):
PIM MDT Uptime: 00:03:00
Egress intf : LmdtCUST
Table ID : IPv4: 0xe0000011 IPv6: 0xe0800011
RPF ID : 5
RD : 65000:10
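
The PMSI tunnel ID shown in the BGP output can also be decoded by hand. Below is a small illustrative Python sketch (the field layout follows RFC 6388 / RFC 6514; the function and key names are my own, not from any router tooling) that extracts the FEC root and the opaque value from that hex string:

```python
# Illustrative decoder for the mLDP P2MP FEC carried in the PMSI tunnel ID
# above (field layout per RFC 6388 / RFC 6514); function and key names are
# my own, not from any router tooling.
import ipaddress
import struct

def decode_mldp_p2mp_fec(hex_id: str) -> dict:
    data = bytes.fromhex(hex_id.removeprefix("0x"))
    fec_type = data[0]                                   # 0x06 = P2MP FEC element
    afi, addr_len = struct.unpack_from("!HB", data, 1)   # 1 = IPv4, 4-byte root
    root = ipaddress.ip_address(data[4:4 + addr_len])    # tree root address
    off = 4 + addr_len
    (opaque_len,) = struct.unpack_from("!H", data, off)  # opaque value length
    off += 2
    tlv_type = data[off]                                 # 1 = Generic LSP identifier
    (tlv_len,) = struct.unpack_from("!H", data, off + 1)
    lsp_id = int.from_bytes(data[off + 3:off + 3 + tlv_len], "big")
    return {"fec_type": fec_type, "root": str(root), "lsp_id": lsp_id}

print(decode_mldp_p2mp_fec("0x060001040a00000b000701000400002001"))
# prints {'fec_type': 6, 'root': '10.0.0.11', 'lsp_id': 8193}
```

The decoded root 10.0.0.11 and LSP identifier 8193 match the "global-id 8193" entry rooted at SR1 in the mLDP database above.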

Note that each vendor has its own default opaque value. Both Nokia (Alcatel-Lucent) SR OS routers SR1 and SR2 use the same opaque value for the I-PMSI, so we need to check the FEC root as well.

For Cisco IOS XR the default opaque values differ between address families: 1 for IPv4 and 262146 for IPv6.
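
The correspondence between the decimal global-ids in the outputs and their hex forms is plain arithmetic:

```python
# Decimal/hex forms of the default opaque values seen above: 8193 (0x2001)
# is the SR OS i-PMSI default, while 1 and 262146 (0x40002) are the IOS XR
# defaults for IPv4 and IPv6 respectively.
for gid in (8193, 1, 262146):
    print(f"global-id {gid:>6} = {gid:#x}")
# prints:
# global-id   8193 = 0x2001
# global-id      1 = 0x1
# global-id 262146 = 0x40002
```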

At the Nokia (Alcatel-Lucent) SR OS router SR1 we can check the mLDP database with the following command:

A:SR1# show router ldp bindings active p2mp
===============================================================================
LDP Bindings (IPv4 LSR ID 10.0.0.11)
(IPv6 LSR ID fc00::10:0:0:11)
===============================================================================
Legend: U – Label In Use, N – Label Not In Use, W – Label Withdrawn
WP – Label Withdraw Pending, BU – Alternate For Fast Re-Route
LF – Lower FEC, UF – Upper FEC, e – Label ELC
===============================================================================
LDP Generic IPv4 P2MP Bindings (Active)
===============================================================================
P2MP-Id Interface
RootAddr Op IngLbl EgrLbl
EgrNH EgrIf/LspId
——————————————————————————-
8193 73728
10.0.0.11 Push — 262135
10.11.22.22 1/1/1:12
.
8193 73728
10.0.0.11 Push — 24007
10.11.33.33 1/1/1:13
.
8193 73731
10.0.0.22 Pop 262134 —
— —
.
8193 73731
10.0.0.22 Swap 262134 24009
10.11.33.33 1/1/1:13
.
1 73729
10.0.0.33 Pop 262137 —
— —
.
1 73729
10.0.0.33 Swap 262137 262137
10.11.22.22 1/1/1:12
.
262146 73730
10.0.0.33 Pop 262136 —
— —
.
262146 73730
10.0.0.33 Swap 262136 262136
10.11.22.22 1/1/1:12
——————————————————————————-
No. of Generic IPv4 P2MP Active Bindings: 8
===============================================================================

Remember that in NG-MVPN there is no transport/service label split: there is only one label for a multicast stream in the VPN, which serves as both transport and service label at once.

The rest of the checks relate to a particular VPRN/VRF instance and are the same as in the first part of the article. So let's move on to data plane verification, which in a nutshell is a ping to a multicast address, and review the outputs alongside.

IPv4 multicast customers (ASM, SSM) at NG-MVPN

To refresh the customer topology, please refer to the following picture:

At the customer side we do the following configuration:

Cisco IOS XR – XR4

RP/0/0/CPU0:XR4#show run
!
vrf R41
address-family ipv4 multicast
!
address-family ipv6 multicast
!
!
vrf R42
address-family ipv4 multicast
!
address-family ipv6 multicast
!
!
vrf R43
address-family ipv4 multicast
!
address-family ipv6 multicast
!
!
multicast-routing
vrf R41
address-family ipv4
interface all enable
accounting per-prefix
!
address-family ipv6
interface all enable
accounting per-prefix
!
!
vrf R42
address-family ipv4
interface all enable
accounting per-prefix
!
address-family ipv6
interface all enable
accounting per-prefix
!
!
vrf R43
address-family ipv4
interface all enable
accounting per-prefix
!
address-family ipv6
interface all enable
accounting per-prefix
!
!
!
router mld
vrf R41
interface GigabitEthernet0/0/0/0.114
join-group ff35::232:0:0:11 fc00::10:255:134:44
!
!
vrf R43
interface GigabitEthernet0/0/0/0.134
join-group ff05::239:0:0:44
!
!
!
router igmp
vrf R41
interface GigabitEthernet0/0/0/0.114
join-group 232.0.0.11 10.255.134.44
!
!
vrf R43
interface GigabitEthernet0/0/0/0.134
join-group 239.0.0.44
!
!
!
router pim
vrf R41
address-family ipv4
log neighbor changes
!
address-family ipv6
log neighbor changes
!
!
vrf R42
address-family ipv4
log neighbor changes
bsr candidate-bsr 10.255.124.44 hash-mask-len 30 priority 1
bsr candidate-rp 10.255.124.44 priority 192 interval 30
!
address-family ipv6
log neighbor changes
bsr candidate-bsr fc00::10:255:124:44 hash-mask-len 126 priority 1
bsr candidate-rp fc00::10:255:124:44 priority 192 interval 60
!
!
vrf R43
address-family ipv4
log neighbor changes
!
address-family ipv6
log neighbor changes
!
!
!
end

As you see, we have the following IPv4 multicast streams:

Just as we did previously, we start with the IPv4 SSM group, as it's fully signalled. Here is the output from the ingress multicast router XR3 (reduced to highlight the interesting parts):

RP/0/0/CPU0:XR3#show bgp ipv4 mvpn vrf CUST route-type 7
Status codes: s suppressed, d damped, h history, * valid, > best
i – internal, r RIB-failure, S stale, N Nexthop-discard
Origin codes: i – IGP, e – EGP, ? – incomplete
Network Next Hop Metric LocPrf Weight Path
Route Distinguisher: 65000:10 (default for vrf CUST)
*>i[7][65000:10][65000][32][10.255.134.44][32][232.0.0.11]/184
10.0.0.11 0 100 0 i
Processed 1 prefixes, 1 paths
!
!
RP/0/0/CPU0:XR3#show mrib vrf CUST ipv4 route 232.0.0.11
IP Multicast Routing Information Base
.
Interface flags: F – Forward, A – Accept, IC – Internal Copy,
.
MA – Data MDT Assigned, LMI – mLDP MDT Interface, TMI – P2MP-TE MDT Interface
.
(10.255.134.44,232.0.0.11) RPF nbr: 10.255.134.44 Flags: RPF
Up: 00:04:55
Incoming Interface List
GigabitEthernet0/0/0/0.134 Flags: A, Up: 00:04:55
Outgoing Interface List
LmdtCUST Flags: F LMI, Up: 00:04:55

At the egress multicast router, which is the Nokia (Alcatel-Lucent) VSR SR1, we also see this group:

A:SR1# show router 10 pim group ipv4 232.0.0.11 detail
===============================================================================
PIM Source Group ipv4
===============================================================================
Group Address : 232.0.0.11
Source Address : 10.255.134.44
RP Address : 10.255.124.44
Advt Router : 10.0.0.33
Flags : Type : (S,G)
Mode : sparse
MRIB Next Hop : 10.0.0.33
MRIB Src Flags : remote
Keepalive Timer : Not Running
Up Time : 0d 00:07:52 Resolved By : rtable-u
.
Up JP State : Joined Up JP Expiry : 0d 00:00:07
Up JP Rpt : Not Joined StarG Up JP Rpt Override : 0d 00:00:00
.
Register State : No Info
Reg From Anycast RP: No
.
Rpf Neighbor : 10.0.0.33
Incoming Intf : mpls-if-73730
Outgoing Intf List : toCLIENT
.
Curr Fwding Rate : 0.0 kbps
Forwarded Packets : 0 Discarded Packets : 0
Forwarded Octets : 0 RPF Mismatches : 0
Spt threshold : 0 kbps ECMP opt threshold : 7
Admin bandwidth : 1 kbps
——————————————————————————-
Groups : 1
===============================================================================

So far everything looks fine, so we issue a ping:

RP/0/0/CPU0:XR4#ping vrf R43 232.0.0.11 rep 5
Sending 5, 100-byte ICMP Echos to 232.0.0.11, timeout is 2 seconds:
Reply to request 0 from 10.255.114.44, 89 ms
Reply to request 1 from 10.255.114.44, 29 ms
Reply to request 2 from 10.255.114.44, 1 ms
Reply to request 3 from 10.255.114.44, 19 ms
Reply to request 4 from 10.255.114.44, 19 ms

Very good! Let's go further and try the IPv4 ASM group 239.0.0.44. Here we'll issue a ping first to build the proper SPT, and then show some topology outputs:

RP/0/0/CPU0:XR4#ping vrf R41 239.0.0.44 rep 5
Sending 5, 100-byte ICMP Echos to 239.0.0.44, timeout is 2 seconds:
.
Reply to request 1 from 10.255.134.44, 99 ms
Reply to request 2 from 10.255.134.44, 39 ms
Reply to request 3 from 10.255.134.44, 9 ms
Reply to request 4 from 10.255.134.44, 29 ms

As usual, the first packet is dropped due to the SPT switchover, but all the remaining packets are OK. We start with the ingress multicast router, which is the Nokia (Alcatel-Lucent) SR OS router SR1:

A:SR1# show router 10 pim group ipv4 239.0.0.44 detail
===============================================================================
PIM Source Group ipv4
===============================================================================
Group Address : 239.0.0.44
Source Address : 10.255.114.44
RP Address : 10.255.124.44
Advt Router : 10.0.0.11
Flags : spt Type : (S,G)
Mode : sparse
MRIB Next Hop : 10.255.114.44
MRIB Src Flags : direct
Keepalive Timer Exp: 0d 00:03:00
Up Time : 0d 00:04:02 Resolved By : rtable-u
.
Up JP State : Joined Up JP Expiry : 0d 00:00:00
Up JP Rpt : Not Joined StarG Up JP Rpt Override : 0d 00:00:00
.
Register State : Pruned Register Stop Exp : 0d 00:00:26
Reg From Anycast RP: No
.
Rpf Neighbor : 10.255.114.44
Incoming Intf : toCLIENT
Outgoing Intf List : mpls-if-73728
.
Curr Fwding Rate : 0.0 kbps
Forwarded Packets : 4 Discarded Packets : 0
Forwarded Octets : 400 RPF Mismatches : 0
Spt threshold : 0 kbps ECMP opt threshold : 7
Admin bandwidth : 1 kbps
——————————————————————————-
Groups : 1
===============================================================================
!
!
A:SR1# show router 10 pim rp ipv4
===============================================================================
PIM RP Set ipv4
===============================================================================
Group Address Hold Expiry
RP Address Type Prio Time Time
——————————————————————————-
224.0.0.0/4
10.255.124.44 Dynamic 192 75 0d 00:01:55
——————————————————————————-
Group Prefixes : 1
===============================================================================

The PIM group output at SR1 shows that the multicast traffic comes in on the client's interface and is sent into the mLDP MDT. The next router to check is SR2, because the RP is connected there:

A:SR2# show router 10 pim group ipv4 239.0.0.44
===============================================================================
Legend: A = Active S = Standby
===============================================================================
PIM Groups ipv4
===============================================================================
Group Address Type Spt Bit Inc Intf No.Oifs
Source Address RP State Inc Intf(S)
——————————————————————————-
239.0.0.44 (*,G) toCLIENT 1
* 10.255.124.44
239.0.0.44 (S,G) spt mpls-if-73729 1
10.255.114.44 10.255.124.44
——————————————————————————-
Groups : 2
===============================================================================

We don't show all the details here, but the important point is that there are two groups. The initial (*,G) entry was used to register the multicast client R43 with the RP, while the new (S,G) entry was built from the multicast client R43 towards the multicast sender R41.

Now it's the turn of the egress multicast router XR3, which is a Cisco IOS XRv (ASR 9000):

RP/0/0/CPU0:XR3#show mrib vrf CUST ipv4 route 239.0.0.44
IP Multicast Routing Information Base
Entry flags: L – Domain-Local Source, E – External Source to the Domain,
C – Directly-Connected Check, S – Signal, IA – Inherit .
Interface flags: F – Forward, A – Accept, IC – Internal Copy,
NS – Negate Signal, DP – Don’t Preserve, SP – Signal Present,
.
MA – Data MDT Assigned, LMI – mLDP MDT Interface, TMI – P2MP-TE MDT Interface
.
(*,239.0.0.44) RPF nbr: 10.0.0.22 Flags: C RPF
Up: 00:52:12
Incoming Interface List
LmdtCUST Flags: A NS LMI, Up: 00:27:03
Outgoing Interface List
GigabitEthernet0/0/0/0.134 Flags: F NS LI, Up: 00:52:12
.
(10.255.114.44,239.0.0.44) RPF nbr: 10.0.0.11 Flags: RPF
Up: 00:10:35
Incoming Interface List
LmdtCUST Flags: A NS LMI, Up: 00:10:35
Outgoing Interface List
GigabitEthernet0/0/0/0.134 Flags: F NS, Up: 00:10:35

Note that we have two different RPF neighbours for these two entries: SR2 is the RPF neighbour for the RPT, whereas SR1 is the RPF neighbour for the SPT.

Well, IPv4 is done, let’s go to IPv6 multicast.

IPv6 multicast customers (ASM, SSM) at NG-MVPN

We have already configured what is necessary for IPv6 multicast, so I'll just recap the streams:

And here the problems start.

Note that these problems might be related solely to the virtual images; you might find everything working on physical nodes (like the Nokia/ALU 7750 or Cisco ASR 9000). I personally know of a lot of limitations in IOS XRv.

Problem No 1 – What is NOT working?

The Cisco IOS XRv router XR3 doesn't learn the information about the BSR-based RP R42 distributed from SR2. Here is the IPv6 RP information from all MPLS PE routers:

A:SR1# show router 10 pim rp ipv6
===============================================================================
PIM RP Set ipv6
===============================================================================
Group Address Hold Expiry
RP Address Type Prio Time Time
——————————————————————————-
ff00::/8
fc00::10:255:124:44 Dynamic 192 150 0d 00:01:44
——————————————————————————-
Group Prefixes : 1
===============================================================================
!
!
A:SR2# show router 10 pim rp ipv6
===============================================================================
PIM RP Set ipv6
===============================================================================
Group Address Hold Expiry
RP Address Type Prio Time Time
——————————————————————————-
ff00::/8
fc00::10:255:124:44 Dynamic 192 150 0d 00:01:14
——————————————————————————-
Group Prefixes : 1
===============================================================================
!
!
RP/0/0/CPU0:XR3# show pim vrf CUST ipv6 rp mapping | begin "ff00::/8"
! no output

It's worth mentioning that in a single-vendor environment with Cisco IOS XR the RP information is propagated properly. There is nothing special to say about a single-vendor environment with Nokia (Alcatel-Lucent) SR OS: the RP and a multicast client are already connected to two different Nokia (Alcatel-Lucent) VSR PEs, and it works fine.

Problem No 1 – What is working?

As we have proper PIMv6 and BGP MVPN-IPv6 information at SR1 and SR2, we can send an IPv6 multicast stream from the RP R42 down to R41, as if it were a customer. Let's subscribe R41 to the same group that R43 has already joined:

RP/0/0/CPU0:XR4(config-mld-R41-if)#show conf
Building configuration…
!! IOS XR Configuration 6.1.2
router mld
vrf R41
interface GigabitEthernet0/0/0/0.114
join-group ff05::239:0:0:44
!
!
!
router mld
!
end
!
!
RP/0/0/CPU0:XR4#ping vrf R42 ff05::239:0:0:44 source fc00::10:255:124:44 rep 5
Sending 5, 100-byte ICMP Echos to ff05::239:0:0:44, timeout is 2 seconds:
Reply to request 0 from fc00::10:255:114:44, 139 ms
Reply to request 0 from fc00::10:255:114:44, 139 ms
Reply to request 1 from fc00::10:255:114:44, 39 ms
Reply to request 2 from fc00::10:255:114:44, 29 ms
Reply to request 3 from fc00::10:255:114:44, 19 ms
Reply to request 4 from fc00::10:255:114:44, 29 ms

So in this flavour IPv6 multicast works.

To show a bit more of the MPLS data plane, I shut down the links SR1-XR3 and SR1-SR2, so that all the traffic flows through the P router XR4. Here is a new ping check:

RP/0/0/CPU0:XR4#ping vrf R42 ff05::239:0:0:44 source fc00::10:255:124:44 rep 5
Sending 5, 100-byte ICMP Echos to ff05::239:0:0:44, timeout is 2 seconds:
Reply to request 0 from fc00::10:255:114:44, 89 ms
Reply to request 0 from fc00::10:255:114:44, 99 ms
Reply to request 1 from fc00::10:255:114:44, 79 ms
Reply to request 2 from fc00::10:255:114:44, 39 ms
Reply to request 3 from fc00::10:255:114:44, 29 ms
Reply to request 4 from fc00::10:255:114:44, 49 ms

Here is also a Wireshark packet capture showing what the encapsulation on the wire looks like:

Problem No 2 – What is NOT working?

At first glance, the signalling looks fine:

A:SR1# show router 10 pim group ipv6 ff35::232:0:0:11
===============================================================================
Legend: A = Active S = Standby
===============================================================================
PIM Groups ipv6
===============================================================================
Group Address Type Spt Bit Inc Intf No.Oifs
Source Address RP State Inc Intf(S)
——————————————————————————-
ff35::232:0:0:11 (S,G) mpls-if-73730 1
fc00::10:255:134:44 fc00::10:255:12*
——————————————————————————-
Groups : 1
===============================================================================
* indicates that the corresponding row element may have been truncated.
!
!
XR3
RP/0/0/CPU0:XR3#show mrib vrf CUST ipv6 route ff35::232:0:0:11
IP Multicast Routing Information Base
Entry flags: L – Domain-Local Source, E – External Source to the Domain,
.
MoFB – MoFRR Backup, RPFID – RPF ID Set, X – VXLAN
Interface flags: F – Forward, A – Accept, IC – Internal Copy,
.
MA – Data MDT Assigned, LMI – mLDP MDT Interface, TMI – P2MP-TE MDT Interface
.
(fc00::10:255:134:44,ff35::232:0:0:11)
RPF nbr: fe80::255:44 Flags: RPF
Up: 00:28:46
Incoming Interface List
GigabitEthernet0/0/0/0.134 Flags: A, Up: 00:28:46
Outgoing Interface List
LmdtCUST Flags: F LMI, Up: 00:07:50

What is not fine is that, despite the good signalling, the data plane is broken:

RP/0/0/CPU0:XR4#ping vrf R43 ff35::232:0:0:11 source fc00::10:255:134:44 rep 5
Sending 5, 100-byte ICMP Echos to ff35::232:0:0:11, timeout is 2 seconds:
…..

And here Wireshark does an awesome job of showing us the root cause:

Remember, I said that there is a single label in NG-MVPN, which is both transport and service. For some reason Cisco IOS XRv (I guess the problem is specific to my virtual router) adds another label, the IPv6 Explicit NULL label. If you compare with the Wireshark output from the previous problem, you'll see the correct labelling there.
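
To make the failure concrete, the label stacks from the two captures can be compared with a tiny parser. This is a minimal sketch under my own assumptions (the label value 24007 is borrowed from the earlier mLDP output; the capture tool does nothing like this internally):

```python
# A minimal MPLS label-stack parser (my own sketch): each 4-byte entry is
# label(20) | EXP(3) | S(1) | TTL(8), and the S bit marks the bottom of
# the stack. Reserved label 2 is the IPv6 Explicit NULL label.
def parse_label_stack(data: bytes) -> list[int]:
    labels, off = [], 0
    while True:
        entry = int.from_bytes(data[off:off + 4], "big")
        labels.append(entry >> 12)      # top 20 bits = label value
        off += 4
        if entry & 0x100:               # S bit set: bottom of stack reached
            return labels

# What NG-MVPN expects: a single mLDP label (24007), S bit set, TTL 64
good = (24007 << 12 | 0x100 | 64).to_bytes(4, "big")
assert parse_label_stack(good) == [24007]

# What IOS XRv actually sends for IPv6: the mLDP label followed by an
# unexpected IPv6 Explicit NULL (label 2)
bad = (24007 << 12 | 64).to_bytes(4, "big") + (2 << 12 | 0x100 | 64).to_bytes(4, "big")
assert parse_label_stack(bad) == [24007, 2]
```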

Problem No 2 – What is working?

If we join R41 to a new IPv6 SSM group with R42 as the source, we get it working:

RP/0/0/CPU0:XR4(config)#show conf
Building configuration…
!! IOS XR Configuration 6.1.2
router mld
vrf R41
interface GigabitEthernet0/0/0/0.114
join-group ff35::232:0:0:11 fc00::10:255:124:44
!
!
!
router mld
!
!
RP/0/0/CPU0:XR4#ping vrf R42 ff35::232:0:0:11 so fc00::10:255:124:44 rep 5
Sending 5, 100-byte ICMP Echos to ff35::232:0:0:11, timeout is 2 seconds:
Reply to request 0 from fc00::10:255:114:44, 9 ms
Reply to request 1 from fc00::10:255:114:44, 19 ms
Reply to request 2 from fc00::10:255:114:44, 29 ms
Reply to request 3 from fc00::10:255:114:44, 79 ms
Reply to request 4 from fc00::10:255:114:44, 39 ms

The final configuration files are here: 094_config_final_SR2_profile_12 094_config_final_XR3_profile_12 094_config_final_XR4_profile_12 094_config_final_SR1_profile_12

Lessons learned

When I was preparing for the CCIE SP this year, I worked a lot with MVPNs. I knew that Cisco IOS XRv has problems with NG-MVPN and IPv6, but I had never made a packet capture. Now we have done it, and I realised where the problem comes from. At the same time, there is another virtual Cisco router, the CSR 1000v, which runs the Cisco IOS XE operating system. It has no data plane problems: IPv6 NG-MVPN works there, as do other features whose data plane doesn't work in Cisco IOS XRv, such as VPWS or VPLS.

Conclusion

Multicast services are as important as unicast for customers today. If a Communication Service Provider (CSP) helps a customer extend its multicast network over an IP VPN, it creates additional value for both of them, which can easily be converted into profit. At the same time, the use of NG-MVPN relaxes the load on the network and therefore improves performance and stability, even in a multivendor environment with Nokia (Alcatel-Lucent) SR OS and Cisco IOS XR. Take care and goodbye!

P.S.

If you have further questions or you need help with your networks, I'm happy to assist you; just send me a message.


BR,

Anton Karneliuk
