IOS XR 6.0.0 Hands-ON LAB  LAB GUIDE

Document created by Guest on Feb 10, 2016
Version 1

Cisco Live Berlin 2016

Authors: Akshat Sharma, Patrick Warichet, Jose Liste

Date: February 2016

 

Setting the Stage:


EXERCISE 1 - Router Bootup and Auto-Provisioning

 

 

Objectives

  • Access and familiarize with the POD environment
  • Launch the router
  • Familiarize with IOS XRv 9000 virtualization environment
  • Experience the auto-provisioning process
  • Installation of Chef Client (a configuration management application) inside the router

 

TASK 1.1 - Access and Familiarize with POD environment

 

Proctor Notes

  • Include VNC instructions for connecting to POD
  • Familiarize with the main components of the POD
    • xfce4 desktop, main windows and desktop menu items
    • Include drawing of overall POD network / component diagram
    • POD services (e.g. Chef server, ROUTEM)
    • Docker containers (e.g. ELK)
    • Common Lab services (e.g. WEB server, DHCP, DNS, etc.)

Pre-requisites

Students MUST have a VNC Viewer client application installed on their laptop. Any VNC client of your choice should work. We have verified VNC® Viewer for Google Chrome™ (available in the Chrome Web Store).

Student Tasks

  • Student connects to corresponding POD via VNC
  • Student follows instructions to familiarize with POD environment

 

 

TASK 1.1.1 Understanding the Pod Architecture

The main component of the student Pod is a virtual machine hosting a Cisco IOS XRv 9000 virtual router (see background information on IOS XRv 9000 in the following section)

 

In addition, the Pod provides a number of additional functions such as: DHCP server, WEB server, Chef server, Routem (a Cisco internal tool used for control-plane emulation of routing protocols) and Docker containers hosting various other applications. (see Figure 1 below)

 

As shown below, each Pod has a primary interface Eth0 in the 192.168.0.0/24 subnet and a host ID equal to the <pod_id>

For example: POD 17 will have eth0 with an IP address of 192.168.0.17

 

A secondary bridge interface, virbr0, has the IP address 192.168.122.1 on each pod. The virbr0 bridge is the default interface used by KVM to communicate with the POD. As you progress through the lab, several virtual interfaces and bridges will be created that allow the XRv 9000 to communicate with the services provided by the POD.
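To see these pieces from the POD user shell, a quick check along the following lines can help. This is a minimal sketch, assuming the standard ip and brctl utilities are present on the POD; the exact set of bridges will grow as the lab progresses.

# Primary POD interface and the default KVM bridge
ip addr show eth0
ip addr show virbr0

# List all Linux bridges and the interfaces attached to them
brctl show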

 

Figure 1: Pod Architecture

 

Cisco IOS XRv 9000 Overview

The Cisco IOS® XRv 9000 Router implements the feature set of Cisco IOS XR Software. Running on virtualized general x86 compute platforms, it complements existing physical Cisco® router platforms that rely on Cisco IOS XR Software, such as Cisco Network Convergence System routers, Cisco ASR 9000 Series Routers, and Cisco Carrier Routing System (CRS) platforms. Now, service providers can enhance their operational excellence and offerings based on physical routers - and move them easily to a virtual form factor. The Cisco IOS XRv 9000 Router offers greater agility, improved network efficiency, lower capital and operational expenditures, and the ability to efficiently scale network capacity up and down, based on demand.

 

IOS XRv 9000 runs on an architecture that provides separation of Control Plane (CP), Data Plane (DP) and Admin Plane (AP) using Linux Containers (LXC). IOS XRv 9000 separates the CP-DP by creating a DPC-DPA layering (Data Plane Controller / Data Plane Agent).

 

IOS XRv 9000 target platforms include virtual routers, physical routers, hybrid routers, and merchant-silicon-based switches and routers. This lab concentrates on the IOS XRv 9000 virtual router deliverable.

 

In addition, the new architecture provides the foundation that allows for the installation of third party LXC containers. A third party container can be used to host applications and offer some visibility over the DP and CP network elements.

Figure 2: Cisco IOS XRv 9000 Architecture


This lab is performed using a Cisco IOS XRv 9000 router instance running the IOS XR 6.0.0 image.

The terms "virtual router" and "XRv 9000" are used interchangeably in this document.

 

TASK 1.1.2 Connecting to the Pod

The student Pod is accessible through VNC using the IP address or name of the lab gateway ("berlin.podzone.org") and the <pod_id> as the VNC desktop id.

 

VNC to berlin.podzone.org:<pod_id> where pod_id = [1-30]

                           password: cisco123

 

Example: POD 17 is accessed by VNC to berlin.podzone.org:17
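If you prefer a command-line client over the Chrome app, a connection can also be made along these lines. This is a hedged sketch; it assumes a CLI viewer such as vncviewer (TigerVNC/RealVNC) is installed on your laptop.

# Replace 17 with your own <pod_id>; enter cisco123 when prompted for the VNC password
vncviewer berlin.podzone.org:17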

 

Once inside the POD, the student will be presented with a Linux XFCE4 graphical interface

Figure 3: Pod Linux Desktop

 

 

TASK 1.2 - Router Auto-Provisioning

 

 

Pre-requisites

The following is provided per POD:

  • ztp.sh script - script located in the router that runs during router initialization and responsible for downloading and running "xrv9k-boot.sh"
  • xrv9k-boot.sh script - user-defined script used after router is up to perform a set of predetermined tasks (see details below)
  • Chef client rpm
  • Telemetry (mgbl) rpm - to enable Streaming Telemetry
  • Cryptography (k9sec) rpm - to enable ssh

 

 

Student Tasks

  • Verify auto-provisioning process. As part of auto-provisioning, the router will go through multiple phases and execution of various scripts.
    • Student to review the logs associated with the ztp.sh script (in XR's Linux shell)
  • Scan through the content of the xrv9k-boot.sh script. In this lab, the script performs the necessary steps to facilitate installation of the Chef client, including:
    • Copy Chef client rpm into IOS XR
    • Install (yum) Chef client rpm
  • Verify Chef client download and installation

Chef client setup will be completed in TASK 3.1

 

TASK 1.2.1 Understanding the Auto-Provisioning Process

 

The auto-provisioning process (ztp.sh) kicks in at the end of the XR container boot process (process level 999) and provides two functions:

  1. Apply a static configuration
  2. Execute a script

In order to operate, auto-provisioning requires IP connectivity. Auto-provisioning's first action is to fork a DHCP client process on the management port and then wait for a valid response. If the response contains option 67 (bootfile-name), the client accesses the server over the protocol indicated in the DHCP response and downloads the referenced file.

Auto-provisioning will analyze the first line of the file it has received. If the file starts with “!! IOS XR”, it will interpret the rest of the file as a static configuration. If the file starts with “#!/bin/bash” or “#!/bin/sh”, auto-provisioning will interpret it as a script. Any other sequence in the header of the file will force auto-provisioning to discard the content of the file and stop execution.

Auto-provisioning inspects the current configuration of the system and stops further execution if a username is configured. If the file queried by auto-provisioning starts with the keyword “#! /bin/bash” or “#! /bin/sh”, the script is executed in the namespace of the XR container. After its execution, the auto-provisioning process is terminated and the DHCP address released. It is important to note that the DHCP address is configured on the management interface inside the shell of the XR container, and not in the XR configuration itself. Since the script is executed within the XR shell, it has access to all the standard tools provided by the Linux environment of the container.

To facilitate troubleshooting, auto-provisioning keeps several logs in the /disk0:/ztp directory. Use a text editor (such as vi) and inspect the content of the file /disk0:/ztp/ztp.log. You will notice that auto-provisioning works for both IPv4 and IPv6. Take time to familiarize yourself with the content of the other logs.
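For example, from the XR Linux shell (reached with the "run" command), the logs can be listed and read along these lines. This is a minimal sketch; file names other than ztp.log may differ on your image.

RP/0/RP0/CPU0:pod-rtr#run
[xr-vm_node0_RP0_CPU0:~]$ls -l /disk0:/ztp/
[xr-vm_node0_RP0_CPU0:~]$vi /disk0:/ztp/ztp.log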

 

 

Figure 4: IOS XR Bootup process with Auto-Provisioning

 

TASK 1.2.2 Inspect the Auto-provisioning Script

In the desktop of your POD, double-click on the icon labeled xrv9k-boot.sh in order to have a look at the auto provisioning script.

You will notice that some functions in the script access the HTTP server using the address 172.16.20.1, while other functions access it using 192.168.122.1. At initial boot, IOS XR does not have any configuration and all the interfaces are in the "administratively down" state. The ZTP process captures the management interface and places it in the global network namespace of the Linux shell; ZTP uses this interface to initiate the DHCP request and download the xrv9k-boot.sh script. The DHCP server is configured to offer ZTP an address in the 172.16.20.0/24 subnet. As the xrv9k-boot.sh script gets executed, it applies the xr.config configuration file. Once the configuration is applied, the management interface is assigned the address 192.168.122.40; this address is also available to the Linux shell, but in a different network namespace.

 

Have a look at  the following 2 functions:

 

function download_config(){
    ap_log "### Downloading system config ###";
    /usr/bin/wget http://172.16.20.1:8080/xr.config -O /disk0:/new-config 2>&1 >> $LOGFILE 
    ap_log "### Downloading system config complete ###";
}

function apply_check_config(){
# Apply configuration
  ap_log "Applying config..."
  while :
  do
      /pkg/bin/config -p15 -X -f /disk0:/new-config -c "ZTP" &> $CONFIGLOGFILE
      if [ $? -ne 0 ] ; then
          # If we need to wait for stable system, let's wait, otherwise we will quit (likely issue in config)
          if [ `grep "SYSTEM CONFIGURATION IS STILL IN PROGRESS" $CONFIGLOGFILE | wc -l` -eq 1 ]; then
              ap_log "SYSTEM CONFIGURATION going on, let us retry";
              sleep 5;
          elif [ `grep "Successfully entered exclusive" $CONFIGLOGFILE | wc -l` -eq 1 ]; then
              /pkg/bin/cfgmgr_show_failed -c > /disk0:/ztp/failed_config
              if [ `grep "ERROR" /disk0:/ztp/failed_config | wc -l` -eq 1 ]; then
                  ap_log "Configuration is applied with possible error"
                  ap_log "Failure is saved to /disk0:/ztp/failed_config"
                  break
              fi
          else
              ztp_hook_log "Couldn't acquire lock, retry in 5sec..";
              sleep 5;
          fi
    else
      # command completed fine
      ap_log "### XR configuration successfully applied ###"
      break
    fi
  done
}

 

These two functions allow the configuration of IOS XR from within the auto-provisioning script. As part of its standard set of tools, XR provides several utilities that manipulate and apply configurations. The "config" utility applies a text file to the running configuration of the system; the various if statements in the script ensure that the utility acquires a lock on the configuration and that the configuration was applied without errors.
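As an illustration, the same utility can be invoked by hand from the XR Linux shell. This is a hedged sketch: the flags are simply the ones used by the script above, the file /disk0:/new-config is the one downloaded by download_config(), and the commit label "LAB-TEST" is arbitrary.

# Apply a plain-text configuration file to the running configuration
/pkg/bin/config -p15 -X -f /disk0:/new-config -c "LAB-TEST"

# Confirm the result from the XR CLI without leaving the Linux shell
# (xrcmd is the same helper used by install_mgbl_pkg below)
xrcmd "show running-config"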

 

The following function will install the Manageability package (mgbl)

function install_mgbl_pkg(){
    ap_log "### XR MGBL INSTALL ###"
    /usr/bin/wget http://172.16.20.1:8080/xrv9k-mgbl-2.0.0.0-r600.x86_64.rpm-6.0.0 -O /disk0:/xrv9k-mgbl-2.0.0.0-r600.x86_64.rpm-6.0.0
    xrcmd "install add source /disk0: xrv9k-mgbl-2.0.0.0-r600.x86_64.rpm-6.0.0" 2>&1 >> $LOGFILE
    xrcmd "install activate xrv9k-mgbl-2.0.0.0-r600" 2>&1 >> $LOGFILE
    complete=0
    while [ "$complete" = 0 ]
      do
        complete=`xrcmd "show install active" | grep mgbl | head -n1 | wc -l`
        ap_log "Waiting for mgbl package to be activated"
        sleep 5
      done
    rm -f /disk0:/xrv9k-mgbl-2.0.0.0-r600.x86_64.rpm-6.0.0
    ap_log "### XR MGBL INSTALL COMPLETE ###"
}

 

The following function will install the Chef client inside the XR container. As you can see, the script only invokes standard YUM tools to install the package.

 

function install_chef_client(){
    ap_log "### Setting up chef ###";
    ap_log " 1) Download client ";
    $TPNNS_EXEC /usr/bin/wget http://192.168.122.1:8080/chef-12.4.1+20150910004954-1.ios_xr6.x86_64.rpm -O /root/chef-12.4.1+20150910004954-1.ios_xr6.x86_64.rpm 2>&1 >> $LOGFILE  
    ap_log " 2) Download validation";
    yum install -y /root/chef-12.4.1+20150910004954-1.ios_xr6.x86_64.rpm   
    ap_log "### Chef prepared ###";
}

The variable $TPNNS_EXEC is a substitute for the command "ip netns exec tpnns", which allows applications that are network-namespace unaware to be run in a specified namespace. With namespaces, the Linux kernel allows multiple instances of network interfaces and routing tables that operate independently of each other.

The IOS XR Linux shell includes the Third Party Network Namespace (TPNNS), which provides the required isolation between third-party applications and internal XR processes while still giving applications the necessary access to IOS XR interfaces. The install_chef_client() function is invoked after the IOS XR configuration is applied; since the configuration has assigned a static IPv4 address to the management interface, the script invokes wget within TPNNS to download the Chef RPM.
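To see the effect of TPNNS for yourself, a quick comparison from the XR Linux shell looks roughly like this. This is a hedged sketch; the wget URL is the same one used by the script above.

# List the network namespaces known to the XR Linux shell
ip netns list

# Compare interfaces in the global namespace vs. the third party namespace
ifconfig
ip netns exec tpnns ifconfig

# Run a namespace-unaware tool inside tpnns, exactly as $TPNNS_EXEC does
ip netns exec tpnns /usr/bin/wget http://192.168.122.1:8080/ -O /dev/null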

 

TASK 1.3 - Router Bootup

 

The student is expected to have some level of understanding of the IOS XRv 9000 architecture. This lab will not delve into XRv 9000 launching details.

The main goal is to get an XRv 9000 router instance up and running in order to understand and experience the enhancements introduced with IOS XR 6.0

Pre-requisites

The following is provided per POD:

  • sunstone.sh script - launching script for XRv 9000
  • XRv 9000 disk file (qcow2 image)
  • Shortcut / menu in the desktop

Student Tasks

  • Click on desktop shortcut in order to run "sunstone.sh" script and launch XRv 9000 router instance
  • Explore the various consoles associated with the router
    • XR LXC, Calvados LXC, Sunstone VM host
  • Verify containers created in XRv 9000 virtual machine (using Virsh commands)

 

TASK 1.3.1 - Booting up the Router

In the desktop of your POD, double-click on the icon labeled launch_xrv9k in order to start the virtual router instance

Upon clicking on the icon, the router will bootup from a pre-built disk image (qcow2)

A multi-tab xfce4-terminal will be launched. Each tab corresponds to different consoles available in the virtual router

 

Console Name       Description
ciscortr1          POD user shell. This is the primary way to interact with the virtual router over SSH. It is also used to run scripts.
ciscortr1QEMU      QEMU monitor console. Generally not used, other than to inspect or kill the XRv 9000 instance.
ciscortr1Xr        XR console. Used to interact with XR (via Telnet) and drop into XR's Linux shell.
ciscortr1XrAux     XR auxiliary port. Used to gain access to XR's Linux shell (via Telnet). Not used in this lab.
ciscortr1Admin     SysAdmin console. Used to gain access to the SysAdmin console (via Telnet) and SysAdmin's Linux shell. Not used in this lab.
ciscortr1Host      Telnet access to the host Linux environment of XRv 9000. Not used in this lab.

BE PATIENT - the entire process (including auto-provisioning) might take between 8 and 10 minutes to complete

If needed, further POD user shell terminals can be opened by clicking File -> Open Tab or pressing <Ctrl>+<Shift>+<T> (in the XFCE4 terminal that was opened when launching the router)

We recommend using tabs over individual terminal windows in order to avoid cluttering your desktop

 

TASK 1.4 Verify Router State Post Auto-Provisioning

 

 

TASK 1.4.1 Verify Router Interface Configuration

In XR's console tab, log into the router using the following credentials

Username: root / Password: lab

 


 

After bootup, the following error messages MAY be displayed in XR's console. These are caused by a known software defect that will be addressed at FCS. For now, the messages can be safely ignored.

Existing lock /var/run/yum.pid: another copy is running as pid 8553.

Another app is currently holding the yum lock; waiting for it to exit...

  The other application is: yum

    Memory :  72 M RSS (255 MB VSZ)

    Started: Tue Dec  8 22:46:39 2015 - 00:04 ago

    State  : Running, pid: 8553

 

RP/0/RP0/CPU0:pod-rtr#show ipv4 interface brief

Wed Dec  2 01:25:37.532 UTC

 

Interface                      IP-Address      Status          Protocol Vrf-Name

Loopback0                      1.1.1.1         Up              Up       default

Loopback1                      8.8.8.8         Up              Up       default

Loopback2                      9.9.9.9         Up              Up       default

GigabitEthernet0/0/0/0         unassigned      Shutdown        Down     default

GigabitEthernet0/0/0/1         unassigned      Shutdown        Down     default

GigabitEthernet0/0/0/2         unassigned      Shutdown        Down     default

MgmtEth0/RP0/CPU0/0            192.168.122.40  Up              Up       default

 

 

TASK 1.4.2 Verify XR Package Installation

 

The m2m package was installed as part of the auto-provisioning process

RP/0/RP0/CPU0:pod-rtr#show install active

Wed Dec  2 01:30:43.694 UTC

Node 0/RP0/CPU0 [RP]

  Boot Partition: xr_lv0

  Active Packages: 2

        xrv9k-xr-6.0.0.22I version=6.0.0.22I [Boot image]

        xrv9k-m2m-1.0.0.0-r60022I

 

 

TASK 1.4.3 Verify Chef Client Package Installation

 

From XR's console tab, drop into XR's Linux shell by using the "run" command

 

Be careful to distinguish among the several prompts, which indicate the exact location from which you are issuing commands

For example:

XR console enabled prompt would be: RP/0/RP0/CPU0:pod-rtr#

XR Linux shell prompt would be: [xr-vm_node0_RP0_CPU0:~]$

 

RP/0/RP0/CPU0:pod-rtr#run

Wed Dec  2 01:34:52.745 UTC

[xr-vm_node0_RP0_CPU0:~]$

 

To verify that the Chef Client package has been correctly installed issue the following commands

The following command queries the router's RPM database looking for any installed package with the name "chef".

 

rpm - RPM Package Manager

-q, --query: Queries the RPM database

-i, --info: Display   package  information,  including  name,  version,  and description

 

This command queries the RPM database in /var/lib/rpm and lists some of the package metadata stored there.

This Chef build targets the Wind River® Linux 7 distribution.

The Packager and Source RPM fields show that the package is created and maintained by Chef Software, Inc.

The RPM was downloaded and installed by the xrv9k-boot.sh auto-provisioning script.

 

[xr-vm_node0_RP0_CPU0:~]$ rpm -qi chef

Name        : chef                         Relocations: /

Version     : 12.4.1+20150910004954             Vendor: Omnibus <omnibus@getchef.com>

Release     : 1.ios_xr6                     Build Date: Thu Sep 10 00:50:31 2015

Install Date: Tue Dec  1 02:31:21 2015      Build Host: wrl7builder

Group       : default                       Source RPM: chef-12.4.1+20150910004954-1.ios_xr6.src.rpm

Size        : 144818105                        License: unknown

Signature   : DSA/SHA1, Thu Sep 10 00:50:31 2015, Key ID 59bac91c5bdd71b3

Packager    : Chef Software, Inc. <maintainers@chef.io>

URL         : https://www.chef.io

Summary     : The full stack of chef

Architecture: x86_64

Description :

The full stack of chef

 

 

Let's check the version of Chef client that gets installed in the router

 

[xr-vm_node0_RP0_CPU0:~]$chef-client -v

Chef: 12.4.1

 

 


EXERCISE 2 - Modularity

 

 

Objectives

  • Installation of packages needed to perform subsequent exercises

 

TASK 2.1 - Package Installation

Pre-requisites

The following is provided per POD:

  • Crypto (k9sec) rpm - to enable SSH support
  • Manageability (mgbl) rpm - to enable NETCONF
  • A python webserver running in a directory in order to serve as download area of these packages

Student Tasks

  • Perform install of manageability and crypto packages
  • Verify package installation
  • Log in to XR and verify that ssh and netconf-yang commands are now available

 

TASK 2.1.1 - Understanding the Setup

 

Figure 4: Router Package Install Workflow

 

Open the WEB browser located in the desktop and point it to the following URL: http://192.168.122.1:8080/

You should see the RPMs available in the POD

 

TASK 2.1.2 - Package Installation

 

Packages can be installed using either the CLI or tools from the shell. Two new CLI commands have been introduced that complement the existing ones: “install update” and “install upgrade”, as described in the table below. These new commands require an external package repository accessible via FTP/SFTP/SCP/TFTP or HTTP.

 

Command                                                  Description
install update source <repository>                       When no package is specified, update the latest SMUs of all installed packages
install upgrade source <repository> version <ver_num>    Upgrade the base image to the specified version. All installed packages will be upgraded to the same release as the base package

 

 

 

RP/0/RP0/CPU0:pod-rtr#install ?

  activate    Activate software package(s)(cisco-support)

  add         Add package file(s) to software repository(cisco-support)

  commit      Commit changes to the active software(cisco-support)

  deactivate  Deactivate software package(s)(cisco-support)

  extract     Extract mini image to be activated via ISSU(cisco-support)

  prepare     Prepare software package(s) to be activated(cisco-support)

  remove      Remove package file(s) from software repository(cisco-support)

  update      Add & Activate packages

              (cisco-support)

  upgrade     Add & Activate packages along with given version of base image

              (cisco-support)

  verify       verifies packages present on the router(cisco-support)

 

 

RP/0/RP0/CPU0:pod-rtr#install  update source ?

  WORD  Enter source directory for the package(s)

        Example: 

         sftp://user@server/directory/

         scp://user@server/directory/

         ftp://user@server/directory/

         tftp://server/directory/

         http://server/directory/

 

RP/0/RP0/CPU0:pod-rtr#install update source http://192.168.122.1:8080/ xrv9k-k9sec xrv9k-mgbl

Wed Dec  2 03:00:05.121 UTC

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Update in progress...

Scheme : http

Hostname : 192.168.122.1:8080

Collecting software state..

 

Update packages :

    xrv9k-k9sec

    xrv9k-mgbl

Fetching .... xrv9k-mgbl-1.0.0.0-r60022I.x86_64.rpm

 

Skipped downloading inactive packages:

    xrv9k-k9sec-1.0.0.0-r60022I.x86_64.rpm

Adding packages

    xrv9k-mgbl-1.0.0.0-r60022I.x86_64.rpm

Dec 02 03:00:16 Install operation 13 started by root:

install add source /misc/disk1/install_tmp_staging_area/6.0.0.22I xrv9k-mgbl-1.0.0.0-r60022I.x86_64.rpm

Dec 02 03:00:17 Install operation will continue in the background

Dec 02 03:00:21 Install operation 13 finished successfully

 

Install add operation successfull

Activating xrv9k-k9sec-1.0.0.0-r60022I xrv9k-mgbl-1.0.0.0-r60022I

Dec 02 03:00:23 Install operation 14 started by root:

  install activate pkg xrv9k-k9sec-1.0.0.0-r60022I xrv9k-mgbl-1.0.0.0-r60022I

Dec 02 03:00:23 Package list:

Dec 02 03:00:23     xrv9k-k9sec-1.0.0.0-r60022I

Dec 02 03:00:23     xrv9k-mgbl-1.0.0.0-r60022I

Dec 02 03:00:28 Install operation will continue in the background

 

 

RP/0/RP0/CPU0:pod-rtr#

 

 

 

This product contains cryptographic features and is subject to United

States and local country laws governing import, export, transfer and

use. Delivery of Cisco cryptographic products does not imply third-party

authority to import, export, distribute or use encryption. Importers,

exporters, distributors and users are responsible for compliance with

U.S. and local country laws. By using this product you agree to comply

with applicable laws and regulations. If you are unable to comply with

U.S. and local laws, return this product immediately.

 

A summary of U.S. laws governing Cisco cryptographic products may be

found at:

http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

 

If you require further assistance please contact us by sending email to

export@cisco.com.

 

 

 

 

RP/0/RP0/CPU0:pod-rtr#Dec 02 03:02:35 Install operation 14 finished successfully

 

 

TASK 2.1.3 - Package Verification

 

RP/0/RP0/CPU0:pod-rtr#show install active

Wed Dec  2 03:03:11.866 UTC

Node 0/RP0/CPU0 [RP]

  Boot Partition: xr_lv0

  Active Packages: 4

        xrv9k-xr-6.0.0.22I version=6.0.0.22I [Boot image]

        xrv9k-m2m-1.0.0.0-r60022I

       xrv9k-k9sec-1.0.0.0-r60022I

        xrv9k-mgbl-1.0.0.0-r60022I

 

 

Let's play around a bit with the installed packages to understand their structure better:

 

RP/0/RP0/CPU0:pod-rtr#run

Wed Dec  2 03:04:32.231 UTC

 

 

[xr-vm_node0_RP0_CPU0:~]$rpm -qi xrv9k-k9sec

Name        : xrv9k-k9sec                  Relocations: /opt/cisco/XR/packages/xrv9k-k9sec-1.0.0.0-r60022I

Version     : 1.0.0.0                           Vendor: (none)

Release     : r60022I                       Build Date: Thu Nov  5 10:41:30 2015

Install Date: Wed Dec  2 03:00:41 2015      Build Host: sjck-gold-38

Group       : IOS-XR                        Source RPM: xrv9k-k9sec-1.0.0.0-r60022I.src.rpm

Size        : 8607679                          License: Copyright (c) 2015 Cisco Systems Inc. All rights reserved.

Signature   : (none)

Packager    : selva

Summary     : Bundle package for iosxr-security

Architecture: x86_64

Description :

Bundle package for iosxr-security

Build workspace: /scratch/very_important_do_not_delete/xrv9k-sevt

[xr-vm_node0_RP0_CPU0:~]$rpm -qi xrv9k-mgbl

Name        : xrv9k-mgbl                   Relocations: /opt/cisco/XR/packages/xrv9k-mgbl-1.0.0.0-r60022I

Version     : 1.0.0.0                           Vendor: (none)

Release     : r60022I                       Build Date: Thu Nov  5 10:41:23 2015

Install Date: Wed Dec  2 03:01:48 2015      Build Host: sjck-gold-38

Group       : IOS-XR                        Source RPM: xrv9k-mgbl-1.0.0.0-r60022I.src.rpm

Size        : 28994237                         License: Copyright (c) 2015 Cisco Systems Inc. All rights reserved.

Signature   : (none)

Packager    : selva

Summary     : Bundle package for iosxr-mgbl

Architecture: x86_64

Description :

Bundle package for iosxr-mgbl

Build workspace: /scratch/very_important_do_not_delete/xrv9k-sevt

 

In the end, these packages are just RPMs. Use RPM utilities to check the RPM metadata for dependency checks.

Pro Tip:  Run these RPM utilities off box on any Linux system that has rpm installed. Script around some of the RPM commands to create your own dependency management tool for XR packages. So cool!
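As a starting point, here is a hedged sketch of such a helper. It is meant to be run off box on any Linux host with the rpm binary, in a directory containing downloaded XR RPM files (the xrv9k-*.rpm glob is illustrative). The on-box equivalent, rpm -qR against the installed packages, is shown below.

#!/bin/bash
# Print the declared dependencies of every XR package RPM in the current directory
for pkg in xrv9k-*.rpm; do
    echo "=== $pkg ==="
    rpm -qp --requires "$pkg"    # same information as 'rpm -qR' on an installed package
done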

 

[xr-vm_node0_RP0_CPU0:~]$rpm -qR xrv9k-mgbl

/bin/sh

/bin/sh

/bin/sh

/bin/sh

xrv9k-iosxr-bgp >= 1.0.0.0

xrv9k-iosxr-bgp < 2.0.0.0

xrv9k-iosxr-fwding >= 1.0.0.0

xrv9k-iosxr-fwding < 2.0.0.0

xrv9k-iosxr-infra >= 1.0.0.0

xrv9k-iosxr-infra < 2.0.0.0

xrv9k-iosxr-os >= 1.0.0.0

xrv9k-iosxr-os < 2.0.0.0

xrv9k-iosxr-routing >= 1.0.0.0

xrv9k-iosxr-routing < 2.0.0.0

 

[xr-vm_node0_RP0_CPU0:~]$rpm -qR xrv9k-k9sec

/bin/sh

/bin/sh

/bin/sh

/bin/sh

xrv9k-iosxr-fwding >= 1.0.0.0

xrv9k-iosxr-fwding < 2.0.0.0

xrv9k-iosxr-infra >= 1.0.0.0

xrv9k-iosxr-infra < 2.0.0.0

xrv9k-iosxr-os >= 1.0.0.0

xrv9k-iosxr-os < 2.0.0.0

 

EXERCISE 3 - Setting Up SSH access

 

 

Objectives

  • Set up and verify SSH access to the XR console and to the XR Linux shell

There are two types of SSH access into XR covered in this exercise: SSH to the XR console itself (enabled by the k9sec package and the "ssh server" configuration, TASK 3.1) and SSH to XR's Linux shell over port 57722 (served by the sshd_tpnns service in the third-party network namespace, TASK 3.2).

TASK 3.1 - SSH access to XR console

Pre-requisites

The following is provided per POD:

  • Crypto (k9sec) package installed and activated on the router (TASK 2.1)

Student Tasks

  • Generate an RSA key pair and enable the SSH server (version 2) in the XR configuration
  • From the POD shell, verify that the XR loopback is reachable and SSH into the XR console using the credentials created during ZTP (root / lab)

 

RP/0/RP0/CPU0:pod-rtr#crypto key generate rsa

Thu Dec  3 02:51:25.404 UTC

 

The name for the keys will be: the_default

  Choose the size of the key modulus in the range of 512 to 4096 for your General Purpose Keypair. Choosing a key modulus greater than 512 may take a few minutes.

 

How many bits in the modulus [1024]: Generating RSA keys ...

Done w/ crypto generate keypair

[OK]

 

RP/0/RP0/CPU0:pod-rtr#conf t

Thu Dec  3 02:51:28.880 UTC

RP/0/RP0/CPU0:pod-rtr(config)#ssh server v2

RP/0/RP0/CPU0:pod-rtr(config)#commit

Thu Dec  3 02:51:34.257 UTC

RP/0/RP0/CPU0:pod-rtr(config)#

From POD shell verify that we can ping XR loopback

Then attempt to SSH using the XR username created during ZTP (root / lab)

 

cisco@pod18:~$ ping 1.1.1.1

PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.

64 bytes from 1.1.1.1: icmp_seq=1 ttl=255 time=4.88 ms

64 bytes from 1.1.1.1: icmp_seq=2 ttl=255 time=2.16 ms

^C

--- 1.1.1.1 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 2.160/3.521/4.883/1.362 ms

 

 

cisco@pod18:~$ ssh root@1.1.1.1

The authenticity of host '1.1.1.1 (1.1.1.1)' can't be established.

RSA key fingerprint is 40:3e:58:10:94:25:dc:7c:21:14:cc:a3:3d:7f:b8:d8.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '1.1.1.1' (RSA) to the list of known hosts.

root@1.1.1.1's password:

 

 

RP/0/RP0/CPU0:pod-rtr#

 

TASK 3.2 - SSH access to XR Linux Shell

 

 

Pre-requisites

The following is provided per POD:

  • TPNNS SSH service (sshd_tpnns) available in the XR Linux shell

Student Tasks

  • Start the sshd_tpnns service from the XR Linux shell; this enables SSH in the third-party network namespace, listening on port 57722
  • From the POD shell, SSH to the XR Linux shell using the XR loopback address and port 57722

RP/0/RP0/CPU0:pod-rtr#run

Thu Dec  3 02:53:57.478 UTC

 

[xr-vm_node0_RP0_CPU0:~]$service sshd_tpnns start

Starting OpenBSD Secure Shell server: sshd

  generating ssh RSA key...

  generating ssh ECDSA key...

  generating ssh DSA key...

  generating ssh ED25519 key...

done.

[xr-vm_node0_RP0_CPU0:~]$

 

cisco@pod18:~$ ssh root@1.1.1.1 -p 57722

The authenticity of host '[1.1.1.1]:57722 ([1.1.1.1]:57722)' can't be established.

ECDSA key fingerprint is e9:54:aa:0e:14:a0:fe:eb:8f:27:94:9a:42:19:31:d5.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '[1.1.1.1]:57722' (ECDSA) to the list of known hosts.

root@1.1.1.1's password:

Last login: Thu Dec  3 02:04:27 2015 from 1.1.1.1

[xr-vm_node0_RP0_CPU0:~]$
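To double-check that the TPNNS SSH server is listening, a quick look from the XR Linux shell can be done along these lines. This is a hedged sketch; it assumes netstat is available in the shell.

# Confirm sshd is listening on port 57722 inside the third party network namespace
ip netns exec tpnns netstat -tln | grep 57722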

 

 


EXERCISE 4 - Application Hosting

 

 

Objectives

  • Explore the Chef Server
  • Understand the Cookbook
  • Setup and verification of Chef client
  • Installation and verification of a third-party app container that includes a Streaming Telemetry receiver

 

TASK 4.1 - Explore the Chef Server

Pre-requisites

The following is provided per POD:

  • Chef server

Student Tasks

Before we bootstrap the chef-client, let's explore the chef-server and look at the available recipe/cookbook:

  • Open Firefox from the desktop

         

          You will be presented with the login screen. The credentials are already set up; just hit the "Sign In" button. In case you mess up, the credentials for the Chef server are: cisco/cisco123

  • You will now be taken to the main chef-server dashboard. There should be no nodes already registered on the server.

  • Let's take a look at the cookbooks set up on the chef-server:

  Click on the "Policy" tab and select the cookbook titled "setup_xr" You should see a way to expand the cookbook contents to the full screen as show by the arrows below

  • Select the tab titled "Content" and drop into "default.rb" as highlighted below. This is the cookbook we intend to run

TASK 4.2 - Understand the Cookbook

Pre-requisites

The following is provided per POD:

  • Cookbook titled "setup_xr"

Student Tasks

The cookbook "setup_xr" is a standalone recipe that performs some management tasks for the lab:

  • Download an ubuntu container tar ball and set up the rootfs directory.
  • Launch the container.
  • Copy policy and map files for the Streaming Telemetry exercise to the relevant directories.

 

The cookbook is broken up into chunks below, with notes right above each chunk to indicate what it is meant to do:

 

#

# Cookbook Name:: setup_xr

# Recipe:: default

#

# Copyright 2015, YOUR_COMPANY_NAME

#

# All rights reserved - Do Not Redistribute

#

Create the directory to extract the container rootfs

directory '/misc/app_host/rootfs' do     

  owner 'root'

  group 'root'

  mode '755'

  action :create

end

Download the demo tar ball that contains the Telemetry policy and map files. The XML file used to launch the container (demo.xml) is also a part of the demo tar ball.

remote_file '/root/demo_set.tar.gz' do

  source 'http://pod.cisco.local:8080/demo_set.tar.gz'

  owner 'root'

  group 'root'

  mode '0755'

  action :create_if_missing

end

Untar the demo tar ball into the /root/ directory

execute 'Untar demo set' do

  command 'tar -zxf /root/demo_set.tar.gz'

  cwd '/root/'

  not_if { Dir.exists?("/root/demo_set/") }

end

Download the ubuntu container tar ball

execute 'Download large container rootfs tar ball' do

  command 'wget http://pod.cisco.local:8080/ubuntu_gpb.tar.gz'

  cwd '/misc/app_host/'

  not_if { File.exists?("/misc/app_host/ubuntu_gpb.tar.gz") }

end

 

Untar the rootfs tar ball to the expected directory /misc/app_host/rootfs

execute 'Untar container rootfs' do

  command 'tar -zxf /misc/app_host/ubuntu_gpb.tar.gz'

  cwd '/misc/app_host/rootfs'

  ignore_failure true

  not_if { Dir.exists?("/misc/app_host/rootfs/bin") }

end

This is a workaround for this image and may be ignored for now

execute 'Correct /usr/bin/sudo permissions for rootfs' do

  command 'chroot . chown -Rf root:root usr/bin/sudo && chroot . chmod 4755 usr/bin/sudo'

  cwd '/misc/app_host/rootfs'

  ignore_failure true

end

Start the virsh container using the rootfs. This is done using the XML file (demo.xml) obtained from the demo_set.tar.gz file above. The sample below shows a simple bash script embedded in the chef recipe. This could be done using a ruby_block as well or even by creating a new resource for chef.

bash 'start the virsh container' do

  cwd '/root/demo_set'

  code <<-EOH

virsh_demo_state=`bash check_virsh_demo_status.sh`

   if [[ $virsh_demo_state == 0 ]]; then

  nsenter -t 1 -n -- virsh -c lxc+tcp://10.11.12.15:16509 create /root/demo_set/demo.xml

   else

  echo "Nothing to do here"

  fi

   EOH

end

Copy the telemetry policy files (meant for the Streaming Telemetry lab) to the right directory: /telemetry/policies

execute 'Copy over the prefix and onbox policies to /telemetry/policies' do

  command 'cp prefix.policy onbox.policy /telemetry/policies'

  cwd '/root/demo_set/'

end

The high_prefix and low_prefix policy files are stand-in policy files that will be used in the Streaming Telemetry section. For now, let's just keep them in the /root directory

execute 'Copy over the high_prefix and low_prefix policies to /root' do

  command 'cp high_prefix.policy low_prefix.policy /root'

  cwd '/root/demo_set/'

end

Finally, for the Streaming Telemetry section, copy the pre-generated map files (required for GBP over UDP encoder) to the /telemetry/gpb/maps/ directory

execute 'Copy over map files to /telemetry/gpb/maps' do

  command 'cp *.map /telemetry/gpb/maps/'

  cwd '/root/demo_set/'

end
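If you prefer the command line to the web UI, the same cookbook can also be inspected from the POD shell with knife. This is a hedged sketch; it assumes the knife configuration under ~/chef-repo/.chef that is used in TASK 4.3.

cd ~/chef-repo
# List the cookbooks uploaded to the Chef server
knife cookbook list
# Show the parts that make up the setup_xr cookbook
knife cookbook show setup_xr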

 

TASK 4.3 - Chef Client Setup

 

 

Pre-requisites

The following is provided per POD:

  • Chef server
  • Pre-defined Chef Cookbook (setup_xr)

Student Tasks

  • Run Chef knife commands to bring up and register the Chef client with the Chef server
  • Verify Chef client installation and registration with the Chef server
  • Verify the result of the recipe run.

 

TASK 4.3.1 - Chef Client Setup

 

Now that we have SSH access to the XR Linux shell, let's set up the Chef client on XR.

Remember that the chef client RPM was installed as part of the auto-provisioning process.

With the chef-client RPM already installed, we do not need internet connectivity for XR to bootstrap the chef-client.

 

To begin with, on the POD user shell (outside the router), cd into the chef-repo directory:

cisco@pod18:~$cd chef-repo/

cisco@pod18:~/chef-repo$ ls

chefignore cookbooks  data_bags  environments  LICENSE README.md  roles

Now, run the knife bootstrap command:

cisco@pod18:~/chef-repo$ knife bootstrap -x root -P lab -p 57722 -N IOS_XR -r "recipe[setup_xr]" 1.1.1.1

Doing old-style registration with the validation key at /home/cisco/chef-repo/.chef/cisco-validator.pem...

Delete your validation key in order to use your user credentials instead

Connecting to 1.1.1.1

1.1.1.1 -----> Existing Chef installation detected

1.1.1.1 Starting the first Chef Client run...

1.1.1.1 Starting Chef Client, version 12.4.1

1.1.1.1 Creating a new client identity for IOS_XR using the validator key.

1.1.1.1 resolving cookbooks for run list: ["setup_xr"]

1.1.1.1 Synchronizing Cookbooks:

1.1.1.1 - setup_xr

1.1.1.1 Compiling Cookbooks...

1.1.1.1 Converging 10 resources

1.1.1.1 Recipe: setup_xr::default

1.1.1.1 * directory[/misc/app_host/rootfs] action create

1.1.1.1 - create new directory /misc/app_host/rootfs

1.1.1.1 - change mode from '' to '0755'

1.1.1.1 - change owner from '' to 'root'

1.1.1.1 - change group from '' to 'root'

1.1.1.1 * remote_file[/root/demo_set.tar.gz] action create_if_missing

1.1.1.1 - create new file /root/demo_set.tar.gz

1.1.1.1 - update content in file /root/demo_set.tar.gz

 

 

</snip>

 

1.1.1.1   * execute[Copy over the high_prefix and low_prefix policies to /root] action run

1.1.1.1     - execute cp high_prefix.policy low_prefix.policy /root

1.1.1.1   * execute[Copy over map files to /telemetry/gpb/maps] action run

1.1.1.1     - execute cp *.map /telemetry/gpb/maps/

1.1.1.1

1.1.1.1 Running handlers:

1.1.1.1 Running handlers complete

1.1.1.1 Chef Client finished, 9/9 resources updated in 49.708955059 seconds
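To verify the registration with the Chef server (one of the Student Tasks above), a quick hedged check from the POD shell could look like this, again assuming the knife configuration under ~/chef-repo/.chef:

cd ~/chef-repo
# The node registered during bootstrap should now be listed
knife node list
# Inspect the node object, including its run list
knife node show IOS_XR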

 

 

TASK 4.4 - Verify Third-Party App Linux Container Installation

 

 

Proctor Notes

  • It is key to demonstrate Linux container installation while minimizing the complexity associated with creation of the container’s root file system (FS) as well as the usage of LXC tools. For that reason, each POD provides a pre-built TAR ball that can be referenced directly by a Chef recipe

Pre-requisites

The following is provided per POD:

  • A pre-built root file system TAR ball that includes the following:
    • A simple streaming telemetry receiver used to collect BGP prefix information from the router
    • GPB protobuf file used to decode data collected by the telemetry receiver. This is required since the telemetry session on-box uses Google Protocol Buffers (GPB) over UDP as encapsulation
    • The Telemetry receiver also has code to monitor and analyze the collected data and take actions based on thresholds
    • More details about this telemetry session are provided in EXERCISE 5
  • Chef recipe for LXC bringup

Student Tasks

  • Issue "virsh" commands directly from XR’s shell to verify state of the third-party app container
  • SSH to newly launched demo container over port 58822.
  • Start the gpb receiver on XR loopback inside the container


TASK 4.4.1 - Use  "virsh" commands to verify Third-Party container installation

  Pre-requisites

  • Chef run would have launched an ubuntu container on the XRv9k instance.
  • This ubuntu container is launched with the name "demo"

Student Tasks

  • Run the virsh list command from XR linux shell to check that the container is running

 

RP/0/RP0/CPU0:pod-rtr#run
Sun Dec  6 09:10:34.816 UTC

[xr-vm_node0_RP0_CPU0:~]$virsh -c lxc+tcp://10.11.12.15:16509 list
 Id    Name                           State
----------------------------------------------------
 5536  sysadmin                       running
 6019  default-sdr__uvf--2            running
 12055 default-sdr--1                 running
 28503 demo                           running

[xr-vm_node0_RP0_CPU0:~]$
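Beyond virsh list, other read-only virsh subcommands can be pointed at the same libvirt LXC endpoint to inspect the container. A hedged sketch:

# Basic information about the "demo" container
virsh -c lxc+tcp://10.11.12.15:16509 dominfo demo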

 

TASK 4.4.2 - SSH to container and start GPB Telemetry receiver

Pre-requisites

  • Ubuntu container "demo" is already running and has SSH port open on port 58822
  • XR's third party network namespace (tpnns) is available inside the "demo" container

Student Tasks

  • SSH into the "demo" container from outside the router (POD shell) using XR's loopback address and port 58822

The credentials for the container are: username: cisco, password: cisco123

cisco@pod18:~$ ssh -p 58822 cisco@1.1.1.1
The authenticity of host '[1.1.1.1]:58822 ([1.1.1.1]:58822)' can't be established.
ECDSA key fingerprint is 30:2f:f2:a8:12:db:46:04:fa:fd:74:51:9e:85:63:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[1.1.1.1]:58822' (ECDSA) to the list of known hosts.
cisco@1.1.1.1's password:
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Last login: Mon Nov  9 05:03:24 2015
cisco@gpb:~$

 

  • Become root inside the container

The environment is set up for the root user. Don't worry, being root inside the container doesn't make you root on XR. The sudo password for the user cisco is cisco123

cisco@gpb:~$ sudo -s
[sudo] password for cisco:
root@gpb:~#

 

  • Run ifconfig inside the container to see the available interfaces:

Remember, XR interfaces (Mgmt, Gig, loopback etc.) that are part of XR tpnns are shared with the container. Only the XR interfaces that are up and have an address configured will be visible inside tpnns

root@gpb:~# ifconfig

Mg0_RP0_CPU0_0 Link encap:Ethernet  HWaddr 52:46:48:51:a0:19 

          inet addr:192.168.122.40  Mask:255.255.255.0

          inet6 addr: fe80::5046:48ff:fe51:a019/64 Scope:Link

          UP RUNNING NOARP MULTICAST  MTU:1514  Metric:1

          RX packets:93835 errors:0 dropped:0 overruns:0 frame:0

          TX packets:37858 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:141117814 (141.1 MB)  TX bytes:2596790 (2.5 MB)

 

 

fwd_ew    Link encap:Ethernet  HWaddr 00:00:00:00:00:0b 

          inet6 addr: fe80::200:ff:fe00:b/64 Scope:Link

          UP RUNNING NOARP MULTICAST  MTU:1500  Metric:1

          RX packets:6 errors:0 dropped:0 overruns:0 frame:0

          TX packets:2 errors:0 dropped:1 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:660 (660.0 B)  TX bytes:140 (140.0 B)

 

 

fwdintf   Link encap:Ethernet  HWaddr 00:00:00:00:00:0a 

          inet6 addr: fe80::200:ff:fe00:a/64 Scope:Link

          UP RUNNING NOARP MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:12 errors:0 dropped:1 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:904 (904.0 B)

 

 

lo        Link encap:Local Loopback 

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:28 errors:0 dropped:0 overruns:0 frame:0

          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:2272 (2.2 KB)  TX bytes:2272 (2.2 KB)

 

 

lo:0      Link encap:Local Loopback 

          inet addr:1.1.1.1  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

 

 

lo:2      Link encap:Local Loopback 

          inet addr:9.9.9.9  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

 

  • Since we want to run a telemetry receiver on the box, we select an XR IP address itself to direct the telemetry data to. Let's select the Loopback2 address (9.9.9.9) for this purpose and run the following command:

Don't worry about this step. Just run the receiver for now. We will circle back to the importance of this on-box telemetry receiver later in the lab.

root@gpb:~# cd gpb_receiver/

root@gpb:~/gpb_receiver# python gpb_receiver.py --ip-address 9.9.9.9 --port 5555 --tmp-dir /tmp/ --proto bgp_instances.proto bgp_instances_vrf.proto memory_summary.proto

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 1

TIMER_DONE =  0

 

Excellent! Now we're all set with the on-box receiver. On to the BGP session and actual setup of Streaming Telemetry.

 


EXERCISE 5 - Manageability

 

 

Objectives

  • Configuration of a BGP routing process and associated neighbor using Cisco BGP Yang and Open Config data models
  • Verification of BGP router configuration and operational data using Cisco BGP Yang and Open Config data models
  • SMU dependency management and verification
  • Installation of SMU RPMs and validation of automatic dependencies check

 

TASK 5.1 - Programmatic BGP peer configuration

Pre-requisites

The following is provided per POD:

  • ROUTEM - a Cisco internal routing emulator tool that runs as a Linux process in the POD
  • A Netconf client application that can run as a script (netconf_client.py python script)
  • Sample XML to configure the required BGP configuration (provided in the labguide)

Students MUST have completed the following tasks before proceeding:

  • TASK 2.1 - Package Installation (to be able to use NETCONF)

Student Tasks

  • Configure Netconf agent in XR console
  • Edit configuration using Cisco BGP Yang Data model
    • e.g. BGP routing process
  • Edit configuration using OC BGP Yang Data Model
    • e.g. BGP neighbor  (pointing to ROUTEM)

 

TASK 5.1.1 - Configure NETCONF YANG Agent

 

Configure the NETCONF agent on the router (access over SSH, port 830)

 

netconf-yang agent

ssh

!

ssh server vrf default

ssh server netconf port 830

ssh server rate-limit 600

ssh server session-limit 1024

 

commit

 

Check NETCONF over SSH access into the box

 

cisco@pod18:~$ ssh -l root 1.1.1.1 -p 830 -o PubkeyAuthentication=no -s netconf

root@1.1.1.1's password:

 

<snip>

 

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<capabilities>

<capability>urn:ietf:params:netconf:base:1.1</capability>

<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring</capability>

……………

<capability>http://openconfig.net/yang/bgp?module=bgp&revision=2014-03-05</capability>

<capability>http://openconfig.net/yang/bgp-policy?module=bgp-policy&revision=2014-11-30</capability>

</capabilities>

 

</snip>

 

]]>]]>

 

Check out the following two files on the desktop.

On clicking them, the XML encoding for the YANG models pop up as shown:

 

 

 

TASK 5.1.2 - Connect to XR over ssh port 830 using the netconf_client.py Script

 

The netconf_client.py script is made available in the /home/cisco directory on the pod. Use the "-h" option to get help with arguments for the script.

The password is the same as the SSH password for user root, namely:  "lab"

 

cisco@pod18:~$ pwd

/home/cisco

cisco@pod18:~$ ./netconf_client.py -h

Usage: ./netconf_client.py (ssh | tcp) [<agent address>] [1.1 | 1.0] [<port>] [<user name>]

cisco@pod18:~$

cisco@pod18:~$

cisco@pod18:~$

cisco@pod18:~$ ./netconf_client.py ssh 1.1.1.1 1.1 830 root

Connecting to the NETCONF agent using the SSH protocol at 1.1.1.1:830.

User root.

Using NETCONF version 1.1.

Response timeout value is 60 seconds.

Request exec count 1.

Connected to NETCONF agent. Waiting for <hello> message...

root@1.1.1.1's password:

 

 

------------ Received from NETCONF agent ---Mon Dec  7 04:10:11 2015---------

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<capabilities>

  <capability>urn:ietf:params:netconf:base:1.1</capability>

  <capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring</capability>

  <capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>

  <capability>urn:ietf:params:netconf:capability:rollback-on-error:1.0</capability>

  <capability>urn:ietf:params:netconf:capability:validate:1.1</capability>

  <capability>urn:ietf:params:netconf:capability:confirmed-commit:1.1</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-pmengine-oper?module=Cisco-IOS-XR-pmengine-oper&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-snmp-entitymib-cfg?module=Cisco-IOS-XR-snmp-entitymib-cfg&revision=2015-01-07</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg?module=Cisco-IOS-XR-ifmgr-cfg&revision=2015-07-30</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ip-static-cfg?module=Cisco-IOS-XR-ip-static-cfg&revision=2015-09-10</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-pmengine-cfg?module=Cisco-IOS-XR-pmengine-cfg&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-ospf-oper?module=Cisco-IOS-XR-ipv4-ospf-oper&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-io-oper?module=Cisco-IOS-XR-ipv4-io-oper&revision=2015-10-20</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-arp-cfg?module=Cisco-IOS-XR-ipv4-arp-cfg&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-segment-routing-ms-oper?module=Cisco-IOS-XR-segment-routing-ms-oper&revision=2015-11-09</capability>

 

  ..........................................................<snip> .............................................................

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-parser-cfg?module=Cisco-IOS-XR-parser-cfg&revision=2015-06-02</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-ip-rib-ipv4-oper?module=Cisco-IOS-XR-ip-rib-ipv4-oper&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-mpls-lsd-oper?module=Cisco-IOS-XR-mpls-lsd-oper&revision=2015-11-09</capability>

  <capability>http://cisco.com/ns/yang/Cisco-IOS-XR-lib-mpp-cfg?module=Cisco-IOS-XR-lib-mpp-cfg&revision=2015-07-30</capability>

  <capability>http://openconfig.net/yang/bgp?module=bgp&revision=2015-05-15&deviation=cisco-xr-bgp-deviations</capability>

  <capability>http://openconfig.net/yang/bgp-multiprotocol?module=bgp-multiprotocol&revision=2015-05-15</capability>

  <capability>http://openconfig.net/yang/bgp-operational?module=bgp-operational&revision=2015-05-15</capability>

  <capability>http://openconfig.net/yang/bgp-policy?module=bgp-policy&revision=2015-05-15&deviation=cisco-xr-bgp-policy-deviations</capability>

  <capability>http://openconfig.net/yang/bgp-types?module=bgp-types&revision=2015-05-15</capability>

  <capability>http://openconfig.net/yang/routing-policy?module=routing-policy&revision=2015-05-15&deviation=cisco-xr-routing-policy-deviations</capability>

  <capability>http://openconfig.net/yang/policy-types?module=policy-types&revision=2015-05-15</capability>

  <capability>http://cisco.com/ns/yang/cisco-xr-bgp-deviations?module=cisco-xr-bgp-deviations&revision=2015-09-16</capability>

  <capability>http://cisco.com/ns/yang/cisco-xr-bgp-policy-deviations?module=cisco-xr-bgp-policy-deviations&revision=2015-09-16</capability>

  <capability>http://cisco.com/ns/yang/cisco-xr-routing-policy-deviations?module=cisco-xr-routing-policy-deviations&revision=2015-09-16</capability>

</capabilities>

<session-id>2352633651</session-id>

</hello>

]]>]]>

-------------------------------------------------------

 

 

<hello> message was received from NETCONF agent.

Now sending our <hello> message...

 

 

--------------- Sent to NETCONF agent --------Mon Dec  7 04:10:11 2015-------

 

 

<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

  <capabilities>

    <capability>urn:ietf:params:netconf:base:1.1</capability>

  </capabilities>

</hello>

]]>]]>

-------------------------------------------------------

 

 

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

 

 

To understand how the script works, let's do a sample run.

As part of the basic netconf repertoire, we have access to a simple get-config request

 

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <get-config>

        <source>

            <running/>

        </source>

    </get-config>

</rpc>

We have to paste this request in the interactive prompt provided by the netconf_client.py script.

 

To identify the end of a request, the script needs a delimiter. We use the "##" delimiter in the next line at the end of the request to submit it for the call.

This is shown below in red. Every request MUST end with this delimiter.

 

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <get-config>

        <source>

            <running/>

        </source>

    </get-config>

</rpc>

##

 

You'll get the following output:

 

 

-------------------------------------------------------

 

 

----------- Received from NETCONF agent ---Tue Dec  8 16:29:52 2015---------

 

 

#2367

<?xml version="1.0"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<data>

  <interface-configurations xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ifmgr-cfg">

   <interface-configuration>

    <active>act</active>

    <interface-name>Loopback0</interface-name>

    <interface-virtual></interface-virtual>

    <ipv4-network xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-io-cfg">

     <addresses>

      <primary>

       <address>1.1.1.1</address>

       <netmask>255.255.255.255</netmask>

      </primary>

     </addresses>

    </ipv4-network>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>Loopback1</interface-name>

    <interface-virtual></interface-virtual>

    <ipv4-network xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-io-cfg">

     <addresses>

      <primary>

       <address>8.8.8.8</address>

       <netmask>255.255.255.255</netmask>

      </primary>

     </addresses>

    </ipv4-network>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>Loopback2</interface-name>

    <interface-virtual></interface-virtual>

    <ipv4-network xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-io-cfg">

     <addresses>

      <primary>

       <address>9.9.9.9</address>

       <netmask>255.255.255.255</netmask>

      </primary>

     </addresses>

    </ipv4-network>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>MgmtEth0/RP0/CPU0/0</interface-name>

    <ipv4-network xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-io-cfg">

     <addresses>

      <primary>

       <address>192.168.122.40</address>

       <netmask>255.255.255.0</netmask>

      </primary>

     </addresses>

    </ipv4-network>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>GigabitEthernet0/0/0/0</interface-name>

    <shutdown></shutdown>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>GigabitEthernet0/0/0/1</interface-name>

    <shutdown></shutdown>

   </interface-configuration>

   <interface-configuration>

    <active>act</active>

    <interface-name>GigabitEthernet0/0/0/2</interface-name>

    <shutdown></shutdown>

   </interface-configuration>

  </interface-configurations>

 

 

#412

  <router-static xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ip-static-cfg">

   <default-vrf>

    <address-family>

     <vrfipv4>

      <vrf-unicast>

       <vrf-prefixes>

        <vrf-prefix>

         <prefix>2.2.2.2</prefix>

         <prefix-length>32</prefix-length>

        </vrf-prefix>

       </vrf-prefixes>

      </vrf-unicast>

     </vrfipv4>

    </address-family>

   </default-vrf>

  </router-static>

 

 

#260

  <netconf-yang xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-man-netconf-cfg">

   <agent>

    <ssh>

     <enable></enable>

    </ssh>

   </agent>

  </netconf-yang>

  <cdp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-cdp-cfg">

   <enable>true</enable>

  </cdp>

 

 

#93

  <host-name xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-shellutil-cfg">pod-rtr</host-name>

 

 

#220

  <aaa xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-lib-cfg">

   <usernames xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-aaa-locald-cfg">

    <username>

     <name>0x1</name>

    </username>

   </usernames>

  </aaa>

 

 

#234

  <crypto xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-crypto-sam-cfg">

   <ssh xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-crypto-ssh-cfg">

    <server>

     <v2></v2>

     <netconf>830</netconf>

    </server>

   </ssh>

  </crypto>

 

 

#22

</data>

</rpc-reply>

 

 

##

------------------------------------------------------

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

 

 

Perfect!  Now you know how to use the netconf_client.py script to connect to the router and issue netconf XML requests over SSH.
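
As an aside, the same NETCONF exchange can also be scripted rather than pasted by hand. Below is a minimal sketch using the open-source ncclient Python library; ncclient is NOT part of the lab tooling, and the management address, username and password are placeholders you would swap for your pod's values.

# Minimal sketch, assuming "pip install ncclient" and the NETCONF-over-SSH
# agent on port 830 (as configured in the crypto/ssh section of the dump above).
from ncclient import manager

# Harmless example payload: (re)set the hostname leaf that appeared in the
# get-config reply above (Cisco-IOS-XR-shellutil-cfg model).
HOSTNAME_CONFIG = """
<config>
  <host-name xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-shellutil-cfg">pod-rtr</host-name>
</config>
"""

with manager.connect(host="192.168.122.40", port=830,
                     username="cisco", password="cisco123",
                     hostkey_verify=False) as m:
    # Stage the change in the candidate datastore, then commit it.
    m.edit_config(target="candidate", config=HOSTNAME_CONFIG)
    m.commit()
    # Read back the full running configuration, just like the dump above.
    print(m.get_config(source="running"))

The netconf_client.py script does essentially the same job interactively: it wraps your pasted XML, ships it over the SSH subsystem and prints the RPC reply.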

 

TASK 5.1.3 - Configure BGP routing process using Open Config Data Model

 

Now let's configure a BGP routing process and enable the IPv4 unicast address family.

Copy and paste the following XML into the prompt provided by the netconf_client.py script.

Make sure you add the "##" delimiter at the end, as shown.

 

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<edit-config>

  <target>

   <candidate/>

  </target>

  <config>

  <bgp xmlns="http://openconfig.net/yang/bgp">

   <global>

    <config>

     <as>65000</as>

    </config>

    <afi-safis>

     <afi-safi>

      <afi-safi-name>ipv4-unicast</afi-safi-name>

      <config>

       <enabled>true</enabled>

      </config>

     </afi-safi>

    </afi-safis>

   </global>

  </bgp>

  </config>

</edit-config>

</rpc>

##

 

When you paste this into the netconf_client.py script prompt, you should get an "ok" in the rpc reply as shown below:

 

-------------------------------------------------------

 

 

----------- Received from NETCONF agent ---Wed Dec  9 02:48:29 2015---------

 

 

 

 

#119

<?xml version="1.0"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<ok/>

</rpc-reply>

 

 

##

------------------------------------------------------

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

 

 

You MUST issue a "commit" for the configuration to actually take effect. The script will automatically convert the "commit" keyword into an RPC request for commit.

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

commit

 

 

--------------- Sent to NETCONF agent ------Wed Dec  9 02:48:40 2015---------

 

 

#91

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <commit/>

</rpc>

##

 

 

-------------------------------------------------------

 

 

----------- Received from NETCONF agent ---Wed Dec  9 02:48:45 2015---------

 

 

 

 

#119

<?xml version="1.0"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<ok/>

</rpc-reply>

 

 

##

------------------------------------------------------

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

 

 

To assuage any apprehensions, let's check the config on the router, the old school way:

 

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#show  running-config router bgp

Wed Dec  9 10:59:28.720 UTC

router bgp 65000

address-family ipv4 unicast

!

!

 

 

RP/0/RP0/CPU0:pod-rtr#

 

TASK 5.1.4 - Configure BGP neighbor using Cisco BGP YANG Data Model

Configure a BGP neighbor programmatically using the Cisco BGP YANG data model.

Copy and paste the following XML into the prompt provided by netconf_client.py.

 

 

STOP STOP STOP!

 

The IP address of ROUTEM (the BGP neighbor) is NOT disclosed in the lab guide instructions below (it is indicated as X.X.X.X).

To discover the address, you need to install some SMUs that unlock an easter-egg show command.

Jump to the following task to do so:

TASK 5.2 - SMU Installation and Automatic Dependency Management

 

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<edit-config>

  <target>

   <candidate/>

  </target>

  <config>

  <bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-bgp-cfg">

   <instance>

    <instance-name>default</instance-name>

    <instance-as>

     <as>0</as>

     <four-byte-as>

      <as>65000</as>

      <bgp-running/>

      <default-vrf>

       <bgp-entity>

        <neighbors>

         <neighbor>

          <neighbor-address>X.X.X.X</neighbor-address>

          <remote-as>

           <as-xx>0</as-xx>

           <as-yy>65000</as-yy>

          </remote-as>

          <update-source-interface>Loopback0</update-source-interface>

          <neighbor-afs>

           <neighbor-af>

            <af-name>ipv4-unicast</af-name>

            <activate/>

           </neighbor-af>

          </neighbor-afs>

         </neighbor>

        </neighbors>

       </bgp-entity>

      </default-vrf>

     </four-byte-as>

    </instance-as>

   </instance>

  </bgp>

  </config>

</edit-config>

</rpc>

 

 

TASK 5.2 - SMU Installation and Automatic Dependency Management

 

 

Pre-requisites

The following is provided per POD:

  • SMU1: xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm. This is the SMU we intend to install
  • SMU2: xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm. This SMU is a dependency for SMU1.
  • SMU3: xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm. This SMU is a dependency for SMU2.
  • A Python web server running in a directory, serving as a remote repository for these SMUs

Student Tasks

  • Install SMU1 for DDTS: CSCaa22222 using the new "install update" command
  • Observe how dependencies (SMU2 and subsequently SMU3 for DDTS CSCaa11111) are automatically detected, downloaded and installed.
  • Run the "magic" show command to discover ROUTEM IP address: "show routem address"

 

 

TASK 5.2.1 - Analyse SMU dependencies

 

Before we install the SMU, let's take a look at the dependencies for each SMU.

Since SMUs are now just RPMs, you don't need a Cisco-created tool to analyze dependencies off-box. Any Linux server with rpm installed works just fine.

 

As shown below, SMU1 (xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm) depends on xrv9k-iosxr-infra = 1.0.0.1, so SMU2 (xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm) will automatically be detected and downloaded as a dependency.

cisco@pod18:~/web_server$ rpm -qpR xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm

/bin/sh

/bin/sh

/bin/sh

/bin/sh

xrv9k-iosxr-fwding = 1.0.0.0

xrv9k-iosxr-infra >= 1.0.0.0

xrv9k-iosxr-infra = 1.0.0.1

xrv9k-iosxr-infra < 2.0.0.0

xrv9k-iosxr-os >= 1.0.0.0

xrv9k-iosxr-os < 2.0.0.0

cisco@pod18:~/web_server$

Similarly, take a look at the dependencies of SMU2; you'll see that it depends on SMU3.

cisco@pod18:~/web_server$ rpm -qpR xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

/bin/sh

/bin/sh

/bin/sh

/bin/sh

xrv9k-iosxr-fwding = 1.0.0.1

xrv9k-iosxr-infra = 1.0.0.0

xrv9k-iosxr-os >= 1.0.0.0

xrv9k-iosxr-os < 2.0.0.0

cisco@pod18:~/web_server$

 

Let's check if we can access the show command before installation:

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#show  routem address

                                 ^

% Invalid input detected at '^' marker.

RP/0/RP0/CPU0:pod-rtr#

 

TASK 5.2.2 - Install SMU

 

Now let's install SMU1: xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm.

Based on the dependency check above, the SMUs will be fetched and installed in the order SMU1 --> SMU2 --> SMU3.

 

RP/0/RP0/CPU0:pod-rtr#install update source  http://192.168.122.1:8080/ xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm

Sun Dec  6 20:11:53.877 UTC

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Update in progress...

Scheme : http

Hostname : 192.168.122.1:8080

Collecting software state..

 

 

Update packages :

  xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm

Fetching .... xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm

 

 

Update packages :

  xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

Fetching .... xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

 

 

Update packages :

  xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

Fetching .... xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

Adding packages

  xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

  xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm

  xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

Dec 06 20:12:11 Install operation 13 started by root:

install add source /misc/disk1/install_tmp_staging_area/6.0.0.22I xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222.x86_64.rpm xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111.x86_64.rpm

Dec 06 20:12:12 Install operation will continue in the background

Dec 06 20:12:16 Install operation 13 finished successfully

 

 

Install add operation successfull

Activating xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111 xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111 xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222

Dec 06 20:12:17 Install operation 14 started by root:

  install activate pkg xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111 xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111 xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222

Dec 06 20:12:17 Package list:

Dec 06 20:12:17     xrv9k-iosxr-fwding-1.0.0.1-r60022I.CSCaa11111

Dec 06 20:12:17     xrv9k-iosxr-infra-1.0.0.1-r60022I.CSCaa11111

Dec 06 20:12:17     xrv9k-iosxr-fwding-1.0.0.2-r60022I.CSCaa22222

Dec 06 20:12:21 Install operation will continue in the background

RP/0/RP0/CPU0:pod-rtr#Dec 06 20:14:40 Install operation 14 finished successfully

 

 

TASK 5.2.3 - Discover ROUTEM IP address

 

Once the install operation is successful, let's see if we are served up a new show command:

 

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#show  routem address

Sun Dec  6 22:08:14.829 UTC

Address of routem peer is 2.2.2.2

RP/0/RP0/CPU0:pod-rtr#

Perfect!  Now you're all set to configure the BGP neighbor using NETCONF and the YANG data models. Continue with the next task:

 

TASK 5.2.4 - Configure BGP neighbor using ROUTEM's IP Address

 

Now with the BGP peer address known, let's edit our XML request before pasting it into the netconf_client.py prompt.

Don't forget the "##" delimiter.

 

<?xml version="1.0" encoding="UTF-8"?>

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<edit-config>

  <target>

   <candidate/>

  </target>

  <config>

  <bgp xmlns="http://cisco.com/ns/yang/Cisco-IOS-XR-ipv4-bgp-cfg">

   <instance>

    <instance-name>default</instance-name>

    <instance-as>

     <as>0</as>

     <four-byte-as>

      <as>65000</as>

      <bgp-running/>

      <default-vrf>

       <bgp-entity>

        <neighbors>

         <neighbor>

          <neighbor-address>2.2.2.2</neighbor-address>

          <remote-as>

           <as-xx>0</as-xx>

           <as-yy>65000</as-yy>

          </remote-as>

          <update-source-interface>Loopback0</update-source-interface>

          <neighbor-afs>

           <neighbor-af>

            <af-name>ipv4-unicast</af-name>

            <activate/>

           </neighbor-af>

          </neighbor-afs>

         </neighbor>

        </neighbors>

       </bgp-entity>

      </default-vrf>

     </four-byte-as>

    </instance-as>

   </instance>

  </bgp>

  </config>

</edit-config>

</rpc>

##

 

Again issue a commit at the end:

 

-------------------------------------------------------

 

 

----------- Received from NETCONF agent ---Wed Dec  9 03:16:32 2015---------

 

 

 

 

#119

<?xml version="1.0"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<ok/>

</rpc-reply>

 

 

##

------------------------------------------------------

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

commit

 

 

--------------- Sent to NETCONF agent ------Wed Dec  9 03:16:35 2015---------

 

 

#91

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

    <commit/>

</rpc>

##

 

 

-------------------------------------------------------

 

 

----------- Received from NETCONF agent ---Wed Dec  9 03:16:36 2015---------

 

 

 

 

#119

<?xml version="1.0"?>

<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">

<ok/>

</rpc-reply>

 

 

##

------------------------------------------------------

 

 

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

 

 

Finally in the XR console, issue a show run on router bgp to check the changes made:

 

 

RP/0/RP0/CPU0:pod-rtr#show  running-config  router bgp

Wed Dec  9 11:19:09.841 UTC

router bgp 65000

address-family ipv4 unicast

!

neighbor 2.2.2.2

  remote-as 65000

  update-source Loopback0

  address-family ipv4 unicast

  !

!

!

 

 

RP/0/RP0/CPU0:pod-rtr#

 

TASK 5.3 - Programmatic BGP config verification

 

 

Student Tasks

  • Get BGP configuration using Yang

 

TASK 5.3.1 - Retrieve BGP Configuration using Open Config Data Model

To check that the configuration went through, let's use the OpenConfig BGP (OC-BGP) model to retrieve a filtered view of the BGP config.

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):

<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get-config><source><running/></source><filter type="subtree"><ns0:bgp xmlns:ns0="http://openconfig.net/yang/bgp"/></filter></get-config></rpc>
##

--------------- Sent to NETCONF agent ------Wed Dec  9 03:20:18 2015---------

#222
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"><get-config><source><running/></source><filter type="subtree"><ns0:bgp xmlns:ns0="http://openconfig.net/yang/bgp"/></filter></get-config></rpc>
##

-------------------------------------------------------

----------- Received from NETCONF agent ---Wed Dec  9 03:20:18 2015---------

#925
<?xml version="1.0"?>
<rpc-reply message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
 <data>
  <bgp xmlns="http://openconfig.net/yang/bgp">
   <global>
    <config>
     <as>65000</as>
    </config>
    <afi-safis>
     <afi-safi>
      <afi-safi-name>ipv4-unicast</afi-safi-name>
      <config>
       <afi-safi-name>ipv4-unicast</afi-safi-name>
       <enabled>true</enabled>
      </config>
     </afi-safi>
    </afi-safis>
   </global>
   <neighbors>
    <neighbor>
     <neighbor-address>2.2.2.2</neighbor-address>
     <config>
      <neighbor-address>2.2.2.2</neighbor-address>
      <peer-as>65000</peer-as>
     </config>
     <afi-safis>
      <afi-safi>
       <afi-safi-name>ipv4-unicast</afi-safi-name>
       <config>
        <afi-safi-name>ipv4-unicast</afi-safi-name>
        <enabled>true</enabled>
       </config>
      </afi-safi>
     </afi-safis>
    </neighbor>
   </neighbors>
  </bgp>

#22
 </data>
</rpc-reply>

##
------------------------------------------------------

Ready to send a request.

Paste your request or enter 'get', 'get-config', 'commit', or 'bye' to quit):
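
The same filtered get-config can be scripted as well. Here is a rough equivalent using the ncclient library (again, not part of the lab tooling; host and credentials are placeholders):

# Sketch: retrieve only the openconfig-bgp subtree of the running config.
from ncclient import manager

BGP_SUBTREE = '<bgp xmlns="http://openconfig.net/yang/bgp"/>'

with manager.connect(host="192.168.122.40", port=830,
                     username="cisco", password="cisco123",
                     hostkey_verify=False) as m:
    reply = m.get_config(source="running",
                         filter=("subtree", BGP_SUBTREE))
    print(reply)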

Awesome! We've configured BGP on XR using both the OpenConfig BGP and the Cisco XR BGP YANG models. Let that sink in, and think about the automation possibilities these industry-standard techniques open up for XR configuration.

TASK 5.4 - Set up the BGP session

  Student Tasks

  • Set up the BGP speaker (ROUTEM) configuration to send 3500 prefixes to XR
  • Start the ROUTEM process and establish the BGP session

 

The routem configuration is stored in a file called  "routem.cfg" in the /home/cisco/ directory of the pod.

A soft link to the file is also present on the desktop of your POD.

 

Click on the file to open it. We will use this setup to edit the file in the next few stages of the lab.

 

 

You can clearly see that the file is set up for routem to send 3500 prefixes to XR over a BGP session.

We use this config file to start the routem process as follows from the POD shell:

 

 

cisco@pod18:~$ ./routem.june04.2008 -f routem.cfg -d0

Version 2.1(8)(routem_june04_2008) by khphan on Wed Jun 4 11:03:24 PDT 2008

Copyright (c) 1998-1999, 2002-2003, 2007 by cisco Systems, Inc.

All rights reserved.

This is a Cisco internal test tool. It is not to be used outside of

Cisco except at the request of Cisco Engineering.

http://wwwin-routem.cisco.com

send feature request and questions to routem-support@cisco.com

ROUTEM:start reading config file  :Wed Dec  9 04:06:21 2015

(bgp:1 ospf:0 isis:0 bfd:0 tcp:0 msdp:0 traff:0)

ROUTEM:finish reading config file :Wed Dec  9 04:06:21 2015

 

 

ROUTEM:Try TCP non-blocking connect ...

ROUTEM:Wed Dec  9 04:06:22 2015

ROUTEM:[id=0, local=2.2.2.2(43875), peer=1.1.1.1(0)]

ROUTEM:BGP:Connect 0 secs, connect:0, ht:0, TCP rcvwnd:32768

ROUTEM:BGP[id=0] sock = 5 : Operation now in progress

 

 

ROUTEM:waiting to finish bgp connection ...

.....................<snip>................................

ROUTEM:<-- OPEN received

 

 

ROUTEM:-->Send KEEPALIVE

ROUTEM:Wed Dec  9 04:06:22 2015

ROUTEM:[id=0, local=2.2.2.2(43875), peer=1.1.1.1(179)]

ROUTEM:BGP:OpenSent 0 secs, connect:0, ht:0, TCP rcvwnd:32768

 

 

ROUTEM:waiting to finish bgp connection ...

ROUTEM:Wed Dec  9 04:06:22 2015

ROUTEM:[id=0, local=2.2.2.2(43875), peer=1.1.1.1(179)]

ROUTEM:BGP:OpenConfirm 0 secs, connect:0, ht:0, TCP rcvwnd:32768

 

 

ROUTEM:<-- KEEPALIVE received

ROUTEM:Wed Dec  9 04:06:22 2015

ROUTEM:[id=0, local=2.2.2.2(43875), peer=1.1.1.1(179)]

ROUTEM:BGP:OpenConfirm 0 secs, connect:0, ht:0, TCP rcvwnd:32768

 

 

ROUTEM:<-->BGP connection: OK

 

.........................</snip>.............................

 

 

 

Now hop back to the XR console and use the "show bgp scale" command to view the number of prefixes inserted by the ROUTEM process.

 

 

RP/0/RP0/CPU0:pod-rtr#show  bgp scale

Wed Dec  9 12:12:15.660 UTC

 

 

VRF: default

Neighbors Configured: 1      Established: 1    

 

 

Address-Family   Prefixes Paths    PathElem   Prefix     Path       PathElem 

                                               Memory     Memory     Memory 

  IPv4 Unicast    3500     3500     3500       543KB      300KB      328KB     

  ------------------------------------------------------------------------------

  Total           3500     3500     3500       543KB      300KB      328KB     

 

 

Total VRFs Configured: 0

 

 

RP/0/RP0/CPU0:pod-rtr#

 

 

Excellent. We now have a BGP session running with 3500 prefixes being pushed into XR RIB.

 


EXERCISE 6 - Streaming Telemetry

 

 

Objectives

  • Configure the Streaming Telemetry session between XR and a receiver located in the third-party app container on XR (aka "on-box" telemetry session)
  • Configure the Streaming Telemetry session between XR and a receiver located in the POD VM (aka "external or ELK" telemetry session)
  • Vary the BGP prefixes injected by ROUTEM.
  • View the actions/events taken by on-box telemetry receiver in response to changing BGP prefixes
  • View the automatic variation in the data received by the "external" receiver  due to events generated by on-box telemetry receiver.

 

TASK 6.1 - Streaming Telemetry Configuration

 

 

Proctor Notes

  • Need to share diagram depicting "local" and "external" telemetry sessions

Pre-requisites

The following is provided per POD:

  • Docker containers  (on the POD) running ELK stack (Elasticsearch, Logstash and Kibana) aka the "external" receiver for Streaming Telemetry.
  • An "on-box" Streaming Telemetry receiver located in a third-party app container on XR (used to collect BGP prefixes learnt by the router)
  • Some monitoring code as part of on-box receiver to create events based on a "Prefix Threshold = 5000"
  • Streaming Telemetry policy and map files, already placed in the right location on XR by chef.

Student MUST have completed the following tasks before proceeding:

  • TASK 4.2 - Chef Client Installation and run
  • TASK 4.4 - Third-Party App Container Installation - this LXC contains the on-box Telemetry receiver + monitoring agent
  • TASK 5.1 - Programmatic BGP peer configuration

Student Tasks

  • Configure "local" telemetry session - between XR and a receiver located in the third-party app container
    • Encapsulation: Google Protocol Buffer (GPB) (which runs over UDP)
    • IP Address of telemetry receiver: any IP address configured on the router (except Loopback 1)
    • The policy file of this session contains the following:
      • paths to monitor total BGP prefix learnt
      • monitoring cadence: five (5) seconds
  • Configure "external" telemetry session - between XR and a receiver (ELK stack) located in the VM POD
    • Encapsulation: JSON over TCP
    • IP Address of telemetry receiver: 192.168.122.1 (IP address of container hosting ELK stack) on port 2104.
    • The base policy file of this session contains the following:
      • paths to monitor total BGP prefix learnt and BGP memory statistics
      • monitoring cadence: thirty (30) seconds
  • Event based Policy files located in /root/ directory of XR (setup by chef earlier)

 

 

TASK 6.1.1 - Understanding the Streaming Telemetry policy files

 

Drop down to XR bash and move to the /telemetry/policies directory. You will see two policy files

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#run

Wed Dec  9 15:48:21.833 UTC

 

 

[xr-vm_node0_RP0_CPU0:~]$

[xr-vm_node0_RP0_CPU0:~]$cd /telemetry/policies

[xr-vm_node0_RP0_CPU0:/telemetry/policies]$ls

onbox.policy  prefix.policy

[xr-vm_node0_RP0_CPU0:/telemetry/policies]$

 

   onbox.policy --> Meant for on-box GPB receiver

   prefix.policy  --> Meant for ELK stack (external) receiver

Now, drop into the /root directory. Here you will see two more policy files; these are used for the event mechanism later on. Let's compare the two:

[xr-vm_node0_RP0_CPU0:/telemetry/policies]$

[xr-vm_node0_RP0_CPU0:/telemetry/policies]$cd /root

[xr-vm_node0_RP0_CPU0:/root]$ls  | grep policy

high_prefix.policy

low_prefix.policy

[xr-vm_node0_RP0_CPU0:/root]$cat high_prefix.policy

{

"Name": "prefix",

"Metadata": {

     "Version": 25,

    "Description": "High prefix-count policy",

     "Comment": "Send show memory, BGP-config and BGP scale/performance stats to ELK every 5 seconds",

     "Identifier": "202"},

"CollectionGroups": {

     "FirstGroup": {

         "Period": 5,

         "Paths": [

             "RootOper.BGP.BPMInstancesTable.BPMInstances",

             "RootOper.MemorySummary.Node(*).Summary",

             "RootOper.BGP.Instance({'InstanceName': 'default'}).InstanceActive.DefaultVRF.AF({'AFName': 'IPv4Unicast'}).AFProcessInfo({'ProcessID': 0})"

         ]

     }

}

}

 

[xr-vm_node0_RP0_CPU0:/root]$cat low_prefix.policy

{

"Name": "prefix",

"Metadata": {

     "Version": 25,

     "Description": "Low prefix-count policy",

     "Comment": "Only send show memory and BGP-config stats to ELK every 30 seconds",

     "Identifier": "201"},

"CollectionGroups": {

     "FirstGroup": {

         "Period": 30,

         "Paths": [

             "RootOper.BGP.BPMInstancesTable.BPMInstances",

             "RootOper.BGP.Instance({'InstanceName': 'default'}).InstanceActive.DefaultVRF.AF({'AFName': 'IPv4Unicast'}).AFProcessInfo({'ProcessID': 0})"

         ]

     }

}

}

The high_prefix policy shown above works with a cadence of 5 seconds. It also streams 3 paths instead of the low_prefix policy's 2:

"RootOper.MemorySummary.Node(*).Summary"

is the additional path.

 

The basic lowdown on the two policy files is summarized below:

Policy: high_prefix.policy
Location: /root/
Cadence: 5 seconds
Paths:
  "RootOper.BGP.BPMInstancesTable.BPMInstances"
  "RootOper.MemorySummary.Node(*).Summary"
  "RootOper.BGP.Instance({'InstanceName': 'default'}).InstanceActive.DefaultVRF.AF({'AFName': 'IPv4Unicast'}).AFProcessInfo({'ProcessID': 0})"

Policy: low_prefix.policy
Location: /root/
Cadence: 30 seconds
Paths:
  "RootOper.BGP.BPMInstancesTable.BPMInstances"
  "RootOper.BGP.Instance({'InstanceName': 'default'}).InstanceActive.DefaultVRF.AF({'AFName': 'IPv4Unicast'}).AFProcessInfo({'ProcessID': 0})"

 

 

TASK 6.1.2 - Understanding Thresholds and Effect on Policy

 

All policy files used by Streaming Telemetry HAVE to be located in the /telemetry/policies directory. Hence the only active policy files at any moment are:

  • onbox.policy
  • prefix.policy

In this exercise, we are concerned with BGP prefix count as an "event" parameter.

  • The Threshold defined for this exercise is 5000 BGP prefixes
  • The monitoring agent is the gpb_receiver running in the container on XR
  • The monitoring Agent will receive prefix information periodically (every 5 seconds - see onbox.policy)
  • If no_of_prefixes >= 5000, then prefix.policy = high_prefix.policy
  • If no_of_prefixes < 5000, then prefix.policy = low_prefix.policy
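
A minimal sketch of this decision logic in Python, under the assumptions above (5000-prefix threshold, the two policy files delivered to /root/). The function and variable names are illustrative only; this is not the lab's actual agent code:

PREFIX_THRESHOLD = 5000

def choose_policy(no_of_prefixes):
    """Pick the policy file that should be active as /telemetry/policies/prefix.policy."""
    if no_of_prefixes >= PREFIX_THRESHOLD:
        # 5-second cadence, 3 paths (adds the MemorySummary path)
        return "/root/high_prefix.policy"
    # 30-second cadence, 2 paths
    return "/root/low_prefix.policy"

# Example: with ROUTEM injecting 7000 prefixes, the high-cadence policy wins.
print(choose_policy(7000))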

 

 

TASK 6.1.3 - Make sure on-box GPB receiver is running

 

Let's recap: as part of the initial setup, Chef set up a third-party container for us.

Within this container, a GPB receiver was started manually on port 5555.

 

If you don't have this terminal running already, we can quickly start it again.

 

The credentials of the container are :

username: cisco

password: cisco123

 

sudo password:  cisco123

cisco@pod18:~$ ssh cisco@1.1.1.1 -p 58822

cisco@1.1.1.1's password:

Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.14.23-WR7.0.0.2_standard x86_64)

 

 

* Documentation:  https://help.ubuntu.com/

 

 

The programs included with the Ubuntu system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

 

 

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by

applicable law.

 

 

 

 

The programs included with the Ubuntu system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.

 

 

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by

applicable law.

 

 

Last login: Mon Nov  9 05:03:24 2015

cisco@gpb:~$ sudo -s

[sudo] password for cisco:

root@gpb:~#

root@gpb:~# cd gpb_receiver/

root@gpb:~/gpb_receiver# python gpb_receiver.py --ip-address 9.9.9.9 --port 5555 --tmp-dir /tmp/ --proto bgp_instances.proto bgp_instances_vrf.proto memory_summary.proto

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 1

TIMER_DONE =  0

 

 

 

 

 

 

 

TASK 6.1.4 - Configure receiver for the onbox policy

 

We need the following telemetry configuration on the router:

telemetry

encoder gpb

  policy group beta

   mtu 3000

   policy onbox

   destination ipv4 9.9.9.9 port 5555

  !

!

!

end
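
For context, the on-box receiver that this config points to is conceptually just a process listening on UDP port 5555. A bare-bones stand-in is sketched below; the real gpb_receiver.py additionally decodes the GPB payload using the .proto files, which this sketch makes no attempt to do:

import socket

# Listen for the GPB-encoded telemetry datagrams sent to port 5555.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5555))
while True:
    data, addr = sock.recvfrom(65535)
    print("Got message of length:%dbytes from address:%s" % (len(data), addr))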

 

Within the next 5 seconds (the on-box policy cadence) you should see logs appear on the terminal running the container receiver:

 

root@gpb:~/gpb_receiver# python gpb_receiver.py --ip-address 9.9.9.9 --port 5555 --tmp-dir /tmp/ --proto bgp_instances.proto bgp_instances_vrf.proto memory_summary.proto

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 1

TIMER_DONE =  0

Got message of length:1396bytes from address:('9.9.9.9', 5555)

 

 

Encoding:2271560481

Policy Name:onbox

Version:25

Identifier:200

Start Time:Mon Jan 26 23:44:49 1970

End Time:Wed Dec  9 08:24:28 2015

 

 

DEBUG:paramiko.transport:starting thread (client mode): 0x13a4bf50L

DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_1.16.0

DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-OpenSSH_6.6

INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_6.6)

DEBUG:paramiko.transport:kex algos:[u'curve25519-sha256@libssh.org', u'ecdh-sha2-nistp256', u'ecdh-sha2-nistp384', u'ecdh-sha2-nistp521', u'diffie-hellman-group-exchange-sha256', u'diffie-hellman-group-exchange-sha1', u'diffie-hellman-group14-sha1', u'diffie-hellman-group1-sha1'] server key:[u'ssh-rsa', u'ssh-dss', u'ecdsa-sha2-nistp256', u'ssh-ed25519'] client encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] server encrypt:[u'aes128-ctr', u'aes192-ctr', u'aes256-ctr', u'arcfour256', u'arcfour128', u'aes128-gcm@openssh.com', u'aes256-gcm@openssh.com', u'chacha20-poly1305@openssh.com', u'aes128-cbc', u'3des-cbc', u'blowfish-cbc', u'cast128-cbc', u'aes192-cbc', u'aes256-cbc', u'arcfour', u'rijndael-cbc@lysator.liu.se'] client mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac-md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] server mac:[u'hmac-md5-etm@openssh.com', u'hmac-sha1-etm@openssh.com', u'umac-64-etm@openssh.com', u'umac-128-etm@openssh.com', u'hmac-sha2-256-etm@openssh.com', u'hmac-sha2-512-etm@openssh.com', u'hmac-ripemd160-etm@openssh.com', u'hmac-sha1-96-etm@openssh.com', u'hmac-md5-96-etm@openssh.com', u'hmac-md5', u'hmac-sha1', u'umac-64@openssh.com', u'umac-128@openssh.com', u'hmac-sha2-256', u'hmac-sha2-512', u'hmac-ripemd160', u'hmac-ripemd160@openssh.com', u'hmac-sha1-96', u'hmac-md5-96'] client compress:[u'none', u'zlib@openssh.com', u'zlib'] server compress:[u'none', u'zlib@openssh.com', u'zlib'] client lang:[u''] server lang:[u''] kex follows?False

DEBUG:paramiko.transport:Kex agreed: diffie-hellman-group1-sha1

DEBUG:paramiko.transport:Cipher agreed: aes128-ctr

DEBUG:paramiko.transport:MAC agreed: hmac-sha2-256

DEBUG:paramiko.transport:Compression agreed: none

DEBUG:paramiko.transport:kex engine KexGroup1 specified hash_algo <built-in function openssl_sha1>

DEBUG:paramiko.transport:Switch to new keys ...

DEBUG:paramiko.transport:Adding ssh-rsa host key for [1.1.1.1]:57722: a526468d52636856596cf4321d20e274

DEBUG:paramiko.transport:userauth is OK

INFO:paramiko.transport:Authentication (password) successful!

DEBUG:paramiko.transport:[chan 0] Max packet in: 32768 bytes

DEBUG:paramiko.transport:[chan 0] Max packet out: 32768 bytes

DEBUG:paramiko.transport:Secsh channel 0 opened.

DEBUG:paramiko.transport:[chan 0] Sesch channel 0 request ok

DEBUG:paramiko.transport:EOF in transport thread

All good, telemetry policy updated

 

 

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 0

TIMER_DONE =  0

 

 

 

 

Got message of length:1396bytes from address:('9.9.9.9', 5555)

 

 

Encoding:2271560481

Policy Name:onbox

Version:25

Identifier:200

Start Time:Mon Jan 26 23:44:54 1970

End Time:Wed Dec  9 08:24:33 2015

 

 

Prefixes are low and under control, and no action is required

 

 

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 0

TIMER_DONE =  0

 

 

 

 

Got message of length:1396bytes from address:('9.9.9.9', 5555)

 

 

Encoding:2271560481

Policy Name:onbox

Version:25

Identifier:200

Start Time:Mon Jan 26 23:44:59 1970

End Time:Wed Dec  9 08:24:38 2015

 

 

Prefixes are low and under control, and no action is required

 

 

Waiting for message

Current State of variables

PREFIX_THRES_EXCEEDED = 0

BACKOFF_STARTED = 0

TIMER_DONE =  0

 

 

 

 

 

 

 

TASK 6.1.5 - Configure ELK stack (external) receiver for prefix.policy

Enter the following config on the router:

telemetry

encoder json

  policy group alpha

   policy prefix

   destination ipv4 192.168.122.1 port 2104

  !

!

!

end

Logstash, running outside the XR instance on the POD, listens on TCP port 2104.

This is the port specified in the config above.
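
For illustration only, the receiving end is conceptually just a process listening on TCP port 2104. The rough Python stand-in below accepts one connection and prints the raw bytes; in the lab, Logstash fills this role and also decodes the JSON telemetry stream, so treat this purely as a sketch:

import socket

# Accept the router's telemetry connection on the configured port (2104).
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2104))
    srv.listen(1)
    conn, addr = srv.accept()
    print("Router connected from", addr)
    with conn:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            print(chunk)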

 

 

TASK 6.1.6 - View the Kibana Dashboard for the ELK stack (external) receiver

To view the Kibana dashboard, open Firefox inside your POD and browse to

http://192.168.122.1:5601

Let's switch to the pre-loaded dashboard as shown below:

The name of the dashboard is "DashMetaMonitoring"

 

 

 

 

You will be presented with a set of visualizations that showcase prefix count/path count, system_memory, log_count, etc. Play around with the dashboard to better understand what you're dealing with.

 

 

 

TASK 6.2 - Creating events for Streaming Telemetry using on-box agent

 

 

Proctor notes

  • Kibana must be configured with pre-built graphs for the students to visualize; including:
    • BGP prefixes learnt
    • BGP memory statistics
    • Router memory statistics

Pre-requisites

The following is provided per POD:

  • The json-tcp encoder for Telemetry must be configured to point to the ELK stack running in the POD.
  • The gpb encoder for Telemetry must be configured to point to the on-box gpb_receiver running inside the container on XR.

Student Tasks

  • Monitor Telemetry statistics using Kibana by selecting pre-built graphs
    • Observe how streamed data from the router is being received at 30-sec intervals
    • Observe how streamed data from router populates graphs corresponding to BGP stats (prefixes and memory)
    • Observe how NO data is being received for the graphs associated to router's system memory
  • Go to ROUTEM configuration file and increase the number of injected prefixes to 7000
    • The "local" telemetry session monitors total number of prefixes by the router
    • If the number of prefixes learnt exceeds a threshold, then the telemetry event helper modifies the policy file of the external telemetry session; with the following:
      • NEW: paths to monitor Router's system memory
      • MODIFIED: monitoring cadence: five (5) seconds
      • SAME: paths to monitor total BGP prefix learnt and BGP memory statistics
  • Monitor Telemetry statistics using Kibana by selecting pre-built graphs
    • Observe how streamed data from the router is now arriving at 5-sec intervals (instead of the 30-sec intervals earlier)
    • Observe how streamed data from router populates BOTH the graphs corresponding to BGP stats (prefixes and memory) AND new stats from router's system memory

 

TASK 6.2.1 - Monitor "external" Telemetry stats

 

In the Kibana dashboard, look at the topmost graph.

It shows the variation of the prefix count (network entry count), path count and path element count over time.

You can clearly see that all the counts are the same, sitting at 3500.

If you remember, this is expected since the number of BGP prefixes (injected into the XR RIB by ROUTEM) was defined to be 3500 in the routem.cfg file that we dealt with in "TASK 5.4 - Set up the BGP session".

 

Look at the graph titled Telemetry log_count. This shows logs from 2 paths (corresponding to the 2 colors) coming in every 30 seconds.

 

 

 

TASK 6.2.2 - Increase BGP prefixes learnt by the router

 

Now, let's increase the number of prefixes in the routem.cfg file to, say, 7000.

Again, open up the file from the desktop of the pod:

To save the config, either do File --> Save or use CTRL+S.

For the new config to take effect, we need to restart the running ROUTEM process. Hop onto the terminal currently running ROUTEM, kill it and start it again.

 

Now hop on to the XR console, and issue a "show bgp scale" command:

 

RP/0/RP0/CPU0:pod-rtr#

RP/0/RP0/CPU0:pod-rtr#show  bgp scale

Wed Dec  9 17:18:09.918 UTC

 

 

VRF: default

Neighbors Configured: 1      Established: 1    

 

 

Address-Family   Prefixes Paths    PathElem   Prefix     Path       PathElem 

                                               Memory     Memory     Memory 

  IPv4 Unicast    7000     7000     7000       1MB        601KB      656KB     

  ------------------------------------------------------------------------------

  Total           7000     7000     7000       1MB        601KB      656KB     

 

 

Total VRFs Configured: 0

 

 

RP/0/RP0/CPU0:pod-rtr#

 

 

TASK 6.2.3 - Monitor "external" Telemetry stats

 

Pro-Tip: Click on the default 15-minute time interval you see at the top-right. This should bring up an Auto-Refresh option as shown:

You can then set the auto-refresh interval to 15 or 30 seconds, as you please, to see the visualizations update periodically.

 

 

To see the effect that the on-box agent has on the Telemetry data, take a look at the first two graphs.

 

 

We see that the prefix count (1st graph) jumps from 3500 to 7000. This is expected and Streaming Telemetry will help capture this transition.

What is interesting is the second graph showcasing system_memory:

  • You can see that data for system_memory starts coming in only when the prefix count goes high, i.e. increases to 7000 (the threshold in our setup is 5000 prefixes)
  • This is the extra path set up by high_prefix.policy
  • This is the on-box Telemetry receiver + monitoring agent at play. It swaps the policy file used for the ELK stack session whenever the BGP prefix count crosses the pre-defined threshold of 5000 prefixes, in either direction. A rough sketch of such a swap follows below.
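
As a rough sketch of how such a policy swap could be performed, here is a Python snippet using paramiko (the library whose debug output appears in the receiver logs above). The port, credentials and the exact mechanism are assumptions for illustration; the lab's gpb_receiver.py may well do this differently:

import paramiko

# Connect from the container to the XR shell (1.1.1.1 over the SSH port seen
# in the paramiko logs); credentials are placeholders.
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("1.1.1.1", port=57722, username="cisco", password="cisco123")

# Make the high-cadence policy the active one for the external (ELK) session.
_, stdout, stderr = ssh.exec_command(
    "cp /root/high_prefix.policy /telemetry/policies/prefix.policy")
print(stdout.read(), stderr.read())
ssh.close()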

 

 

Probably the most obvious indication of the change of cadence and paths is graph3, or the telemetryLogCount, as shown below:

 

 

The bars appear closer together because the cadence is suddenly reduced to 5 seconds, and a third color appears because the number of paths increases to 3 (as opposed to 2 earlier).

 

 

TASK 6.2.4 - Reduce BGP prefixes again

 

Let's follow the above procedure and reduce the number of prefixes by modifying routem.cfg.

Suppose we reduce the prefixes to 2000; the consequent effect on the visualizations is shown below:

 

The BGP Prefix and Path counts:

 

 

 

System_memory Stats:

Notice how it stops again after the prefixes fall below the threshold.

 

 

TelemetryLogCount:

Notice how the logs again start coming in every 30 seconds.

 

 

And there you have it. An automated Telemetry eventing mechanism using an on-box container app that changes the data received by ELK based on the variation of the BGP prefix count.

  <END OF LAB>

 

 

 

 

 

 
