Apstra 6.0 Lab

Overview

The demonstration starts with a pre-configured Apstra setup in which the rack type, template, logical device and blueprint are already defined. Day 1 and Day 2 Apstra configuration is also done to showcase the creation of routing zones, virtual networks and connectivity templates. The purpose is to introduce Apstra 6.0, focusing on concepts that help in deploying a DC fabric with inter- and intra-virtual-network connectivity established, and finally to show how to identify anomalies and explore other features in the Apstra 6.0 UI.

  • Day 0 activity

    • Discover the vEX devices using Offbox agent and manage them

    • Assign system IDs to respective device parameters

  • Day 1 activity

    • Verify the configuration of templates, rack types, logical devices, interface mappings, blueprint, virtual networks, routing zones, connectivity templates and routing policy

    • Deploy the configuration onto the fabric

    • Connect the leafs to an external router

  • Day 2 activity

    • Verify intra-virtual network connectivity between the hosts via tagged interfaces

    • Insert configuration deviation, swap links, add a configlet, create an IBA probe, create and observe root cause identification, rollback using time voyager

Starting Lab

Topology

The topology consists of a 2-stage leaf-spine architecture with a vMX router as the external gateway. Four hosts connect to three leafs, one of them with LAG enabled. All of them have tagged interfaces connecting to the leafs. Leaf1 and Leaf2 connect to the external router. Each leaf has at least one host device connected to it.

Access Details

Once you have submitted the form, check your inbox; you should have received an email from jcl-hol@juniper.net containing the lab details as shown below.

  1. Open a browser and navigate to the server link provided in the above email.

  1. Login with username and the password shared via email.

  1. Click on JumpHost, and login using the following credentials:

    • user - jumpstation

    • password - Juniper!1

  1. Open a Firefox browser and navigate to Apstra UI: https://100.123.0.74 with username: admin and password: Juniper!1

  1. Navigate to Blueprints. You will be working with the evpn-vex-virtual blueprint.

Reviewing DC Configuration (Pre-Configured)

Review Resources

  1. Navigate to Resources > IP Pools section in Apstra UI.

  1. You can see that the demo currently uses three IP Pools for fabric and loopback connectivity.

  1. You can create multiple IP Pools using the Create IP Pool button as shown.

Note: You don't need to create the IP Pools and ASN Pools yourself; they are already pre-created for this demo. This step just shows how they are created.

  1. Similarly, cross check ASN and VNI Pools and view how to create an ASN pool.
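Pools can also be created programmatically. The sketch below builds an IP pool definition as Apstra's REST API might accept it; the endpoint path and field names here are assumptions based on common AOS API conventions, not taken from this lab, so verify them against the Apstra API documentation before use.

```python
# Sketch: building an IP pool definition for Apstra's REST API.
# The field names ("display_name", "subnets") and the endpoint path in the
# comment below are assumptions -- check the Apstra API reference.
import json

def ip_pool_payload(name, subnets):
    """Return a JSON-serializable IP pool definition."""
    return {
        "display_name": name,                      # pool name shown in the UI (assumed field)
        "subnets": [{"network": s} for s in subnets],
    }

payload = ip_pool_payload("fabric-underlay", ["172.16.0.0/24", "172.16.1.0/24"])
print(json.dumps(payload, indent=2))

# An authenticated POST to something like
# https://100.123.0.74/api/resources/ip-pools (hypothetical path)
# would then create the pool.
```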

Review Rack Types

Rack Types are modular definitions containing top-of-rack switches, workloads, and their associated connections. There are also redundancy protocol settings and other details contained. The racks we have built will be used in the next stages of modelling our lab fabric. Like the other building blocks we will work with, there are several pre-defined examples that come with the server. You can examine them in the list to get a feel for the possibilities. They can all be easily cloned and modified to give us the exact characteristics we need to architect our fabric.

These are already pre-configured so you just need to review them and not create them.

  1. Navigate to Design > Rack Types.

  2. We are using the evpn-esi and evpn-single racks in this demo (search for them if they don’t appear immediately).

  3. Let’s review the first rack-type, evpn-esi:

  • Name - evpn-esi

  • Fabric Connectivity Design - L3 Clos

  1. Click on Edit

Note: They are in L3 Clos mode. This defines the two leafs (leaf-1 and leaf-2) connected to three hosts (switch1-server1, switch2-server1 and rack1-server1). Rack1-server1 is dual homed to leaf-1 and leaf-2.

You can see that under the leafs section, we have defined the name (evpn-esi), logical device that determines the connections and role, links per spine, speed and redundancy protocol – in this case ESI.

You can view it in Generate Summary field as shown below:

Under generic systems tab, we have defined the type of hosts and the type of connection between the hosts and the leaf devices

The dual-server is dual-homed to leaf1 and leaf2 – we have defined the logical device, generic system count, link type (dual-homed), protocol (LACP Active) and the physical link count per switch with its speed.

The switch1-server and switch2-server are single-homed to leaf-1 and leaf-2 respectively.

  1. Let’s also review the second rack-type, evpn-single:

This consists of one leaf (leaf3) with a single connection from leaf3 to the host device (switch3-server1).

Review Templates

Templates are where we assemble the building-blocks we have constructed so far, on our journey to create a Blueprint. We have already put our devices into Racks. Now we will create a Template, where we place our Racks and other selections on how our network will operate. This is the template that we will use when it’s time to turn all our preparations into an operating fabric. (Template is already pre-created).

  1. Navigate to Design > Templates

  1. Click on evpn-vex-virtual template, and review the details:

  • Name - evpn-vex-virtual

  • Type - RACK BASED

  • ASN Allocation Scheme - Unique

  • Overlay Control Protocol - MP-EBGP EVPN

  • Rack Types

    • evpn-single

    • evpn-esi

  1. Review what we have defined for the Spines:

  • Spine Logical Device - slicer-7x10-1

  • Count - 2

  1. Review the logical diagram to see if it matches with the JCL topology. Now, we are ready to create the blueprint.

Deploying Blueprint

  1. Navigate to Blueprint – evpn-vex-virtual

  1. From the menu bar on the left, click the Devices icon, then click Managed Devices.

  1. Click on Create Offbox Agent(s) at the top right of the screen, and enter the following:

  • Device Addresses - 100.123.51.1-100.123.51.5

  • Platform - Junos

  • Username - jcluser

  • Password - Juniper!1

  1. Click on Create.

Note: Wait until all the agents initialize and come back in the Connected state, then acknowledge the devices.

  1. Once acknowledged, they all show a green tick mark in the “Acknowledged?” column.

  1. Verify all managed devices are in an Acknowledged state.
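The "Device Addresses" field above takes a range rather than individual addresses. As a quick illustration, this sketch expands 100.123.51.1-100.123.51.5 into the five individual management IPs the offbox agents will target:

```python
# Expand an Apstra-style device address range into individual IPs.
import ipaddress

def expand_range(spec):
    """Expand 'A-B' into every IPv4 address from A to B inclusive."""
    start_s, end_s = spec.split("-")
    start = ipaddress.IPv4Address(start_s)
    end = ipaddress.IPv4Address(end_s)
    return [str(ipaddress.IPv4Address(i)) for i in range(int(start), int(end) + 1)]

addrs = expand_range("100.123.51.1-100.123.51.5")
print(addrs)  # the five vEX management addresses
```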

Assign System IDs to Fabric Nodes

  1. Click the Blueprints icon in the menu bar on the left and select the evpn-vex-virtual blueprint. This opens the Blueprint Dashboard.

  1. Navigate to the Staged tab under DC-1.

  1. Click the Devices tab in the Build workspace on the right of the screen.

  1. Click the yellow splat to the left of Assigned System IDs.

  1. Click on Change System ID Assignment.

  1. Associate each fabric node with the correct System ID/IP Address as shown in the image below:

  1. Use the drop down window to assign device IDs to devices (spine, server leaf and border leaf).

    • Mode - Deploy (in right column)

  2. Click on Update Assignments

  • Spine1 - 100.123.51.4

  • Spine2 - 100.123.51.5

  • leaf1 - 100.123.51.1

  • leaf2 - 100.123.51.2

  • leaf3 - 100.123.51.3
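Before committing, it is worth confirming the assignments are consistent. This sketch encodes the mapping above and checks that every node has a unique management IP inside the management subnet (the 100.123.51.0/24 subnet is assumed from the addresses used in this lab):

```python
# Sanity-check the System ID assignments: unique IPs, all in the
# (assumed) 100.123.51.0/24 management subnet.
import ipaddress

assignments = {
    "spine1": "100.123.51.4",
    "spine2": "100.123.51.5",
    "leaf1": "100.123.51.1",
    "leaf2": "100.123.51.2",
    "leaf3": "100.123.51.3",
}

mgmt_net = ipaddress.ip_network("100.123.51.0/24")
ips = list(assignments.values())
assert len(set(ips)) == len(ips), "duplicate management IP"
assert all(ipaddress.ip_address(ip) in mgmt_net for ip in ips)
print("all", len(ips), "assignments unique and inside", mgmt_net)
```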

Commit and Deploy the Blueprint

The next step is to Commit the configuration and deploy it from Apstra to the vEX devices.

  1. Navigate to the Uncommitted tab on the blueprint, and click Commit at the top right corner of the screen. This should take care of deploying the blueprint.

  1. Now, wait for 3-4 mins for the deployments to finish and the alarms to resolve.

Note: You can go to the Dashboard and check if there are any alarms. If there are no alarms, it should look like the screenshot below, indicating a deployed, healthy fabric.

If there are no alarms, proceed, if there are alarms, reach out to the lab team.

Note: You can ignore the alarms related to vMX configuration (bgp and route table errors connecting to external router).

Reviewing Blueprint parameters (Pre-configured)

  1. Navigate to Staged section of the Blueprint:

  1. From Staged > Physical > Topology, you can see the actual topology: three leafs, two spines, four hosts connected to the three leafs, and an external router connected to the leafs.

Review Blueprint Properties

  1. Navigate to Staged > Physical. You will see that certain Blueprint parameters that normally have to be set are already set in this demo.

  1. The ASN and IP pools have to be assigned to the fabric and loopback connections.

  1. The Loopback IPs have to be to set as well.

  1. The Link IPs have to be assigned as well.

Note: In the device profile window, we can see that the device profiles for all 5 vEX devices (spine and leaf) are set to Juniper vEX. Under the devices column, we have already set the system IDs for all the devices that define which device belongs to which system ID and IP.

Review Routing Zones

Routing zones are created in a template where MP-EBGP EVPN is configured as the overlay control protocol. Only inter-rack virtual networks can be associated with routing zones. For a virtual network with Layer 3 SVI, the SVI will be associated with a VRF for each routing zone isolating the virtual network SVI from other tenants. This lab is for an MP-EBGP EVPN datacenter, so we’ll be using VXLAN.

  1. Navigate to Staged > Virtual > Routing Zones. You will see three routing zones (also known as VRFs) created for our use cases (Red / Blue / Default).

  1. Click on the Blue routing zone. You can see that it has a VLAN ID and VNI associated with it, used for EVPN VXLAN connectivity across the fabric, as well as a default routing policy. The Red and Default routing zones are set up similarly.

Review Virtual Networks

Virtual networks (VN) are collections of L2 forwarding domains. In an Apstra-managed fabric, a virtual network can be constructed using either VLANs or VXLANs.

  1. Navigate to Staged > Virtual > Virtual Networks. We have configured a number of virtual networks with their respective VLAN IDs and IPv4 subnets.

  1. Click on one of them

  2. You will see that it is associated with the following:

  • Type - VXLAN

  • Name - The Virtual Network you selected

  • Routing Zone - blue

  • VNI - VNI from the VNI pool

  • IPv4 Subnet - IP Associated with the Virtual Network you selected

Note: You will see it is under the Blue Routing Zone. This is used to configure IRB on the leaf devices for intra- and inter-VXLAN routing between the host devices.

  1. Each of these virtual networks is associated with a connectivity template defining the interface associated with the virtual network.

Review Virtual Network and Routing Zone parameters

  1. Navigate to Virtual > Routing Zones.

  1. Here we can define routing zone parameters.

  1. Here we can define Leaf Loopback IPs

  1. Here we can define Link IPs

  1. Here we can define EVPN L3 VNIs

Review Connectivity Templates

We have given the system details about how we wish to physically connect the leaf pair to the external router. Now we need to tell Apstra what kind of layer 3 characteristics need to be applied to the links. This information is placed into an object known as a Connectivity Template (CT). A CT contains the architectural details necessary for creating an IP Link with BGP peering between the leaf pair and the external router as well as hosts to leaf connectivity.

  1. Navigate to Staged > Connectivity Templates

  1. Click the Assign button (chain button) in the Actions column

You will see that the connectivity template is assigned to a particular interface based on which tagged virtual network needs to be assigned to which interface in the topology.

Note: Some of them are pointing to the interfaces connecting the hosts to the leafs while some are pointing to the interfaces connected from the external router to the leafs.

  1. Click on Edit in the Actions for the rtr_leaf1 connectivity template (which connects external router to leaf1 and leaf2).

  1. You can see that the template is created by assigning multiple endpoints (BGP and IP link properties) to it.

  1. Let’s review logical_link_red_0 (Type: IP Link):

  • Routing Zone - Red

  • Interface Type - Tagged

  • VLAN ID - 3

  • IPv4 Addressing Type - Numbered

  • IPv6 Addressing Type - None

  1. Let’s review bgp_red_0 (Type: BGP Peering (Generic System)):

  • IPv4 AFI - On

  • IPv6 AFI - Off

  • TTL - 1

  • Enable BFD - Off

  • IPv4 Addressing Type - Addressed

  • IPv6 Addressing Type - None

  • Neighbor ASN Type - Static

  • Peer From - Interface

  • Peer To - Interface/IP Endpoint

Note: We have configured the Blue and Default routing zones similarly, connecting to the external router.

  1. We have then used the connectivity template to assign the interfaces that connect Leaf1 and Leaf2 to the external router.
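The "IPv4 Addressing Type - Numbered" setting means each leaf-to-router IP link gets its own point-to-point addresses. Apstra allocates these from its IP pools itself; purely as an illustration of the idea, this sketch carves /31 point-to-point subnets from a made-up pool (the 10.200.0.0/24 pool and the addressing are illustrative, not the lab's actual assignments):

```python
# Illustrative only: carve /31 point-to-point subnets for the
# leaf-to-external-router IP links. Apstra performs this allocation
# internally from its configured IP pools.
import ipaddress

pool = ipaddress.ip_network("10.200.0.0/24")      # made-up example pool
links = ["leaf1<->router", "leaf2<->router"]

pairs = []
for link, p2p in zip(links, pool.subnets(new_prefix=31)):
    a, b = list(p2p)                              # the two usable /31 addresses
    pairs.append((link, str(a), str(b)))
    print(f"{link}: {a}/31 <-> {b}/31")
```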

Usecases

Usecase 1: Verify Connectivity

  1. From the Jumphost VM, open Terminal.

  1. SSH into host-1 by running the following command in the terminal:

ssh root@100.123.20.2

  1. Type the password as Juniper!1

  1. You are now on switch1-server1, connected to Leaf1. It has a tagged interface connecting to leaf1 with the IP 10.1.5.51/24.

  2. Type “ip a” to check the eth1.32 IP, which is 10.1.5.51/24.

  3. It also has a route defined for the 10.1.5.0/24 network.

  4. SSH into rack1-server1 (100.123.20.1), connected to Leaf1 and Leaf2. Rack1-server1 has a bond0 (LAG) interface defined with VLAN 32 and the IP 10.1.5.52/24, dual-homed to leaf1 and leaf2. Eth1 and eth2 belong to the bond0 interface.

  5. Similarly, check switch2-server1, connected to Leaf2, IP - 10.1.5.53/24.

  6. And switch3-server1, connected to Leaf3, IP - 10.1.5.54/24.

  7. In summary, let’s ping between the hosts to verify intra-virtual-network and routing-zone connectivity across the different leafs:

Switch1-server1 : 10.1.5.51/24

Rack1-server1: 10.1.5.52/24

Switch2-server1: 10.1.5.53/24

Switch3-server1: 10.1.5.54/24
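All four hosts above sit in the same virtual network, which is why the pings in the next steps stay within a single L2 domain. A quick sketch confirming their addresses share the 10.1.5.0/24 subnet:

```python
# Confirm all four lab hosts fall inside the virtual network's
# 10.1.5.0/24 subnet (addresses taken from the summary above).
import ipaddress

hosts = {
    "switch1-server1": "10.1.5.51",
    "rack1-server1": "10.1.5.52",
    "switch2-server1": "10.1.5.53",
    "switch3-server1": "10.1.5.54",
}
vn_subnet = ipaddress.ip_network("10.1.5.0/24")

same_vn = all(ipaddress.ip_address(ip) in vn_subnet for ip in hosts.values())
print("all hosts in", vn_subnet, "->", same_vn)
```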

Usecase 1a: Ping from Host1 to Host2

Ping 10.1.5.52 from Host1 (switch1-server1)

Usecase 1b: Ping from Host2 to Host3

  1. Similarly, ping from Host2 (rack1-server1) to Host 3 (switch2-server1)

Usecase 1c: Ping from Host3 to Host4

  1. Similarly, ping from host3 (switch2-server1) to Host4 (switch3-server1)

Usecase 2: Configure via Configlet

Configure an NTP server via a configlet.

Configlets are small configuration segments that are used to apply device settings that fall outside the Apstra Reference Design parameters. Examples of common Configlet usage are items like syslog, SNMP, TACACS/RADIUS, management interface ACLs, control plane policing and NTP settings. The NTP example is the one we will use to show you the process of working with Configlets.

In addition, we are going to use another mechanism called Property Sets. These increase flexibility by allowing us to use variables in a Configlet. This is quite handy if we would like to use a common Configlet structure for our devices, but not all devices require the same values. This exercise calls for us to add the Junos style and variables to an existing NTP Configlet. Property Sets will supply the IP address for the server and identify the VRF to apply the configurations into.

Always remember that Configlets are a powerful way to apply configuration that falls outside of the Apstra Reference Designs. In other words, do not use Configlets for features that Apstra manages, itself. Doing so can interfere with the proper operation of the solution. It is critical that the configurations applied in Configlets are thoroughly verified before applying them in a Blueprint. Otherwise, malformed Configlets will cause errors and interfere with deployment.

  1. Navigate to Design > Configlets.

  1. Create a Configlet using the following:

  • Name - ntp

  • Config Style - Junos

  • Section

    • Top-Level - Set / Delete

    • Interface-Level - None Selected

  • Template Text - set system ntp server {{ntp_server}}

Note: The Configlet we created resides in the global Design area of the Apstra server. To use it, we need to import the NTP Configlet into a Blueprint where it can be applied to our Junos devices. We also need to bring in Property Sets containing values for the variables. To do this, we go into the Blueprint and move to the Catalog area.

  1. Navigate to Design > Property Sets.

  1. Create a Property Set using the following:

  • Name - ntp

  • Input Type - Editor

  • Values - ntp_server: 100.123.0.1
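To make the substitution concrete: when the configlet is rendered, the {{ntp_server}} placeholder in the template text is replaced by the value from the property set. This sketch mimics that behavior with a small regex-based renderer of our own; it illustrates the concept and is not Apstra's actual templating engine:

```python
# Illustrate how a property set value fills a configlet variable.
# The regex renderer here is our own sketch, not Apstra's engine.
import re

template_text = "set system ntp server {{ntp_server}}"
property_set = {"ntp_server": "100.123.0.1"}

def render(template, props):
    """Replace every {{name}} placeholder with its property-set value."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", lambda m: props[m.group(1)], template)

rendered = render(template_text, property_set)
print(rendered)  # -> set system ntp server 100.123.0.1
```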

  1. Navigate to Blueprint > Staged > Catalog > Configlet.

  1. Click Import Configlet

  2. Select the ntp configlet, and select spine and leaf under roles.

  1. Navigate to Staged > Catalog > Property Set

  2. Click Import Property Set

  3. Select ntp, and click Import Property Set

  1. Now, click Commit and wait for successful deployment on all 5 devices.

  1. SSH into Leaf1 from Jumphost by opening Terminal and running the following:

ssh jcluser@100.123.51.1

  1. Now run the following command to verify the NTP server IP:

show system ntp

Usecase 3: Instantiate Pre-Defined IBA Probe

  1. Navigate to Blueprint > Analytics > Probes

  1. Click Create Probe, then Instantiate Predefined Probe

  2. Change the Discard Percentage Threshold to 2.

  3. Click Create

  1. The Operational and No Anomalies tabs should be green.
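Conceptually, the predefined probe raises an anomaly when an interface's discard percentage crosses the configured threshold (2%, matching the value set above). The probe's real pipeline runs inside Apstra; this sketch just illustrates the comparison it performs:

```python
# Conceptual sketch of the discard-percentage check behind the probe.
# Real evaluation happens inside Apstra's IBA pipeline.
THRESHOLD_PCT = 2.0  # matches the Discard Percentage Threshold set above

def anomalous(tx_pkts, discards):
    """Flag an interface when its discard percentage exceeds the threshold."""
    if tx_pkts == 0:
        return False
    return 100.0 * discards / tx_pkts > THRESHOLD_PCT

print(anomalous(10_000, 150))  # 1.5% -> False
print(anomalous(10_000, 500))  # 5.0% -> True
```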

Usecase 4: Starting and Stopping Probes

  1. Click the Probes tab to return to the list view.

  2. To stop an enabled probe, click the Enabled toggle off.

  3. To start a disabled probe, click the Enabled toggle on.

Usecase 5: Root Cause Analysis

The root cause identification system automatically correlates anomalies to identify the actual cause of connectivity problems. This eliminates unnecessary work and troubleshooting for an operator.

With Root Cause Identification enabled, link or interface failures will be identified with precision. RCI can be tested in this lab pod by misconfiguring fabric links.

  1. Navigate to Analytics > Root Causes

  1. Click Enable root cause analysis

  1. Go to Staged > Physical > Links, and click Edit Cabling Map.

  1. Change the following:

  • spine1 - ge-0/0/1 (Leaf3)

  • spine1 - ge-0/0/2 (Leaf2)

  1. Go to Uncommitted tab, and click Commit to save the changes.

  1. Name the commit as swap cabling.

  1. Now go to Analytics > Root Causes.

Note: Swapping the interface settings will disrupt BGP peering between fabric devices. This will cause multiple anomaly reports, all of which are correlated by the Root Cause probe. The view below shows the result of the analysis.

  1. Explore all the areas of this view to see the details of the Root Cause diagnosis.

Usecase 6: Time Voyager

To restore the configuration that existed before swapping the cabling of spine-1, we can re-edit the cabling map back to the original, use the LLDP feature, or use Time Voyager to roll back to the previous commit.

Let’s use Time Voyager to roll back to the commit before “swap cabling”, which is the current revision.

  1. Navigate to Time Voyager. The current configuration is “swap cabling”.

  2. Select the revision prior to that, and click the Jump to this revision icon in the Actions column.

  1. Click on Rollback and review the information in the Uncommitted tab.

  1. Click Commit

  1. Cross check that the cabling map is back to its original state.
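The steps above can be sketched as a tiny data model of the Time Voyager idea: commits form an ordered history, and "jump to revision" stages an earlier revision's state, which then has to be committed to take effect. The commit names below are illustrative; only "swap cabling" comes from this lab:

```python
# Minimal sketch of the Time Voyager revision model. The first two
# commit names are made up for illustration.
history = ["initial deploy", "ntp configlet", "swap cabling"]

def jump_to(history, index):
    """Stage the state of history[index]; a commit is still required."""
    return {"staged": history[index], "pending_commit": True}

# Jump to the commit before "swap cabling" (the current revision).
staged = jump_to(history, len(history) - 2)
print(staged)  # staged state awaits an explicit Commit
```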

Usecase 7: Config Deviation

  1. SSH into spine-1 from Jumphost by opening Terminal and running the following:

ssh jcluser@100.123.51.4

  1. Set the routing-options by using the following commands:

edit

set routing-options static route 7.7.7.7/32 next-hop 8.8.8.8

commit
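Conceptually, Apstra detects this deviation by comparing the device's running configuration against its rendered "golden" configuration and flagging the extra lines. As an illustration, difflib stands in for Apstra's own comparison here, and the one-line golden config is a made-up placeholder:

```python
# Conceptual sketch of config-deviation detection: diff the golden
# config against the running config and collect added lines.
# The golden config line is a placeholder; the static route matches
# the command entered on spine-1 above.
import difflib

golden = ["set system host-name spine1"]
running = [
    "set system host-name spine1",
    "set routing-options static route 7.7.7.7/32 next-hop 8.8.8.8",
]

deviations = [l[2:] for l in difflib.ndiff(golden, running) if l.startswith("+ ")]
print(deviations)
```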

  1. Navigate to the Apstra UI; you will see that the Deployment Status shows a config deviation error.

  1. Click on the config deviation error; it shows that the error is associated with spine1 and that it is a route-based error.

  1. Click on spine1

  1. Go to the config tab.

  2. Scroll down, and you can see where and what the inserted config is.

  1. Click on Accept Changes

  1. Click Confirm to resolve the error.

Note: You can also apply the full config to revert the device to the golden configuration.

You have successfully completed this Hands-On Lab!

Try it later?

You can find this lab and try it out on-demand in JCL at the below link:

(https://portal.cloudlabs.juniper.net/RM/Topology?b=45df00a0-7d40-4e57-9e3b-b195861e0c81&d=c78ae7c7-e93b-4708-813e-109fd780f301) This lab is available in the Demonstration-US1 domain.

Lab Survey

Please take 2 minutes and complete the Apstra 6.0 Hands-On Lab Survey.

[Apstra 6.0 HOL Survey QR code]