# Apstra 6.0 Lab

## Overview

The demonstration starts with a pre-configured Apstra setup that has the rack type, template, logical device, and blueprint already in place. Day 1 and Day 2 Apstra configurations are also done to showcase the creation of routing zones, virtual networks, and connectivity templates. The purpose is to introduce Apstra 6.0, focusing on the concepts that help in deploying a DC fabric with inter- and intra-virtual-network connectivity, and finally to show how to identify anomalies and use other features of the Apstra 6.0 UI.

- **Day 0 activity**
  - Discover the vEX devices using an Offbox agent and manage them
  - Assign system IDs to the respective devices
- **Day 1 activity**
  - Verify the configuration of templates, rack types, logical devices, interface mapping, blueprint, virtual networks, routing zones, connectivity templates, and routing policy
  - Deploy the configuration onto the fabric
  - Connect the leafs to an external router
- **Day 2 activity**
  - Verify intra-virtual network connectivity between the hosts via tagged interfaces
  - Insert a configuration deviation, swap links, add a configlet, create an IBA probe, create and observe root cause identification, and roll back using Time Voyager

## Starting Lab

### Topology

The topology consists of a 2-stage leaf-spine architecture with a vMX router as the external gateway. Four hosts connect to three leafs, one of them with LAG enabled. All of them have tagged interfaces connecting to the leafs. Leaf1 and Leaf2 connect to the external router, and each leaf has a host device connected to it.

![](images/img1.png)

![](images/img2.png)

### Access Details

[Once you have submitted the [form](https://forms.office.com/r/uLA73TBDYK) check your inbox and you should have received an email from (<_jcl-hol@juniper.net>) containing the lab details as shown below. ![](images/img3.png)]: #

1. Open a browser and go to the server link provided to you in the email.
   ![](images/img4.png)
2. Log in with the username and password shared via email.
   ![](images/img5.png)
3. Click on **JumpHost**, and log in using the following credentials:
   - **user** - jumpstation
   - **password** - Juniper!1
   ![](images/img6.png)
4. Open a Firefox browser and navigate to the Apstra UI. Log in with username **admin** and password **Juniper!1**.
   ![](images/img7.png)
5. Navigate to **Blueprints**. You will be working with the **evpn-vex-virtual** blueprint.
   ![](images/img8.png)

## Reviewing DC Configuration (Pre-Configured)

### Review Resources

1. Navigate to the **Resources > IP Pools** section in the Apstra UI.
   ![](images/img9.png)
2. You can see that the demo is currently using three such IP pools for fabric and loopback connectivity.
   ![](images/img10.png)
3. You can create additional IP pools using the **Create IP Pool** button as shown.
Note: You don't have to create the IP pools and ASN pools yourself; this step is only to show how they are created. They are already pre-created for the purpose of the demo.
![](images/img11.png)

4. Similarly, cross-check the ASN and VNI pools and view how to create an ASN pool.
   ![](images/img12.png)
   ![](images/img13.png)

### Review Rack Types

Rack types are modular definitions containing top-of-rack switches, workloads, and their associated connections, along with redundancy protocol settings and other details. The racks we have built will be used in the next stages of modelling our lab fabric. Like the other building blocks we will work with, several pre-defined examples come with the server. You can examine them in the list to get a feel for the possibilities. They can all be easily cloned and modified to give us the exact characteristics we need to architect our fabric. These are already pre-configured, so you only need to review them, not create them.

1. Navigate to **Design > Rack Types**.
2. We are using the **evpn-esi** and **evpn-single** racks in this demo (search for them if you don't see them immediately).
3. Let's review the first rack type, **evpn-esi**:
   - **Name** - evpn-esi
   - **Fabric Connectivity Design** - L3 Clos
4. Click on **Edit**.
   ![](images/img119.png)
Note: This rack type is in L3 Clos mode. It defines two leafs (leaf-1 and leaf-2) connected to three hosts (switch1-server1, switch2-server1, and rack1-server1). Rack1-server1 is dual-homed to leaf-1 and leaf-2.
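Dual-homing like this is ultimately rendered as an EVPN ESI-LAG on the leaf pair (the rack's redundancy protocol, reviewed below). Purely for orientation, a minimal Junos sketch of that pattern; the AE interface name, ESI value, and LACP system ID here are illustrative assumptions, not this lab's rendered values:

```
set interfaces ae1 esi 00:11:11:11:11:11:11:11:11:01
set interfaces ae1 esi all-active
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 aggregated-ether-options lacp system-id 00:00:00:11:11:01
```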
![](images/img120.png)
![](images/img121.png)

Under the leafs section, we have defined the name (evpn-esi), the logical device that determines the connections and role, links per spine, speed, and the redundancy protocol, in this case ESI. You can view it in the Generate Summary field as shown below:

![](images/img122.png)

Under the generic systems tab, we have defined the type of hosts and the type of connection between the hosts and the leaf devices. The dual-server is dual-homed to leaf1 and leaf2; we have defined the logical device, generic system count, the link type (dual-homed), the protocol used (LACP Active), and the physical link count per switch with its speed.

![](images/img123.png)

The switch1-server and switch2-server are single-homed to leaf-1 and leaf-2 respectively.

![](images/img124.png)

5. Let's also review the second rack type, **evpn-single**. It consists of one leaf (leaf3) with a single connection from leaf3 to the host device (switch3-server1).
   ![](images/img125.png)
   ![](images/img126.png)

### Review Templates

Templates are where we assemble the building blocks we have constructed so far, on our journey to create a **Blueprint**. We have already put our devices into **Racks**. Now we use a **Template**, where we place our racks and make other selections on how our network will operate. This is the template we will use when it's time to turn all our preparations into an operating fabric. (The template is already pre-created.)

1. Navigate to **Design > Templates**.
   ![](images/img26.png)
2. Click on the **evpn-vex-virtual** template, and review the details:
   - **Name** - evpn-vex-virtual
   - **Type** - RACK BASED
   - **ASN Allocation Scheme** - Unique
   - **Overlay Control Protocol** - MP-EBGP EVPN
   - **Rack Types** - evpn-single, evpn-esi
   ![](images/img27.png)
3. Review what we have defined for the **Spines**:
   - **Spine Logical Device** - slicer-7x10-1
   - **Count** - 2
   ![](images/img28.png)
4. Review the logical diagram to see if it matches the JCL topology. Now we are ready to move on to the blueprint.

## Deploying Blueprint

1. Navigate to **Blueprint -- evpn-vex-virtual**.
   ![](images/img29.png)
2. From the menu bar on the left, click the **Devices** icon, then click **Managed Devices**.
   ![](images/img30.png)
   ![](images/img31.png)
3. Click on **Create Offbox Agent(s)** at the top right of the screen, and enter the following:
   - **Device Addresses** - 100.123.51.1-100.123.51.5
   - **Platform** - Junos
   - **Username** - jcluser
   - **Password** - Juniper!1
4. Click on **Create**.
Note: Wait until all the agents initialize and the devices show as connected, then acknowledge the devices.
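If an agent gets stuck initializing, a quick sanity check is to confirm each vEX answers over SSH from the jumphost (you will be prompted for the password each time). A sketch using the lab addresses and credentials above; `show system information` is a standard Junos operational command:

```
for ip in 100.123.51.1 100.123.51.2 100.123.51.3 100.123.51.4 100.123.51.5; do
  ssh jcluser@$ip "show system information"   # prints model, Junos version, hostname
done
```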
![](images/img32.png)

5. Once acknowledged, the devices all show a green tick mark in the **"Acknowledged?"** column.
   ![](images/img33.png)
6. Verify all managed devices are in an Acknowledged state.
   ![](images/img34.png)

### Assign System IDs to Fabric Nodes

1. Click the **Blueprints** icon in the menu bar on the left and select the **evpn-vex-virtual** blueprint. This opens the Blueprint Dashboard.
   ![](images/img35.png)
2. Navigate to the **Staged** tab under DC-1.
   ![](images/img36.png)
3. Click the **Devices** tab in the Build workspace on the right of the screen.
   ![](images/img37.png)
4. Click the **yellow splat** to the left of Assigned System IDs.
   ![](images/img38.png)
5. Click on **Change System ID Assignment**.
   ![](images/img39.png)
6. Associate each fabric node with the correct System ID/IP address as shown in the image below:
   ![](images/img40.png)
7. Use the drop-down to assign device IDs to the devices (spine, server leaf, and border leaf):
   - Spine1 - 100.123.51.4
   - Spine2 - 100.123.51.5
   - Leaf1 - 100.123.51.1
   - Leaf2 - 100.123.51.2
   - Leaf3 - 100.123.51.3
   - **Mode** - Deploy (in the right column)
8. Click on **Update Assignments**.
   ![](images/img41.png)

### Commit and Deploy the Blueprint

The next step is to commit the configuration and deploy it from Apstra to the vEX devices.

1. Navigate to the **Uncommitted** tab on the blueprint, and click **Commit** at the top right corner of the screen. This takes care of deploying the blueprint.
   ![](images/img42.png)
2. Now wait 3-4 minutes for the deployment to finish and the alarms to resolve.
Note: You can go to the Dashboard and check if there are any alarms. If there are no alarms, it should look like the screenshot below, indicating a deployed, healthy fabric.
![](images/img43.png)

If there are no alarms, proceed; if there are alarms, reach out to the lab team.
Note: You can ignore the alarms related to the vMX configuration (BGP and route-table errors on the connection to the external router).
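As an optional cross-check from the CLI, you can confirm fabric health directly on a device. A sketch assuming leaf1's management address from this lab; the underlay and EVPN overlay BGP sessions should show Established:

```
ssh jcluser@100.123.51.1
show bgp summary        # fabric and EVPN peers should be Established
```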
## Reviewing Blueprint Parameters (Pre-Configured)

1. Navigate to the **Staged** section of the blueprint:
   ![](images/img118.png)
2. From **Staged > Physical > Topology**, you can see the actual topology: three leafs, two spines, four hosts connected to the three leafs, and an external router connected to the leafs.
   ![](images/img2.png)

### Review Nodes and Links

1. Navigate to **Physical > Nodes**, and check the information for each node (device): its associated logical device, hostname, role, and device profile (in our case, Juniper vEX).
   ![](images/img48.png)
2. Next, click **Physical > Links**, and check that the cabling map coincides with the JCL topology.
   ![](images/img49.png)

### Review Blueprint Properties

1. Navigate to **Staged > Physical**. The blueprint parameters that we would normally have to set are already set in this demo.
   ![](images/img50.png)
2. The ASN and IP pools have to be assigned to the fabric and loopback connections.
   ![](images/img51.png)
3. The loopback IPs have to be set as well.
   ![](images/img52.png)
4. The link IPs have to be assigned as well.
   ![](images/img53.png)
Note: In the Device Profile window, we can see that the device profiles for all 5 vEX devices (spines and leafs) are set to Juniper vEX. Under the Devices column, the system IDs are already set, defining which device belongs to which system ID and IP.
### Review Routing Zones

Routing zones are created in a template where MP-EBGP EVPN is configured as the overlay control protocol. Only inter-rack virtual networks can be associated with routing zones. For a virtual network with a Layer 3 SVI, the SVI is associated with a VRF for each routing zone, isolating the virtual network SVI from other tenants. This lab is an MP-EBGP EVPN datacenter, so we'll be using VXLAN.

1. Navigate to **Staged > Virtual > Routing Zones**. You will see three routing zones (also known as VRFs) created for our use cases: Red, Blue, and Default.
   ![](images/img54.png)
2. Click on the **Blue** routing zone. You can see that it has a VLAN ID and VNI associated with it, used for EVPN VXLAN connectivity across the fabric. It also has a default routing policy associated with it. The same applies to the Red and Default routing zones.

### Review Virtual Networks

Virtual networks (VNs) are collections of L2 forwarding domains. In an Apstra-managed fabric, a virtual network can be constructed using either VLANs or VXLANs. As noted above, this lab is an MP-EBGP EVPN datacenter, so we'll be using VXLAN.

1. Navigate to **Staged > Virtual > Virtual Networks**. We have configured a number of virtual networks with their respective VLAN IDs and IPv4 subnets.
   ![](images/img55.png)
2. Click on **one of them**.
3. You will see that it is associated with the following:
   - **Type** - VXLAN
   - **Name** - *the virtual network you selected*
   - **Routing Zone** - blue
   - **VNI** - *a VNI from the VNI pool*
   - **IPv4 Subnet** - *the subnet associated with the virtual network you selected*
Note: You will see it is under the Blue routing zone. This is used to configure an IRB on the leaf devices for intra- and inter-VXLAN routing between the host devices.
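For orientation, the kind of Junos configuration Apstra renders on a leaf for such a VLAN/VNI mapping follows the sketch below. The VN name, VNI value, and IRB gateway address are illustrative assumptions; only VLAN 32 and the 10.1.5.0/24 subnet come from this lab:

```
set vlans vn32 vlan-id 32
set vlans vn32 vxlan vni 10032                             # VNI value is an assumption
set vlans vn32 l3-interface irb.32
set interfaces irb unit 32 family inet address 10.1.5.1/24 # gateway IP is an assumption
```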
![](images/img56.png)

4. Each of these virtual networks is associated with a connectivity template defining the interface associated with the virtual network.
   ![](images/img57.png)

### Review Virtual Network and Routing Zone Parameters

1. Navigate to **Virtual > Routing Zones**.
   ![](images/img58.png)
2. Here we can define routing zone parameters.
   ![](images/img59.png)
3. Here we can define leaf loopback IPs.
   ![](images/img60.png)
4. Here we can define link IPs.
   ![](images/img61.png)
5. Here we can define EVPN L3 VNIs.
   ![](images/img62.png)

### Review Connectivity Templates

We have given the system details about how we wish to physically connect the leaf pair to the external router. Now we need to tell Apstra what Layer 3 characteristics to apply to the links. This information is placed into an object known as a **Connectivity Template (CT)**. A CT contains the architectural details necessary for creating an IP link with BGP peering between the leaf pair and the external router, as well as host-to-leaf connectivity.

1. Navigate to **Staged > Connectivity Templates**.
   ![](images/img63.png)
2. Click the **Assign** button (chain icon) in the **Actions** column. You will see that each connectivity template is assigned to a particular interface, based on which tagged virtual network needs to be assigned to which interface in the topology.
   ![](images/img64.png)
Note: Some of the templates point to the interfaces connecting the hosts to the leafs, while others point to the interfaces connecting the external router to the leafs.
![](images/img65.png)

3. Click on **Edit** in the **Actions** column for the rtr_leaf1 connectivity template (which connects the external router to leaf1 and leaf2).
   ![](images/img66.png)
4. You can see that the template is built by assigning multiple primitives (BGP and IP link properties) to it.
   ![](images/img67.png)
5. Let's review **logical_link_red_0** (Type: IP Link):
   - **Routing Zone** - Red
   - **Interface Type** - Tagged
   - **VLAN ID** - 3
   - **IPv4 Addressing Type** - Numbered
   - **IPv6 Addressing Type** - None
   ![](images/img68.png)
6. Let's review **bgp_red_0** (Type: BGP Peering (Generic System)):
   - **IPv4 AFI** - On
   - **IPv6 AFI** - Off
   - **TTL** - 1
   - **Enable BFD** - Off
   - **IPv4 Addressing Type** - Addressed
   - **IPv6 Addressing Type** - None
   - **Neighbor ASN Type** - Static
   - **Peer From** - Interface
   - **Peer To** - Interface/IP Endpoint
Note: We have configured the Blue and Default routing zones similarly for connectivity to the external router.
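To make the CT primitives concrete, the Junos that Apstra renders on a leaf for a tagged, numbered IP link with an eBGP session in the Red routing zone follows this general shape. Everything below (the interface, addresses, ASN, and group name) is an illustrative assumption, not this lab's rendered config:

```
set interfaces ge-0/0/5 vlan-tagging
set interfaces ge-0/0/5 unit 3 vlan-id 3 family inet address 10.60.60.1/31
set routing-instances red instance-type vrf
set routing-instances red interface ge-0/0/5.3
set routing-instances red protocols bgp group external type external
set routing-instances red protocols bgp group external neighbor 10.60.60.0 peer-as 65500
```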
![](images/img69.png)
![](images/img70.png)

7. We have then assigned the interfaces connecting Leaf1 and Leaf2 to the external router via the connectivity template.
   ![](images/img71.png)

## Usecases

### Usecase 1: Verify Connectivity

1. From the Jumphost VM, open **Terminal**.
   ![](images/img72.png)
2. SSH into host-1 by running the following command in the terminal:
   ```
   ssh root@100.123.20.2
   ```
3. Type the password **Juniper!1**.
   ![](images/img73.png)
4. You are now on switch1-server1, which is connected to Leaf1. It has a tagged interface connecting to Leaf1 with the IP 10.1.5.51/24.
5. Type `ip a` to check the eth1.32 IP, which is 10.1.5.51/24.
   ![](images/img74.png)
   ![](images/img75.png)
6. It also has a route defined for the 10.1.5.0/24 network.
   ![](images/img76.png)
7. SSH into rack1-server1 (100.123.20.1), which is connected to Leaf1 and Leaf2. Rack1-server1 has a bond0 (LAG) interface defined with VLAN 32 and IP 10.1.5.52/24, dual-homed to Leaf1 and Leaf2.
   ![](images/img77.png)
   ![](images/img78.png)
8. Eth1 and eth2 belong to the bond0 interface.
9. SSH into switch2-server1, connected to Leaf2, IP 10.1.5.53/24.
   ![](images/img79.png)
10. SSH into switch3-server1, connected to Leaf3, IP 10.1.5.54/24.
    ![](images/img80.png)
11. In summary, let's ping between the hosts to verify intra-virtual-network and routing-zone connectivity across the different leafs:
    - ***Switch1-server1: 10.1.5.51/24***
    - ***Rack1-server1: 10.1.5.52/24***
    - ***Switch2-server1: 10.1.5.53/24***
    - ***Switch3-server1: 10.1.5.54/24***

#### Usecase 1a: Ping from Host1 to Host2

1. Ping 10.1.5.52 from Host1 (switch1-server1).
   ![](images/img81.png)

#### Usecase 1b: Ping from Host2 to Host3

1. Similarly, ping from Host2 (rack1-server1) to Host3 (switch2-server1).
   ![](images/img82.png)

#### Usecase 1c: Ping from Host3 to Host4

1. Similarly, ping from Host3 (switch2-server1) to Host4 (switch3-server1).
   ![](images/img83.png)

### Usecase 2: Configure via Configlet

Configure an NTP server via a configlet. Configlets are small configuration segments used to apply device settings that fall outside the Apstra Reference Design parameters. Common configlet uses include syslog, SNMP, TACACS/RADIUS, management interface ACLs, control plane policing, and NTP settings. The NTP example is the one we will use to show you the process of working with configlets.

In addition, we are going to use another mechanism called Property Sets. These increase flexibility by allowing us to use variables in a configlet. This is quite handy when we would like to use a common configlet structure for our devices, but not all devices require the same values. This exercise calls for us to build a Junos-style NTP configlet with a variable; a Property Set will supply the IP address of the server.

Always remember that configlets are a powerful way to apply configuration that falls outside of the Apstra Reference Designs. In other words, do not use configlets for features that Apstra manages itself. Doing so can interfere with the proper operation of the solution. It is critical that the configurations applied in configlets are thoroughly verified before applying them in a blueprint. Otherwise, malformed configlets will cause errors and interfere with deployment.

1. Navigate to **Design > Configlets**.
   ![](images/img84.png)
2. Create a configlet using the following:
   - **Name** - ntp
   - **Config Style** - Junos
   - **Section** - Top-Level - Set/Delete
   - **Interface-Level** - *none selected*
   - **Template Text** - `set system ntp server {{ntp_server}}`
   ![](images/img85.png)
Note: The configlet we created resides in the global Design area of the Apstra server. To use it, we need to import the NTP configlet into a blueprint, where it can be applied to our Junos devices. We also need to bring in a Property Set containing values for the variables. To do this, we go into the blueprint and move to the Catalog area.
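For reference, when the configlet is rendered, Apstra substitutes each `{{variable}}` in the template text with the matching key from the imported Property Set. With the `ntp_server: 100.123.0.1` value created in the next step, the template text above renders to a plain Junos set command:

```
set system ntp server 100.123.0.1
```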
3. Navigate to **Design > Property Sets**.
   ![](images/img86.png)
4. Create a Property Set using the following:
   - **Name** - ntp
   - **Input Type** - Editor
   - **Values** - ntp_server: 100.123.0.1
   ![](images/img87.png)
5. Navigate to **Blueprint > Staged > Catalog > Configlets**.
   ![](images/img88.png)
6. Click **Import Configlet**.
7. Select the **ntp** configlet and select **spine** and **leaf** under roles.
   ![](images/img89.png)

[!\[\](images/img90.png)]: #

8. Navigate to **Staged > Catalog > Property Sets**.
9. Click **Import Property Set**.
10. Select **ntp**, and click **Import Property Set**.
    ![](images/img91.png)
11. Now click **Commit** and wait for successful deployment on all 5 devices.
    ![](images/img92.png)
12. SSH into Leaf1 from the Jumphost by opening **Terminal** and running the following:
    ```
    ssh jcluser@100.123.51.1
    ```
    ![](images/img93.png)
13. Now run the following command to verify the NTP server IP:
    ```
    show configuration system ntp
    ```
    ![](images/img94.png)

### Usecase 3: Instantiate Pre-Defined IBA Probe

1. Navigate to **Blueprint > Analytics > Probes**.
   ![](images/img95.png)
2. Click **Create Probe**, then **Instantiate Predefined Probe**.
3. Change the **Discard Percentage Threshold** to 2.
4. Click **Create**.
   ![](images/img96.png)
5. The **Operational** tab and **No Anomalies** tab should be green.
   ![](images/img97.png)

### Usecase 4: Starting and Stopping Probes

1. Click the **Probes** tab to return to the list view.
2. To stop an enabled probe, toggle **Enabled** off.
3. To start a disabled probe, toggle **Enabled** on.
   ![](images/img98.png)

### Usecase 5: Root Cause Analysis

The root cause identification system automatically correlates anomalies to identify the actual cause of connectivity problems. This eliminates unnecessary work and troubleshooting for an operator. With root cause identification enabled, link or interface failures are identified with precision. RCI can be tested in this lab pod by misconfiguring fabric links.

1. Navigate to **Analytics > Root Causes**.
   ![](images/img99.png)
2. Click **Enable root cause analysis**.
   ![](images/img100.png)
3. Go to **Staged > Physical > Links**, and click **Edit Cabling Map**.
   ![](images/img101.png)
4. Swap the following interfaces on spine1:
   - **spine1** - ge-0/0/1 (Leaf3)
   - **spine1** - ge-0/0/2 (Leaf2)
   ![](images/img102.png)
5. Go to the **Uncommitted** tab, and click **Commit** to save the changes.
   ![](images/img103.png)
6. Name the commit **swap cabling**.
   ![](images/img104.png)
7. Now go to **Analytics > Root Causes**.
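As an aside, you can also observe the fault from the device CLI. A sketch assuming spine1's management address from this lab; both are standard Junos operational commands:

```
ssh jcluser@100.123.51.4
show lldp neighbors     # reveals the actual cabling, now mismatched with the edited map
show bgp summary        # impacted fabric sessions will no longer be Established
```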
Note: Swapping the interface settings disrupts BGP peering between the fabric devices. This causes multiple anomaly reports, all of which are correlated by the Root Cause probe. The view below shows the result of the analysis.
![](images/img105.png)

8. Explore all the areas of this view to see the details of the root cause diagnosis.

### Usecase 6: Time Voyager

To restore the configuration that existed before swapping the cabling on spine1, we can either re-edit the cabling map back to the original, use the LLDP feature, or use Time Voyager to roll back to the previous commit. Let's use Time Voyager to roll back to the commit before "swap cabling", which is the current revision.

1. Navigate to **Time Voyager**. The current revision is "swap cabling".
2. Select the **revision prior** to it, and click the **Jump to this revision** icon in the **Actions** column.
   ![](images/img106.png)
3. Click on **Rollback** and review the information in the **Uncommitted** tab.
   ![](images/img107.png)
4. Click **Commit**.
   ![](images/img108.png)
5. Cross-check that the cabling map is back to its original state.
   ![](images/img109.png)

### Usecase 7: Config Deviation

1. SSH into spine1 from the Jumphost by opening **Terminal** and running the following:
   ```
   ssh jcluser@100.123.51.4
   ```
   ![](images/img110.png)
2. Add a static route under **routing-options** using the following commands:
   ```
   edit
   set routing-options static route 7.7.7.7/32 next-hop 8.8.8.8
   commit
   ```
   ![](images/img111.png)
3. Navigate to the **Apstra UI**; you will see that the **Deployment Status** shows a config deviation error.
   ![](images/img112a.png)
4. Click on the **config deviation error**; it shows that the error is associated with spine1 and is a route-based error.
   ![](images/img117.png)
5. Click on **spine1**.
   ![](images/img113.png)
6. Go to the **Config** tab.
7. Scroll down, and you can see where and what the inserted config is.
   ![](images/img114.png)
8. Click on **Accept Changes**.
   ![](images/img115.png)
9. Click **Confirm** to resolve the error.
Note: You can also apply full config to revert the device to the golden configuration.
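If you prefer to clear the deviation from the device itself rather than accepting it, you can simply delete the route you added. A sketch run on spine1:

```
edit
delete routing-options static route 7.7.7.7/32
commit
```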
![](images/img116.png)

**You have successfully completed this Hands-On Lab!**

## Try It Later

You can find this lab and try it out on-demand in JCL at the link below (this lab is available in the Demonstration-US1 domain):

<https://portal.cloudlabs.juniper.net/RM/Topology?b=45df00a0-7d40-4e57-9e3b-b195861e0c81&d=c78ae7c7-e93b-4708-813e-109fd780f301>

## Lab Survey

Please take 2 minutes and complete the [Apstra 6.0 Hands-On Lab Survey](https://www.surveymonkey.com/r/P3X2SX2).

![Apstra-6.0-hol-Survey-qr-code](./images/Apstra-6.0-hol-Survey-qr-code.png)