SD-WAN and DMVPN

Date: Aug 5, 2024. Sample Chapter is provided courtesy of Cisco Press.

Learn common design strategies in SD-WAN and DMVPN deployments, and how to design and deploy these two technologies to integrate with other IBN domains, in this sample chapter from Designing Real-World Multi-domain Networks.

In this chapter, we discuss the following:

  • Common design strategies in SD-WAN and DMVPN deployments

  • How to design and deploy SD-WAN to integrate with other IBN domains

  • How to design and deploy DMVPN to integrate with other IBN domains

Overview

Both Cisco Software-Defined WAN (SD-WAN) and Dynamic Multipoint Virtual Private Network (DMVPN) provide the ability to abstract the WAN service provider transports from the enterprise routing environment. Additionally, both provide a means to create and extend macro- and microsegmentation, including support for Cisco TrustSec, which allows either architecture to be utilized as part of an end-to-end security policy. Cisco SD-WAN has many advantages over DMVPN as an architecture, such as application-aware routing and built-in automation and provisioning; however, DMVPN does have its use cases. Fundamentally, both technologies provide efficient routing between sites by enabling direct site-to-site communication without the need to traverse a centralized hub or data center.

SD-WAN

Cisco SD-WAN as a technology is discussed in detail in various other Cisco Press texts. The sections in this chapter assume that you are familiar with Cisco SD-WAN. The following sections discuss designing and integrating Cisco SD-WAN with the various other domains as part of a single multi-domain strategy.

SD-WAN and SDA

Cisco's Software-Defined Access, or SDA, allows the enterprise to introduce macro- and microsegmentation with automation and assurance in the local campus environment. Hosts may be dynamically or statically classified into virtual networks (macrosegmentation) and security groups (microsegmentation). When discussing SDA, it is important to remember that the control plane is based on LISP, whereas the data plane uses VXLAN.

In a multi-site SDA deployment without integration into other domains, the architect must plan for policy enforcement and ensure that the virtual network identifier (VNID) and security group tag (SGT) information is correctly propagated across either an IP transit environment or via SDA-Transit. SD-WAN allows the enterprise to extend the macro- and microsegmentation end to end in a fully integrated fashion. This way, the SGT information can be propagated across the network without additional resource utilization or reclassification as the data re-enters the SDA environment at the remote location. Even if the remote site has not been migrated to SDA, SD-WAN may be utilized to provide Cisco TrustSec policy enforcement at the remote site without the originating site being aware of the destination SGTs.

Cisco product engineering has supported two methods for integrating SDA and SD-WAN: the one-box and the two-box solutions. One of the most essential points to remember is that inline tagging for SD-WAN is supported only on IOS XE-based Cisco platforms such as the ISR 4000 series, ASR 1000 series, and Catalyst 8000 series. Viptela-based routing platforms, such as the vEdge 100 and vEdge 1000, do not support inline tagging.

One-Box SDA and SD-WAN

One-box SDA and SD-WAN is also known as an integrated solution. In this scenario, the SD-WAN Edge router serves as both an SDA control plane node and a border node. For this to occur, the Cisco DNA Center must be integrated with the Catalyst SD-WAN Manager via API. The SD-WAN Edge is part of the Catalyst SD-WAN Manager inventory and is provisioned as a normal SD-WAN device. When the SDA fabric is created, the Cisco DNA Center updates the Catalyst SD-WAN Manager through the API integration to reprovision the SD-WAN Edge with the appropriate SDA configuration. Figure 3-1 shows how a packet changes as it traverses the one-box solution across SD-WAN from one SDA site to another. Notice that the VNID and SGT information is propagated across the network with the data packet itself.

Figure 3-1 SD-WAN-SDA One-Box Topology

There are distinct pros and cons to the one-box approach that must be considered as part of the overall enterprise design. In contrast to the two-box approach discussed next, the one-box approach removes support for modularity. This is important because most enterprises are divided into multiple organizational units, even in terms of who manages which part of the network. For instance, one group may manage the WAN while another manages the campus LAN. The one-box approach makes it difficult for the two groups to manage their respective environments independently.

From a design standpoint, it should be noted that the one-box solution requires a router to perform the border node and control plane functionality in the SDA campus. With physical redundancy included, two routers now perform those roles. This works well for the control plane because the routers have greater routing capabilities; in the data plane, however, the routers could inadvertently become the logical core of the local area network. In larger locations, additional functional blocks may exist outside of the SDA domain. For instance, where Nexus switches form a local services aggregation block, an additional pair of switches performing the internal border node functionality at the intended core will better facilitate the required high-speed switching without the traffic traversing the edge routers. Figure 3-2 illustrates how the additional non-SDA fabric at the local site could connect directly to the core of the network while still utilizing the one-box solution. Notice that the core now performs SDA border node functionality. SDA VXLAN traffic egressing the location will still utilize the SD-WAN Edges, whereas traffic to these additional services will utilize the core border nodes as their VXLAN termination point.

Figure 3-2 One-Box Topology with Additional Service Domains

Catalyst SD-WAN Manager and Catalyst Center Integration

Integrating Catalyst SD-WAN Manager and Catalyst Center is fairly straightforward. From the Cisco Catalyst Center System Settings, navigate to the External Services page and select Catalyst SD-WAN Manager. Depending on the version of Catalyst Center, this selection will navigate to a new page or open a pop-out, allowing you to enter the Catalyst SD-WAN Manager and SD-WAN overlay information shown in Figure 3-3.

The Catalyst Center will use the configured user credentials for its API calls to Catalyst SD-WAN Manager. Therefore, it is recommended to create a service account for the integration within the enterprise identity store so that its use can be audited. As noted in Figure 3-3, if the Catalyst SD-WAN Manager is authenticated via a root certificate authority, then the Catalyst Center must have a certificate from the same trust chain installed through the Certificates page.

Figure 3-3 Catalyst SD-WAN Manager and Catalyst Center Integration Process

Two-Box SDA and SD-WAN

In the two-box SDA–SD-WAN scenario, also known as a nonintegrated solution, both architectures are kept separate, providing for a modular networking approach. The SD-WAN Edge devices each have a physical link to the SDA border nodes that is a dot1Q trunk. On the SD-WAN Edge side of the link, the subinterface is associated with a specific service VPN. On the border node side of the link, the SVI provisioned from Catalyst Center is associated with the corresponding SDA virtual network. In this manner, the VLAN becomes the mapping between the SDA virtual network and the SD-WAN service VPN. By enabling Cisco TrustSec (CTS) inline tagging on both sides of the interface, the SGT value may be propagated as well, maintaining both the macro- and microsegmentation of the environment.
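As a minimal sketch of this handoff (the interface numbers, VLAN, service VPN number, and addressing are hypothetical, and the VRF definitions are omitted), the two sides of the trunk might be configured as follows. On IOS XE SD-WAN Edges, the service VPN number is also the VRF name:

! SD-WAN Edge side: dot1Q subinterface mapped to service VPN 500
interface GigabitEthernet0/0/1.500
 encapsulation dot1Q 500
 vrf forwarding 500
 ip address 10.50.0.1 255.255.255.252
!
! SDA border node side: SVI provisioned in the corresponding virtual network VRF
interface Vlan500
 vrf forwarding Corporate
 ip address 10.50.0.2 255.255.255.252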

From a design perspective, the two-box scenario allows for both a modular design and phased rollouts at the expense of potentially more hardware to install and manage. Figure 3-4 demonstrates how a data packet traverses the two-box solution while maintaining the VNID and the SGT information with the packet itself.

Figure 3-4 SD-WAN-SDA Two-Box Topology

SDA and SD-WAN Segmentation

In SDA, the VNID is a 24-bit value used to identify which virtual network a packet traversing the underlay is associated with. When a new virtual network is created in the Catalyst Center UI, Catalyst Center creates a new VNID for it that is constant across all locations. Whenever that virtual network is provisioned at an SDA site, Catalyst Center uses the VNID as the LISP instance ID, which is mapped to the VRF on the switch with the correct virtual network name. The VNID is carried across the underlay as part of the VXLAN header. Additionally, the VXLAN header carries the SGT value. The SGT is a 16-bit value that indicates to which security group the source of the packet belongs.

Data egressing the SDA environment is forwarded as a VXLAN packet to the correct border node. After the packet arrives at the border node, it is decapsulated and forwarded based on the VRF instance associated with the VNID. If the border node knows the destination security group, it enforces the applicable policy; whether it does so depends on the border node's configuration.

With SD-WAN, the service VPN ID is carried inside the IPsec encapsulation before the packet is forwarded over the transport. Additionally, the Cisco Meta Data (CMD) header, commonly associated with Layer 2 frames, is added within the IPsec encapsulation of an SD-WAN packet so that the SGT information can be propagated as well.

The last piece of the discussion pertains to connecting the SDA and SD-WAN environments together. Whether the one-box or two-box solution is utilized, there must be a consistent mapping between the two architectures. In the one-box solution, the mapping is done via Cisco Catalyst Center. After the Catalyst SD-WAN Manager to Catalyst Center integration has been performed as described previously, the individual SDA virtual networks are mapped to specific SD-WAN service VPNs in the Catalyst Center UI. When an SD-WAN Edge is provisioned at an SDA location, the service VPN to VNID mapping configured in Catalyst Center is used, and the SD-WAN service VPN number is used as the name of the VRF on the SD-WAN Edge for all of the relevant SDA and SD-WAN configuration. When the integration is complete, the Catalyst Center page used to tie the SD-WAN service VPN to the SDA virtual network is shown, as in Figure 3-5.

Figure 3-5 One-Box VNID to Service VPN Mapping

In the two-box solution, the mapping between VNID and service VPN is achieved through the VLAN carrying the traffic between the two devices. It is critical to standardize the VLAN mapping across the environment. Doing so facilitates easier operation and troubleshooting of a multi-site environment when operations engineers know that, at every location, a specific VLAN connects the SD-WAN Edge to the border node for the SDA Corporate VN and SD-WAN service VPN 500. The VLAN mapping ensures that macrosegmentation is maintained; however, support for microsegmentation propagation must be intentionally added. This is achieved by configuring inline tagging on both sides of the link, allowing the SGT information to be propagated via the CMD header in the frame. Care should be taken to ensure that the device SGT is also trusted on both sides. In Example 3-1, inline tagging is configured on both the physical interface and the subinterface carrying the traffic. The SGT with ID number 2 is the All Cisco TrustSec Devices SGT.

Example 3-1 Inline Tagging Configuration

! LAN Interfaces
interface GigabitEthernet0/0/0
 cts manual
  policy static sgt 2 trusted
!
interface GigabitEthernet0/0/0.100
 cts manual
  policy static sgt 2 trusted
!

SDA and SD-WAN Best Practices

As is seen throughout this book, standardization across the environment is critical. When creating any standardized mapping, ensure that future proofing is considered. Is there a potential for additional network devices for horizontal scaling at the location? Will additional macrosegmentation be added to the environment that may need to be considered?

The SDA INFRA_VN, or SDA underlay, should correspond to a service VPN in SD-WAN. The SDA underlay may be mapped to a unique SD-WAN service VPN or share the one used by the general corporate VPN. The latter allows the underlay management to be reachable from the local environment without traffic leaving the site, whereas the former maintains the macrosegmentation, requiring traffic to egress the site and possibly fusing the two routing domains together at a centralized firewall environment.

When the SDA and SD-WAN multi-domain environment is built out, it is recommended to build the centralized data center environment first—the services and SD-WAN Edge headend devices. At the remote sites, the process of building and validating the SD-WAN environment first facilitates a smoother transition to deploying SDA.

When Cisco TrustSec is enabled at a site, care should be taken to ensure that devices do not become inaccessible. In a location where physical redundancy exists, the engineer should ensure that the SDA devices are accessible across both pathways prior to enabling CTS. Additionally, it is important to remember that CTS must be enabled on both the physical interface and the subinterfaces of the link.

At a small site with minimal hardware and connectivity—for instance, a single SD-WAN Edge cabled to a single Fabric-in-a-Box (FiaB) switch—redundant pathways may not exist. In this instance, it is recommended to stage the CTS configuration as a simple text file on the flash of the switch. The file may then be copied into the running configuration. At that point, reachability to the switch will be lost; however, all of the configuration in the text file will have been applied. The SD-WAN device may then be reprovisioned to include CTS, restoring connectivity.
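Assuming the staged file is named cts-config.txt (a hypothetical name), the configuration can be applied in a single operation from the switch CLI:

! Apply the staged CTS configuration in one operation
copy flash:cts-config.txt running-config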

SD-WAN and ACI

Cisco’s Application Centric Infrastructure (ACI) allows the enterprise to introduce macro- and microsegmentation with automation and assurance within the data center. ACI uses a structured hierarchy including tenants, contexts, and endpoint groups (EPGs) to create macrosegmentation and microsegmentation. The EPG is similar to the SGT in the SDA and SD-WAN environments. It allows for policy enforcement based on logical group membership.

When SD-WAN and ACI are deployed, the macrosegmentation created within the DC ACI environment is extended to the remote site locations. For instance, the enterprise may provide managed services to its end customers, internal or external, and want to ensure segmentation from the DC to the site. Having an SD-WAN service VPN for each ACI tenant maintains that segmentation. While the current APIC and Catalyst SD-WAN Manager allow for integration via REST APIs, that integration supports only dynamic application-aware routing policy signaling from ACI to SD-WAN. Therefore, all of the routing interconnectivity must be performed individually in both environments. For this reason, standardization again becomes important.

Imagine an enterprise environment without SD-WAN or ACI consisting of two data centers and multiple remote locations. This enterprise currently has all of its clients in a single global routing table without any segmentation. Now, they would like to migrate to full segmentation with both ACI and SD-WAN deployed. It will take some time to build the ACI environment and migrate the relevant services into each tenant. It will also take time to migrate each of the individual sites to SD-WAN. How is this performed without issues?

First, we must consider all of the possible traffic patterns. There is traffic from the nonmigrated data center environment to the nonmigrated remote locations through the service provider environment using the current CE equipment. This traffic will exist until both the SD-WAN and the ACI migrations are fully completed, although the amount of traffic will decrease with each migration window. As the migrations proceed, there will be traffic from the ACI environment through the SD-WAN environment to the remote locations; at first, this traffic will not exist at all, and it will increase as migrations occur. There will also be traffic between the nonmigrated data center environment and the new SD-WAN remote locations, as well as between nonmigrated remote sites and the newly migrated ACI environment. Additionally, traffic will exist between migrated and nonmigrated sites, and between the existing data center and the ACI environment.

All of these traffic patterns will exist in some amount from the beginning of the project until the end. Therefore, from a routing and switching perspective, there are four domains: the SD-WAN environment, the ACI environment, the existing data center, and the existing WAN environment. It is recommended to create an additional routing and switching layer within the data center that performs aggregation and routing between the domains. In Figure 3-6, notice that a new aggregation layer has been inserted between the legacy WAN environment, the new SD-WAN devices, the legacy data center services infrastructure, and the new ACI environment. This new layer allows the routing to drive traffic to the correct blocks based on the destination location—whether already migrated to the new environment or not.

Figure 3-6 SD-WAN–ACI Topology

When the environment is designed and implemented, the use of standardized VLANs facilitates an easier per-client migration. After an aggregation layer is created within the data center to interconnect the environments, the SD-WAN headend devices may be stood up appropriately. From that point, the ACI and SD-WAN environments may be migrated at their own individual rates. This approach allows the WAN team to focus on just the remote location migrations while the data center team focuses on client services.

Consider the migration of Client A. At first, the services for the client exist in the existing data center environment, and the remote locations that service this client all use the global routing table with the service provider. When the aggregation layer and the SD-WAN headends are in place, the migration is transparent to the client, with the exception of the required routing updates during maintenance windows. Perhaps the ACI environment is not yet built out while the SD-WAN environment is ready for production. The service VPN for this client is provisioned on the headends—for example, VPN 1201. As part of the provisioning, BGP peering between the headends and the aggregation layer on VLAN 1201 is created. Provisioning the new service VPN on the headends has no effect on the traffic flows because no routing advertisements are coming from the headends at this point. Whenever a remote location is moved to VPN 1201, the headends begin to advertise the remote site via BGP while the service provider loses the routing advertisement from the remote location. This is the case with Client A.
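A minimal sketch of this headend provisioning follows; the interface, VLAN, addressing, and AS numbers are hypothetical, and the OMP-side advertisement configuration is omitted. Because redistribution is from OMP, the aggregation layer learns a remote site's prefixes only after that site is moved into VPN 1201:

! Headend hand-off to the aggregation layer for client service VPN 1201
interface TenGigabitEthernet0/0/2.1201
 encapsulation dot1Q 1201
 vrf forwarding 1201
 ip address 10.255.1.1 255.255.255.252
!
router bgp 65001
 address-family ipv4 vrf 1201
  neighbor 10.255.1.2 remote-as 65000
  neighbor 10.255.1.2 activate
  redistribute omp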

At any specific remote location, there may be a different collection of service VPNs, that is, clients, from other locations. Because it is conceivable that the headend environment may not be provisioned for all clients, or the enterprise wants to move to SD-WAN quickly, we may want to create a single service VPN that is used similarly to the existing global routing table. That is, the migration to SD-WAN is performed, but segmentation is not yet fully introduced. Perhaps the local network has not been configured for segmentation via VRF-Lite or some other manner; this common service VPN allows the entire site to move to SD-WAN without affecting the local environment. When the local network is ready to migrate Client A to its own segmented environment, the Client A service VPN is provisioned on the SD-WAN Edges at the site. With the ensuing routing updates, the Client A traffic for this remote site now uses the SD-WAN environment and is advertised from the headends to the data center aggregation layer and on to all other environments.

This architectural design also works for the migration of services for Client A. When the ACI environment is ready for production, all of the logical ACI components are added into the ACI environment to support Client A. The required services for Client A are moved into the ACI environment, and the ACI L3Outs, or border leafs, advertise the services to the data center aggregation layer.

Therefore, while ACI and SD-WAN are integrated together, the migration of the services for a particular tenant in ACI and the migration of the tenant's remote locations in SD-WAN may proceed at their own individual pace. The aggregation layer handles the routing between the various environments. As shown in Figure 3-7, the aggregation layer allows a remote site that has already been migrated to SD-WAN to interact with a remote site that has not. The reason is that the SD-WAN headends advertise the SD-WAN remote-site prefixes to the aggregation layer, which advertises them to the legacy WAN environment, and vice versa. With an L3 MPLS offering from the service provider, it is conceivable that these sites could send traffic directly to each other across the service provider. However, because the SD-WAN traffic is encrypted while the legacy traffic across the service provider is not, during this hybrid state of migrated and nonmigrated sites, the headends and the aggregation layer must be utilized to interconnect the domains.

Figure 3-7 SD-WAN–ACI Traffic Flows During Migration

Catalyst SD-WAN Manager and APIC Integration

Integration of the Catalyst SD-WAN Manager with the ACI APIC is performed on the APIC itself. A static user on the Catalyst SD-WAN Manager is required for the APIC to communicate with the Catalyst SD-WAN Manager. For security, auditing, and best-practice purposes, it is recommended to use a service account for the integration, as well as authentication via an external identity store. Doing so will facilitate proper user auditing, as well as the ability to manage the account via the appropriate operations processes.

Example 3-2 illustrates the configuration process required to integrate the APIC and the Catalyst SD-WAN Manager together.

Example 3-2 Catalyst SD-WAN Manager and APIC Integration Process

apic1#conf t
apic1(config)#integrations-group MyExtDevGroupClassic
apic1(config-integrations-group)#integrations-mgr External_Device Cisco/vManage
apic1(config-integrations-mgr)#device-address 172.31.209.198
apic1(config-integrations-mgr)#user admin
Password:
Retype password:
apic1(config-integrations-mgr)#

ACI and SD-WAN Segmentation

While ACI and SD-WAN both support the concepts of macro- and microsegmentation, microsegmentation propagation does not occur without additional configuration. Also, the macrosegmentation propagation must be handled in a systematic manner.

For macrosegmentation propagation, the ACI tenant to VLAN to service VPN mapping is important. The VLAN used to interconnect the ACI L3Out, or border leaf, to the SD-WAN Edge is what maintains the macrosegmentation.

Microsegmentation propagation of the ACI EPG to SD-WAN SGT values is more difficult and limited. The APIC must be integrated with ISE using pxGrid in the same manner as used for the ACI-SDA integration. This allows ACI to advertise EPGs to, and receive them from, ISE; however, as with the ACI-SDA integration, this is limited to a single context. On the SD-WAN side, the headend SD-WAN Edges may use SXP with ISE in any of the service VPNs. The SGT information is then propagated along the SD-WAN data path to the remote SD-WAN Edge, where policy enforcement or further propagation may occur.
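As a sketch of the SD-WAN side (the peer and source addresses, password, and service VPN number are hypothetical), a headend SD-WAN Edge could be configured as an SXP listener toward ISE within a given service VPN as follows:

cts sxp enable
cts sxp default password MySxpPassword
! Learn IP-to-SGT mappings from ISE inside service VPN 10
cts sxp connection peer 10.9.9.10 source 10.9.9.1 password default mode local listener vrf 10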

ACI and SD-WAN Best Practices

For ACI and SD-WAN integration, the two most important aspects are standardization of the handoff between them and support for migrated and nonmigrated traffic flows. For the former, it is recommended to use a planned VLAN to service VPN numbering. This approach prevents confusion later when some clients have migrated to ACI while others have not, or when some sites or clients have been migrated to SD-WAN while others have not. For the latter, the use of the aggregation layer with BGP allows the enterprise to connect the legacy and new environments together while using BGP to influence routing policy, if necessary.

SD-WAN with MPLS

Because SD-WAN is designed to utilize multiple independent transports from various service providers, integrating SD-WAN with an MPLS deployment is rather straightforward. There are numerous reasons why the enterprise may want an SD-WAN deployment while still utilizing an L2VPN or L3VPN MPLS deployment.

The first scenario is quite obvious: migration from MPLS to SD-WAN. In this scenario, the enterprise already maintains a WAN topology facilitated by its MPLS provider and plans to move to SD-WAN, perhaps to take advantage of less expensive broadband circuits. However, there are other scenarios where both the MPLS environment and the SD-WAN environment coexist by design. For instance, the interconnection between data centers may be through an L2VPN offering that should be maintained even after the rest of the WAN has migrated to or deployed SD-WAN.

In most scenarios, the primary concern for design and implementation will be on proper route filtering. Depending on the routing protocols utilized within the environment, it is possible to inadvertently cause a routing loop through redistribution, as well as create suboptimal routing. For this reason, it is recommended that all best practices around route redistribution are strictly followed, including marking all prefixes that are redistributed from one protocol to another. This may be via OSPF and OMP tags, BGP communities, and so on.
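As a hedged sketch (the tag value, process IDs, and AS number are arbitrary), the following marks prefixes redistributed from BGP into OSPF and then blocks those tagged prefixes from being redistributed back into BGP, preventing a loop through mutual redistribution:

route-map BGP-TO-OSPF permit 10
 set tag 65001
!
route-map OSPF-TO-BGP deny 10
 match tag 65001
route-map OSPF-TO-BGP permit 20
!
router ospf 1
 redistribute bgp 65001 subnets route-map BGP-TO-OSPF
!
router bgp 65001
 address-family ipv4
  redistribute ospf 1 route-map OSPF-TO-BGP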

SD-WAN has various built-in mechanisms to prevent routing loops. For instance, when the OMP overlay AS has been configured, this ASN is added to the BGP as-path attribute when the SD-WAN Edge advertises the prefix into BGP. Additionally, when the SD-WAN Edge advertises an OMP prefix into OSPF, the down bit is set. The SD-WAN Edge works in a similar fashion to an MPLS PE node. However, without proper care, the mechanisms used by SD-WAN to prevent routing loops can be bypassed.
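On IOS XE SD-WAN Edges, the overlay AS is configured under OMP; a minimal sketch with a hypothetical ASN follows. Every SD-WAN Edge sharing this overlay AS will reject a BGP route carrying it in the as-path, preventing the prefix from looping back into the overlay:

sdwan
 omp
  overlay-as 65100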

SD-WAN and the Cloud

Over the past several years, enterprises have started to move heavily into the cloud. This is true even of many of the largest enterprises that were traditionally cloud averse. The strong push to a hybrid work environment, as well as the availability of integration with Secure Access Service Edge (SASE) architectures, has helped facilitate the migration to the cloud. Additionally, SD-WAN itself not only participates in SASE but also offers various solutions via Cloud OnRamp to assist in deploying into the cloud.

Cisco SD-WAN offers three virtual platforms for extending the SD-WAN environment into the cloud: the CSR 1000v, the vEdge Cloud, and the Catalyst 8000v. The first two are approaching end of life, so the virtual platform of choice moving forward should be the Catalyst 8000v. Both Amazon Web Services (AWS) and Azure offer the Catalyst 8000v with multiple compute options in various zones and regions. As with any cloud virtual deployment, the compute requirements should be carefully considered based on throughput requirements, as well as overall cost. For instance, there are scenarios where doubling the compute for a virtual SD-WAN Edge in AWS doubles the cost of the VM; however, the throughput of the SD-WAN Edge itself is not doubled. In this scenario, it is more cost-effective to double the number of virtual SD-WAN Edges deployed in AWS. Doing so doubles not only the cost and the compute resources but also the total amount of throughput in the virtual environment. Therefore, horizontal scaling in the cloud is a useful practice not just for applications but also for virtual network functions.

Designing a cloud deployment is not dissimilar from any other network deployment. Does the environment constitute a greenfield deployment or brownfield? In SD-WAN, that question is even more important than usual because the current Catalyst SD-WAN Manager versions support only greenfield integration for certain Cloud OnRamp features. If, for instance, the enterprise already has VPCs in AWS into which it wants to deploy SD-WAN virtual routers, the Catalyst SD-WAN Manager Cloud OnRamp workflows will not work in that scenario. However, whether it is brownfield or greenfield, the overall design will be the same, with the differences coming from how the virtual routers are deployed and maintained.

Either way, the virtual cloud SD-WAN Edge is configured from Catalyst SD-WAN Manager via templates just like any other SD-WAN router. The cloud environment itself may be considered another site in the SD-WAN environment. As with all routers, the virtual router supports a finite number of tunnels and finite throughput, so the control policy should be defined to ensure those thresholds are not exceeded.

SIG

One of the fundamental pieces of SASE is the Secure Internet Gateway (SIG). As applications such as Microsoft Office 365 have moved to the cloud, the traditional paradigm of direct Internet access only from the data center or another centralized location has created bottlenecks in network throughput because the Internet circuits at the centralized location were not sized for all of the application traffic. As such, enterprises look to offload the Internet application traffic at the remote site. However, this opens new concerns from a security perspective, especially because the data center environment is normally built with security inspection and defense in depth in mind.

How, then, do we secure the remote site Internet edge, ensuring that application traffic is inspected without additional hardware? The first part of the answer is SIG. With SIG, the SD-WAN Edge uses API calls to the cloud service, commonly Cisco Umbrella or a third-party vendor solution, to create a direct point-to-point encrypted tunnel to the service provider. With the addition of a SIG service route to steer Internet-destined traffic or specific application traffic across the SIG tunnel, the remote-site application traffic specified by the policy is sent encrypted to the provider. Depending on the policy and service offering, the provider then performs the required inspection on the application traffic. The provider NATs the application traffic so that return traffic for the application comes back to the cloud service before being sent to the remote site over the encrypted tunnel.
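The steering itself is accomplished with a SIG service route in the relevant service VPN. A minimal sketch on an IOS XE SD-WAN Edge (the service VPN number is hypothetical):

! Send all Internet-bound traffic in service VPN 10 to the SIG tunnels
ip sdwan route vrf 10 0.0.0.0/0 service sig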

As with almost all technologies in networking, SIG supports redundancy. We may configure active/standby tunnel pairs where one tunnel terminates in one zone or region, and the other tunnel in the pair terminates in another zone or region of the provider. Also, the SD-WAN solution probes across the tunnel to monitor state, so the application traffic may be steered through the data center in the event that the SIG pathway is not viable. Up to four active/standby tunnel pairs may be configured on a single SD-WAN Edge to achieve maximum throughput for SIG because a single tunnel's throughput is capped, with the limit depending on the software version.

In Figure 3-8, traffic destined for the enterprise uses the SD-WAN fabric across the various service providers following the various SD-WAN policies; however, traffic that is destined for the Internet follows the encrypted SIG tunnel to the SIG service provider.

Figure 3-8 SD-WAN SIG Traffic

Cloud OnRamp

The Cisco SD-WAN solution offers several enhancements as part of the Cloud OnRamp (CoR) features that facilitate SD-WAN cloud connectivity. Cloud OnRamp for SaaS allows the SD-WAN solution to integrate and properly steer application traffic for select cloud-hosted applications, such as Office 365, Dropbox, and others. With CoR SaaS, the solution probes the pathway through the DIA circuit from the site, as well as the pathway through the data center via the normal SD-WAN tunnels. Based on the probe performance and the configured policy, the SaaS application traffic is steered appropriately between the options. Cloud OnRamp for IaaS handles the provisioning of virtual SD-WAN Edge devices within the cloud provider, AWS or Azure. As part of the provisioning, the appropriate VPCs or VNets are configured based on the workflow. Additionally, Software-Defined Cloud Interconnect (SDCI), which evolved from the Cloud OnRamp for Multicloud workflow, allows for the creation of middle-mile topologies. In these workflows, the SD-WAN Edges at remote sites create SD-WAN tunnels to one of the two supported providers, Equinix or Megaport. The SDCI provider then carries the SD-WAN tunnels directly to the cloud provider over its own infrastructure, reducing the requirement for Internet traversal. All of these Cloud OnRamp options may be used separately or together. This scenario is illustrated in Figure 3-9.

Figure 3-9 SD-WAN Cloud OnRamp for SaaS

In this figure, user application traffic destined for one of the SaaS providers uses the direct Internet access at the SD-WAN site. All other traffic follows the SD-WAN fabric pathways. Configuring CoR SaaS within Catalyst SD-WAN Manager is fairly straightforward. From the Administration Settings page within Catalyst SD-WAN Manager, enable Cloud OnRamp for SaaS. Additionally, Cloud Services and Catalyst SD-WAN Analytics must be enabled from the same page. This requires entry of a one-time password and cloud gateway URL that are provided at the time of system setup. After the feature is enabled, you can use the Cloud OnRamp for SaaS configuration pages to view and manage how the SaaS applications should be monitored. Additionally, support for SaaS can be systematically deployed across the environment on a per-site basis as required.

Setting up Cloud OnRamp for IaaS or Cloud OnRamp for Multicloud requires associating the cloud service provider account. As of the 20.9 Catalyst SD-WAN Manager UI, the CoR IaaS functionality has been moved into the Cloud OnRamp Multicloud page. Because these are enterprise accounts, it is again recommended to follow best practices and security operations requirements around creating a service account for this integration. After the appropriate account has been configured within Catalyst SD-WAN Manager using the Associate Cloud Account workflow, the UI allows the user to associate and tag the VPCs that will then be used within Intent Management. The Intent Management piece is where the branch-to-cloud connectivity is defined within the workflow.

The same workflows allow the user to create middle-mile connectivity through either Megaport or Equinix via the Software-Defined Cloud Interconnect controls. Just as following the workflows allows cloud SD-WAN Edges to be provisioned in AWS or Azure, these workflows allow the circuits between middle-mile locations to be allocated as required. Figure 3-10 shows the various cloud and on-premises environments that may be interconnected via SDCI.

Figure 3-10 SD-WAN Software-Defined Cloud Interconnect

As shown in the figure, with the SDCI working in the middle of the architecture, SD-WAN is capable of creating dynamic tunnels between sites and the nearest colocation facilities. The facilities themselves then provide direct peering to application providers, direct connection to other cloud services, or global connectivity to other regions and colocation facilities.

DMVPN

Dynamic Multipoint Virtual Private Network (DMVPN) is an older WAN tunneling technology that allows for simplified configuration of a hub-and-spoke topology while offering support for dynamic spoke-to-spoke tunnels. The technology also includes support for IPsec encryption across the tunnels, allowing for secure communications between all of the enterprise locations.

Similar to SD-WAN, DMVPN is capable of carrying the SGT information across the encrypted tunnel architecture. While there are numerous advantages and benefits when using SD-WAN as compared to DMVPN, numerous DMVPN deployments exist in production today.

The details of how DMVPN itself works are discussed in various Cisco Press texts. For questions on configuring DMVPN, please review those works.

DMVPN and SDA

One of the more common DMVPN multi-domain strategies is the integration with SDA. In this scenario, the enterprise has multiple locations where each is an SDA fabric site interconnected via DMVPN. Several approaches may be taken here, depending on the enterprise’s end goals. Therefore, a clear understanding of the extent of macrosegmentation in the DMVPN environment is important for the design. For instance, the goal may be to extend macro- and microsegmentation throughout the DMVPN environment extending into the hub locations. Additionally, what is the design for any guest network or other SDA virtual network that is using VN anchoring? Are there locations where the local DMVPN router should behave as a fusion device between two or more SDA virtual networks?

For each SDA virtual network that requires segmentation across the DMVPN topology, a unique DMVPN tunnel in that VRF must be configured. While existing DMVPN deployments may already use the global routing table for the DMVPN tunnel source, it is recommended to use an isolated frontdoor VRF (fVRF) for the tunnel source with the service provider. This fVRF isolates the global or corporate routing information that traverses the tunnels from the tunnel source information with the service providers. It also inherently prevents the tunnel recursion issues that must be considered when the tunnel source and the overlay are in the same routing instance. When SDA macrosegmentation is integrated with the DMVPN environment, using a single VRF as the fVRF for all of the tunnels across all VRFs further reduces the overall complexity.
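A minimal fVRF sketch consistent with Example 3-3 later in this section (the addressing is hypothetical) places only the transport-facing interface in the Underlay VRF:

vrf definition Underlay
 address-family ipv4
 exit-address-family
!
! Transport-facing interface used as the tunnel source
interface GigabitEthernet0/0/0
 vrf forwarding Underlay
 ip address 192.0.2.10 255.255.255.0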

Note that the various DMVPN tunnels may not be required at all spoke sites. The tunnel requirement is based on which SDA virtual networks are configured at a specific location. In Figure 3-11, the red virtual network is extended from one SDA location to one remote location, while the blue virtual network exists at both SDA locations and the other remote location. The extension of an SDA virtual network within the DMVPN tunnel topology depends on the placement of the virtual networks, as well as the individual enterprise requirements. The link from the DMVPN router to the SDA border node is a dot1Q trunk mapping VLANs to SDA virtual networks. As mentioned in other chapters throughout this book, using standardized VLAN mapping will facilitate smoother and simpler operations in production.

Figure 3-11 DMVPN-SDA Topology

After a DMVPN tunnel has been configured for the respective SDA virtual network, adding SGT inline tagging to the tunnel and the interface between the SDA BN and the DMVPN router will allow for microsegmentation propagation.

Example 3-3 illustrates the use of the frontdoor VRF configuration to isolate the transport while the tunnel interface itself is in the corporate VRF with the SGT value propagated. This sample configuration would be for the headend while the remote site would have the relevant NHRP configuration changes.

Example 3-3 DMVPN-SDA Configuration

! Per-VN Tunnel Interface
interface Tunnel0
 vrf forwarding Corporate
 ip address 10.0.0.1 255.255.255.0
 tunnel vrf Underlay
 no ip redirects
 ip mtu 1400
 ip nhrp authentication test
 ip nhrp map multicast dynamic
 ip nhrp network-id 100000
 ip nhrp shortcut
 ip nhrp redirect
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 100000
 tunnel protection ipsec profile profile-dmvpn shared
 cts manual
  propagate sgt
!

DMVPN and SDA Policy Enforcement

As mentioned previously, the design requires an understanding of how DMVPN will be utilized to maintain macro- and microsegmentation. As a part of that, the question of policy enforcement should be considered within the DMVPN environment. Will the DMVPN environment simply propagate the segmentation information, will only the headends enforce policy for traffic egressing to the hub locations, or will all DMVPN devices be responsible for policy enforcement?

Based on the answer to this question, configuring policy enforcement on specific DMVPN devices may be required. For instance, if DMVPN is providing enforcement only at the headend locations, then only the hubs will require specific enforcement configuration, while the spokes will need only the propagation pieces. This is likely the most common scenario because traffic passing between SDA locations will have policy enforcement at the fabric egress point, such as the SDA fabric edge node. There will also be migration scenarios where DMVPN connects SDA fabric sites with non-SDA sites, including the hub locations.

SDA and DMVPN Best Practices

For migrations to SDA with DMVPN, it would be expected that the DMVPN topology for a single network already exists in the enterprise and that the remote locations are transitioning to SDA. For this scenario, it is recommended to stand up the additional tunnel or tunnels in DMVPN at the hub and the required spoke sites first. Routing and connectivity should be validated prior to the SDA conversion. Here, aligning standardized tunnel numbers with SDA virtual networks is a good practice to facilitate easier management, operations, and troubleshooting.

Additionally, utilize the existing DMVPN tunnel for the SDA underlay topology. This ensures that during migration, while devices are being configured in SDA, the SDA underlay management interfaces retain connectivity to Catalyst Center even if routing or connectivity issues occur in the newly deployed DMVPN tunnels. The overlay traffic moves into the new tunnel topology, leaving just the SDA underlay traffic in the original tunnel system.

DMVPN and ACI

DMVPN with ACI is similar to SD-WAN with ACI, except that there is no integration of the APIC with another controller. Just as with the SD-WAN–ACI integration, it is normally expected that macrosegmentation will be maintained between DMVPN and ACI, whereas microsegmentation propagation may not be a requirement. However, microsegmentation propagation may be configured, again with the limitation that the EPG mapping is limited to a single ACI context.

As seen in the DMVPN-SDA discussion, multiple DMVPN tunnels will be required on each DMVPN router, with one tunnel per VRF required for macrosegmentation. It is recommended to use an fVRF for the tunnel source to reduce the complexity associated with ensuring that tunnel route recursion does not occur.

Additionally, as discussed in the “SD-WAN and ACI” section earlier, the use of an aggregation layer between the ACI, DMVPN, and legacy environments will facilitate smoother migrations to either ACI, multi-tenant DMVPN, or both.

ACI and DMVPN Segmentation

It is possible with DMVPN to extend both the macrosegmentation and the microsegmentation created in ACI across the WAN topology; however, both are more complicated to configure and manage via the CLI as compared to the Cisco SD-WAN solution.

For macrosegmentation, the DMVPN environment may be made VRF aware by creating multiple DMVPN tunnels on each router for each required VRF. For each tunnel system, a unique NHRP ID and tunnel key should be utilized to ensure the various tunnels remain isolated. The use of CLI templates will greatly reduce the management overhead.
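As a hedged sketch extending Example 3-3 (the VRF name, addressing, and ID values are hypothetical, and some NHRP options are omitted for brevity), a second tenant VRF receives its own tunnel with a unique NHRP network ID and tunnel key while sharing the same fVRF transport:

! Second per-VRF tunnel; note the unique NHRP network-id and tunnel key
interface Tunnel1
 vrf forwarding TenantB
 ip address 10.0.1.1 255.255.255.0
 tunnel vrf Underlay
 ip mtu 1400
 ip nhrp map multicast dynamic
 ip nhrp network-id 100001
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 100001
 tunnel protection ipsec profile profile-dmvpn shared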

Extending microsegmentation from the ACI environment to the DMVPN environment is much more complicated and requires more manual configuration. In the preceding chapter discussing SDA, the SDA integration with ACI was shown using ISE pxGrid. Via ISE, the ACI EPG information is translated to Cisco TrustSec SGT information. With SDA, the Cisco DNA Center facilitates the management of the ISE deployment. With DMVPN, the Cisco TrustSec SGTs must be manually administered on ISE and connected to the appropriate ACI EPGs. Additionally, all of the Cisco TrustSec configuration must be applied manually. This includes adding SGT propagation on the DMVPN tunnels themselves, as shown previously, and creating SXP peerings to ISE for each managed VRF.

ACI and DMVPN Best Practices

It is best practice to standardize the DMVPN tunnel numbering for each ACI context with the VLAN used to interconnect the two at the hub locations. If an ACI tenant does not need to be extended to a specific remote site, do not configure that tenant’s tunnel at that site. This will limit the number of IPsec SAs required on each router, as well as the number of NHRP registrations.

If macrosegmentation is to be extended to the remote locations, then the macrosegmentation is inherently maintained via the various VRF tunnels. Microsegmentation propagation must be configured, if desired, using inline tagging on the tunnel configuration. For the remote endpoints in DMVPN to support Cisco TrustSec, they need to support SXP communication with ISE. The scaling of the latter should be carefully considered because one pair of ISE nodes supports only 200 SXP peerings, and peerings are counted per VRF per device. Therefore, 20 devices with 10 VRFs each can easily reach the maximum limit. ISE can scale to support 800 total SXP peerings with eight ISE nodes; however, a better scaling mechanism is to utilize an SXP reflector—a dedicated router capable of handling a higher quantity of SXP peerings.

As shown in Figure 3-12, the user creates an IP-to-SGT mapping on the ISE Policy Administration Node (PAN); the mapping could also have been created dynamically by ISE. The IP-SGT mapping is forwarded to the Policy Service Node (PSN) with SXP support enabled, and the SXP PSN forwards the update to the configured peer SXP reflectors. The SXP reflector could be an ASR 1002-HX or a Catalyst 8500 to facilitate the scaling. On the SXP reflector, the peerings are per VRF; therefore, the reflector updates only the DMVPN remote endpoints that have the specific VRF, and updates are not sent to other locations. Using the SXP reflector system reduces the load on the ISE environment while increasing the scale limits. It does increase the count of devices to manage; however, these additional routers may be considered as providing server functionality, similar to BGP route reflectors, because they do not participate in the data path.

Figure 3-12 SXP Reflector System
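On a DMVPN remote endpoint in this system, the per-VRF peering toward the reflector could be sketched as follows (the peer and source addresses, password, and VRF name are hypothetical):

cts sxp enable
cts sxp default password MySxpPassword
! Listen for IP-to-SGT mappings from the SXP reflector in the Corporate VRF
cts sxp connection peer 10.200.0.5 source 10.200.0.9 password default mode local listener vrf Corporate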

Summary

Both Cisco SD-WAN and DMVPN solutions integrate well with the other domains, allowing the enterprise to extend the business intent and segmentation across the WAN environment between domains. While both solutions provide macrosegmentation via VRFs and microsegmentation by propagating the SGT value from one side of the WAN to the other, the management and configuration of the segmentation are quite different. For SD-WAN, the Catalyst SD-WAN Manager facilitates management of the SD-WAN Edge routers, whereas DMVPN requires manual configuration or use of another automation tool to manage the configurations. Additionally, DMVPN requires a unique tunnel system for each macrosegmented VRF, whereas the SD-WAN solution uses a single tunnel system with the ability to create logical topologies per VRF. In both solutions, having a single standard, such as VLAN-to-VRF mapping, used at all of the remote locations improves management and operational efficiencies.

