RFC 9730 | GMPLS and Controller Interwork | January 2025 |
Zheng, et al. | Informational | [Page] |
Generalized Multiprotocol Label Switching (GMPLS) control allows each network element (NE) to perform local resource discovery, routing, and signaling in a distributed manner.¶
The advancement of software-defined transport networking technology enables a group of NEs to be managed through centralized controller hierarchies. This helps to tackle challenges arising from multiple domains, vendors, and technologies. An example of such a centralized architecture is the Abstraction and Control of Traffic-Engineered Networks (ACTN) controller hierarchy, as described in RFC 8453.¶
Both the distributed and centralized control planes have their respective advantages and should complement each other in the system, rather than compete. This document outlines how the GMPLS distributed control plane can work together with a centralized controller system in a transport network.¶
This document is not an Internet Standards Track specification; it is published for informational purposes.¶
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are candidates for any level of Internet Standard; see Section 2 of RFC 7841.¶
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc9730.¶
Copyright (c) 2025 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Generalized Multiprotocol Label Switching (GMPLS) [RFC3945] extends MPLS to support different classes of interfaces and switching capabilities such as Time-Division Multiplex Capable (TDM), Lambda Switch Capable (LSC), and Fiber-Switch Capable (FSC). Each network element (NE) running a GMPLS control plane collects network information from other NEs and supports service provisioning through signaling in a distributed manner. A more generic description of traffic-engineering networking information exchange can be found in [RFC7926].¶
On the other hand, Software-Defined Networking (SDN) technologies have been introduced to control the transport network centrally. Centralized controllers can collect network information from each node and provision services on corresponding nodes. One example is the Abstraction and Control of Traffic-Engineered Networks (ACTN) [RFC8453], which defines a hierarchical architecture with the Provisioning Network Controller (PNC), Multi-Domain Service Coordinator (MDSC), and Customer Network Controller (CNC) as centralized controllers for different network abstraction levels. A PCE-based approach has been proposed in [RFC7491]: Application-Based Network Operations (ABNO).¶
GMPLS can be used to control network elements (NEs) in such centralized controller architectures. A centralized controller may support GMPLS-enabled domains and communicate with a GMPLS-enabled domain where the GMPLS control plane handles service provisioning from ingress to egress. In this scenario, the centralized controller sends a request to the entry node and does not need to configure all NEs along the path within the domain from ingress to egress, thus leveraging the GMPLS control plane. This document describes how the GMPLS control plane interworks with a centralized controller system in a transport network.¶
The following abbreviations are used in this document.¶
This section provides an overview of the GMPLS control plane, centralized controller systems, and their interactions in transport networks.¶
A transport network [RFC5654] is a server-layer network designed to provide connectivity services to a client layer, allowing client traffic to be carried seamlessly across the server-layer network resources.¶
GMPLS separates the control plane and the data plane to support time-division, wavelength, and spatial switching, which are significant in transport networks. For NE-level control in GMPLS, each node runs a GMPLS control plane instance. Functionalities such as service provisioning, protection, and restoration can be performed via GMPLS communication among multiple NEs. At the same time, the GMPLS control plane instance can also collect information about node and link resources in the network to construct the network topology and compute routing paths to serve service requests.¶
Several protocols have been designed for the GMPLS control plane [RFC3945], including link management [RFC4204], signaling [RFC3471], and routing [RFC4202] protocols. The GMPLS control plane instances applying these protocols communicate with each other to exchange resource information and establish LSPs. In this way, GMPLS control plane instances in different nodes in the network have the same view of the network topology and provision services based on local policies.¶
With the development of SDN technologies, a centralized controller architecture has been introduced to transport networks. One example architecture can be found in ACTN [RFC8453]. In such systems, a controller is aware of the network topology and is responsible for provisioning incoming service requests.¶
Multiple hierarchies of controllers are designed at different levels to implement different functions. This kind of architecture enables multi-vendor, multi-domain, and multi-technology control. For example, a higher-level controller coordinates several lower-level controllers controlling different domains for topology collection and service provisioning. Vendor-specific features can be abstracted between controllers, and a standard API (e.g., generated from RESTCONF [RFC8040] / YANG [RFC7950]) may be used.¶
Besides GMPLS and the interactions among the controller hierarchies, it is also necessary for the controllers to communicate with the network elements. Within each domain, GMPLS control can be applied to each NE. The bottom-level centralized controller can act as an NE to collect network information and initiate LSPs. Figure 1 shows an example of GMPLS interworking with centralized controllers (ACTN terminologies are used in the figure).¶
Figure 1 shows the scenario with two GMPLS domains and one non-GMPLS domain. This system supports the interworking among non-GMPLS domains, GMPLS domains, and the controller hierarchies.¶
For domain 1, the network elements are not GMPLS-enabled, so control comes purely from the controller, via the Network Configuration Protocol (NETCONF) [RFC6241] / YANG and/or the PCE Communication Protocol (PCEP) [RFC5440].¶
For domains 2 and 3:¶
This document focuses on the interworking between GMPLS and the centralized controller system, including:¶
For convenience, this document uses the following terminologies for the controller and the orchestrator:¶
In GMPLS control, the link connectivity must be verified between each pair of nodes. In this way, link resources, which are fundamental resources in the network, are discovered by both ends of the link.¶
In GMPLS control, link state information is flooded within the network as defined in [RFC4202]. Each node in the network can build the network topology according to the flooded link state information. Routing protocols such as OSPF-TE [RFC4203] and IS-IS-TE [RFC5307] have been extended to support different interfaces in GMPLS.¶
In a centralized controller system, the centralized controller can be placed in the GMPLS network and passively receives the IGP information flooded in the network. In this way, the centralized controller can construct and update the network topology.¶
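As a rough illustration of this flooding-based view, the sketch below builds a topology map from a list of link-state advertisements. This is only an illustrative model under assumed field names, not the OSPF-TE encoding.¶

```python
# Illustrative sketch: a controller (or NE) builds a TE topology view from
# flooded link-state advertisements. The advertisement fields ("local_node",
# "remote_node", "te_metric") are assumptions, not the OSPF-TE format.
from collections import defaultdict

def build_topology(advertisements):
    """Build {node: {neighbor: te_metric}} from flooded link advertisements."""
    topo = defaultdict(dict)
    for ad in advertisements:
        # Each advertisement describes one unidirectional TE link.
        topo[ad["local_node"]][ad["remote_node"]] = ad["te_metric"]
    return topo

ads = [
    {"local_node": "A", "remote_node": "B", "te_metric": 10},
    {"local_node": "B", "remote_node": "A", "te_metric": 10},
    {"local_node": "B", "remote_node": "C", "te_metric": 5},
]
topo = build_topology(ads)
# topo["B"]: {"A": 10, "C": 5}
```

A passive listener on the IGP would feed each received advertisement into such a structure and keep it updated as links are re-advertised or withdrawn.¶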
OSPF-TE is introduced for TE networks in [RFC3630]. OSPF extensions have been defined in [RFC4203] to enable the advertisement of link-state information for GMPLS networks. Based on this work, OSPF has been extended to support technology-specific routing. The routing protocols for the Optical Transport Network (OTN), Wavelength Switched Optical Network (WSON), and optical flexi-grid networks are defined in [RFC7138], [RFC7688], and [RFC8363], respectively.¶
IS-IS-TE is introduced for TE networks in [RFC5305], extended to support GMPLS routing functions in [RFC5307], and updated in [RFC7074] to support the latest GMPLS Switching Capability and Type fields.¶
The NETCONF [RFC6241] and RESTCONF [RFC8040] protocols were originally designed for network configuration. They can also carry topology-related YANG modules, such as those in [RFC8345] and [RFC8795], and provide a powerful mechanism for notifying the client of topology changes (in addition to provisioning and monitoring).¶
Once a controller learns the network topology, it can utilize the available resources to serve service requests by performing path computation. Due to abstraction, a controller may not have sufficient information to compute an optimal path. In this case, it can interact with other controllers by sending, for example, YANG-based path computation requests [PATH-COMP] or PCEP requests, to compute a set of potentially optimal paths; then, based on its constraints, policy, and specific knowledge (e.g., the cost of an access link), the controller can choose the most suitable path for end-to-end (E2E) service path setup.¶
Path computation is one of the key objectives of various types of controllers. In the given architecture, several different components may have the capability to compute paths.¶
In GMPLS control, a routing path may be computed by the ingress node [RFC3473] based on the ingress node Traffic Engineering Database (TED). In this case, constraint-based path computation is performed according to the local policy of the ingress node.¶
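The constraint-based computation described above can be sketched as follows: links that fail the constraint (here, insufficient unreserved bandwidth) are pruned from the TED, and a shortest path is computed over what remains. The TED layout and field names are illustrative assumptions, not any standard encoding.¶

```python
# Hedged sketch of constraint-based path computation (CSPF) over a TED:
# prune links that cannot carry the demand, then run Dijkstra on the rest.
import heapq

def cspf(ted, src, dst, min_bw):
    """ted: {node: [(neighbor, te_metric, unreserved_bw), ...]}"""
    pq = [(0, src, [src])]            # (cost so far, node, path)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, metric, bw in ted.get(node, []):
            if bw >= min_bw and nbr not in visited:   # constraint pruning
                heapq.heappush(pq, (cost + metric, nbr, path + [nbr]))
    return None                        # no feasible path

ted = {
    "A": [("B", 1, 100), ("C", 1, 10)],
    "B": [("D", 1, 100)],
    "C": [("D", 1, 100)],
}
best = cspf(ted, "A", "D", 50)
# best: (2, ["A", "B", "D"]) -- the low-bandwidth A-C link is pruned
```

An ingress node applying local policy would run a computation of this shape over its own TED before starting signaling.¶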
The PCE was first introduced in [RFC4655] as a functional component that offers services for computing paths within a network. In [RFC5440], path computation is achieved using the TED, which maintains a view of the link resources in the network. The introduction of the PCE has significantly improved the quality of network planning and offline computation. However, there is a potential risk that the computed path may be infeasible when there is a diversity requirement, as a stateless PCE lacks knowledge about previously computed paths.¶
To address this issue, a stateful PCE has been proposed in [RFC8231]. Besides the TED, an additional LSP Database (LSP-DB) is introduced to archive each LSP computed by the PCE. This way, the PCE can easily determine the relationship between the computing path and former computed paths. In this approach, the PCE provides computed paths to the PCC, and then the PCC decides which path is deployed and when it is to be established.¶
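A minimal sketch of the LSP-DB idea, using a toy in-memory structure rather than the PCEP encodings: keeping the links of previously computed LSPs lets the PCE check a newly computed path against a diversity (link-disjointness) requirement.¶

```python
# Toy sketch of a stateful PCE's LSP-DB: record the links used by delivered
# LSPs, then check a candidate path for link-disjointness against one of them.
# Class and method names are illustrative assumptions.

def links_of(path):
    """Path as a node list -> set of undirected links."""
    return {frozenset(hop) for hop in zip(path, path[1:])}

class LspDb:
    def __init__(self):
        self.lsps = {}                     # lsp_id -> node list

    def store(self, lsp_id, path):
        self.lsps[lsp_id] = path

    def is_link_disjoint(self, path, lsp_id):
        """True if `path` shares no link with the stored LSP `lsp_id`."""
        return not (links_of(path) & links_of(self.lsps[lsp_id]))

db = LspDb()
db.store("working", ["A", "B", "C"])
# ["A", "D", "C"] is disjoint; ["A", "B", "D", "C"] shares link A-B.
```

A stateless PCE has no such record, which is exactly the risk noted above for diversity requirements.¶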
With PCE-initiated LSPs [RFC8281], the PCE can trigger the PCC to perform setup, maintenance, and teardown of the PCE-initiated LSP under the stateful PCE model. This would allow a dynamic network that is centrally controlled and deployed.¶
In a centralized controller system, the PCE can be implemented within the centralized controller. The centralized controller then calculates paths based on its local policies. Alternatively, the PCE can be located outside of the centralized controller. In this scenario, the centralized controller functions as a PCC and sends a path computation request to the PCE using the PCEP. A reference architecture for this can be found in [RFC7491].¶
Signaling mechanisms are used to set up LSPs in GMPLS control. Messages are sent hop by hop between the ingress node and the egress node of the LSP to allocate labels. Once the labels are allocated along the path, the LSP setup is accomplished. Signaling protocols such as Resource Reservation Protocol - Traffic Engineering (RSVP-TE) [RFC3473] have been extended to support different interfaces in GMPLS.¶
RSVP-TE is introduced in [RFC3209] and extended to support GMPLS signaling in [RFC3473]. Several label formats are defined for a generalized label request, a generalized label, a suggested label, and label sets. Based on [RFC3473], RSVP-TE has been extended to support technology-specific signaling. The RSVP-TE extensions for the OTN, WSON, and optical flexi-grid network are defined in [RFC7139], [RFC7689], and [RFC7792], respectively.¶
Topology information is necessary on both network elements and controllers. The topology on a network element is usually raw information, while the topology used by the controller can be either raw, reduced, or abstracted. Three different abstraction methods have been described in [RFC8453], and different controllers can select the corresponding method depending on the application.¶
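One of the abstraction styles in [RFC8453] (representing a whole domain as a single abstract node) can be sketched as follows, under an illustrative data layout: every intra-domain link is hidden, and only inter-domain links survive in the abstracted view.¶

```python
# Hedged sketch of "domain as a single abstract node" abstraction: collapse
# each domain into one node and keep only the inter-domain links. The data
# layout is an assumption for illustration.

def abstract_as_single_nodes(links, domain_of):
    """links: iterable of (node_a, node_b); return abstract inter-domain links."""
    abstract = set()
    for a, b in links:
        da, db = domain_of[a], domain_of[b]
        if da != db:                      # intra-domain links are hidden
            abstract.add((da, db))
    return abstract

links = [("A1", "A2"), ("A2", "B1"), ("B1", "B2")]
domain_of = {"A1": "D1", "A2": "D1", "B1": "D2", "B2": "D2"}
abstract = abstract_as_single_nodes(links, domain_of)
# abstract: {("D1", "D2")} -- only the inter-domain link is exposed
```

A higher-level controller sees only the abstract view, which is why it may need to interact with lower-level controllers for detailed path computation, as discussed in Section 6.¶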
When there are changes in the network topology, the impacted network elements need to report the changes to all the other network elements, as well as to the controller, to synchronize the topology information. Inter-NE synchronization can be achieved via the protocols mentioned in Sections 4 and 5. Topology synchronization between NEs and controllers can be achieved either by routing protocols (OSPF-TE / PCEP-LS [PCEP-LS]) or by NETCONF protocol notifications with a YANG module.¶
Service provisioning can be deployed based on the topology information on controllers and network elements. Many methods have been specified for single-domain service provisioning, such as the PCEP and RSVP-TE methods.¶
Multi-domain service provisioning would require coordination among the controller hierarchies. Given the service request, the end-to-end delivery procedure may include interactions at any level (i.e., interface) in the hierarchy of the controllers (e.g., MPI and SBI for ACTN). The computation for a cross-domain path is usually completed by controllers who have a global view of the topologies. Then the configuration is decomposed into lower-level controllers to configure the network elements to set up the path.¶
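The decomposition step above can be sketched as follows, with illustrative names: the orchestrator splits the computed cross-domain path into per-domain segments, one to hand to each lower-level controller.¶

```python
# Hedged sketch: decompose an end-to-end cross-domain path into per-domain
# segments by domain membership. Names and data layout are assumptions.
from itertools import groupby

def decompose(path, domain_of):
    """Split a node list into (domain, segment) pieces."""
    return [
        (domain, [node for _, node in group])
        for domain, group in groupby(
            ((domain_of[n], n) for n in path), key=lambda item: item[0]
        )
    ]

path = ["A1", "A2", "B1", "B2", "C1"]
domain_of = {"A1": "D1", "A2": "D1", "B1": "D2", "B2": "D2", "C1": "D3"}
segments = decompose(path, domain_of)
# segments: [("D1", ["A1", "A2"]), ("D2", ["B1", "B2"]), ("D3", ["C1"])]
```

Each (domain, segment) pair would then be sent to the corresponding lower-level controller over its interface (e.g., the MPI in ACTN) for intra-domain configuration.¶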
A combination of centralized and distributed protocols may be necessary to interact between network elements and controllers. Several methods can be used to create the inter-domain path:¶
With an end-to-end RSVP-TE session:¶
In this method, all the domains need to support the RSVP-TE protocol and thus need to be GMPLS domains. The Controller(G) of the source domain triggers the source node to create the end-to-end RSVP-TE session; and the assignment and distribution of the labels on the inter-domain links are done by the border nodes of each domain, using RSVP-TE protocol. Therefore, this method requires the interworking of RSVP-TE protocols between different domains.¶
There are two possible methods:¶
One single end-to-end RSVP-TE session:¶
In this method, an end-to-end RSVP-TE session from the source node to the destination node is used to create the inter-domain path. A typical example is the PCE initiation scenario, in which a PCE message (PCInitiate) is sent from the Controller(G) to the source node, triggering an RSVP-TE procedure along the path. Alternatively, the interaction between the controller and the source node of the source domain can be achieved using the NETCONF protocol with corresponding YANG modules, with the LSP setup then completed by running RSVP-TE among the network elements.¶
LSP Stitching:¶
The LSP stitching method defined in [RFC5150] can also create the E2E LSP. When the source node receives an end-to-end path creation request (e.g., via the PCEP or NETCONF protocol), it starts an end-to-end RSVP-TE session along the endpoints of the LSP segment (S-LSP, as defined in [RFC5150]) of each domain, to assign the labels on the inter-domain links between each pair of neighboring S-LSPs and to stitch the end-to-end LSP to each S-LSP. See Figure 2 for an example.¶
Note that the S-LSP in each domain can either be created by its Controller(G) in advance or be created dynamically, triggered by the end-to-end RSVP-TE session.¶
Without an end-to-end RSVP-TE session:¶
In this method, each domain can be a GMPLS domain or a non-GMPLS domain. Each controller (which may be a Controller(G) or a Controller(N)) is responsible for creating the path segment within its domain. The border node does not need to communicate with other border nodes in other domains for the distribution of labels on inter-domain links, so an end-to-end RSVP-TE session through multiple domains is not required, and the interworking of the RSVP-TE protocol between different domains is not needed.¶
Note that path segments in the source domain and the destination domain are "asymmetrical" segments, because the configuration of client signal mapping into the server-layer tunnel is needed at only one end of the segment, while configuration of the server-layer cross-connect is needed at the other end of the segment. See the example in Figure 3.¶
The PCEP / GMPLS protocols should support the creation of such asymmetrical segments.¶
Note also that mechanisms to assign the labels in the inter-domain links also need to be considered. There are two possible methods:¶
Inter-domain labels assigned by NEs:¶
The concept of a stitching label, which allows local path segments to be stitched together to form an inter-domain path crossing several different domains, was introduced in [RFC5150] and [SPCE-ID]. [SPCE-ID] also describes the Backward Recursive PCE-Based Computation (BRPC) [RFC5441] and Hierarchical PCE (H-PCE) [RFC8685] PCInitiate procedures: the ingress node of each downstream domain assigns the stitching label for the inter-domain link between that domain and its upstream neighbor domain; the stitching label is then passed to the upstream neighbor domain via the PCE protocol and is used for the path segment creation in the upstream neighbor domain.¶
Inter-domain labels assigned by the controller:¶
If the resources of inter-domain links are managed by the Orchestrator(MD), each domain controller can provide to the Orchestrator(MD) the list of available labels (e.g., time slots, if the OTN is the scenario) using the IETF Topology YANG module and a related technology-specific extension. Once the Orchestrator(MD) has computed the E2E path, RSVP-TE or PCEP can be used in the different domains to set up the related segment tunnel consisting of label inter-domain information; for example, for PCEP, the label Explicit Route Object (ERO) can be included in the PCInitiate message to indicate the inter-domain labels so that each border node of each domain can configure the correct cross-connect within itself.¶
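A minimal sketch of this controller-side label selection, assuming a toy representation of the reported label sets (the actual reporting uses the IETF Topology YANG module and technology-specific extensions): the orchestrator intersects the labels available on both ends of the inter-domain link and picks one.¶

```python
# Hedged sketch: the Orchestrator(MD) picks a label (e.g., an OTN time slot)
# for an inter-domain link by intersecting the available-label sets reported
# by the two adjacent domains. The reporting format is an assumption.

def assign_interdomain_label(avail_upstream, avail_downstream):
    """Return the lowest label free on both sides of the link, or None."""
    common = set(avail_upstream) & set(avail_downstream)
    return min(common) if common else None

# Time slots each domain reports as free on its end of the link:
label = assign_interdomain_label({1, 3, 5, 8}, {2, 3, 8})
# label: 3 -- e.g., carried in a label ERO of a PCInitiate message
```

The chosen label would then be indicated to the border nodes (e.g., via the label ERO) so that each can configure the correct cross-connect.¶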
GMPLS can interwork with centralized controller systems in multi-layer networks.¶
An example with two layers of network is shown in Figure 4. In this example, the GMPLS control plane is enabled in at least one layer network (otherwise, it is out of the scope of this document) and interworks with the controller of its domain (H-Controller and L-Controller, respectively). The Orchestrator(ML) is used to coordinate the control of the multi-layer network.¶
[RFC5623] describes three inter-layer path computation models and four inter-layer path control models:¶
Section 4.2.4 of [RFC5623] also provides all the possible combinations of inter-layer path computation and inter-layer path control models.¶
To apply [RFC5623] in a multi-layer network with GMPLS-controller interworking, the H-Controller and the L-Controller can act as the PCE Hi and PCE Lo, respectively; and typically, the Orchestrator(ML) can act as a VNTM because it has the abstracted view of both the higher-layer and lower-layer networks.¶
Table 1 shows all possible combinations of path computation and path control models in a multi-layer network with GMPLS-controller interworking:¶
Path control model \ Path computation model | Single PCE (Not applicable) | Multiple PCE with inter-PCE | Multiple PCE w/o inter-PCE |
---|---|---|---|
PCE-VNTM cooperation | N/A | Yes | Yes |
Higher-layer signaling trigger | N/A | Yes | Yes |
NMS-VNTM cooperation (integrated flavor) | N/A | Yes (1) | No (1) |
NMS-VNTM cooperation (separate flavor) | N/A | No (1) | Yes (1) |
Note that:¶
Since there is one PCE in each layer network, the path computation model "Single PCE path computation" is not applicable (N/A).¶
For the other two path computation models "Multiple PCE with inter-PCE" and "Multiple PCE w/o inter-PCE", the possible combinations are the same as defined in [RFC5623]. More specifically:¶
(1) The path control models "NMS-VNTM cooperation (integrated flavor)" and "NMS-VNTM cooperation (separate flavor)" are the typical models used in a multi-layer network with GMPLS-controller interworking, because in these two models the path computation is triggered by the NMS or the VNTM; in the centralized controller system, path computation requests typically come from the Orchestrator(ML) (acting as the VNTM).¶
For the other two path control models, "PCE-VNTM cooperation" and "Higher-layer signaling trigger", the path computation is triggered by the NEs, i.e., the NE performs PCC functions. These two models can still be used, although they are not the main methods.¶
In a multi-layer network, an LSP created in the lower-layer network can constitute a new link in the higher-layer network. Such a lower-layer LSP is called a Hierarchical LSP (H-LSP); see [RFC6107].¶
The new link constructed by the H-LSP can then be used by the higher-layer network to create new LSPs.¶
As described in [RFC5212], two methods are introduced to create the H-LSP: the static (pre-provisioned) method and the dynamic (triggered) method.¶
Static (pre-provisioned) method:¶
In this method, the H-LSP in the lower-layer network is created in advance. After that, the higher-layer network can create LSPs using the resource of the link constructed by the H-LSP.¶
The Orchestrator(ML) is responsible for deciding on the creation of the H-LSP in the lower-layer network if it acts as a VNTM. It then requests the L-Controller to create the H-LSP via, for example, an MPI interface under the ACTN architecture. See Section 3.3.2 of [YANG-TE].¶
If the lower-layer network is a GMPLS domain, the L-Controller(G) can trigger the GMPLS control plane to create the H-LSP. As a typical example, the PCInitiate message can be used for the communication between the L-Controller and the source node of the H-LSP, and the source node of the H-LSP can then trigger the RSVP-TE signaling procedure to create the H-LSP, as described in [RFC6107].¶
If the lower-layer network is a non-GMPLS domain, other methods may be used by the L-Controller(N) to create the H-LSP, which is out of scope of this document.¶
Dynamic (triggered) method:¶
In this method, the signaling of LSP creation in the higher-layer network will trigger the creation of H-LSP in the lower-layer network dynamically, if it is necessary. Therefore, both the higher-layer and lower-layer networks need to support the RSVP-TE protocol and thus need to be GMPLS domains.¶
In this case, after the cross-layer path is computed, the Orchestrator(ML) requests the H-Controller(G) for the cross-layer LSP creation. As a typical example, the MPI interface under the ACTN architecture could be used.¶
The H-Controller(G) can trigger the GMPLS control plane to create the LSP in the higher-layer network. As a typical example, the PCInitiate message can be used for the communication between the H-Controller(G) and the source node of the higher-layer LSP, as described in Section 4.3 of [RFC8282]. At least two sets of ERO information should be included to indicate the routes of higher-layer LSP and lower-layer H-LSP.¶
The source node of the higher-layer LSP follows the procedure defined in Section 4 of [RFC6001] to trigger the GMPLS control plane in both the higher-layer network and the lower-layer network to create the higher-layer LSP and the lower-layer H-LSP.¶
On success, the source node of the H-LSP should report the information of the H-LSP to the L-Controller(G) via, for example, the PCRpt message.¶
If the higher-layer network and the lower-layer network are under the same GMPLS control plane instance, the H-LSP can be a Forwarding Adjacency LSP (FA-LSP). Then the information of the link constructed by this FA-LSP can be advertised in the routing instance, so that the H-Controller can be aware of this new FA. [RFC4206] and the following updates to it (including [RFC6001] and [RFC6107]) describe the detailed extensions to support advertisement of an FA.¶
If the higher-layer network and the lower-layer network are under separate GMPLS control plane instances or if one of the layer networks is a non-GMPLS domain, after an H-LSP is created in the lower-layer network, the link discovery procedure will be triggered in the higher-layer network to discover the information of the link constructed by the H-LSP. The LMP protocol defined in [RFC4204] can be used if the higher-layer network supports GMPLS. The information of this new link will be advertised to the H-Controller.¶
The GMPLS recovery functions are described in [RFC4426]. Span protection and end-to-end protection and restoration are discussed with different protection schemes and message exchange requirements. Related RSVP-TE extensions to support end-to-end recovery are described in [RFC4872]. The extensions in [RFC4872] include protection, restoration, preemption, and rerouting mechanisms for an end-to-end LSP. Besides end-to-end recovery, a GMPLS segment recovery mechanism is defined in [RFC4873], which also intends to be compatible with Fast Reroute (FRR) (see [RFC4090], which defines RSVP-TE extensions for the FRR mechanism, and [RFC8271], which describes the updates of the GMPLS RSVP-TE protocol for FRR of GMPLS TE-LSPs).¶
Span protection refers to the protection of the link between two neighboring switches. The main protocol requirements include:¶
Link management: Link property correlation on the link protection type¶
GMPLS already supports the above requirements, and there are no new requirements in the scenario of interworking between GMPLS and a centralized controller system.¶
The LSP protection includes end-to-end and segment LSP protection. For both cases:¶
In the provisioning phase:¶
In both single-domain and multi-domain scenarios, the disjoint path computation can be done by the centralized controller system, as it has the global topology and resource view, and the path creation can be done by the procedure described in Section 8.2.¶
In the protection switchover phase:¶
In both single-domain and multi-domain scenarios, the existing standards provide the distributed way to trigger the protection switchover, for example, the data plane Automatic Protection Switching (APS) mechanism described in [G.808.1], [RFC7271], and [RFC8234] or the GMPLS Notify mechanism described in [RFC4872] and [RFC4873]. In the scenario of interworking between GMPLS and a centralized controller system, using these distributed mechanisms rather than a centralized mechanism (i.e., the controller triggers the protection switchover) can significantly shorten the protection switching time.¶
Pre-planned LSP protection (including shared-mesh restoration):¶
In pre-planned protection, the protecting LSP is established only in the control plane during the provisioning phase and is activated in the data plane once a failure occurs.¶
In the scenario of interworking between GMPLS and a centralized controller system, the route of the protecting LSP can be computed by the centralized controller system. This has the advantage of making better use of network resources, especially for resource sharing in shared-mesh restoration.¶
Full LSP rerouting:¶
In full LSP rerouting, the normal traffic will be switched to an alternate LSP that is fully established only after a failure occurrence.¶
As described in [RFC4872] and [RFC4873], the alternate route can be computed on demand when there is a failure occurrence or can be pre-computed and stored before a failure occurrence.¶
In a fully distributed scenario, the pre-computation method offers a faster restoration time but carries the risk that the pre-computed alternate route becomes out of date due to changes in the network.¶
In the scenario of interworking between GMPLS and a centralized controller system, the pre-computation of the alternate route could take place in the centralized controller (and the route may be stored in the controller or in the head-end node of the LSP). In this way, any change in the network can trigger the centralized controller to refresh the alternate route, ensuring that it does not become out of date.¶
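The refresh logic can be sketched as follows, under an illustrative data layout: on a link failure (or other topology change), the controller flags every stored alternate route that uses the affected link for recomputation.¶

```python
# Hedged sketch: when the controller learns of a topology change, it checks
# each stored pre-computed alternate route and flags the stale ones for
# refresh. Structures and names are assumptions for illustration.

def links_of(path):
    """Path as a node list -> set of undirected links."""
    return {frozenset(hop) for hop in zip(path, path[1:])}

def stale_alternates(alternates, failed_link):
    """alternates: {lsp_id: node list}; return ids whose route uses the link."""
    failed = frozenset(failed_link)
    return {lsp_id for lsp_id, path in alternates.items()
            if failed in links_of(path)}

alternates = {"lsp1": ["A", "B", "C"], "lsp2": ["A", "D", "C"]}
to_refresh = stale_alternates(alternates, ("B", "C"))
# to_refresh: {"lsp1"} -- only routes hit by the change are recomputed
```

Only the flagged routes need recomputation, so the stored alternates stay current without recomputing everything on every event.¶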
A working LSP may traverse multiple domains, each of which may or may not support a GMPLS distributed control plane.¶
If all the domains support GMPLS, both the end-to-end rerouting method and the domain segment rerouting method could be used.¶
If only some domains support GMPLS, the domain segment rerouting method could be used in those GMPLS domains. For other domains that do not support GMPLS, other mechanisms may be used to protect the LSP segments, which are out of scope of this document.¶
End-to-end rerouting:¶
In this scenario, a failure on the working LSP inside any domain or on the inter-domain links will trigger the end-to-end restoration.¶
In both pre-planned and full LSP rerouting, the end-to-end protecting LSP could be computed by the centralized controller system and could be created by the procedure described in Section 8.2. Note that the end-to-end protecting LSP may traverse different domains from the working LSP, depending on the result of multi-domain path computation for the protecting LSP.¶
Domain segment rerouting:¶
Intra-domain rerouting:¶
If failure occurs on the working LSP segment in a GMPLS domain, the segment rerouting [RFC4873] could be used for the working LSP segment in that GMPLS domain. Figure 6 shows an example of intra-domain rerouting.¶
The intra-domain rerouting of a non-GMPLS domain is out of scope of this document.¶
Inter-domain rerouting:¶
If intra-domain segment rerouting fails (e.g., due to a lack of resources in that domain), or if a failure occurs on the working LSP on an inter-domain link, the centralized controller system may coordinate with other domain(s) to find an alternative path or path segment to bypass the failure and then trigger the inter-domain rerouting procedure. Note that the rerouting path or path segment may traverse different domains from the working LSP.¶
The domains involved in the inter-domain rerouting procedure need to be GMPLS domains, which support the RSVP-TE signaling for the creation of a rerouting LSP segment.¶
For inter-domain rerouting, the interaction between GMPLS and a centralized controller system is needed:¶
A report of the result of intra-domain segment rerouting to its Controller(G) and then to the Orchestrator(MD). The former could be supported by the PCRpt message in [RFC8231], while the latter could be supported by the MPI interface of ACTN.¶
A report of inter-domain link failure to the two Controllers (e.g., Controller(G) 1 and Controller(G) 2 in Figure 7) by which the two ends of the inter-domain link are controlled, respectively, and then to the Orchestrator(MD). The former could be done as described in Section 8.1, while the latter could be supported by the MPI interface of ACTN.¶
The computation of a rerouting path or path segment crossing multi-domains by the centralized controller system (see [PATH-COMP]);¶
The creation of a rerouting LSP segment in each related domain. The Orchestrator(MD) can send the LSP segment rerouting request to the source Controller(G) (e.g., Controller(G) 1 in Figure 7) via MPI interface, and then the Controller(G) can trigger the creation of a rerouting LSP segment through multiple GMPLS domains using GMPLS rerouting signaling. Note that the rerouting LSP segment may traverse a new domain that the working LSP does not traverse (e.g., Domain 3 in Figure 7).¶
[RFC4090] defines two methods of fast reroute: the one-to-one backup method and the facility backup method. For both methods:¶
Path computation of protecting LSP:¶
In Section 6.2 of [RFC4090], the protecting LSP (detour LSP in one-to-one backup or bypass tunnel in facility backup) could be computed by the Point of Local Repair (PLR) using, for example, a Constrained Shortest Path First (CSPF) computation. In the scenario of interworking between GMPLS and a centralized controller system, the protecting LSP could also be computed by the centralized controller system, as it has the global view of the network topology, resources, and information of LSPs.¶
Protecting LSP creation:¶
In the scenario of interworking between GMPLS and a centralized controller system, the protecting LSP could still be created by the RSVP-TE signaling protocol as described in [RFC4090] and [RFC8271].¶
In addition, if the protecting LSP is computed by the centralized controller system, the Secondary Explicit Route Object defined in [RFC4873] could be used to explicitly indicate the route of the protecting LSP.¶
Failure detection and traffic switchover:¶
When a PLR detects a failure, it can significantly shorten the protection switching time by using the distributed mechanisms described in [RFC4090] to switch the traffic to the related detour LSP or bypass tunnel, rather than performing the switchover in a centralized way.¶
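The local-repair behavior described above can be sketched as follows. This is an illustrative model of a PLR under stated assumptions, not an implementation of [RFC4090]; the class and attribute names are hypothetical.¶

```python
# Illustrative sketch of Point of Local Repair (PLR) behavior for fast
# reroute in the spirit of [RFC4090]: on local failure detection,
# traffic is switched to a pre-established detour LSP (one-to-one
# backup) or bypass tunnel (facility backup) without waiting for a
# controller round-trip. Names are hypothetical.

class PLR:
    def __init__(self):
        # backup[protected_link] -> pre-established detour/bypass LSP
        self.backup = {}
        # active[lsp_id] -> outgoing link or backup LSP currently in use
        self.active = {}

    def install_backup(self, link, backup_lsp):
        """Pre-establish the protecting LSP before any failure occurs
        (its route may have been computed by the centralized controller
        system and signaled explicitly)."""
        self.backup[link] = backup_lsp

    def on_link_failure(self, link):
        """Local repair: switch every LSP using the failed link onto its
        backup. No controller interaction is needed on the fast path;
        the controller is only notified afterwards."""
        switched = []
        for lsp_id, out in self.active.items():
            if out == link and link in self.backup:
                self.active[lsp_id] = self.backup[link]
                switched.append(lsp_id)
        return switched  # report these to the controller asynchronously
```

The key point the sketch captures is that `on_link_failure` consults only local state, which is why the distributed switchover is faster than a centralized one.¶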
The reliability of the controller is crucial due to its important role in the network. It is essential that if the controller is shut down or disconnected from the network, all currently provisioned services in the network continue to function and carry traffic. In addition, protection switching to pre-established paths should also work. It is desirable to have protection mechanisms, such as redundancy, to maintain full operational control even if one instance of the controller fails. This can be achieved through controller backup or functionality backup, and several controller backup and federation mechanisms exist in the literature. Backing up functionality in the network elements themselves further improves reliability and helps guarantee network performance.¶
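One common way to realize controller redundancy is heartbeat-based failover between a primary and one or more standby controllers. The following is a minimal sketch under that assumption; it is not a mechanism defined by this document, and all names are illustrative.¶

```python
# Hypothetical sketch of heartbeat-based controller failover: a standby
# controller takes over when the primary's heartbeats go stale. If no
# controller is reachable, network elements continue to carry traffic
# autonomously, as required above. Names are illustrative.
import time

class ControllerCluster:
    def __init__(self, timeout=3.0):
        self.timeout = timeout        # seconds before a heartbeat is stale
        self.last_heartbeat = {}      # controller_id -> last heartbeat time
        self.priority = []            # ordered list: primary first

    def register(self, controller_id):
        self.priority.append(controller_id)
        self.last_heartbeat[controller_id] = time.monotonic()

    def heartbeat(self, controller_id, now=None):
        t = now if now is not None else time.monotonic()
        self.last_heartbeat[controller_id] = t

    def active_controller(self, now=None):
        """Return the highest-priority controller with a fresh heartbeat,
        or None (in which case NEs keep forwarding traffic on their own)."""
        t = now if now is not None else time.monotonic()
        for cid in self.priority:
            if t - self.last_heartbeat[cid] <= self.timeout:
                return cid
        return None
```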
Each network entity, including controllers and network elements, should be managed properly and with the relevant trust and security policies applied (see Section 10), as they will interact with other entities. The manageability considerations for controller hierarchies and for network elements each still apply. The overall manageability of the protocols applied in the network should also be a key consideration.¶
The responsibility of each entity should be clarified. The control of function and policy among different controllers should be consistent via a proper negotiation process.¶
This document outlines the interworking between GMPLS and controller hierarchies. The security requirements specific to both systems remain applicable. The protocols referenced herein have their own security considerations, which must be followed; their core specifications and identified risks are detailed earlier in this document.¶
Security is a critical aspect in both GMPLS and controller-based networks. Ensuring robust security mechanisms in these environments is paramount to safeguard against potential threats and vulnerabilities. Below are expanded security considerations and some relevant IETF RFC references.¶
Authentication and Authorization: It is essential to implement strong authentication and authorization mechanisms to control access to the controller from multiple network elements. This ensures that only authorized devices and users can interact with the controller, preventing unauthorized access that could lead to network disruptions or data breaches. "The Transport Layer Security (TLS) Protocol Version 1.3" [RFC8446] and "Enrollment over Secure Transport" [RFC7030] provide guidelines on secure communication and certificate-based authentication that can be leveraged for these purposes.¶
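As an illustration of certificate-based authentication in the spirit of [RFC8446], the following sketches mutually authenticated TLS contexts for a controller and a network element using the Python standard library. The certificate and key paths are placeholders; in a real deployment, certificates would be provisioned via an enrollment mechanism such as EST [RFC7030].¶

```python
# Minimal sketch of mutually authenticated TLS between a controller and
# a network element (NE). Certificate/key file paths are hypothetical
# placeholders; the loading calls are skipped when no paths are given.
import ssl

def controller_server_context(cert=None, key=None, client_ca=None):
    """TLS context for the controller: presents its own certificate and
    requires (and verifies) a certificate from each connecting NE, so
    only authorized devices can interact with the controller."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated NEs
    if cert and key:
        ctx.load_cert_chain(certfile=cert, keyfile=key)
    if client_ca:
        ctx.load_verify_locations(cafile=client_ca)
    return ctx

def network_element_context(ca=None, cert=None, key=None):
    """TLS context for an NE: verifies the controller's certificate
    (PROTOCOL_TLS_CLIENT enables hostname checking and certificate
    verification by default) and presents its own certificate for
    client authentication."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    if ca:
        ctx.load_verify_locations(cafile=ca)
    if cert and key:
        ctx.load_cert_chain(certfile=cert, keyfile=key)
    return ctx
```

Requiring a client certificate on the controller side is what turns ordinary TLS into mutual authentication, addressing the unauthorized-access concern noted above.¶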
Controller Security: The controller's security is crucial as it serves as the central control point for the network elements. The controller must be protected against various attacks, such as Denial of Service (DoS), Man in the Middle (MITM), and unauthorized access. Security mechanisms should include regular security audits, application of security patches, firewalls, and Intrusion Detection Systems (IDSs) / Intrusion Prevention Systems (IPSs).¶
Data Transport Security: Security mechanisms on the controller should also safeguard the underlying network elements against unauthorized usage of data transport resources. This includes encryption of data in transit to prevent eavesdropping and tampering as well as ensuring data integrity and confidentiality.¶
Secure Protocol Implementation: Protocols used within the GMPLS and controller frameworks must be implemented with security in mind. Known vulnerabilities should be addressed, and secure versions of protocols should be used wherever possible.¶
Finally, robust network security often depends on Indicators of Compromise (IoCs) to detect, trace, and prevent malicious activities in networks or endpoints. These are described in [RFC9424] along with the fundamentals, opportunities, operational limitations, and recommendations for IoC use.¶
This document has no IANA actions.¶
The authors would like to thank Jim Guichard, Area Director of IETF Routing Area; Vishnu Pavan Beeram, Chair of TEAS WG; Jia He and Stewart Bryant, RTGDIR reviewers; Thomas Fossati, Gen-ART reviewer; Yingzhen Qu, OPSDIR reviewer; David Mandelberg, SECDIR reviewer; David Dong, IANA Services Sr. Specialist; and Éric Vyncke and Murray Kucherawy, IESG reviewers for their reviews and comments on this document.¶