US5841759A - Automated path verification for SHN-based restoration - Google Patents

Automated path verification for SHN-based restoration

Info

Publication number
US5841759A
Authority
US
United States
Prior art keywords
end nodes
path
communications path
nodes
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/781,495
Inventor
Will Russ
Mark Wayne Sees
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Verizon Patent and Licensing Inc
Original Assignee
MCI Communications Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MCI Communications Corp filed Critical MCI Communications Corp
Priority to US08/781,495 priority Critical patent/US5841759A/en
Application granted granted Critical
Publication of US5841759A publication Critical patent/US5841759A/en
Assigned to VERIZON PATENT AND LICENSING INC. reassignment VERIZON PATENT AND LICENSING INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MCI COMMUNICATIONS CORPORATION
Assigned to VERIZON PATENT AND LICENSING INC. reassignment VERIZON PATENT AND LICENSING INC. CORRECTIVE ASSIGNMENT TO REMOVE THE PATENT NUMBER 5,835,907 PREVIOUSLY RECORDED ON REEL 032725 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: MCI COMMUNICATIONS CORPORATION
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J3/00Time-division multiplex systems
    • H04J3/02Details
    • H04J3/14Monitoring arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q11/00Selecting arrangements for multiplex systems
    • H04Q11/04Selecting arrangements for multiplex systems for time-division multiplexing
    • H04Q11/0428Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
    • H04Q11/0478Provisions for broadband connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04QSELECTING
    • H04Q3/00Selecting arrangements
    • H04Q3/0016Arrangements providing connection between exchanges
    • H04Q3/0062Provisions for network management
    • H04Q3/0075Fault management techniques
    • H04Q3/0079Fault management techniques involving restoration of networks, e.g. disaster recovery, self-healing networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0057Operations, administration and maintenance [OAM]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04JMULTIPLEX COMMUNICATION
    • H04J2203/00Aspects of optical multiplex systems other than those covered by H04J14/05 and H04J14/07
    • H04J2203/0001Provisions for broadband connections in integrated services digital network using frames of the Optical Transport Network [OTN] or using synchronous transfer mode [STM], e.g. SONET, SDH
    • H04J2203/0057Operations, administration and maintenance [OAM]
    • H04J2203/006Fault tolerance and recovery

Definitions

  • This invention relates to an application by W. Russ entitled “System and Method for Resolving Substantially Simultaneous Bi-directional Requests of Spare Capacity” (Docket No. RIC-95-009), to be assigned to the same assignee as the instant invention and filed concurrently herewith having Ser. No. 08/483,578.
  • This invention is further related to an application by Russ et al. entitled “Method and System for Resolving Contention of Spare Capacity Circuits of a Telecommunications Network” (Docket No. RIC-95-005), to be assigned to the same assignee as the instant invention and filed on Jun. 6, 1995 having Ser. No. 08/468,302.
  • This invention is related to distributed restoration algorithms (DRA) and more particularly to the verification of an alternate route found subsequent to a restorative process based on the self healing network (SHN) restoration of a telecommunications network due to a failure or disruption in the network.
  • DRA distributed restoration algorithms
  • SHN self healing network
  • a self healing network (SHN) distributed restoration algorithm (DRA) is described by W. D. Grover in U.S. Pat. No. 4,956,835, which teaches the restoration of disrupted traffic due to a failed link.
  • SHN self healing network
  • DRA distributed restoration algorithm
  • W. D. Grover in U.S. Pat. No. 4,956,835
  • the '835 invention assumes that once an alternate route is found to replace the failed link separating the sender and chooser nodes in the telecommunications network, the communications path of which the failed link is a portion is as good as new. That oftentimes may not be the case, as an alternate route may end up utilizing a great number of spare circuits that were not used in the earlier communications path to carry the same traffic which was disrupted due to the failed link.
  • an operations support system (OSS) of the telecommunications network retrieves from each of the end nodes that it monitors a special message that contains an identification of the end nodes and the access/egress port associated therewith to which the communications path is connected and through which traffic may be passed to another node or a different environment.
  • the OSS monitors the nodes of the telecommunications network and particularly the end nodes of the various communications paths continuously and retrieves from each of the end nodes its special message on a periodic basis.
  • An objective of the present invention is therefore to provide a method and system for determining whether a restored communications path is a valid path.
  • FIG. 1 is an illustration of a telecommunications network of the present invention in which a plurality of nodes are shown to be cross-connected to each other and to an operational support system;
  • FIG. 2 is an illustration that is the same as the FIG. 1 illustration but after a failure has occurred in the network.
  • a telecommunications network of the present invention comprises a number of nodes 2-22 each connected to adjacent nodes by respective spans such as for example spans 24 between nodes 2 and 4, and 26 between nodes 4 and 6.
  • the telecommunications network may be considered to be divided into an environment that is capable of distributed restoration which, for this invention, may be referred to as a dynamic transmission network restoration (DTNR) domain, designated as 28.
  • DTNR dynamic transmission network restoration
  • Nodes such as 20 and 22 shown to be outside of the DTNR domain 28 may be considered to be in an environment that may not be subject to automated distributed restoration.
  • nodes 20 and 22 may be switches that are connected to multiplexers or local telephone switches. For the sake of simplicity, such multiplexers and local switches are not shown.
  • OSS 30 is where the network management monitors the overall operation of the network. In other words, it has an overall view, or map, of the layout of each node within the network.
  • OSS 30 has a central processor 32 that has connected thereto a working memory 34 and a database storage 36.
  • interface unit 38 which has a number of ports for effecting connections to each of the nodes within the network.
  • nodes 2, 8 and 14 are shown to be connected to the ports of interface unit 38 via lines 40, 42 and 44, respectively.
  • Each of the nodes in the network comprises a cross-connect switch such as the 1633-SX broadband cross-connect switch made by the Alcatel Network Systems Company.
  • DCS digital cross-connect switch
  • each DCS has a number of access/egress ports with their own IDs.
  • each DCS has a number of working links and spare and open links. These links may be in the form of fiber optic cables such as the optical cable OC-12 link.
  • an access/egress port may be defined as a STS-1/DS-3 (Synchronous Transport Signal Level 1/Digital Service Level 3) port where a circuit enters and exits DTNR domain and is cross-connected to a working link in the DTNR domain.
  • STS-1/DS-3 Synchronous Transport Signal Level 1/Digital Service Level 3
  • lightwave transmitting equipment, although not shown, is also connected to each of the nodes for transmitting the light signals to the adjacent nodes.
  • the interface connection for the OC-12 links are illustrated, for example, at nodes 2 and 14 as 2OC and 14OC, respectively.
  • a memory store respectively designated as 2M, 8M and 14M.
  • the access/egress ports are shown as 2A and 14A, respectively.
  • the access/egress ports such as 2A and 14A will send their port numbers through the matrix in each of the DCSs to the working ports such as 2OC and 14OC shown in nodes 2 and 14, respectively.
  • the OC-12 ports will insert the port number (whichever of the plurality of available ports) and the node ID (node 2 or node 14, for example) into a unique path verification circuit ID (PVCID) message.
  • PVCID path verification circuit ID
  • the PVCID message is encapsulated within a conventional link access procedure-D (LAP-D) protocol frame for transmission in the SONET overhead.
  • this PVCID message is only generated by the end nodes of a communications path, for example a communications path such as that created by the glass through or express pipe 46 which connects node 2 to node 14.
  • a PVCID message such as 48, is generated by node 2 and travels across the communication path comprising link 46 to the far end node of the communication path, in this instance, node 14.
  • This PVCID message 48 is then read by end node 14 and stored in its memory 14M.
  • node 2, the other far end node, which together with node 14 sandwiches or brackets the communication path formed by link 46, generates new PVCID messages once every given time interval. These messages are sent from node 2 across the communications path that connects it to node 14 to update the status of the access/egress port and node 2 with node 14. Thus, if a PVCID message is not received from node 2 by node 14 within a given time period, there will be an alarm sent out from node 14 to OSS 30 to inform it that there is a loss of continuity for that particular STS-1 circuit. It is assumed here that each STS-1 circuit forms one communications path between two end nodes and each STS-1 carries its own PVCID message and resulting continuity check.
  • node 14 is generating its own PVCID messages, designated as 50, and forwards those messages across the same STS-1 path, as for example within link 46, across to the other far end node, node 2 for the instant exemplar embodiment.
  • the PVCID message from node 14 likewise is sent periodically to node 2 for updating the status of node 14, both in terms of its access/egress port ID and its node ID.
  • node 2 upon receipt of the PVCID message from node 14, stores the data in the message in its memory 2M and updates this data every time it receives a PVCID message from its far end node, for example node 14 for the FIG. 1 embodiment.
  • the forwarding of the PVCID messages may be done by the interface units 2I and 14I in nodes 2 and 14, respectively.
  • OSS 30 polls each of the nodes of the network cross-connected thereto periodically.
  • end nodes such as 2 and 14
  • OSS 30 retrieves from their respective memories 2M and 14M the stored PVCID data. This data may be stored in database store 36 or in its working memory 34.
  • database store 36 or in its working memory 34.
  • OSS 30 will also know of the termination of a failure event, which otherwise may not be known by the end nodes.
  • OSS 30 provides an overall view of all of the nodes of the network and particularly, for this invention, oversees the end nodes to which a communications path is connected.
  • the end nodes and particularly its access/egress ports, provide the medium for exchanging data and/or traffic between the DTNR domain 28 and its environment.
  • this exchange of traffic may occur between end node 14 and outside node 22 via circuit 52.
  • the exchange of traffic between DTNR domain 28 and its environment may be effected between node 2 and node 20 via circuit 54.
  • OSS 30 prior to a failure in the network, OSS 30 has in storage a record of the status of any two end nodes to which a communications path is connected. This record is updated periodically so that if there are changes, the management of the network would be informed, as for example, via terminal 56.
  • a fault 56 for example a fiber cut, is shown to have occurred between node 2 and node 8.
  • This exemplar failure involves 2 links, namely express pipe link 46 that cross-connects node 2 to node 14 and link 58 which cross-connects node 2 to node 8.
  • link 58 which cross-connects node 2 to node 8 may very well be a non-working link, for example a spare or an open link, or another type of back-up link that does not carry traffic.
  • node 2 and node 14 each will detect a loss of signal from its far end, in this instance its opposite end node.
  • a distributed restoration in the form of a SHN scheme, is begun by both nodes 2 and 14.
  • the operation of such SHN scheme can be gleaned from any of the above mentioned related applications, and/or from the incorporated '835 patent.
  • restoration signatures or messages are sent, or flooded, by the respective adjacent nodes, namely nodes 2 and 14, to their respective adjacent links for finding an alt route to bypass the failed link by the sender node of the sender/chooser pair.
  • node 2 is assumed to be the sender and node 14 the chooser.
  • node 2 will send out flooding signatures to its adjacent links until node 14 has received the first of such flooding signatures.
  • node 14 will send complement signatures or messages back along, and thereby reserve, the links that have been flooded by the restoration signatures until the complement signatures reach sender node 2.
  • an alt route darkened in FIG. 2 for illustration purpose, is formed between node 2 and node 14. This alt route goes from node 2 to node 4 to node 10 to node 16 and then to node 14, and utilizes the spare links interconnecting sender node 2 and chooser node 14 to the intermediate nodes 4, 10 and 16.
  • a PVCID message 48 is sent, right after the restoration of the communications path, by end node 2 to end node 14.
  • a PVCID message 50 is sent by end node 14 to end node 2.
  • each of the end nodes carries a unique identifier containing information relating to its originating node, as for example the ID of the originating node, and the ID of the access/egress port to which the STS-1 circuit that forms the communications path is connected.
  • PVCID message 48 will contain information relating to node 2 while PVCID message 50 will contain information relating to node 14.
  • the respective data contained in the PVCID messages 48 and 50 are stored in memory 14M and memory 2M, respectively, of end nodes 14 and 2.
  • OSS 30 will poll the nodes of the network right after a disruption to the network
  • the data relating to the ports and the nodes that the ports reside in which form the ends of the communications path within the DTNR domain 28 is retrieved by OSS 30 and put in its memory 34.
  • processor 32 will retrieve the same information relating to the same communications path stored right before the communications path was disrupted, and compares the two sets of access/egress port IDs, as well as the node IDs, to determine if there have been any changes.
  • the unique identifiers of the end nodes of the communications path that were stored prior to the disruption are compared with the unique identifiers of the reestablished communications path. If there is no difference between those unique identifiers, then OSS 30 informs the network management that the newly restored communications path is a valid path. However, if any of the data retrieved from the new PVCID messages is different from the data of those PVCID messages stored just prior to the failure, then OSS 30 will report an error to the network management via terminal 56 so that further action may be taken for restoring the traffic disrupted by the cut link.
  • OSS 30 will compare the PVCID messages received right after the failure event and compare them to the pre-event topology that was stored in its database 36 to confirm that the new alt route STS-1 is connected to the same access/egress ports as before the failure event.

Abstract

To verify that a communications path restored in response to a failure in a telecommunications network is a validly restored path, each of the end nodes terminating the restored communications path sends out a message containing data that identifies that node and the ID of the access/egress port to which the STS-1 circuit forming the communications path is connected. Once the respective path verification messages are exchanged between the two end nodes of the communications path, the Operations Support System (OSS) that oversees the topology of the network retrieves those messages and compares the data contained therein with the data of the same type of messages from the same end nodes that were stored just prior to the occurrence of the disruption to the communications path. The restored communications path is deemed to be verified if there are no differences between the path verification messages retrieved after the failure event and the path verification messages stored just prior to the failure event.

Description

This application is a divisional of U.S. patent application Ser. No. 08/483,525 filed Jun. 7, 1995.
RELATED APPLICATIONS
This invention relates to an application by W. Russ entitled "System and Method for Resolving Substantially Simultaneous Bi-directional Requests of Spare Capacity" (Docket No. RIC-95-009), to be assigned to the same assignee as the instant invention and filed concurrently herewith having Ser. No. 08/483,578. This invention is further related to an application by Russ et al. entitled "Method and System for Resolving Contention of Spare Capacity Circuits of a Telecommunications Network" (Docket No. RIC-95-005), to be assigned to the same assignee as the instant invention and filed on Jun. 6, 1995 having Ser. No. 08/468,302. The disclosure of the application having the '005 docket number is incorporated by reference to this application. This invention is furthermore related to an application by W. Russ entitled "Automated Restoration of Unrestored Link and Nodal Failures" (Docket No. RIC-95-059), to be assigned to the same assignee as the instant invention and filed concurrently herewith having Ser. No. 08/483,579. The disclosure of the related '005 docket number application may be reviewed for an understanding of the concepts of distributed restoration algorithms. This invention is yet further related to an application by J. Shah entitled "Method and System for Identifying Fault Locations In a Communications Network" (Docket No. RIC-95-022), to be assigned to the same assignee as the instant invention and filed concurrently herewith having Ser. No. 08/481,984. This invention is yet furthermore related to an application by Chow et al. entitled "System and Method for Restoring a Telecommunications Network Based on a Two Prong Approach" filed on Mar. 9, 1994 having Ser. No. 08/207,638 and assigned to the same assignee as the instant invention. The disclosure of the '638 application is incorporated by reference herein.
FIELD OF THE INVENTION
This invention is related to distributed restoration algorithms (DRA) and more particularly to the verification of an alternate route found subsequent to a restorative process based on the self healing network (SHN) restoration of a telecommunications network due to a failure or disruption in the network.
BACKGROUND OF THE INVENTION
A self healing network (SHN) distributed restoration algorithm (DRA) is described by W. D. Grover in U.S. Pat. No. 4,956,835, which teaches the restoration of disrupted traffic due to a failed link. There are, however, no teachings or suggestions on checking the validity of the alternate route after the restoration. In other words, the '835 invention assumes that once an alternate route is found to replace the failed link separating the sender and chooser nodes in the telecommunications network, the communications path of which the failed link is a portion is as good as new. That oftentimes may not be the case, as an alternate route may end up utilizing a great number of spare circuits that were not used in the earlier communications path to carry the same traffic which was disrupted due to the failed link.
A need therefore exists for verifying the integrity of a restored communications path which had been disrupted due to the failure of a portion thereof.
BRIEF DESCRIPTION OF THE PRESENT INVENTION
To determine whether or not a link has been restored by means of an alternate route (alt route), the instant invention utilizes a path verification method and system to provide a true continuity check. In particular, an operations support system (OSS) of the telecommunications network retrieves from each of the end nodes that it monitors a special message that contains an identification of the end nodes and the access/egress port associated therewith to which the communications path is connected and through which traffic may be passed to another node or a different environment. The OSS monitors the nodes of the telecommunications network and particularly the end nodes of the various communications paths continuously and retrieves from each of the end nodes its special message on a periodic basis.
When a fault occurs at one of the links connecting adjacent nodes, such disruption is reported to the OSS. Thereafter, the adjacent nodes that bracket the failed link perform a SHN restoration to find an alt route to restore the disrupted traffic. Once an alt route is found, the communications path that was disrupted because of the failed link is reestablished across the various intermediate nodes. The two end nodes to which the restored communications path is anchored exchange special messages, each bearing its node ID and the access/egress port to which the communications path is connected. Once the special messages are exchanged between the two end nodes, the OSS retrieves these special messages and compares each with the previously stored path verification message for each of the end nodes. If there are no changes between the earlier path verification message that was stored prior to the fault and the path verification message sent right after traffic has been restored for both end nodes, then it is clear that the communications path is continuous and valid. On the other hand, if there is a difference between the earlier stored path verification message and the latest path verification message for either one of the end nodes, then the OSS will send out an alarm indicating that the continuity check indicates that there may be a problem with the restored communications path.
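The comparison step described above can be sketched as follows. The patent specifies no implementation, so this Python reconstruction, including the `PVCID` record and `verify_restored_path` name, is purely illustrative:

```python
# Hypothetical sketch of the OSS verification step: compare the path
# verification records stored before the failure with those retrieved
# after restoration. All names here are illustrative, not from the patent.
from dataclasses import dataclass

@dataclass(frozen=True)
class PVCID:
    node_id: str   # ID of the originating end node
    port_id: str   # ID of its access/egress port

def verify_restored_path(pre_failure: dict, post_restore: dict) -> bool:
    """True only if every end node reports the same node and port IDs
    after restoration as were stored just prior to the failure."""
    return pre_failure == post_restore

# Usage, mirroring the FIG. 1 end nodes:
pre = {"node2": PVCID("node2", "2A"), "node14": PVCID("node14", "14A")}
post = {"node2": PVCID("node2", "2A"), "node14": PVCID("node14", "14A")}
valid = verify_restored_path(pre, post)   # no change -> path is valid
```

A mismatch in either record would correspond to the alarm the OSS raises toward network management.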
An objective of the present invention is therefore to provide a method and system for determining whether a restored communications path is a valid path.
It is another objective of the instant invention to provide an automated scheme for determining the continuity of a restored communications path.
BRIEF DESCRIPTION OF THE DRAWINGS
The above mentioned objectives and advantages of the present invention will become apparent and the invention itself will be best understood by reference to the following description of an embodiment of the invention taken in conjunction with the accompanying drawings, wherein:
FIG. 1 is an illustration of a telecommunications network of the present invention in which a plurality of nodes are shown to be cross-connected to each other and to an operational support system; and
FIG. 2 is an illustration that is the same as the FIG. 1 illustration but after a failure has occurred in the network.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
A telecommunications network of the present invention, as shown in FIG. 1, comprises a number of nodes 2-22 each connected to adjacent nodes by respective spans such as for example spans 24 between nodes 2 and 4, and 26 between nodes 4 and 6. For the instant invention, the telecommunications network may be considered to be divided into an environment that is capable of distributed restoration which, for this invention, may be referred to as a dynamic transmission network restoration (DTNR) domain, designated as 28. Nodes such as 20 and 22 shown to be outside of the DTNR domain 28 may be considered to be in an environment that may not be subject to automated distributed restoration. For example, nodes 20 and 22 may be switches that are connected to multiplexers or local telephone switches. For the sake of simplicity, such multiplexers and local switches are not shown.
Within the DTNR domain there is also an operational support system (OSS) 30. OSS 30 is where the network management monitors the overall operation of the network. In other words, it has an overall view, or map, of the layout of each node within the network. For the understanding of this invention, it suffices to say that OSS 30 has a central processor 32 that has connected thereto a working memory 34 and a database storage 36. Also connected to processor 32 is an interface unit 38 which has a number of ports for effecting connections to each of the nodes within the network. Again, for the sake of simplicity, only nodes 2, 8 and 14 are shown to be connected to the ports of interface unit 38 via lines 40, 42 and 44, respectively. Thus, the goings-on within each of the nodes of the network are monitored by OSS 30.
Each of the nodes in the network comprises a cross-connect switch such as the 1633-SX broadband cross-connect switch made by the Alcatel Network Systems Company. A more detailed illustration of the digital cross-connect switch (DCS) may be gleaned from either of the above mentioned applications having the '005 and '059 docket numbers. In essence, each DCS has a number of access/egress ports with their own IDs. In addition, each DCS has a number of working links and spare and open links. These links may be in the form of fiber optic cables such as the optical cable OC-12 link. There are 12 SONET synchronous transport signal level-1 (STS-1) circuits in each OC-12 link. Thus, even though the circuits connecting the adjacent nodes in FIG. 1 are shown with only one line, in actuality, there are a number of OC-12 links within each span for connecting the adjacent nodes. For the instant invention, an access/egress port may be defined as a STS-1/DS-3 (Synchronous Transport Signal Level 1/Digital Service Level 3) port where a circuit enters and exits the DTNR domain and is cross-connected to a working link in the DTNR domain.
Although not shown, there is also lightwave transmitting equipment connected to each of the nodes for transmitting the light signals to the adjacent nodes. The interface connections for the OC-12 links are illustrated, for example, at nodes 2 and 14 as 2OC and 14OC, respectively. Also in each of the nodes, as illustrated in nodes 2, 8 and 14 for example, is a memory store, respectively designated as 2M, 8M and 14M. For nodes 2 and 14, the access/egress ports are shown as 2A and 14A, respectively.
For the instant invention, the access/egress ports such as 2A and 14A will send their port numbers through the matrix in each of the DCSs to the working ports such as 2OC and 14OC shown in nodes 2 and 14, respectively. There, the OC-12 ports will insert the port number (whichever of the plurality of available ports) and the node ID (node 2 or node 14, for example) into a unique path verification circuit ID (PVCID) message. For each of the 12 STS-1 circuits within the OC-12 link, the PVCID message is encapsulated within a conventional link access procedure-D (LAP-D) protocol frame for transmission in the SONET overhead.
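As a rough sketch of the PVCID construction just described: the field layout, delimiter choice, and LAP-D framing bytes below are assumptions for illustration only, not taken from the patent or from the Q.921 LAP-D format:

```python
# Illustrative sketch of building a PVCID message and wrapping it for
# transmission in the SONET overhead. Byte layout is hypothetical.
def build_pvcid(node_id: str, port_id: str) -> bytes:
    # The PVCID carries the originating node ID and access/egress port ID.
    return f"PVCID:{node_id}:{port_id}".encode("ascii")

def lapd_frame(payload: bytes) -> bytes:
    # Stand-in for LAP-D encapsulation: opening flag, placeholder
    # address/control octets, payload, closing flag.
    FLAG = b"\x7e"
    return FLAG + b"\x00\x03" + payload + FLAG

# Node 2 frames its PVCID for the STS-1 circuit on port 2A:
frame = lapd_frame(build_pvcid("node2", "2A"))
```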
For the instant invention, this PVCID message is only generated by the end nodes of a communications path, for example a communications path such as that created by the glass through or express pipe 46 which connects node 2 to node 14. Thus, for the embodiment network shown in FIG. 1, a PVCID message, such as 48, is generated by node 2 and travels across the communication path comprising link 46 to the far end node of the communication path, in this instance, node 14. This PVCID message 48 is then read by end node 14 and stored in its memory 14M.
In the meantime, node 2, the other far end node which together with node 14 sandwich or bracket the communication path formed by link 46, generates new PVCID messages once every given time interval. These messages are sent from node 2 across the communications path that connects it to node 14 to update the status of the access/egress port and node 2 with node 14. Thus, if a PVCID message is not received from node 2 by node 14 within a given time period, there will be an alarm sent out from node 14 to OSS 30 to inform it that there is a loss of continuity for that particular STS-1 circuit. It is assumed here that each STS-1 circuit forms one communications path between two end nodes and each STS-1 carries its own PVCID message and resulting continuity check.
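The periodic-refresh and alarm behavior described above amounts to a per-circuit timeout monitor. The class below is an illustrative sketch; the timeout value and all names are assumptions, since the patent leaves the interval unspecified:

```python
# Sketch of the far-end continuity check: if no fresh PVCID message
# arrives within the expected interval, the node would raise an alarm
# toward the OSS for that STS-1 circuit. Names and timing are illustrative.
import time

class ContinuityMonitor:
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_seen = time.monotonic()
        self.stored_pvcid = None        # plays the role of memory 14M

    def on_pvcid(self, pvcid: bytes) -> None:
        # Store/refresh the far end's data, as node 14 does on receipt.
        self.stored_pvcid = pvcid
        self.last_seen = time.monotonic()

    def loss_of_continuity(self) -> bool:
        # True -> the node would alarm the OSS for this STS-1 circuit.
        return time.monotonic() - self.last_seen > self.timeout_s
```

Each end node would run one such monitor per STS-1 circuit, matching the assumption that each STS-1 carries its own PVCID message and continuity check.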
At the same time, node 14 is generating its own PVCID messages, designated as 50, and forwards those messages across the same STS-1 path, as for example within link 46, across to the other far end node, node 2 for the instant exemplar embodiment. The PVCID message from node 14 likewise is sent periodically to node 2 for updating the status of node 14, both in terms of its access/egress port ID and its node ID. Similarly, node 2, upon receipt of the PVCID message from node 14, stores the data in the message in its memory 2M and updates this data every time it receives a PVCID message from its far end node, for example node 14 for the FIG. 1 embodiment. The forwarding of the PVCID messages may be done by the interface units 2I and 14I in nodes 2 and 14, respectively.
OSS 30 polls each of the nodes of the network cross-connected thereto periodically. In the case of end nodes such as 2 and 14, OSS 30 retrieves from their respective memories 2M and 14M the stored PVCID data. This data may be stored in database store 36 or in its working memory 34. Thus, there is always an updated record of the end nodes and their respective access/egress port IDs for each communications path in the network. Note that this is necessary insofar as each of the end nodes of a communications path would not know what its far end counterpart is doing without the far-end counterpart's PVCID message being sent thereto. The OSS will also know of the termination of a failure event, which otherwise may not be known by the end nodes. Thus, OSS 30 provides an overall view of all of the nodes of the network and particularly, for this invention, oversees the end nodes to which a communications path is connected.
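The polling step might be sketched as follows. Since the patent does not specify the management interface between OSS 30 and the node memories, the node memories are mocked here as dictionaries and the function name is hypothetical:

```python
# Sketch of OSS 30's periodic poll: retrieve the PVCID data stored at
# each end node's memory (2M, 14M, ...) and keep the latest snapshot,
# as database store 36 or working memory 34 would.
def poll_end_nodes(nodes: dict) -> dict:
    """Collect each end node's stored far-end (node_id, port_id) record."""
    return {name: memory.get("pvcid") for name, memory in nodes.items()}

# Mocked node memories for the FIG. 1 path: each end node holds the
# data most recently received from its far-end counterpart.
network = {
    "node2":  {"pvcid": ("node14", "14A")},
    "node14": {"pvcid": ("node2", "2A")},
}
snapshot = poll_end_nodes(network)
```

Repeating the poll on a timer keeps the OSS's stored record current, which is what makes the pre-failure/post-restoration comparison possible.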
As shown in FIG. 1, the end nodes, and particularly their access/egress ports, provide the medium for exchanging data and/or traffic between the DTNR domain 28 and its environment. Thus, as shown in FIG. 1, this exchange of traffic may occur between end node 14 and outside node 22 via circuit 52. Alternatively, the exchange of traffic between DTNR domain 28 and its environment may be effected between node 2 and node 20 via circuit 54.
Thus, prior to a failure in the network, OSS 30 has in storage a record of the status of any two end nodes to which a communications path is connected. This record is updated periodically so that if there are changes, the management of the network would be informed, as for example, via terminal 56.
With reference to FIG. 2, a fault 56, for example a fiber cut, is shown to have occurred between node 2 and node 8. This exemplar failure involves two links, namely express pipe link 46, which cross-connects node 2 to node 14, and link 58, which cross-connects node 2 to node 8. For the discussion of this embodiment, however, given that only one communications path was discussed above, we assume for the sake of simplicity that only link 46, which cross-connects node 2 to node 14, is of import. In other words, we are assuming for this discussion that link 58, which cross-connects node 2 to node 8, may very well be a non-working link, for example a spare or an open link, or another type of back-up link that does not carry traffic. Thus, for the DTNR domain 28 shown in FIG. 2, there is a disruption to the traffic that traverses the communications path, otherwise designated as link 46, between node 2 and node 14.
Given the distributed nature of DTNR domain 28, node 2 and node 14 each will detect a loss of signal from its far end, in this instance its opposite end node. Upon such detection of loss of signal, a distributed restoration, in the form of an SHN scheme, is begun by both nodes 2 and 14. The operation of such an SHN scheme can be gleaned from any of the above mentioned related applications, and/or from the incorporated '835 patent. In essence, restoration signatures or messages are sent, or flooded, by the respective end nodes, namely nodes 2 and 14, to their respective adjacent links, so that the sender node of the sender/chooser pair may find an alt route to bypass the failed link.
In the example embodiment, using the conventional higher/lower node number arbitration method, node 2 is assumed to be the sender and node 14 the chooser. Thus, node 2 will send out flooding signatures to its adjacent links until node 14 has received the first of such flooding signatures. In response, node 14 will send complement signatures or messages to reverse-link, and reserve, the links that have been flooded by the restoration signatures until the complement signatures reach sender node 2. Thereafter, an alt route, darkened in FIG. 2 for illustration purposes, is formed between node 2 and node 14. This alt route goes from node 2 to node 4 to node 10 to node 16 and then to node 14, and utilizes the spare links interconnecting sender node 2 and chooser node 14 to the intermediate nodes 4, 10 and 16.
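The sender/chooser search above can be approximated as follows: the sender floods restoration signatures over spare links, and the chooser's complement signatures reverse-link and reserve the first route the flood reaches it on. A breadth-first search over the spare-link topology yields the same first-found route; this is a simplification of the distributed SHN protocol, not the patented mechanism itself, and the spare-link topology shown is assumed from FIG. 2.

```python
from collections import deque

def find_alt_route(spares, sender, chooser):
    """Return the first alt route found from sender to chooser over spare
    links, tracing the complement signatures back, or None if none exists."""
    parent = {sender: None}      # which node each flooding signature came from
    frontier = deque([sender])
    while frontier:
        node = frontier.popleft()
        if node == chooser:
            # Trace the complement signatures back to the sender node.
            route = []
            while node is not None:
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nbr in spares.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                frontier.append(nbr)
    return None

# Assumed spare-link topology matching the FIG. 2 alt route.
spares = {2: [4], 4: [2, 10], 10: [4, 16], 16: [10, 14], 14: [16]}
print(find_alt_route(spares, sender=2, chooser=14))  # [2, 4, 10, 16, 14]
```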
Given that nodes 2 and 14 were deemed to be the end nodes of the communications path formed by now failed link 46, it should be appreciated that the new communications path is formed with the two end nodes 2 and 14 cross-connected to intermediate nodes 4, 10 and 16. The traffic that was interrupted when link 46 was cut can now be routed to the new communications path. Note, however, that the new communications path is physically different from the old communications path represented by link 46. Thus, a determination needs to be made of the continuity, or integrity, of the new communications path between end nodes 2 and 14.
As shown in FIG. 2, as before, a PVCID message 48 is sent, right after the restoration of the communications path, by end node 2 to end node 14. Similarly, a PVCID message 50 is sent by end node 14 to end node 2. As mentioned before, each of the PVCID messages carries a unique identifier containing information relating to its originating node, as for example the ID of the originating node and the ID of the access/egress port to which the STS-1 circuit that forms the communications path is connected. Thus, PVCID message 48 will contain information relating to node 2 while PVCID message 50 will contain information relating to node 14. The respective data contained in PVCID messages 48 and 50 are stored in memory 14M and memory 2M, respectively, of end nodes 14 and 2.
Given that OSS 30 will poll the nodes of the network right after a disruption to the network, the data relating to the ports, and to the nodes in which those ports reside, that form the ends of the communications path within the DTNR domain 28 is retrieved by OSS 30 and put in its memory 34. Thereafter, processor 32 retrieves the corresponding information relating to the same communications path that was stored right before the communications path was disrupted, and compares the two sets of access/egress port IDs, as well as the node IDs, to determine if there have been any changes.
In other words, the unique identifiers of the end nodes of the communications path that were stored prior to the disruption are compared with the unique identifiers of the reestablished communications path. If there is no difference between those unique identifiers, then OSS 30 informs the network management that the newly restored communications path is a valid path. However, if any of the data retrieved from the new PVCID messages is different from the data of those PVCID messages stored just prior to the failure, then OSS 30 will report an error to the network management via terminal 56 so that further action may be taken to restore the traffic disrupted by the cut link. Put differently, OSS 30 compares the PVCID messages received right after the failure event to the pre-event topology stored in its database 36, to confirm that the new alt route STS-1 is connected to the same access/egress ports as before the failure event.
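The post-event comparison performed by processor 32 reduces to a field-by-field match between the pre-failure and post-restoration PVCID records. A minimal sketch follows; the function and record names are hypothetical, and the string results stand in for the valid-path and error reports sent to network management via the terminal.

```python
def verify_path(pre_event, post_event):
    """Compare pre-failure and post-restoration PVCID records for one path.
    Each record maps an end-node ID -> (far-end node ID, far-end port ID)."""
    if pre_event == post_event:
        # Same node IDs and same access/egress port IDs as before the event.
        return "valid path"
    return "error: report to network management"

pre       = {"node2": ("node14", "port-B"), "node14": ("node2", "port-A")}
post_good = {"node2": ("node14", "port-B"), "node14": ("node2", "port-A")}
post_bad  = {"node2": ("node14", "port-C"), "node14": ("node2", "port-A")}

print(verify_path(pre, post_good))  # identical identifiers -> valid path
print(verify_path(pre, post_bad))   # a port ID changed -> error
```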
Note that changes to the PVCID messages received at an access/egress port by an end node are considered normal because of the normal provisioning of the network; hence the periodic updating of the PVCID messages in the respective memories of the end nodes. However, after a failure, it is necessary that the correct access/egress port in both of the end nodes be reconnected so that the appropriate communications path is restored. Put simply, the present invention does not function to police the misprovisioning of STS-1 circuits between different nodes. Rather, it is used to verify the continuity of a communications path before and after an event, as for example a failure that occurred in the network between two adjacent nodes.
Inasmuch as the present invention is subject to many variations, modifications and changes in detail, it is intended that all matter described throughout this specification and shown in the accompanying drawings be interpreted as illustrative only and not in a limiting sense. Accordingly, it is intended that the present invention be limited only by the spirit and scope of the hereto appended claims.

Claims (11)

We claim:
1. In a telecommunications network, a system operative for verifying the integrity of a communications path connecting two end nodes, comprising:
interface means for communicating with each of said end nodes;
processor means connected to said interface means for retrieving periodically from each of said end nodes a path verification message said each end node sent to the other of said end nodes;
store means for storing the path verification messages retrieved from said end nodes;
wherein after a fault has occurred to the communications path connecting said end nodes and an alternate communications path has allegedly been established to reconnect said end nodes, said processor means
retrieving from said each end node the path verification message sent to the other of said end nodes;
comparing the path verification messages retrieved from said end nodes subsequent to the establishment of said alternate communications path with the path verification messages retrieved from said end nodes stored just prior to said fault; and
verifying the integrity of said alternate communications path if no differences exist between the path verification messages of said end nodes stored prior to said fault and the path verification messages retrieved from said end nodes after the establishment of said alternate communications path.
2. The system of claim 1, wherein said interface means further communicates with all nodes within a dynamic transmission network restoration (DTNR) domain of said telecommunications network, further comprising:
a database store means for storing all path verification messages from the end nodes of each communications path established within said DTNR domain.
3. The system of claim 1, further comprising:
terminal means for informing the management of said telecommunications network of the status of any two end nodes to which a communications path in said telecommunications network is connected, as the status record of said two end nodes is periodically updated.
4. The system of claim 1, wherein each of said path verifying messages includes a unique identifier representative of the end node that sent said each message.
5. The system of claim 4, wherein said each identifier includes the ID of the one access/egress port and the ID of the one of the end nodes in which said each identifier resides and which sent the path verifying message.
6. In a telecommunications network, a method of verifying the integrity of a communications path connecting two end nodes, comprising the steps of:
a) interfacing communication with each of said end nodes;
b) retrieving periodically from each of said end nodes a path verification message said each end node sent to the other of said end nodes;
c) storing the path verification messages retrieved from said end nodes;
wherein after a fault has occurred to the communications path connecting said end nodes and an alternate communications path has allegedly been established to reconnect said end nodes,
d) retrieving from said each end node the path verification message sent to the other of said end nodes;
e) comparing the path verification messages retrieved from said end nodes subsequent to the establishment of said alternate communications path with the path verification messages retrieved from said end nodes stored just prior to said fault; and
f) verifying the integrity of said alternate communications path if no differences exist between the path verification messages of said end nodes stored prior to said fault and the path verification messages retrieved from said end nodes after the establishment of said alternate communications path.
7. The method of claim 6, wherein said step a further comprises the step of:
communicating with all nodes within a dynamic transmission network restoration (DTNR) domain of said telecommunications network.
8. The method of claim 7, further comprising the step of:
storing all path verification messages from the end nodes of each communications path established within said DTNR domain in a database store means.
9. The method of claim 7, further comprising the step of:
informing, via terminal means, the management of said telecommunications network of the status of any two end nodes to which a communications path in said telecommunications network is connected, as the status record of said two end nodes is periodically updated.
10. The method of claim 6, wherein said step b further comprises the step of:
retrieving from each of said path verifying messages a unique identifier representative of the end node that sent said each message.
11. The method of claim 10, further comprising the step of:
retrieving from said each identifier the ID of the one access/egress port and the ID of the one of the end nodes in which said each identifier resides and which sent the path verifying message.
US08/781,495 1995-06-07 1997-01-13 Automated path verification for shin-based restoration Expired - Lifetime US5841759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/781,495 US5841759A (en) 1995-06-07 1997-01-13 Automated path verification for shin-based restoration

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/483,525 US5623481A (en) 1995-06-07 1995-06-07 Automated path verification for SHN-based restoration
US08/781,495 US5841759A (en) 1995-06-07 1997-01-13 Automated path verification for shin-based restoration

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/483,525 Continuation US5623481A (en) 1995-06-07 1995-06-07 Automated path verification for SHN-based restoration

Publications (1)

Publication Number Publication Date
US5841759A true US5841759A (en) 1998-11-24

Family

ID=23920411

Family Applications (2)

Application Number Title Priority Date Filing Date
US08/483,525 Expired - Lifetime US5623481A (en) 1995-06-07 1995-06-07 Automated path verification for SHN-based restoration
US08/781,495 Expired - Lifetime US5841759A (en) 1995-06-07 1997-01-13 Automated path verification for shin-based restoration

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US08/483,525 Expired - Lifetime US5623481A (en) 1995-06-07 1995-06-07 Automated path verification for SHN-based restoration

Country Status (1)

Country Link
US (2) US5623481A (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5941992A (en) * 1997-08-13 1999-08-24 Mci Communications Corporation Distributed method and system for excluding components from a restoral route in a communications network
US6147966A (en) * 1995-08-07 2000-11-14 British Telecommunications Public Limited Company Route finding in communications networks
US6294991B1 (en) 1998-09-08 2001-09-25 Mci Communications Corporation Method and system therefor for ensuring a true activation of distributed restoration in a telecommunications network
US6337846B1 (en) * 1998-09-08 2002-01-08 Mci Worldcom, Inc. Quantification of the quality of spare links in a telecommunications network
US6404733B1 (en) 1998-09-08 2002-06-11 Mci Worldcom, Inc. Method of exercising a distributed restoration process in an operational telecommunications network
US6411598B1 (en) 1997-03-12 2002-06-25 Mci Communications Corporation Signal conversion for fault isolation
US6414940B1 (en) 1997-03-12 2002-07-02 Mci Communications Corporation Method and system of managing unidirectional failures in a distributed restoration network
US6418117B1 (en) 1998-09-08 2002-07-09 Mci Worldcom, Inc. Out of band messaging in a DRA network
US6496476B1 (en) * 1997-03-12 2002-12-17 Worldcom, Inc. System and method for restricted reuse of intact portions of failed paths
US20030117962A1 (en) * 2001-12-21 2003-06-26 Nortel Networks Limited Automated method for connection discovery within consolidated network elements
US20030142808A1 (en) * 2002-01-25 2003-07-31 Level (3) Communications Routing engine for telecommunications network
US20030142633A1 (en) * 2002-01-25 2003-07-31 Level (3) Communications Automated installation of network service in a telecommunications network
US6632032B1 (en) * 1998-04-07 2003-10-14 At&T Corp. Remote data network access in a communication network utilizing overhead channels
US6654802B1 (en) * 1999-02-12 2003-11-25 Sprint Communications Company, L.P. Network system and method for automatic discovery of topology using overhead bandwidth
US20040024862A1 (en) * 2002-07-31 2004-02-05 Level 3 Communications, Inc. Order entry system for telecommunications network service
US6813240B1 (en) 1999-06-11 2004-11-02 Mci, Inc. Method of identifying low quality links in a telecommunications network
US20050021744A1 (en) * 1998-03-09 2005-01-27 Stacy Haitsuka Internet service error tracking
US6928615B1 (en) * 1999-07-07 2005-08-09 Netzero, Inc. Independent internet client object with ad display capabilities
US20060028979A1 (en) * 2004-08-06 2006-02-09 Gilbert Levesque Smart resync of data between a network management system and a network element
US20060045006A1 (en) * 2004-08-26 2006-03-02 Pioneer Corporation Node presence confirmation method and apparatus
US20070094410A1 (en) * 2005-10-26 2007-04-26 Level 3 Communications, Inc. Systems and methods for discovering network topology
US20170353366A1 (en) * 2016-06-06 2017-12-07 General Electric Company Methods and systems for network monitoring
US11463349B2 (en) * 2018-03-09 2022-10-04 Huawei Technologies Co., Ltd. Fault diagnosis method and apparatus thereof

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5623481A (en) * 1995-06-07 1997-04-22 Russ; Will Automated path verification for SHN-based restoration
US5802144A (en) * 1996-04-15 1998-09-01 Mci Corporation Minimum common span network outage detection and isolation
US5787271A (en) * 1996-06-26 1998-07-28 Mci Corporation Spare capacity allocation tool
US6327669B1 (en) * 1996-12-31 2001-12-04 Mci Communications Corporation Centralized restoration of a network using preferred routing tables to dynamically build an available preferred restoral route
US6556538B1 (en) * 1996-12-31 2003-04-29 Mci Communications Corporation Integration of a centralized network restoration system with a distributed network restoration system
US6049529A (en) * 1997-03-28 2000-04-11 Mci Communications Corporation Integration of a path verification message within a signal
US6122753A (en) * 1997-04-09 2000-09-19 Nec Corporation Fault recovery system and transmission path autonomic switching system
US6011780A (en) * 1997-05-23 2000-01-04 Stevens Institute Of Technology Transparant non-disruptable ATM network
US6047385A (en) * 1997-09-10 2000-04-04 At&T Corp Digital cross-connect system restoration technique
US6614765B1 (en) * 1997-10-07 2003-09-02 At&T Corp. Methods and systems for dynamically managing the routing of information over an integrated global communication network
US6021113A (en) * 1997-10-29 2000-02-01 Lucent Technologies Inc. Distributed precomputation of network signal paths with table-based link capacity control
US6130875A (en) * 1997-10-29 2000-10-10 Lucent Technologies Inc. Hybrid centralized/distributed precomputation of network signal paths
US6456589B1 (en) * 1998-09-08 2002-09-24 Worldcom, Inc. Method of coordinating the respective operations of different restoration processes
US20030014516A1 (en) * 2001-07-13 2003-01-16 International Business Machines Corporation Recovery support for reliable messaging
US6766482B1 (en) 2001-10-31 2004-07-20 Extreme Networks Ethernet automatic protection switching
US20090088652A1 (en) * 2007-09-28 2009-04-02 Kathleen Tremblay Physiological sensor placement and signal transmission device
US11030063B1 (en) * 2015-03-30 2021-06-08 Amazon Technologies, Inc. Ensuring data integrity during large-scale data migration

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4723241A (en) * 1984-07-28 1988-02-02 U.S. Philips Corporation Data transmission arrangement including a reconfiguration facility
US4745593A (en) * 1986-11-17 1988-05-17 American Telephone And Telegraph Company, At&T Bell Laboratories Arrangement for testing packet switching networks
US4956835A (en) * 1987-11-06 1990-09-11 Alberta Telecommunications Research Centre Method and apparatus for self-restoring and self-provisioning communication networks
US5093824A (en) * 1990-03-27 1992-03-03 Bell Communications Research, Inc. Distributed protocol for improving the survivability of telecommunications trunk networks
US5235599A (en) * 1989-07-26 1993-08-10 Nec Corporation Self-healing network with distributed failure restoration capabilities
US5435003A (en) * 1993-10-07 1995-07-18 British Telecommunications Public Limited Company Restoration in communications networks
US5495471A (en) * 1994-03-09 1996-02-27 Mci Communications Corporation System and method for restoring a telecommunications network based on a two prong approach
US5537532A (en) * 1993-10-07 1996-07-16 British Telecommunications Public Limited Company Restoration in communications networks
US5623481A (en) * 1995-06-07 1997-04-22 Russ; Will Automated path verification for SHN-based restoration


Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6147966A (en) * 1995-08-07 2000-11-14 British Telecommunications Public Limited Company Route finding in communications networks
US6411598B1 (en) 1997-03-12 2002-06-25 Mci Communications Corporation Signal conversion for fault isolation
US6507561B1 (en) 1997-03-12 2003-01-14 Worldcom, Inc. Telecommunications network distributed restoration method and system
US6496476B1 (en) * 1997-03-12 2002-12-17 Worldcom, Inc. System and method for restricted reuse of intact portions of failed paths
US6414940B1 (en) 1997-03-12 2002-07-02 Mci Communications Corporation Method and system of managing unidirectional failures in a distributed restoration network
US5941992A (en) * 1997-08-13 1999-08-24 Mci Communications Corporation Distributed method and system for excluding components from a restoral route in a communications network
US7240110B2 (en) 1998-03-09 2007-07-03 Netzero, Inc. Internet service error tracking
US20050021744A1 (en) * 1998-03-09 2005-01-27 Stacy Haitsuka Internet service error tracking
US6632032B1 (en) * 1998-04-07 2003-10-14 At&T Corp. Remote data network access in a communication network utilizing overhead channels
US6404733B1 (en) 1998-09-08 2002-06-11 Mci Worldcom, Inc. Method of exercising a distributed restoration process in an operational telecommunications network
US6418117B1 (en) 1998-09-08 2002-07-09 Mci Worldcom, Inc. Out of band messaging in a DRA network
US6337846B1 (en) * 1998-09-08 2002-01-08 Mci Worldcom, Inc. Quantification of the quality of spare links in a telecommunications network
US6294991B1 (en) 1998-09-08 2001-09-25 Mci Communications Corporation Method and system therefor for ensuring a true activation of distributed restoration in a telecommunications network
US6654802B1 (en) * 1999-02-12 2003-11-25 Sprint Communications Company, L.P. Network system and method for automatic discovery of topology using overhead bandwidth
US6813240B1 (en) 1999-06-11 2004-11-02 Mci, Inc. Method of identifying low quality links in a telecommunications network
US6928615B1 (en) * 1999-07-07 2005-08-09 Netzero, Inc. Independent internet client object with ad display capabilities
US20030117962A1 (en) * 2001-12-21 2003-06-26 Nortel Networks Limited Automated method for connection discovery within consolidated network elements
US7068608B2 (en) * 2001-12-21 2006-06-27 Nortel Networks Limited Automated method for connection discovery within consolidated network elements
US20070091868A1 (en) * 2002-01-25 2007-04-26 Level 3 Communications, Inc. Routing Engine for Telecommunications Network
US8144598B2 (en) 2002-01-25 2012-03-27 Level 3 Communications, Llc Routing engine for telecommunications network
US8750137B2 (en) 2002-01-25 2014-06-10 Level 3 Communications, Llc Service management system for a telecommunications network
US8254275B2 (en) 2002-01-25 2012-08-28 Level 3 Communications, Llc Service management system for a telecommunications network
US7146000B2 (en) 2002-01-25 2006-12-05 Level (3) Communications Routing engine for telecommunications network
US8238252B2 (en) 2002-01-25 2012-08-07 Level 3 Communications, Llc Routing engine for telecommunications network
US20030142633A1 (en) * 2002-01-25 2003-07-31 Level (3) Communications Automated installation of network service in a telecommunications network
US20030142808A1 (en) * 2002-01-25 2003-07-31 Level (3) Communications Routing engine for telecommunications network
US7251221B2 (en) * 2002-01-25 2007-07-31 Level 3 Communications, Llc Automated installation of network service in a telecommunications network
US20070206516A1 (en) * 2002-01-25 2007-09-06 Level 3 Communications, Llc Automated installation of network service in a telecommunications network
US8155009B2 (en) 2002-01-25 2012-04-10 Level 3 Communications, Llc Routing engine for telecommunications network
US20090323702A1 (en) * 2002-01-25 2009-12-31 Level 3 Communications, Llc Routing engine for telecommunications network
US20100020695A1 (en) * 2002-01-25 2010-01-28 Level 3 Communications, Llc Routing engine for telecommunications network
US7760658B2 (en) 2002-01-25 2010-07-20 Level 3 Communications, Llc Automated installation of network service in a telecommunications network
US20100284307A1 (en) * 2002-01-25 2010-11-11 Level 3 Communications, Llc Service Management System for a Telecommunications Network
US8149714B2 (en) 2002-01-25 2012-04-03 Level 3 Communications, Llc Routing engine for telecommunications network
US7941514B2 (en) * 2002-07-31 2011-05-10 Level 3 Communications, Llc Order entry system for telecommunications network service
US20110211686A1 (en) * 2002-07-31 2011-09-01 Wall Richard L Order entry system for telecommunications network service
US20040024862A1 (en) * 2002-07-31 2004-02-05 Level 3 Communications, Inc. Order entry system for telecommunications network service
US10417587B2 (en) * 2002-07-31 2019-09-17 Level 3 Communications, Llc Order entry system for telecommunications network service
US20060028979A1 (en) * 2004-08-06 2006-02-09 Gilbert Levesque Smart resync of data between a network management system and a network element
US7573808B2 (en) * 2004-08-06 2009-08-11 Fujitsu Limited Smart resync of data between a network management system and a network element
US20060045006A1 (en) * 2004-08-26 2006-03-02 Pioneer Corporation Node presence confirmation method and apparatus
US20070094410A1 (en) * 2005-10-26 2007-04-26 Level 3 Communications, Inc. Systems and methods for discovering network topology
US8990423B2 (en) 2005-10-26 2015-03-24 Level 3 Communications, Llc Systems and methods for discovering network topology
US9787547B2 (en) 2005-10-26 2017-10-10 Level 3 Communications, Llc Systems and method for discovering network topology
US10257044B2 (en) 2005-10-26 2019-04-09 Level 3 Communications, Llc Systems and methods for discovering network topology
US8352632B2 (en) 2005-10-26 2013-01-08 Level 3 Communications, Llc Systems and methods for discovering network topology
US10742514B2 (en) 2005-10-26 2020-08-11 Level 3 Communications, Llc Systems and methods for discovering network topology
US20170353366A1 (en) * 2016-06-06 2017-12-07 General Electric Company Methods and systems for network monitoring
US9935852B2 (en) * 2016-06-06 2018-04-03 General Electric Company Methods and systems for network monitoring
US11463349B2 (en) * 2018-03-09 2022-10-04 Huawei Technologies Co., Ltd. Fault diagnosis method and apparatus thereof

Also Published As

Publication number Publication date
US5623481A (en) 1997-04-22

Similar Documents

Publication Publication Date Title
US5841759A (en) Automated path verification for shin-based restoration
US5862125A (en) Automated restoration of unrestored link and nodal failures
US5636203A (en) Method and system for identifying fault locations in a communications network
JP4305804B2 (en) Method and apparatus for signaling path repair information in a mesh network
US7042912B2 (en) Resynchronization of control and data path state for networks
EP0984574B1 (en) Backwards-compatible failure restoration in bidirectional multiplex section-switched ring transmission systems
US6222821B1 (en) System and method for reconfiguring a telecommunications network to its normal state after repair of fault
JP3631592B2 (en) Error-free switching technology in ring networks
JP3169541B2 (en) Automatic path setting device for synchronous communication system
US20030170020A1 (en) Method and apparatus for capacity-efficient restoration in an optical communication system
JP2000503182A (en) METHOD AND SYSTEM FOR OPTICAL RECOVERY END SWITCH CONNECTION IN FIBER NETWORK
US20030133417A1 (en) Method and message therefor of monitoring the spare capacity of a dra network
JPH11511618A (en) Deterministic selection of optimal restoration routes in telecommunication networks
JP3595239B2 (en) Communication network and sub-network manager therefor and method of filtering alerts sent thereto
US5875172A (en) Automatic transmission network restoring system
US6049529A (en) Integration of a path verification message within a signal
US7774474B2 (en) Communication of control and data path state for networks
US6813240B1 (en) Method of identifying low quality links in a telecommunications network
US6337846B1 (en) Quantification of the quality of spare links in a telecommunications network
US7054558B2 (en) Method for traffic protection in WDM fiber optic transport networks
WO1999046941A1 (en) Backup circuits in a telecommunications network
US20030086367A1 (en) Method for monitoring spare capacity of a dra network
US6418117B1 (en) Out of band messaging in a DRA network
JP2993356B2 (en) Communication system fault monitoring system
JP3505406B2 (en) Ring network system and transmission device

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MCI COMMUNICATIONS CORPORATION;REEL/FRAME:032725/0001

Effective date: 20140409

AS Assignment

Owner name: VERIZON PATENT AND LICENSING INC., NEW JERSEY

Free format text: CORRECTIVE ASSIGNMENT TO REMOVE THE PATENT NUMBER 5,835,907 PREVIOUSLY RECORDED ON REEL 032725 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:MCI COMMUNICATIONS CORPORATION;REEL/FRAME:033408/0235

Effective date: 20140409