US20050010695A1 - High availability/high density system and method - Google Patents

High availability/high density system and method

Info

Publication number
US20050010695A1
US20050010695A1 (application US10/916,536)
Authority
US
United States
Prior art keywords
network
address
architecture
cards
dual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/916,536
Inventor
Michael Hamilton Coward
Robert Penn Cagle
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Trillium Digital Systems Inc
Radisys Corp
Original Assignee
Trillium Digital Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Trillium Digital Systems Inc filed Critical Trillium Digital Systems Inc
Priority to US10/916,536 priority Critical patent/US20050010695A1/en
Publication of US20050010695A1 publication Critical patent/US20050010695A1/en
Assigned to COMERICA BANK reassignment COMERICA BANK SECURITY AGREEMENT Assignors: CONTINUOUS COMPUTING CORPORATION
Priority to US11/280,332 priority patent/US20060080469A1/en
Assigned to CONTINUOUS COMPUTING CORPORATION reassignment CONTINUOUS COMPUTING CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: COMERICA BANK
Assigned to RADISYS INTERNATIONAL LLC reassignment RADISYS INTERNATIONAL LLC CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: CONTINUOUS COMPUTING CORPORATION
Assigned to RADISYS CORPORATION reassignment RADISYS CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RADISYS INTERNATIONAL LLC


Classifications

    • H: ELECTRICITY; H04: ELECTRIC COMMUNICATION TECHNIQUE; H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION (common to all entries below)
    • H04L49/65: Re-configuration of fast packet switches
    • H04L49/552: Prevention, detection or correction of errors by ensuring the integrity of packets received through redundant connections
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network, for accessing one among a plurality of replicated servers
    • H04L69/12: Protocol engines
    • H04L49/351: Switches specially adapted for specific applications, for local area network [LAN], e.g. Ethernet switches
    • H04L49/40: Constructional details, e.g. power supply, mechanical construction or backplane
    • H04L61/103: Mapping addresses of different types across network layers, e.g. resolution of network layer into physical layer addresses or address resolution protocol [ARP]
    • H04L61/5014: Internet protocol [IP] addresses using dynamic host configuration protocol [DHCP] or bootstrap protocol [BOOTP]
    • H04L69/32: Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
    • H04L69/329: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
    • H04L69/40: Network arrangements, protocols or services for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection

Definitions

  • the present invention relates to telecom/datacom computer systems. More particularly, the present invention relates to a system and method for providing high availability to telecom/datacom computer systems.
  • FIG. 1 shows a 2N-Redundant platform composed of a primary node 10 and an identical backup node 20 that will provide service if the primary 10 fails.
  • the backup 120-2 must accurately determine the state and take control of all of the I/O cards, all without disturbing the state of the cards or the calls in process at the time of failure.
  • Most operating systems cannot survive a hardware failure, so any fault takes down the whole system.
  • conventional systems must use hardened operating systems, hardened device drivers, and even hardened applications to protect against the failure of an I/O processor or any peripheral. Another consideration is the immaturity of such systems and the lack of standardization.
  • the conventional system bus (e.g., PCI or CompactPCI) remains a single point of failure.
  • each processor in the system has an IP address.
  • Manually setting the IP address of each processor makes the site install process time consuming and error prone.
  • Existing mechanisms for automatically setting the IP address do not take into account geographic location or board replacement.
  • Existing mechanisms for automatically assigning IP addresses such as the Dynamic Host Configuration Protocol (DHCP) or Reverse Address Resolution Protocol (RARP) rely on a unique hardware address permanently programmed into all computer hardware. This address moves with the hardware, so maintaining a specific IP address for a specific computer location is impossible, given that all hardware will move and/or be replaced at some point in time.
  • the present invention has been made in view of the above circumstances and has an object to provide a cost effective system and method for providing high availability and improved efficiency to telecom/datacom systems. Further objects of the present invention are to provide a system and method that can simplify the programming necessary to implement high availability successfully, avoid potential problems caused by fluid standards, allay concerns brought up by single points of failure of busses, and leverage open standards and the increasingly intelligent I/O cards and peripheral processors that have become available. Additional objects and advantages of the invention will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objects and advantages of the invention will be realized by means of the elements and combinations particularly pointed out in the written description and claims hereof as well as the appended drawings.
  • FIG. 1 provides a block diagram of a prior art system addressing high availability through full redundancy.
  • FIG. 2 provides a block diagram of a prior art system addressing high availability through duplication of subsets of hardware instead of the entire node.
  • FIG. 3 provides a block diagram of a high availability telecom/datacom computer system in accordance with one embodiment of the present invention.
  • FIG. 4 provides a block diagram of one embodiment of an ethernet switch in accordance with the present invention.
  • FIG. 5 provides a block diagram of a high availability telecom/datacom computer system in accordance with another embodiment of the present invention.
  • FIG. 6 provides a block diagram of one embodiment of a continuous control node in accordance with the present invention.
  • FIG. 7 provides a block diagram showing software components incorporated in accordance with one embodiment of the present invention.
  • FIG. 8 illustrates the role of NETRACK during typical component failures occurring in the system for one embodiment of the present invention.
  • FIG. 9 illustrates PSRM starting and monitoring an application in accordance with one embodiment of the present invention.
  • FIG. 10 provides a block diagram illustrating disk mirroring in accordance with one embodiment of the present invention.
  • FIG. 11 provides a block diagram illustrating IP addressing in accordance with one embodiment of the present invention.
  • FIG. 12 provides a block diagram illustrating IP failover in accordance with another embodiment of the present invention.
  • FIGS. 13-14 provide flowcharts of the enhanced DHCP server operation in accordance with an embodiment of the present invention.
  • FIG. 15 provides a flowchart of the enhanced RARP server for an embodiment of the present invention.
  • FIG. 16 provides a block diagram of a client-server embodiment of the present invention.
  • FIG. 17 provides a block diagram of an independent server embodiment of the present invention.
  • FIG. 18 provides a block diagram of a diskless system embodiment of the present invention.
  • FIG. 3, illustrating an embodiment of the present invention, is a block diagram of a high availability system implementing a network bus architecture.
  • FIG. 3 has been simplified for purposes of explanation and is not intended to show every feature of the system.
  • structure will be referred to collectively or generically using a three-digit number and will be referred to specifically using an extension separated from the three-digit number by a hyphen.
  • I/O cards 110-1 to 110-N will be referred to collectively as I/O cards 110 and an individual I/O card will be referred to generically as I/O card 110 or specifically, for example, as I/O card 110-1.
  • multiple system controllers, CPUs 120, are connected through dual/redundant Ethernet switches 130 to each other and to multiple I/O cards 110.
  • the ports on the cards of the CPUs 120 and I/O cards 110 are connected directly to the dual/redundant ethernet switches 130 using the ethernet links 100, leaving the conventional midplane/backplane (not shown) simply to provide power to the various system components.
  • the system controllers, CPUs 120, provide a redundant platform for control and also may provide transparent IP disk mirroring and boot and configuration services to the I/O processors 110.
  • the dual/redundant ethernet connections allow the CPUs 120 to maintain synchronized states and to have simultaneous access to the I/O cards.
  • the CPUs 120 may be realized using an UltraSPARC™/Solaris™ implementation. Other implementations will be known to those skilled in the art and are within the scope of the present invention. Further, it will be known to those skilled in the art that the system controller functionality is really a role and not a physical implementation, and as such, the system controller functionality can reside on any of the cards, including the I/O cards.
  • the pair of ethernet switches 130 ensures that the network is not a single point of failure in the system. While Ethernet is used in illustrating the present invention, the architecture also can be extended to faster networks, such as Gigabit Ethernet or Infiniband. In one embodiment, the switches 130 may be realized using the 24+2 CompactPCI Ethernet switch 300 manufactured by Continuous Computing Corporation, illustrated in FIG. 4.
  • the 24+2 CompactPCI Ethernet switch 300 is a 26-port, non-blocking, fully managed Ethernet switch with 24 10/100 Mbps autosensing Fast Ethernet ports and two Gigabit 1000Base-SX ports.
  • the switch 300 provides full wire-speed Layer 2 switching supporting up to 16K MAC addresses, 256 IEEE 802.1Q VLANs, IP multicasting, full- and half-duplex flow control, and IEEE 802.1Q Quality of Service.
  • the switch 300 enables high-speed communications between elements without external hubs that often block airflow.
  • switch 300 can operate in a conventional slot or in an isolated backplane, and can support TCP/IP and serial management interfaces. Other switches will be known to those skilled in the art and are within the scope of the present invention.
  • the network uses TCP/IP running on a 100 Mbit Ethernet.
  • TCP/IP is the industry name for a suite of network protocols that includes, but is not limited to: IP, TCP, UDP, ICMP, ARP, DHCP, SMTP, SNMP.
  • Other network protocols and mediums will be known to those skilled in the art and are within the scope of the present invention.
  • the intelligent I/O processors 110 are application dependent and may include DSP cards, CPU cards, and voice-processing cards combining DSP, telephony, and RISC resources. These processors 110 may be configured with 2N redundancy, or N+M redundancy, or no redundancy at all. However, to take full advantage of the network bus architecture the processors 110 should have multiple Ethernet interfaces and be able to configure multiple IP addresses on an interface, to run a small heartbeat client, and to boot from onboard flash or over the network. Using the network bus architecture of the present invention, the I/O processors 110 are not limited to conventional PCI or CompactPCI I/O slot processors. Because there is no conventional bus, standard system slot processor boards can be used in the I/O slots of the system 140, considerably increasing the range of cards available.
  • the high availability system also may include dual power supplies (not shown) with either power supply capable of running the whole system.
  • the power supplies are realized using the Continuous Computing Telecom Power Supply (CCTPS) manufactured by Continuous Computing Corporation, which offers 350 watts of hot-swappable, load sharing power to the system with dual input feeds and −48V DC input.
  • Other power supplies will be known to those skilled in the art and are within the scope of the present invention.
  • FIG. 5 illustrates another embodiment of the high availability architecture of the present invention.
  • the system includes a Continuous Control Node (CCN) 150 to monitor and control the CPUs 120 , I/O cards 110 , and the power supplies.
  • the CCN 150, which may be connected to the other system components via the ethernet links, provides presence detect, board health, and reset control for the high availability system. These functions may be accomplished using a set of logic signals that are provided between each system board and the CCN 150.
  • the logic signals may be, for example, the negative-logic BDSEL#, HEALTH#, and RST# signals described in the Compact PCI Hot Swap Specification and incorporated herein by reference.
  • the CCN 150 can power up and power down individual slots and provide network access for any boards that have serial consoles.
  • the power control may be accomplished using the set of logic signals mentioned above; however, the only requirement is a single-ended logic signal of either logic polarity which, when asserted, will override power control on the I/O board and effect the power down of the board.
  • the software requirements are, at a minimum, a function executed on the CCN 150 that causes the hardware to read and set the logic level of the pertinent signals.
  • Serial console access to the I/O boards 110 is provided in hardware by cabling or making direct midplane connection of the I/O board serial port signals to the CCN serial port signals or in software by configuring one or more of the serial ports onboard, and relaying the serial data stream to the network communications port designated for this console access.
  • the CCN 150 also may function as a single point of contact for management and provisioning of the entire system.
  • the CCN 150 which for one embodiment is always powered, may allow a remote technician to check the system status, power cycle the system or its subcomponents, and/or access the console.
  • the CCN 150 may be realized using the Continuous Control Node manufactured by Continuous Computing Corporation, the Continuous Control Node incorporated herein by reference.
  • all of the CCNs from various high availability systems communicate over a redundant out-of-band network allowing the monitoring and control of a large number of systems from a single network management station.
  • the CCN 150 may offer numerous interfaces for system management, including a serial command line interface, a JAVA GUI and a C/C++ API for interfacing user applications to the control network. The combination of these features provides a powerful tool for system maintenance.
  • the network bus architecture of a further embodiment of the present invention also may incorporate a set of software components and APIs (collectively referred to as SOFTSET) that detects failures in the system, routes around the failure, and alerts a system manager that a repair is required.
  • the set of software components and APIs may be realized, for example, using upSuite™ by Continuous Computing Corporation, upSuite™ incorporated herein by reference.
  • the first role of SOFTSET is to guarantee communication among all the components in the system.
  • a heartbeat manager NETRACK 200 of SOFTSET runs on every component on the system and is designed to keep track of which components and services are up and which are down.
  • Heartbeats provided by the individual system components enable NETRACK 200 to detect component failures and to build a map of working network connections. By comparing the network map to the system configuration, NETRACK 200 can detect network failures.
  • Local network applications provide a second source of information for NETRACK 200 .
  • an orderly shutdown script could inform NETRACK 200 that a system was going down.
  • NETRACK 200 would then relay that information to the other NETRACKs 200 whose systems might take action based on that information.
  • another SOFTSET software component, a process starter/restarter/monitor (PSRM) 210, is designed to talk to NETRACK 200.
  • applications can connect to NETRACK 200 to get status information.
  • a standby media gateway controller application may ask NETRACK 200 for notification if the other server, or even just the other application, goes down.
  • Applications can receive all status information, or they can register to receive just the state changes in which they are interested.
  • FIG. 8 illustrates the role of NETRACK during typical component failures occurring in the system.
  • Each CPU or I/O component has two network addresses.
  • the CPU may be communicating over network A to the first I/O card using the address of the I/O card on network A which is 10.1.1.2.
  • NETRACK constantly sends small packets over both networks to all components and listens for packets from the other components.
  • in FIG. 8 there are three points of failure: failure of network A, failure of the first I/O card's link to network A, and complete failure of the first I/O card. The handling of each case is detailed in the Description below.
  • The second role of SOFTSET is to guarantee that a software service or application is running and available on the network. This is accomplished using PSRM 210. While an application can be coded to connect directly to NETRACK 200, for many applications it makes sense simply to ‘wrap’ the application in PSRM 210. For “off the shelf” software, this is the only choice. PSRM 210 is designed to give developers maximum flexibility in configuring the behavior of their applications.
  • FIG. 9 illustrates PSRM starting and monitoring an application.
  • PSRM starts the application and monitors its health.
  • PSRM reports the status of the application to NETRACK, which in turn reports to all the other NETRACKs on the other components of the system.
  • PSRM has the choice of restarting the application, stopping it and asking another component to start an equivalent application, or doing nothing. The options are enumerated in the Description below.
  • PSRM and NETRACK make sure that all system components are able to reach the application over at least one network interface.
  • PSRM 210 keeps NETRACK 200 informed of the state of the process at all times.
  • PSRM 210 might only restart a process that dies with certain exit codes, or PSRM 210 might only try a certain number of restarts within a certain period. This functionality and flexibility is available through simple configuration—it does not require programming or any modifications to an application.
  • applications that require the highest level of integration may communicate directly with PSRM 210 .
  • a third software component of SOFTSET, IPMIR 220, ensures data reliability by providing application-transparent disk mirroring over IP. Data written to disk are transparently ‘mirrored’ over the network via IPMIR 220; “transparently” meaning the applications are not necessarily affected by the mirroring.
  • the two system controllers 120 run IPMIR 220 between each other to mirror disk activity. I/O processors use NFS to mount the disks from the active server. In the case of a failure (CPU, disk, network, etc.), the standby server assumes the IP address of the failed Network File System (NFS) server and continues serving files from its mirror of the data.
  • IPMIR 220 is specifically designed to handle the ‘Stale NFS Handle’ problem that defeats this failover in other IP disk mirroring packages; IPMIR 220 maintains a lookup table for NFS file handles which avoids this problem.
  • a fourth software component, SNMP/ag, is an SNMP aggregator that allows an entire system to be managed as a single entity.
  • although system components are capable of providing system status and control via SNMP, there is still the need to map out the whole system, collect all the MIBs (management information bases), and figure out how to manage all the components.
  • because SOFTSET knows the configuration of the system and the state of the system, SNMP/ag can provide a simple, comprehensive view to the system manager.
  • the system manager's SNMP browser talks to SNMP/ag, which provides a single MIB for the entire system.
  • SNMP/ag, in turn, talks to all the SNMP agents within the system, and relays requests and instructions.
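  • The patent does not disclose SNMP/ag's internals. The following minimal Python sketch shows only the aggregation idea described above: one subtree of a single system-wide MIB is delegated to each component's own agent, and requests are relayed accordingly. The OIDs, agent addresses, and the snmp_get placeholder are all illustrative assumptions, not SNMP/ag code.

```python
# Hypothetical routing core of an SNMP aggregator.
SUBTREE_TO_AGENT = {
    "1.3.6.1.4.1.99.1": ("10.1.1.1", 161),  # system controller A (example OID)
    "1.3.6.1.4.1.99.2": ("10.1.1.2", 161),  # I/O card 1
    "1.3.6.1.4.1.99.3": ("10.1.1.3", 161),  # I/O card 2
}

def snmp_get(agent, oid):
    """Placeholder for a real SNMP GET issued by whatever library is in use."""
    raise NotImplementedError("wire in an SNMP library here")

def route_request(oid):
    """Pick the component agent responsible for an incoming OID."""
    for subtree, agent in SUBTREE_TO_AGENT.items():
        if oid == subtree or oid.startswith(subtree + "."):
            return agent
    return None  # OID is not part of the aggregate MIB

def relay_get(oid):
    """Answer a manager's GET by relaying it to the owning component agent."""
    agent = route_request(oid)
    if agent is None:
        raise LookupError(f"no agent serves {oid}")
    return snmp_get(agent, oid)
```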
  • a fifth software component, REDIRECT, is a rules-based engine that manages the system based on the input from NETRACK 200 and SNMP/ag. These rules dictate the actions that should be taken if any component fails, and provide methods to migrate services, IP addresses, and other network resources. REDIRECT provides the high-level mechanisms to support N+1 and N+M redundancy without complicated application coding; a sketch of such a rules table follows below.
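  • REDIRECT's rule format is not published; this is a minimal sketch, assuming rules are (predicate, action) pairs evaluated against failure events reported by NETRACK 200 and SNMP/ag. Event fields and the action bodies are invented for illustration.

```python
# Hypothetical rules table: first matching rule wins.
RULES = [
    (lambda ev: ev["kind"] == "component-down" and ev.get("role") == "nfs-server",
     lambda ev: print("migrate public IP 192.168.1.1 to the standby server")),
    (lambda ev: ev["kind"] == "app-down",
     lambda ev: print(f"start an equivalent of {ev['name']} on a spare node")),
]

def on_event(event):
    """Dispatch one NETRACK/SNMP event through the rules table."""
    for matches, action in RULES:
        if matches(event):
            action(event)
            return True
    return False  # no rule applies; leave the system as-is

# Example: on_event({"kind": "app-down", "name": "mgc"})
```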
  • SOFTSET An integral part of SOFTSET is the ability to remotely manage all aspects of the system, including both hardware and software status and control.
  • SOFTSET supports a Java-based GUI that runs on any host and uses a network connection to talk to the system.
  • this management capability is provided by an interface with the CCPUnet™ hardware monitoring platform manufactured by Continuous Computing Corporation, internal SNMP agents, and other SOFTSET components to provide a complete “one system” view.
  • CCPUnet™, incorporated herein by reference, provides a distributed network approach for managing clusters of compute or I/O nodes. By using CCPUnet™ and its associated software and hardware interfaces, applications can control node power, monitor critical voltages and temperatures, monitor hardware and software alarms, access CPU or I/O controller consoles and have remote CPU reboot and shutdown capabilities.
  • a further role of SOFTSET is to guarantee that network services (applications) are available to external network clients and to network client software that is inside the system, but which does not talk directly to NETRACK.
  • SOFTSET does this by using ‘public’ network addresses and by moving those public addresses from component to component as appropriate.
  • every component in the system has two independent paths to every other component in the system with each interface having a unique private IP address (10.1.x.y), and each active service having a unique public IP address (192.168.1.x).
  • ‘private IP’ refers to an address known only within the configuration
  • public IP refers to an address that might be accessed from outside the system, for example, through a router.
  • Failover techniques involving the network bus architecture of the present invention depend on the location of the failed component. Within the system, network failures are handled via the redundant links. For example, NETRACK 200 and PSRM 210 on one server can continue communicating with NETRACK 200 and PSRM 210 on another server if an Ethernet port, cable, or even an entire switch fails. For public services (IPMIR 220 on the system controllers, DSP applications on I/O processors, etc.), failover to a second Ethernet port and failover to a standby server are both handled by moving the IP address for that service. The failover actions are dictated by the system configuration and are coordinated by NETRACK 200 and PSRM 210 .
  • in one failover scenario, NETRACK tells the other components to reach the application running at 192.168.1.1 through 20.1.1.1.
  • PSRM, or a special process started by PSRM, tells the external router to reach 192.168.1.1 through 20.1.1.1.
  • PSRM on CPU A (or a special process started by PSRM) reconfigures CPU A to stop using the public address 192.168.1.1, reconfigures CPU B to use the public address 192.168.1.1, and then starts the application on CPU B.
  • NETRACK and PSRM tell the other components and any external routers that 192.168.1.1 is now reachable through 10.1.1.2.
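  • The steps above describe moving a public service address between interfaces and hosts. As a concrete illustration, here is a Linux-oriented Python sketch of the takeover side; the patent's controllers ran Solaris, where ifconfig logical interfaces would be the equivalent, and the interface name and /24 prefix length are assumptions.

```python
import subprocess

def take_over_address(public_ip, dev):
    """Configure the public service address on this (standby) node."""
    # Add the public address as an alias on the chosen interface.
    subprocess.run(["ip", "addr", "add", f"{public_ip}/24", "dev", dev], check=True)
    # Send gratuitous ARP so switches and routers update their mapping
    # from the public IP to this node's MAC address.
    subprocess.run(["arping", "-U", "-c", "3", "-I", dev, public_ip], check=True)

def release_address(public_ip, dev):
    """Counterpart run on the old node, if it is still reachable."""
    subprocess.run(["ip", "addr", "del", f"{public_ip}/24", "dev", dev], check=True)

# Example failover of the public service address discussed above:
# take_over_address("192.168.1.1", "eth0")
```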
  • a unique IP addressing scheme is implemented using the physical location of a board instead of its Ethernet address to determine the IP address of the board at boot time.
  • the IP address remains constant even if the board is replaced.
  • the physical location is determined by mapping the board's current Ethernet address to a physical port on the switch. This physical port number, acting as a geographic ID, is then mapped to the desired IP address by modified DHCP/RARP servers when DHCP/RARP requests are made.
  • the modified DHCP/RARP servers perform a set of SNMP queries to the switch to determine the port number of the requesting client.
  • the Ethernet switch 130 is responsible for automatically learning the hardware ethernet addresses of any computer connected to it, and for maintaining this information in a form that can be queried by outside entities using the SNMP protocol. Once the switch has been queried and the DHCP and RARP servers have determined on which port of the Ethernet switch 130 a given request was received, this information can be used to assign an IP address based on the physical port number of the switch to which the requester is connected. The information can be further mapped to a real geographic location as long as the network wiring topology is known.
  • the modified servers run on the CCN 150 , which is the only element in the system with a fixed IP address. Using DHCP/RARP allows the update and maintenance of the IP-to-physical-port table in one place, the CCN.
  • a set of SNMP queries are made to the switch 130 to determine the port number of the requesting client.
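  • A minimal sketch of that port lookup, assuming the switch exposes the standard BRIDGE-MIB forwarding table (dot1dTpFdbPort, OID 1.3.6.1.2.1.17.4.3.1.2, indexed by the six MAC bytes). The snmp_get helper and the port-to-IP table are stand-ins for the real SNMP library and the site's configuration; the example values are invented.

```python
DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"
PORT_TO_IP = {1: "10.1.1.1", 2: "10.1.1.2", 3: "10.1.1.3"}  # assumed geographic table

def snmp_get(switch_ip, oid):
    """Placeholder for an SNMP GET against the switch's management agent."""
    raise NotImplementedError("wire in an SNMP library here")

def mac_to_port(switch_ip, mac):
    """Return the physical switch port that learned `mac`.

    BRIDGE-MIB indexes dot1dTpFdbPort by the six MAC bytes as dotted
    decimals, so 00:10:20:30:40:50 becomes the OID suffix .0.16.32.48.64.80.
    """
    suffix = ".".join(str(b) for b in bytes.fromhex(mac.replace(":", "")))
    return int(snmp_get(switch_ip, f"{DOT1D_TP_FDB_PORT}.{suffix}"))

def ip_for_request(switch_ip, mac):
    """Geographic assignment: the IP address follows the slot, not the board."""
    return PORT_TO_IP.get(mac_to_port(switch_ip, mac))
```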
  • flowcharts of the enhanced DHCP server operation are shown in FIGS. 13-14.
  • the DHCP server receives a request for internet connection from a device with a MAC address (Media Access Control, i.e., hardware address) known by the DHCP server (step 400).
  • the DHCP server queries the Ethernet switch 130 to determine the physical port to which the MAC address is connected. If the port number is provided, the IP address is determined from the port information (step 440); otherwise, the request for connection is ignored (step 430).
  • the DHCP server checks to ensure that the IP address is not currently in use on the network (step 450). If the IP address is already in use, the DHCP server marks the IP address as unusable and ignores the request for connection (step 460). If the IP address is not in use, the DHCP server assigns the IP address to the requesting device (step 470).
  • in the second flowchart, the request for connection from the device includes a request for a desired IP address (step 500).
  • the DHCP server queries the Ethernet switch 130 to determine the physical port to which the MAC address is connected. If the port number is provided, the IP address is determined from the port information (step 540); otherwise, an error message is generated (step 530). Once the IP address has been determined, the DHCP server compares the provided address to the requested address (step 550). If the two addresses do not match, an error message is generated (step 560). If the addresses are the same, a check is made by the DHCP server to ensure that the IP address is not currently in use on the network (step 570).
  • if the IP address is already in use, the DHCP server marks the IP address as unusable and generates an error message (step 580). If the IP address is not in use, the DHCP server assigns the IP address to the requesting device (step 590).
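  • The flowchart logic above reduces to a short decision function. This sketch reuses mac_to_port and PORT_TO_IP from the lookup sketch above; SWITCH_IP, ip_in_use, mark_unusable, and assign are assumed helpers standing in for the switch's management address and the server's duplicate-address probe and lease machinery.

```python
SWITCH_IP = "10.1.0.254"   # assumed management address of Ethernet switch 130

def ip_in_use(ip):
    """Placeholder duplicate-address probe (e.g. an ARP or ICMP check)."""
    return False

def mark_unusable(ip):
    print(f"marking {ip} as unusable")

def assign(mac, ip):
    print(f"assigning {ip} to {mac}")
    return ip

def handle_dhcp(mac, requested_ip=None):
    """Steps 400-470 (FIG. 13) and 500-590 (FIG. 14) as one decision path."""
    try:
        port = mac_to_port(SWITCH_IP, mac)    # SNMP query, per the sketch above
    except NotImplementedError:
        port = None
    if port not in PORT_TO_IP:
        if requested_ip is None:
            return None                        # step 430: ignore the request
        raise LookupError("port not found")    # step 530: error message
    ip = PORT_TO_IP[port]                      # steps 440/540
    if requested_ip is not None and ip != requested_ip:
        raise ValueError("requested IP does not match this location")  # step 560
    if ip_in_use(ip):                          # steps 450/570
        mark_unusable(ip)                      # steps 460/580
        return None
    return assign(mac, ip)                     # steps 470/590
```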
  • RARP Reverse Address Resolution Protocol
  • ARP Address Resolution Protocol
  • a client program running on the device requests its IP address from the RARP server on the router.
  • the enhanced RARP server of the present invention queries the switch 130 to determine the port number of the requesting client.
  • FIG. 15 provides a flowchart of the enhanced RARP server.
  • a request is made from a device with a MAC address known to the RARP server (step 600).
  • the RARP server queries the switch 130 to determine the physical port to which the device with the associated MAC address is connected (step 610). If the port is found, the RARP server provides the IP address associated with the port to the requesting device (step 640). If the port information is not found, the RARP server ignores the request (step 630).
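  • The RARP variant is the same lookup with simpler failure handling. This sketch again assumes mac_to_port, SWITCH_IP, and PORT_TO_IP from the sketches above.

```python
def handle_rarp(mac):
    """Steps 600-640: reply only when the switch knows the requester's port."""
    port = mac_to_port(SWITCH_IP, mac)   # step 610, same lookup as above
    if port not in PORT_TO_IP:
        return None                      # step 630: ignore the request
    return PORT_TO_IP[port]              # step 640: the IP fixed to that location
```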
  • FIGS. 16-18 respectively illustrate client-server, independent server, and diskless system embodiments of the present invention.
  • Advantages associated with the network bus architecture of the present invention are numerous. For example, redundant communication is provided by the network bus architecture since each component in the system has two separate paths to every other component. Any single component, even an entire switch, can fail, and the system continues to operate. Moreover, the boards in the system are coupled only by network connections, and therefore should a component fail, the failure is interpreted by the other components as a network failure instead of a hardware failure. This interpretation simplifies the programming task associated with failover, with more reliable results than in conventional systems.
  • the failure of a network link merely means failover to a second link while the failure of a single component merely means dropped network connections to other components.
  • the system does not experience a hardware failure in the conventional sense.
  • the CPU control nodes do not have to run hardened operating systems, but can run rich, popular systems such as Solaris or Linux.
  • the present invention also leverages the increasing intelligence of available I/O cards.
  • the intelligent I/O processors work largely independently of the system controller, relieving the CPU of having to carry out functions that can be assumed by more peripheral parts of the architecture. While the I/O processors receive services from the controllers (boot image, network disk, configuration management, etc.), I/O is directly to and from the network, instead of over the midplane/backplane from the controller. Much like an object-oriented programming model, this allows I/O processors to present themselves as an encapsulated set of resources available over TCP/IP rather than as a set of device registers to be managed by the CPU.
  • the intelligent I/O processors can continue handling calls even while system controllers are failing over. Both the active and standby system controllers know the system state while intelligent I/O processors tend to know about their own state and can communicate it upon request. Failover to the standby component can be handled either by the standby component assuming the network address of the failed component (IP or MAC failover), by coordination among the working components, or by a combination of both.
  • the failure of network components is handled via the redundant ethernet links 130 while communication to equipment outside the system can be handled transparently via IP failover, or by the standby component re-registering with a new IP address (e.g. re-registering with a softswitch).
  • the density of the system is also improved by the inventive network bus architecture since the combination of system controllers, I/O processors, or CPUs is not limited by the chassis. Further, there are no geographical constraints associated with the network bus architecture; embodiments of the present invention can be composed of nodes located on opposite sides of a Central Office space, a building, or even a city to increase redundancy and availability in case of a disaster. The flexibility of a fully-distributed architecture via a network dramatically increases the possibilities for design.
  • Hot-swap advantages also are provided by a network bus architecture. Replacing failed boards or upgrading boards is simply a matter of swapping the old board out and the new board in. There are no device drivers to shut down or bring back up. Depending on the application, however, it may well be wise to provide support for taking a board out of service and managing power smoothly.
  • Bandwidth benefits are also available due to the network bus architecture of the present invention. While nominal midplane/backplane bus speeds tend to be higher than nominal network speeds, a fully non-blocking, full-duplex switch connected in accordance with the present invention can provide higher aggregate throughput. For example, a cPCI bus provides 1 Gbit/sec shared among a maximum of 8 slots. A full-duplex 100 Mbit ethernet switch provides 1.6 Gbit/sec to an 8-slot system and 4.2 Gbit/sec to a 21-slot system. In a redundant configuration, those numbers could be as high as 3.2 and 8.4 Gbit/sec, respectively, although an application would have to allow for reduced throughput in the case of a network component failure. The arithmetic is worked below.
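  • The throughput figures above follow from simple per-port arithmetic, reproduced here for clarity:

```python
# Each full-duplex 100 Mbit port carries 0.1 Gbit/s in each direction,
# so 0.2 Gbit/s aggregate; dual switches double the total again.
PER_PORT_GBIT = 0.100 * 2

for slots in (8, 21):
    single = slots * PER_PORT_GBIT
    print(f"{slots} slots: {single:.1f} Gbit/s, {2 * single:.1f} redundant")
# -> 8 slots: 1.6 Gbit/s, 3.2 redundant
# -> 21 slots: 4.2 Gbit/s, 8.4 redundant
```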
  • a further advantage is the inventive IP addressing scheme, which allows a specific IP address to be assigned to a specific computer location.
  • This scheme is accomplished using enhanced DHCP and RARP protocol servers to determine the geographic location of any computer requesting an IP address. Once determined, the geographic location is then used to assign a specific IP address to the requestor; the IP address is fixed to that particular location, and independent of all hardware specific ethernet addresses. If computer hardware at one location is replaced with different hardware, or the computer at that location is rebooted, the IP address for that computer remains the same. If computer hardware at one location is moved to another location, a new IP address will be assigned to the hardware based on its new location.

Abstract

A system and method for providing high availability for telecommunications and data communications by implementing a network bus architecture at the card level. The network bus architecture, which may be a combination of hardware, software, and APIs, replaces the conventional midplane/backplane as the system bus. The architecture provides physical redundancy by connecting ports of the various system cards through dual/redundant ethernet switches. Further, since the system cards are coupled only by network connections, the failure of any component is interpreted and addressed as a network failure instead of as a hardware failure.

Description

  • This application is a continuation application of U.S. patent application Ser. No. 09/688,859 filed Oct. 17, 2000, which is hereby incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to telecom/datacom computer systems. More particularly, the present invention relates to a system and method for providing high availability to telecom/datacom computer systems.
  • 2. Description of the Related Art
  • Providing fault tolerance to high availability systems typically involves providing redundant configuration and application data, redundant communication between control nodes and processing nodes, redundant control nodes, and a dynamically assignable pool of processing resources. In traditional high availability systems, providing fault tolerance is often accomplished by replicating the system components to remove single points of failure within the system. FIG. 1 shows a 2N-Redundant platform composed of a primary node 10 and an identical backup node 20 that will provide service if the primary 10 fails. Such a solution is fairly easy to implement both in terms of design and the software that must be used to manage it, but can be prohibitively expensive as every component must be duplicated.
  • Furthermore, such a design carries a more significant drawback in that when the primary node fails all calls in process are dropped. This failure occurs even if the backup node comes up quickly. In some applications, such as voicemail, this is acceptable, as a user can simply call back to replay the messages. However, for a user who is attempting to manage a Wall Street conference call with 200 analyst participants, such a situation results in an enormous inconvenience and lost revenue.
  • Another approach to providing high availability systems has been the duplication of subsets of hardware instead of the entire node, as shown in FIG. 2. Most often, this translates to duplication of the CPU 120. Should the primary CPU 120-1 fail, the backup 120-2 awakens from standby and begins to provide service. This design addresses the cost issues that arise when compared to the 2N Redundant architecture since not all of the hardware in a given node need be replicated. However, while this architecture is less expensive to implement, the software challenges for such a platform are formidable. Should the primary CPU 120-1 fail and the backup 120-2 take over its function, the backup 120-2 must accurately determine the state and take control of all of the I/O cards, all without disturbing the state of the cards or the calls in process at the time of failure. Most operating systems cannot survive a hardware failure, so any fault takes down the whole system. To get around this, conventional systems must use hardened operating systems, hardened device drivers, and even hardened applications to protect against the failure of an I/O processor or any peripheral. Another consideration is the immaturity of such systems and the lack of standardization. Furthermore, the conventional system bus (e.g., PCI or CompactPCI) remains a single point of failure. A single misbehaving I/O card can take down the entire system, much like a single bulb in a string of Christmas lights.
  • In addition to system failures, inefficiencies within the system are also of concern. For example, each processor in the system has an IP address. Manually setting the IP address of each processor makes the site install process time consuming and error prone. Existing mechanisms for automatically setting the IP address do not take into account geographic location or board replacement. Existing mechanisms for automatically assigning IP addresses, such as the Dynamic Host Configuration Protocol (DHCP) or Reverse Address Resolution Protocol (RARP), rely on a unique hardware address permanently programmed into all computer hardware. This address moves with the hardware, so maintaining a specific IP address for a specific computer location is impossible, given that all hardware will move and/or be replaced at some point in time.
  • SUMMARY OF THE INVENTION
  • The present invention has been made in view of the above circumstances and has an object to provide a cost effective system and method for providing high availability and improved efficiency to telecom/datacom systems. Further objects of the present invention are to provide a system and method that can simplify the programming necessary to implement high availability successfully, avoid potential problems caused by fluid standards, allay concerns brought up by single points of failure of busses, and leverage open standards and the increasingly intelligent I/O cards and peripheral processors that have become available. Additional objects and advantages of the invention will be set forth in part in the description that follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objects and advantages of the invention will be realized by means of the elements and combinations particularly pointed out in the written description and claims hereof as well as the appended drawings.
  • It should be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed. It will be apparent to those skilled in the art that various modifications and variations can be made without departing from the scope or spirit of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
  • FIG. 1 provides a block diagram of a prior art system addressing high availability through full redundancy.
  • FIG. 2 provides a block diagram of a prior art system addressing high availability through duplication of subsets of hardware instead of the entire node.
  • FIG. 3 provides a block diagram of a high availability telecom/datacom computer system in accordance with one embodiment of the present invention.
  • FIG. 4 provides a block diagram of one embodiment of an ethernet switch in accordance with the present invention.
  • FIG. 5 provides a block diagram of a high availability telecom/datacom computer system in accordance with another embodiment of the present invention.
  • FIG. 6 provides a block diagram of one embodiment of a continuous control node in accordance with the present invention.
  • FIG. 7 provides a block diagram showing software components incorporated in accordance with one embodiment of the present invention.
  • FIG. 8 illustrates the role of NETRACK during typical component failures occurring in the system for one embodiment of the present invention.
  • FIG. 9 illustrates PSRM starting and monitoring an application in accordance with one embodiment of the present invention.
  • FIG. 10 provides a block diagram illustrating disk mirroring in accordance with one embodiment of the present invention.
  • FIG. 11 provides a block diagram illustrating IP addressing in accordance with one embodiment of the present invention.
  • FIG. 12 provides a block diagram illustrating IP failover in accordance with another embodiment of the present invention.
  • FIGS. 13-14 provide flowcharts of the enhanced DHCP server operation in accordance with an embodiment of the present invention.
  • FIG. 15 provides a flowchart of the enhanced RARP server for an embodiment of the present invention.
  • FIG. 16 provides a block diagram of a client-server embodiment of the present invention.
  • FIG. 17 provides a block diagram of an independent server embodiment of the present invention.
  • FIG. 18 provides a block diagram of a diskless system embodiment of the present invention.
  • DETAILED DESCRIPTION
  • The present invention finds applicability in a variety of high availability telecom/datacom computer systems that may include platforms for Media Gateway Controllers, SS7 Gateways, OAM&P, Billing, and Call Processing. Other applications will be known to those skilled in the art and are within the scope of the present invention. FIG. 3, illustrating an embodiment of the present invention, is a block diagram of a high availability system implementing a network bus architecture. FIG. 3 has been simplified for purposes of explanation and is not intended to show every feature of the system. As used herein, structure will be referred to collectively or generically using a three-digit number and will be referred to specifically using an extension separated from the three-digit number by a hyphen. For example, I/O cards 110-1 to 110-N will be referred to collectively as I/O cards 110 and an individual I/O card will be referred to generically as I/O card 110 or specifically, for example, as I/O card 110-1.
  • As shown in FIG. 3, multiple system controllers, CPUs 120, are connected through dual/redundant Ethernet switches 130 to each other and to multiple I/O cards 110. The ports on the cards of the CPUs 120 and I/O cards 110 are connected directly to the dual/redundant ethernet switches 130 using the ethernet links 100, leaving the conventional midplane/backplane (not shown) simply to provide power to the various system components. The system controllers, CPUs 120, provide a redundant platform for control and also may provide transparent IP disk mirroring and boot and configuration services to the I/O processors 110. The dual/redundant ethernet connections allow the CPUs 120 to maintain synchronized states and to have simultaneous access to the I/O cards. These connections, therefore, permit a standby CPU, or an active CPU also serving in a standby capacity, to take over quickly for a failed CPU. In one embodiment the CPUs 120 may be realized using an UltraSPARC™/Solaris™ implementation. Other implementations will be known to those skilled in the art and are within the scope of the present invention. Further, it will be known to those skilled in the art that the system controller functionality is really a role and not a physical implementation, and as such, the system controller functionality can reside on any of the cards, including the I/O cards.
  • The pair of ethernet switches 130 ensures that the network is not a single point of failure in the system. While Ethernet is used in illustrating the present invention, the architecture also can be extended to faster networks, such as Gigabit Ethernet or Infiniband. In one embodiment, the switches 130 may be realized using the 24+2 CompactPCI Ethernet switch 300 manufactured by Continuous Computing Corporation, illustrated in FIG. 4. The 24+2 CompactPCI Ethernet switch 300 is a 26-port, non-blocking, fully managed Ethernet switch with 24 10/100 Mbps autosensing Fast Ethernet ports and two Gigabit 1000Base-SX ports. The switch 300 provides full wire-speed Layer 2 switching supporting up to 16K MAC addresses, 256 IEEE 802.1Q VLANs, IP multicasting, full- and half-duplex flow control, and IEEE 802.1Q Quality of Service. The switch 300 enables high-speed communications between elements without external hubs that often block airflow. Moreover, switch 300 can operate in a conventional slot or in an isolated backplane, and can support TCP/IP and serial management interfaces. Other switches will be known to those skilled in the art and are within the scope of the present invention. In one embodiment of the present invention, the network uses TCP/IP running on a 100 Mbit Ethernet. TCP/IP is the industry name for a suite of network protocols that includes, but is not limited to: IP, TCP, UDP, ICMP, ARP, DHCP, SMTP, SNMP. Other network protocols and mediums will be known to those skilled in the art and are within the scope of the present invention.
  • The intelligent I/O processors 110 are application dependent and may include DSP cards, CPU cards, and voice-processing cards combining DSP, telephony, and RISC resources. These processors 110 may be configured with 2N redundancy, or N+M redundancy, or no redundancy at all. However, to take full advantage of the network bus architecture the processors 110 should have multiple Ethernet interfaces and be able to configure multiple IP addresses on an interface, to run a small heartbeat client, and to boot from onboard flash or over the network. Using the network bus architecture of the present invention, the I/O processors 110 are not limited to conventional PCI or CompactPCI I/O slot processors. Because there is no conventional bus, standard system slot processor boards can be used in the I/O slots of the system 140, considerably increasing the range of cards available.
  • The high availability system also may include dual power supplies (not shown) with either power supply capable of running the whole system. In one embodiment the power supplies are realized using Continuous Computing Telecom Power Supply (CCTPS) manufactured by Continuous Computing Corporation, which offers 350 watts of hot-swappable, load sharing power to the system with dual input feeds and −48V DC input. Other power supplies will be known to those skilled in the art and are within the scope of the present invention.
  • FIG. 5 illustrates another embodiment of the high availability architecture of the present invention. In this embodiment, the system includes a Continuous Control Node (CCN) 150 to monitor and control the CPUs 120, I/O cards 110, and the power supplies. The CCN 150, which may be connected to the other system components via the ethernet links, provides presence detect, board health, and reset control for the high availability system. These functions may be accomplished using a set of logic signals that are provided between each system board and the CCN 150. The logic signals may be, for example, the negative-logic BDSEL#, HEALTH#, and RST# signals described in the Compact PCI Hot Swap Specification and incorporated herein by reference.
  • In one embodiment of the present invention, the CCN 150 can power up and power down individual slots and provide network access for any boards that have serial consoles. The power control may be accomplished using the set of logic signals mentioned above; however, the only requirement is a single-ended logic signal of either logic polarity which, when asserted, will override power control on the I/O board and effect the power down of the board. The software requirements are, at a minimum, a function executed on the CCN 150 that causes the hardware to read and set the logic level of the pertinent signals.
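  • A hedged sketch of the stated software requirement: one function to read, and one to set, the logic level of a per-slot signal. The register interface and the active-low polarities are assumptions modeled on the cPCI signal names above; the patent requires only a single-ended signal of either polarity.

```python
class SimulatedSignalIO:
    """Stand-in for the CCN's real signal hardware; a dict of signal levels."""
    def __init__(self):
        self.levels = {}
    def read_bit(self, slot, name):
        return self.levels.get((slot, name), 1)   # negative logic idles high
    def write_bit(self, slot, name, level):
        self.levels[(slot, name)] = level

class SlotControl:
    def __init__(self, io):
        self.io = io

    def board_healthy(self, slot):
        """HEALTH#-style signal: asserted (low) means the board is healthy."""
        return self.io.read_bit(slot, "HEALTH") == 0

    def power_down(self, slot):
        """Assert the single-ended power-control signal to drop slot power."""
        self.io.write_bit(slot, "POWER", 0)

    def reset(self, slot):
        self.io.write_bit(slot, "RST", 0)   # assert negative-logic RST#
        self.io.write_bit(slot, "RST", 1)   # release

# Example: SlotControl(SimulatedSignalIO()).power_down(slot=3)
```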
  • Serial console access to the I/O boards 110 is provided in hardware by cabling or making direct midplane connection of the I/O board serial port signals to the CCN serial port signals or in software by configuring one or more of the serial ports onboard, and relaying the serial data stream to the network communications port designated for this console access.
  • The CCN 150 also may function as a single point of contact for management and provisioning of the entire system. The CCN 150, which for one embodiment is always powered, may allow a remote technician to check the system status, power cycle the system or its subcomponents, and/or access the console. The CCN 150 may be realized using the Continuous Control Node manufactured by Continuous Computing Corporation, the Continuous Control Node incorporated herein by reference.
  • In a further embodiment of the present invention illustrated in FIG. 6, all of the CCNs from various high availability systems communicate over a redundant out-of-band network allowing the monitoring and control of a large number of systems from a single network management station. The CCN 150 may offer numerous interfaces for system management, including a serial command line interface, a JAVA GUI and a C/C++ API for interfacing user applications to the control network. The combination of these features provides a powerful tool for system maintenance.
  • As illustrated in FIG. 7, the network bus architecture of a further embodiment of the present invention also may incorporate a set of software components and APIs (collectively referred to as SOFTSET) that detects failures in the system, routes around the failure, and alerts a system manager that a repair is required. The set of software components and APIs may be realized, for example, using upSuite™ by Continuous Computing Corporation, upSuite™ incorporated herein by reference.
  • The first role of SOFTSET is to guarantee communication among all the components in the system. To accomplish this, a heartbeat manager NETRACK 200 of SOFTSET runs on every component on the system and is designed to keep track of which components and services are up and which are down. Heartbeats provided by the individual system components enable NETRACK 200 to detect component failures and to build a map of working network connections. By comparing the network map to the system configuration, NETRACK 200 can detect network failures.
  • Local network applications provide a second source of information for NETRACK 200. For example, an orderly shutdown script could inform NETRACK 200 that a system was going down. NETRACK 200 would then relay that information to the other NETRACKs 200 whose systems might take action based on that information. In particular, another SOFTSET software component, a process starter/restarter/monitor (PSRM) 210, is designed to talk to NETRACK 200.
  • Finally, applications can connect to NETRACK 200 to get status information. For example, a standby media gateway controller application may ask NETRACK 200 for notification if the other server, or even just the other application, goes down. Applications can receive all status information, or they can register to receive just the state changes in which they are interested.
  • FIG. 8 illustrates the role of NETRACK during typical component failures occurring in the system. Each CPU or I/O component has two network addresses. For example, the CPU may be communicating over network A to the first I/O card using the I/O card's address on network A, which is 10.1.1.2. NETRACK constantly sends small packets over both networks to all components and listens for packets from the other components. In FIG. 8 there are three points of failure (the resulting address selection is sketched after this list):
      • 1) Network A fails. In this case, NETRACK on the CPU detects that the first I/O card is no longer responding via 10.1.1.2, but is responding via 20.1.1.2. NETRACK immediately instructs the application to stop using 10.1.1.2 on network A and to start using 20.1.1.2 on network B.
      • 2) The first I/O card's link to network A fails. Again, NETRACK on the CPU detects that 10.1.1.2 has stopped working but that 20.1.1.2 is working, and instructs the application as in (1).
      • 3) The first I/O card fails completely. NETRACK on the CPU detects that the first I/O card is not responding at all, but that the second I/O card is responding on both networks, and it instructs the application to use 10.1.1.3 or 20.1.1.3.
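The address selection in scenarios (1) through (3) condenses to a few lines. The addresses follow FIG. 8, while the reachability set and the function shape are assumptions for illustration.

```python
IO_CARDS = {
    "io1": {"A": "10.1.1.2", "B": "20.1.1.2"},
    "io2": {"A": "10.1.1.3", "B": "20.1.1.3"},
}

def pick_address(reachable, preferred_card="io1"):
    """reachable: set of (card, network) links currently passing heartbeats.
    Prefer the primary card, fall back across networks, then to the peer."""
    others = [c for c in IO_CARDS if c != preferred_card]
    for card in (preferred_card, *others):
        for net in ("A", "B"):
            if (card, net) in reachable:
                return IO_CARDS[card][net]
    raise RuntimeError("no I/O card reachable on either network")

# (1)/(2): network A or the first card's link to it fails -> use 20.1.1.2.
print(pick_address({("io1", "B"), ("io2", "A"), ("io2", "B")}))  # 20.1.1.2
# (3): the first card fails completely -> use the second card.
print(pick_address({("io2", "A"), ("io2", "B")}))                # 10.1.1.3
```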
  • The second role of SOFTSET is to guarantee that a software service or application is running and available on the network. This is accomplished using PSRM 210. While an application can be coded to connect directly to NETRACK 200, for many applications it makes sense simply to ‘wrap’ the application in PSRM 210. For “off the shelf” software, this is the only choice. PSRM 210 is designed to give developers maximum flexibility in configuring the behavior of their applications.
  • FIG. 9 illustrates PSRM starting and monitoring an application. PSRM starts the application and monitors its health. PSRM reports the status of the application to NETRACK, which in turn reports to all the other NETRACKs on the other components of the system. There are several possible failures:
      • 1) The application dies. For example, under a Unix or Linux operating system, applications run as independent processes that may ‘die’. Other operating systems will be known to those of ordinary skill in the art and are within the scope of the present invention.
      • 2) The application stops responding.
      • 3) The application is having trouble: it can still communicate with PSRM, but it may be running slowly, or be unable to allocate resources, etc.
  • In any of these failures, PSRM has the choice of:
      • 1) restarting the application
      • 2) stopping the application and asking another component (CPU B in FIG. 9) to start an equivalent application
      • 3) doing nothing (e.g. if the application is merely running slowly)
  • In all cases, PSRM and NETRACK make sure that all system components are able to reach the application over at least one network interface. PSRM 210 keeps NETRACK 200 informed of the state of the process at all times. In a more complicated case, PSRM 210 might restart only a process that dies with certain exit codes, or PSRM 210 might attempt only a certain number of restarts within a certain period. This functionality and flexibility are available through simple configuration; they require no programming and no modifications to an application. Finally, applications that require the highest level of integration may communicate directly with PSRM 210.
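A minimal starter/restarter in the spirit of PSRM is sketched below. The exit-code policy, the restart window, and the commented NETRACK report are assumptions standing in for the configurable behavior described above.

```python
import subprocess
import time

def supervise(cmd, restartable_codes=(1,), max_restarts=3, window=60.0):
    """Start cmd and restart it only on configured exit codes, at most
    max_restarts times within window seconds (assumed policy shape)."""
    restarts = []
    while True:
        code = subprocess.call(cmd)   # start the application and wait on it
        # A real PSRM would report every state change to NETRACK here.
        if code not in restartable_codes:
            return code               # e.g. escalate: start on a standby CPU
        now = time.monotonic()
        restarts = [t for t in restarts if now - t < window]
        if len(restarts) >= max_restarts:
            return code               # too many failures within the window
        restarts.append(now)

if __name__ == "__main__":
    print("exit code:", supervise(["/bin/true"]))  # exits cleanly, no restart
```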
  • A third software component of SOFTSET, IPMIR 220, ensures data reliability by providing application-transparent disk mirroring over IP. Data written to disk are transparently ‘mirrored’ over the network via IPMIR 220; ‘transparently’ meaning that applications need not be aware of, or affected by, the mirroring. As illustrated in FIG. 10, the two system controllers 120 run IPMIR 220 between each other to mirror disk activity. I/O processors use NFS to mount the disks from the active server. In the case of a failure (CPU, disk, network, etc.), the standby server assumes the IP address of the failed Network File System (NFS) server and continues serving files from its mirror of the data. Although this IP failover technique will not work for other IP disk mirroring packages (Unix administrators may be familiar with the ‘Stale NFS Handle’ problem), IPMIR 220 is specifically designed to handle this situation: IPMIR 220 maintains a lookup table for NFS file handles which avoids the problem.
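The file-handle lookup idea can be illustrated as below. The handle values and class shape are assumptions; the point is that handles minted by the failed server remain valid after IP failover.

```python
class HandleMap:
    """Maps NFS file handles issued by the peer server to local handles,
    maintained while mirroring writes (illustrative sketch)."""

    def __init__(self):
        self._peer_to_local = {}

    def record(self, peer_handle, local_handle):
        self._peer_to_local[peer_handle] = local_handle

    def resolve(self, handle):
        # After failover, a client-presented handle from the failed server
        # resolves to the mirror's own handle instead of going 'stale'.
        return self._peer_to_local.get(handle, handle)

m = HandleMap()
m.record(peer_handle=b"\x01\x02", local_handle=b"\xaa\xbb")
print(m.resolve(b"\x01\x02"))  # b'\xaa\xbb': served from the mirrored data
```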
  • A fourth software component, SNMP/ag, is an SNMP aggregator that allows an entire system to be managed as a single entity. Typically, while system components are capable of providing system status and control via SNMP, the system manager must still map out the whole system, collect all the MIBs (management information bases), and work out how to manage each component. Because SOFTSET knows both the configuration and the state of the system, SNMP/ag can provide a simple, comprehensive view to the system manager. The system manager's SNMP browser talks to SNMP/ag, which provides a single MIB for the entire system. SNMP/ag, in turn, talks to all the SNMP agents within the system, and relays requests and instructions.
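As a toy illustration of the aggregation, one front end can answer a manager's query by relaying it to per-component agents; the agent table and object names below are assumptions, not a real MIB.

```python
AGENTS = {
    "cpu-a": {"status": "up", "temp_c": 42},
    "io-1":  {"status": "up", "temp_c": 38},
}

def aggregated_get(component, obj):
    # The manager's browser sees one entity; the aggregator relays the
    # request to the right internal SNMP agent and returns the answer.
    return AGENTS[component][obj]

print(aggregated_get("io-1", "status"))  # 'up'
```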
  • A fifth software component, REDIRECT, is a rules-based engine that manages the system based on the input from NETRACK 200 and SNMP/ag. These rules dictate the actions that should be taken if any component fails, and provide methods to migrate services, IP addresses, and other network resources. REDIRECT provides the high-level mechanisms to support N+1 and N+M redundancy without complicated application coding.
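A hypothetical rule table in the REDIRECT style might look like the following; the rule schema and action names are assumptions used to show how N+1 failover can be expressed without application coding.

```python
RULES = [
    # (failed component, action, arguments)
    ("cpu-a", "move_ip",      ("192.168.1.1", "cpu-b")),
    ("io-1",  "move_service", ("dsp", "io-2")),
]

def on_failure(component):
    # Driven by failure reports from NETRACK and SNMP/ag.
    for failed, action, args in RULES:
        if failed == component:
            print(f"apply {action}{args}")  # dispatch to a migration handler

on_failure("cpu-a")  # apply move_ip('192.168.1.1', 'cpu-b')
```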
  • An integral part of SOFTSET is the ability to remotely manage all aspects of the system, including both hardware and software status and control. As part of this remote management capability, SOFTSET supports a Java-based GUI that runs on any host and uses a network connection to talk to the system. In one embodiment, this management capability is provided by an interface with the CCPUnet™ hardware monitoring platform manufactured by Continuous Computing Corporation, internal SNMP agents, and other SOFTSET components to provide a complete “one system” view. CCPUnet™, incorporated herein by reference, provides a distributed network approach for managing clusters of compute or I/O nodes. By using CCPUnet™ and its associated software and hardware interfaces, applications can control node power, monitor critical voltages and temperatures, monitor hardware and software alarms, access CPU or I/O controller consoles and have remote CPU reboot and shutdown capabilities.
  • A further role of SOFTSET is to guarantee that network services (applications) are available to external network clients and to network client software that is inside the system but does not talk directly to NETRACK. SOFTSET does this by using ‘public’ network addresses and by moving those public addresses from component to component as appropriate. As illustrated in FIG. 11, every component in the system has two independent paths to every other component in the system, with each interface having a unique private IP address (10.1.x.y) and each active service having a unique public IP address (192.168.1.x). In the context of this discussion, ‘private IP’ refers to an address known only within the configuration, while ‘public IP’ refers to an address that might be accessed from outside the system, for example, through a router. This designation differs from the common use of ‘public IP’, which designates an address generally available from the Internet at large. In FIG. 11, an application is providing a network service at 192.168.1.1. Internal to this system, NETRACK tells all the components that the application running at public IP address 192.168.1.1 can be reached via private IP address 10.1.1.2. This allows network clients running on those components to reach the application even if those clients do not talk directly to NETRACK. External to this system, an external router (or routers) relays packets from external clients to 192.168.1.1 via 10.1.1.2.
  • Failover techniques involving the network bus architecture of the present invention depend on the location of the failed component. Within the system, network failures are handled via the redundant links. For example, NETRACK 200 and PSRM 210 on one server can continue communicating with NETRACK 200 and PSRM 210 on another server if an Ethernet port, cable, or even an entire switch fails. For public services (IPMIR 220 on the system controllers, DSP applications on I/O processors, etc.), failover to a second Ethernet port and failover to a standby server are both handled by moving the IP address for that service. The failover actions are dictated by the system configuration and are coordinated by NETRACK 200 and PSRM 210. Further, in some applications, it may be desirable to move the MAC address (physical Ethernet address) to a new port. This requires some care to preserve the private IP addresses in a working state, but is also supported by SOFTSET. Note that when the MAC address is moved, packets may be sent through the switch-to-switch link, depending on the capabilities of the router. Four points of failure within one embodiment of the present invention are illustrated in FIG. 12: network A fails (1), CPU A's link to network A fails (2), CPU A fails (3), and the application on CPU A fails (4). The handling of each, and a sketch of the public-address move, follow.
  • If network A fails (1) or CPU A's link to network A fails (2), NETRACK tells the other components to reach the application running at 192.168.1.1 through 20.1.1.1. PSRM (or a special process started by PSRM) tells the external router to reach 192.168.1.1 through 20.1.1.1.
  • If CPU A fails (3), NETRACK on CPU B notices the failure and notifies PSRM. Upon notification, PSRM (or a special process started by PSRM) reconfigures CPU B to use the public address 192.168.1.1 and starts the application on CPU B. NETRACK and PSRM tell the other components and any external routers that 192.168.1.1 is now reachable through 10.1.1.2.
  • Finally, if the application on CPU A fails (4), the two NETRACKs and PSRMs notice the failure. PSRM on CPU A (or a special process started by PSRM) reconfigures CPU A to stop using the public address 192.168.1.1, reconfigures CPU B to use the public address 192.168.1.1, and then starts the application on CPU B. NETRACK and PSRM tell the other components and any external routers that 192.168.1.1 is now reachable through 10.1.1.2.
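The public-address move in scenarios (3) and (4) reduces to the following sketch; the route table and the function shape are assumptions, with the placeholder standing in for the actual reconfiguration and announcements.

```python
routes = {"192.168.1.1": "10.1.1.1"}  # public service address -> private path

def fail_over(public_ip, new_private_ip, start_app):
    # PSRM (or a special process it starts) reconfigures the standby CPU to
    # own the public address and starts the application there; NETRACK and
    # PSRM then announce the new private route to components and routers.
    start_app()
    routes[public_ip] = new_private_ip

fail_over("192.168.1.1", "10.1.1.2",
          start_app=lambda: print("application started on CPU B"))
print(routes["192.168.1.1"])  # 10.1.1.2
```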
  • In a further embodiment of the present invention, a unique IP addressing scheme is implemented using the physical location of a board, instead of its Ethernet address, to determine the IP address of the board at boot time. In this scheme, the IP address remains constant even if the board is replaced. The physical location is determined by mapping the board's current Ethernet address to a physical port on the switch. This physical port number, acting as a geographic ID, is then mapped to the desired IP address by modified DHCP/RARP servers when DHCP/RARP requests are made. To support the IP addressing scheme, the modified DHCP/RARP servers perform a set of SNMP queries to the switch to determine the port number of the requesting client. The Ethernet switch 130 is responsible for automatically learning the hardware Ethernet address of any computer connected to it, and for maintaining this information in a form that can be queried by outside entities using the SNMP protocol. Once the switch has been queried and the DHCP and RARP servers have determined on which port of the Ethernet switch 130 a given request was received, this information can be used to assign an IP address based on the physical port number of the switch to which the requester is connected. The information can be further mapped to a real geographic location as long as the network wiring topology is known. The modified servers run on the CCN 150, which is the only element in the system with a fixed IP address. Using DHCP/RARP allows the IP-to-physical-port table to be updated and maintained in one place, the CCN.
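One way such an SNMP query can be formed is against the standard BRIDGE-MIB forwarding table, whose dot1dTpFdbPort column (OID 1.3.6.1.2.1.17.4.3.1.2) is indexed by the six MAC octets in decimal. The snmp_get callable below is an assumption standing in for whatever SNMP library the CCN uses.

```python
DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"  # BRIDGE-MIB: MAC -> bridge port

def mac_to_fdb_oid(mac: str) -> str:
    # "00:a0:c9:12:34:56" -> "...17.4.3.1.2.0.160.201.18.52.86"
    octets = ".".join(str(int(b, 16)) for b in mac.split(":"))
    return f"{DOT1D_TP_FDB_PORT}.{octets}"

def port_for_mac(snmp_get, switch_addr: str, mac: str):
    # snmp_get(agent_addr, oid) -> value, supplied by the SNMP library used.
    return snmp_get(switch_addr, mac_to_fdb_oid(mac))

# Stubbed query for illustration; the returned port then indexes the CCN's
# geographic port-to-IP table to answer the DHCP/RARP request.
fake_fdb = {mac_to_fdb_oid("00:a0:c9:12:34:56"): 7}
print(port_for_mac(lambda addr, oid: fake_fdb.get(oid),
                   "10.0.0.254", "00:a0:c9:12:34:56"))  # 7
```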
  • DHCP (Dynamic Host Configuration Protocol) is a protocol that allows a network administrator to centrally manage and automate the assignment of IP addresses to the computers in a network. DHCP dynamically assigns an IP address to a computer when it connects to the network. With the enhanced DHCP servers of the present invention, a set of SNMP queries is made to the switch 130 to determine the port number of the requesting client. Flowcharts of the enhanced DHCP server operation are shown in FIGS. 13-14.
  • Referring to FIG. 13, the DHCP server receives a request for a network connection from a device with a MAC address (Media Access Control, i.e., hardware address) known to the DHCP server (step 400). At step 410, the DHCP server queries the Ethernet switch 130 to determine the physical port to which the MAC address is connected. If the port number is provided, the IP address is determined from the port information (step 440); otherwise, the request for connection is ignored (step 430). Once the IP address has been determined, the DHCP server checks that the IP address is not currently in use on the network (step 450). If the IP address is already in use, the DHCP server marks the IP address as unusable and ignores the request for connection (step 460). If the IP address is not in use, the DHCP server assigns the IP address to the requesting device (step 470).
  • In FIG. 14, the request for connection from the device includes a request for a desired IP address (step 500). At step 510, the DHCP server queries the Ethernet switch 130 to determine the physical port to which the MAC address is connected. If the port number is provided, the IP address is determined from the port information (step 540); otherwise, an error message is generated (step 530). Once the IP address has been determined, the DHCP server compares the provided address to the requested address (step 550). If the two addresses do not match, an error message is generated (step 560). If the addresses are the same, the DHCP server checks that the IP address is not currently in use on the network (step 570). If the IP address is already in use, the DHCP server marks the IP address as unusable and generates an error message (step 580). If the IP address is not in use, the DHCP server assigns the IP address to the requesting device (step 590).
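The two flowcharts condense to the decision function below. The port-to-IP table, the stubbed switch query, and the use of None for the ignore/error outcomes are assumptions made so the sketch is self-contained.

```python
PORT_TO_IP = {1: "10.1.1.1", 2: "10.1.1.2", 3: "10.1.1.3"}
MAC_TO_PORT = {"00:11:22:33:44:55": 2}  # stand-in for the SNMP switch query
IN_USE = set()

def switch_port_for(mac):
    # Steps 410/510: in the real system, SNMP queries to Ethernet switch 130.
    return MAC_TO_PORT.get(mac)

def handle_request(mac, requested_ip=None):
    port = switch_port_for(mac)
    if port is None:
        return None                       # steps 430/530: ignore or error
    ip = PORT_TO_IP.get(port)             # steps 440/540: IP from the port
    if requested_ip is not None and ip != requested_ip:
        return None                       # step 560: mismatch -> error
    if ip in IN_USE:
        return None                       # steps 460/580: mark unusable
    IN_USE.add(ip)
    return ip                             # steps 470/590: assign the address

print(handle_request("00:11:22:33:44:55"))  # 10.1.1.2
```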
  • RARP (Reverse Address Resolution Protocol) is a protocol by which a device can request its IP address from an Address Resolution Protocol (ARP) table typically located in a network router; the table maps the MAC address of the device to a corresponding IP address. When a device is connected to the network, a client program running on the device requests its IP address from the RARP server on the router. As with the enhanced DHCP server, the enhanced RARP server of the present invention queries the switch 130 to determine the port number of the requesting client. FIG. 15 provides a flowchart of the enhanced RARP server.
  • As illustrated in FIG. 15, a request is made from a device with a MAC address known to the RARP server (step 600). The RARP server queries the switch 130 to determine the physical port to which the device with the associated MAC address is connected (step 610). If the port is found, the RARP server provides the IP address associated with the port to the requesting device (step 640). If the port information is not found, the RARP server ignores the request (step 630).
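The enhanced RARP flow is a shorter variant of the same lookup; the tables are illustrative, as before.

```python
MAC_TO_PORT = {"00:11:22:33:44:55": 2}   # stand-in for the switch query
PORT_TO_IP = {1: "10.1.1.1", 2: "10.1.1.2"}

def rarp_request(mac):
    port = MAC_TO_PORT.get(mac)   # step 610: find the port for the MAC
    if port is None:
        return None               # step 630: ignore the request
    return PORT_TO_IP[port]       # step 640: reply with the port's IP address

print(rarp_request("00:11:22:33:44:55"))  # 10.1.1.2
```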
  • FIGS. 16-18 respectively illustrate client-server, independent server, and diskless system embodiments of the present invention.
  • Advantages associated with the network bus architecture of the present invention are numerous. For example, redundant communication is provided by the network bus architecture since each component in the system has two separate paths to every other component. Any single component, even an entire switch, can fail, and the system continues to operate. Moreover, the boards in the system are coupled only by network connections; should a component fail, the failure is therefore interpreted by the other components as a network failure rather than a hardware failure. This interpretation simplifies the programming task associated with failover and yields more reliable results than conventional systems. The failure of a network link merely means failover to a second link, while the failure of a single component merely means dropped network connections to other components. The system does not experience a hardware failure in the conventional sense. The CPU control nodes, in turn, do not have to run hardened operating systems, but can run rich, popular systems such as Solaris or Linux.
  • The present invention also leverages the increasing intelligence of available I/O cards. The intelligent I/O processors work largely independently of the system controller, relieving the CPU of functions that can be assumed by more peripheral parts of the architecture. While the I/O processors receive services from the controllers (boot image, network disk, configuration management, etc.), I/O goes directly to and from the network instead of over the midplane/backplane from the controller. Much like an object-oriented programming model, this allows I/O processors to present themselves as an encapsulated set of resources available over TCP/IP rather than as a set of device registers to be managed by the CPU.
  • Further, the actual failover process is simplified. The intelligent I/O processors can continue handling calls even while system controllers are failing over. Both the active and standby system controllers know the system state, while intelligent I/O processors tend to know their own state and can communicate it upon request. Failover to the standby component can be handled by the standby component assuming the network address of the failed component (IP or MAC failover), by coordination among the working components, or by a combination of both. The failure of network components is handled via the redundant ethernet links 130, while communication to equipment outside the system can be handled transparently via IP failover or by the standby component re-registering with a new IP address (e.g., re-registering with a softswitch). The density of the system is also improved by the inventive network bus architecture since the combination of system controllers, I/O processors, or CPUs is not limited by the chassis. Further, there are no geographical constraints associated with the network bus architecture; embodiments of the present invention can be composed of nodes located on opposite sides of a Central Office space, a building, or even a city to increase redundancy and availability in case of a disaster. The flexibility of a fully-distributed architecture via a network dramatically increases the possibilities for design.
  • Hot-swap advantages also are provided by a network bus architecture. Replacing failed boards or upgrading boards is simply a matter of swapping the old board out and the new board in. There are no device drivers to shut down or bring back up. Depending on the application, however, it may well be wise to provide support for taking a board out of service and managing power smoothly.
  • Bandwidth benefits are also available due to the network bus architecture of the present invention. While nominal midplane/backplane bus speeds tend to be higher than nominal network speeds, a fully non-blocking, full-duplex switch connected in accordance with the present invention can provide higher aggregate throughput. For example, a cPCI bus provides 1 Gbit/sec to a maximum of 8 slots. A full-duplex 100 Mbit ethernet switch provides 1.6 Gbit/sec to an 8-slot system and 4.2 Gbit/sec to a 21-slot system. In a redundant configuration, those numbers could be as high as 3.2 and 8.4 Gbit/sec, respectively, although an application would have to allow for reduced throughput in the case of a network component failure.
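The throughput figures above follow from simple arithmetic on the nominal link rates, as the short calculation below shows.

```python
link_mbit = 100  # nominal per-slot Ethernet link speed
duplex = 2       # full duplex doubles the usable bandwidth per link

for slots in (8, 21):
    gbit = slots * link_mbit * duplex / 1000   # one switch, Gbit/sec
    print(f"{slots} slots: {gbit:.1f} Gbit/s; redundant: {2 * gbit:.1f} Gbit/s")
# 8 slots: 1.6 Gbit/s; redundant: 3.2 Gbit/s
# 21 slots: 4.2 Gbit/s; redundant: 8.4 Gbit/s
```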
  • Further, efficiency within the system is improved by using the inventive IP addressing scheme, which allows a specific IP address to be assigned to a specific computer location. This scheme is accomplished using enhanced DHCP and RARP protocol servers to determine the geographic location of any computer requesting an IP address. Once determined, the geographic location is used to assign a specific IP address to the requester; the IP address is fixed to that particular location and independent of all hardware-specific ethernet addresses. If computer hardware at one location is replaced with different hardware, or the computer at that location is rebooted, the IP address for that computer remains the same. If computer hardware at one location is moved to another location, a new IP address will be assigned to the hardware based on its new location.
  • Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples herein be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims (15)

1. A high availability telecom/datacom architecture comprising:
dual/redundant ethernet switches;
a plurality of I/O cards connected to the dual/redundant ethernet switches; and
at least two CPUs coupled to each other and to the plurality of I/O cards via the ethernet switches.
2. The architecture of claim 1 further comprising:
a system monitor and control module coupled to the at least two CPUs and the plurality of I/O cards via the ethernet switches.
3. A high availability telecom/datacom architecture comprising:
dual/redundant network links;
a plurality of I/O cards connected to the dual/redundant network links; and
a system controller coupled to the plurality of I/O cards via the dual/redundant network links.
4. The architecture of claim 3 wherein the system controller comprises multiple CPUs coupled to each other and to the plurality of I/O cards via the dual/redundant network links.
5. The architecture of claim 3 wherein the dual/redundant network links are implemented using ethernet switches.
6. The architecture of claim 4 wherein the dual/redundant network links are implemented using ethernet switches.
7. The architecture of claim 3 further comprising:
a system monitor and control module coupled to the system controller and the plurality of I/O cards via the network links.
8. The architecture of claim 5 further comprising:
a system monitor and control module coupled to the system controller and the plurality of I/O cards via the network links.
9. A network bus architecture for providing high availability to telecom/datacom systems comprising:
dual/redundant network links;
a plurality of I/O cards connected to the dual/redundant network links; and
a system controller coupled to the plurality of I/O cards via the dual/redundant network links.
10. The network bus architecture of claim 9 wherein the network links are implemented using ethernet switches.
11. The architecture of claim 10 wherein the system controller comprises multiple CPUs coupled to each other and to the plurality of I/O cards via the ethernet switches.
12. The architecture of claim 9 wherein the system controller comprises multiple CPUs coupled to each other and to the plurality of I/O cards via the network links.
13. The architecture of claim 9 further comprising:
a system monitor and control node coupled to the system controller and the plurality of I/O cards.
14. The architecture of claim 12 further comprising:
a system monitor and control node coupled to the system controller and the plurality of I/O cards.
15. The architecture of claim 9 further comprising software and APIs.
US10/916,536 2000-10-17 2004-08-12 High availability/high density system and method Abandoned US20050010695A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/916,536 US20050010695A1 (en) 2000-10-17 2004-08-12 High availability/high density system and method
US11/280,332 US20060080469A1 (en) 2000-10-17 2005-11-17 High availability/high density system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68885900A 2000-10-17 2000-10-17
US10/916,536 US20050010695A1 (en) 2000-10-17 2004-08-12 High availability/high density system and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US68885900A Continuation 2000-10-17 2000-10-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/280,332 Continuation US20060080469A1 (en) 2000-10-17 2005-11-17 High availability/high density system and method

Publications (1)

Publication Number Publication Date
US20050010695A1 true US20050010695A1 (en) 2005-01-13

Family

ID=33565349

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/874,249 Expired - Fee Related US6854072B1 (en) 2000-10-17 2001-06-06 High availability file server for providing transparent access to all data before and after component failover
US10/916,536 Abandoned US20050010695A1 (en) 2000-10-17 2004-08-12 High availability/high density system and method
US11/280,332 Abandoned US20060080469A1 (en) 2000-10-17 2005-11-17 High availability/high density system and method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/874,249 Expired - Fee Related US6854072B1 (en) 2000-10-17 2001-06-06 High availability file server for providing transparent access to all data before and after component failover

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/280,332 Abandoned US20060080469A1 (en) 2000-10-17 2005-11-17 High availability/high density system and method

Country Status (1)

Country Link
US (3) US6854072B1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020188713A1 (en) * 2001-03-28 2002-12-12 Jack Bloch Distributed architecture for a telecommunications system
US20030033463A1 (en) * 2001-08-10 2003-02-13 Garnett Paul J. Computer system storage
US20030032425A1 (en) * 2001-08-11 2003-02-13 Hong-Sik Kim Schema change method of dual system
US7240211B2 (en) * 2001-10-09 2007-07-03 Activcard Ireland Limited Method of providing an access request to a same server based on a unique identifier
EP1309135B1 (en) * 2001-10-30 2005-03-02 Alcatel Forwarding of IP packets for routing protocols
US7372804B2 (en) * 2002-01-11 2008-05-13 Nec Corporation Multiplex communication system and method
US8055772B2 (en) * 2002-03-14 2011-11-08 Alcatel Lucent System for effecting communication among a plurality of devices and method for assigning addresses therefor
US7114095B2 (en) * 2002-05-31 2006-09-26 Hewlett-Packard Development Company, Lp. Apparatus and methods for switching hardware operation configurations
US7668123B1 (en) * 2002-06-28 2010-02-23 Nortel Networks Limited Network access device location
US20040153709A1 (en) * 2002-07-03 2004-08-05 Burton-Krahn Noel Morgen Method and apparatus for providing transparent fault tolerance within an application server environment
US7117303B1 (en) * 2003-03-14 2006-10-03 Network Appliance, Inc. Efficient, robust file handle invalidation
US7206963B2 (en) * 2003-06-12 2007-04-17 Sun Microsystems, Inc. System and method for providing switch redundancy between two server systems
US20050068888A1 (en) * 2003-09-26 2005-03-31 Komarla Eshwari P. Seamless balde failover in platform firmware
US7499988B2 (en) * 2003-10-16 2009-03-03 International Business Machines Corporation Method for SAN-based BOS install volume group
US7539722B2 (en) * 2003-10-24 2009-05-26 Microsoft Corporation Method and system for accessing a file
US7966294B1 (en) * 2004-01-08 2011-06-21 Netapp, Inc. User interface system for a clustered storage system
US7359317B1 (en) * 2004-02-20 2008-04-15 Excel Switching Corporation Redundancy arrangement for telecommunications switch
US7486689B1 (en) * 2004-03-29 2009-02-03 Sun Microsystems, Inc. System and method for mapping InfiniBand communications to an external port, with combined buffering of virtual lanes and queue pairs
US7920577B2 (en) * 2004-07-08 2011-04-05 Avaya Communication Israel Ltd. Power saving in wireless packet based networks
JP4117684B2 (en) * 2004-12-20 2008-07-16 日本電気株式会社 Fault-tolerant / duplex computer system and its control method
JP4516439B2 (en) * 2005-02-01 2010-08-04 富士通株式会社 Relay program, relay method, and relay device
US7844702B1 (en) * 2005-11-21 2010-11-30 Oracle America, Inc. Method and apparatus for physically locating a network component
US20070220301A1 (en) * 2006-02-27 2007-09-20 Dell Products L.P. Remote access control management module
WO2007102209A1 (en) 2006-03-08 2007-09-13 Yamatake Corporation Communication relaying device and communication relaying method
DE102006045906A1 (en) * 2006-09-28 2008-04-17 Infineon Technologies Ag Module with a controller for a chip card
WO2008046317A1 (en) * 2006-10-17 2008-04-24 Hangzhou H3C Technologies Co., Ltd. System of implementing the integration of different components, network forwarding component and independent application component
US7633932B2 (en) * 2006-11-22 2009-12-15 Avaya Inc. Accelerated removal from service of a signal processor at a media gateway
US8305876B2 (en) * 2006-11-22 2012-11-06 Avaya Inc. Accelerated recovery during negotiation between a media gateway and a media gateway controller
US7917614B2 (en) * 2008-06-10 2011-03-29 International Business Machines Corporation Fault tolerance in a client side pre-boot execution
JP5839774B2 (en) * 2010-01-06 2016-01-06 三菱重工業株式会社 Computer, computer management method, and computer management program
US8473783B2 (en) 2010-11-09 2013-06-25 International Business Machines Corporation Fault tolerance in distributed systems
US8656211B2 (en) 2011-02-18 2014-02-18 Ca, Inc. Avoiding failover identifier conflicts
US8341198B1 (en) * 2011-09-23 2012-12-25 Microsoft Corporation File system repair with continuous data availability
TW201321942A (en) * 2011-11-17 2013-06-01 Hon Hai Prec Ind Co Ltd Fan control system and method of getting motherboard temperature parameters
KR101694288B1 (en) * 2012-06-08 2017-01-09 한국전자통신연구원 Method for managing data in asymmetric cluster file system
US9917798B2 (en) * 2013-07-09 2018-03-13 Nevion Europe As Compact router with redundancy
US10642783B2 (en) * 2018-01-12 2020-05-05 Vmware, Inc. System and method of using in-memory replicated object to support file services wherein file server converts request to block I/O command of file handle, replicating said block I/O command across plural distributed storage module and performing said block I/O command by local storage module
CN109639461A (en) * 2018-11-23 2019-04-16 中国船舶重工集团公司第七0七研究所 A kind of highly reliable dual redundant Ethernet real-time reversion method
US11115312B1 (en) 2019-12-04 2021-09-07 Sprint Communications Company L.P. File control for data packet routers using consensus and inter-planetary file system (IPFS)

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4412281A (en) 1980-07-11 1983-10-25 Raytheon Company Distributed signal processing system
US4967344A (en) 1985-03-26 1990-10-30 Codex Corporation Interconnection network for multiple processors
US4710926A (en) 1985-12-27 1987-12-01 American Telephone And Telegraph Company, At&T Bell Laboratories Fault recovery in a distributed processing system
JP3206006B2 (en) 1991-01-25 2001-09-04 株式会社日立製作所 Duplex bus control method and device
US5404465A (en) 1992-03-18 1995-04-04 Aeg Transportation Systems, Inc. Method and apparatus for monitoring and switching over to a back-up bus in a redundant trainline monitor system
GB2268817B (en) 1992-07-17 1996-05-01 Integrated Micro Products Ltd A fault-tolerant computer system
US5513314A (en) 1995-01-27 1996-04-30 Auspex Systems, Inc. Fault tolerant NFS server system and mirroring protocol
DE19509558A1 (en) 1995-03-16 1996-09-19 Abb Patent Gmbh Process for fault-tolerant communication under high real-time conditions
US5822512A (en) 1995-05-19 1998-10-13 Compaq Computer Corporartion Switching control in a fault tolerant system
US5713017A (en) * 1995-06-07 1998-01-27 International Business Machines Corporation Dual counter consistency control for fault tolerant network file servers
US5737514A (en) 1995-11-29 1998-04-07 Texas Micro, Inc. Remote checkpoint memory system and protocol for fault-tolerant computer system
US5802265A (en) 1995-12-01 1998-09-01 Stratus Computer, Inc. Transparent fault tolerant computer system
US5805785A (en) 1996-02-27 1998-09-08 International Business Machines Corporation Method for monitoring and recovery of subsystems in a distributed/clustered system
US5852715A (en) 1996-03-19 1998-12-22 Emc Corporation System for currently updating database by one host and reading the database by different host for the purpose of implementing decision support functions
US6052797A (en) 1996-05-28 2000-04-18 Emc Corporation Remotely mirrored data storage system with a count indicative of data consistency
US5796934A (en) * 1996-05-31 1998-08-18 Oracle Corporation Fault tolerant client server system
US6108300A (en) * 1997-05-02 2000-08-22 Cisco Technology, Inc Method and apparatus for transparently providing a failover network device
US6490610B1 (en) * 1997-05-30 2002-12-03 Oracle Corporation Automatic failover for clients accessing a resource through a server
US6092214A (en) 1997-11-06 2000-07-18 Cisco Technology, Inc. Redundant network management system for a stackable fast ethernet repeater
US6070191A (en) * 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access
US6249879B1 (en) * 1997-11-11 2001-06-19 Compaq Computer Corp. Root filesystem failover in a single system image environment
US6185695B1 (en) * 1998-04-09 2001-02-06 Sun Microsystems, Inc. Method and apparatus for transparent server failover for highly available objects
US6247141B1 (en) * 1998-09-24 2001-06-12 Telefonaktiebolaget Lm Ericsson (Publ) Protocol for providing replicated servers in a client-server system

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4607365A (en) * 1983-11-14 1986-08-19 Tandem Computers Incorporated Fault-tolerant communications controller system
US6496940B1 (en) * 1992-12-17 2002-12-17 Compaq Computer Corporation Multiple processor system with standby sparing
US5473771A (en) * 1993-09-01 1995-12-05 At&T Corp. Fault-tolerant processing system architecture
US5596569A (en) * 1994-03-08 1997-01-21 Excel, Inc. Telecommunications switch with improved redundancy
US5996001A (en) * 1994-09-27 1999-11-30 Quarles; Philip High availability on-line transaction processing system
US5781530A (en) * 1996-04-01 1998-07-14 Motorola, Inc. Redundant local area network
US5852724A (en) * 1996-06-18 1998-12-22 Veritas Software Corp. System and method for "N" primary servers to fail over to "1" secondary server
US6014437A (en) * 1997-02-03 2000-01-11 International Business Machines Corporation Multi service platform architecture for telephone networks
US6049825A (en) * 1997-03-19 2000-04-11 Fujitsu Limited Method and system for switching between duplicated network interface adapters for host computer communications
US6052733A (en) * 1997-05-13 2000-04-18 3Com Corporation Method of detecting errors in a network
US6742068B2 (en) * 1997-06-30 2004-05-25 Emc Corporation Data server with hot replaceable processing unit modules
US6175490B1 (en) * 1997-10-01 2001-01-16 Micron Electronics, Inc. Fault tolerant computer system
US6058116A (en) * 1998-04-15 2000-05-02 3Com Corporation Interconnected trunk cluster arrangement
US6078957A (en) * 1998-11-20 2000-06-20 Network Alchemy, Inc. Method and apparatus for a TCP/IP load balancing and failover process in an internet protocol (IP) network clustering system
US6581121B1 (en) * 2000-02-25 2003-06-17 Telica, Inc. Maintenance link system and method
US6594776B1 (en) * 2000-06-28 2003-07-15 Advanced Micro Devices, Inc. Mechanism to clear MAC address from Ethernet switch address table to enable network link fail-over across two network segments

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7519735B1 (en) * 2001-05-08 2009-04-14 Juniper Networks, Inc. Single board routing arrangement
US20050180439A1 (en) * 2004-01-21 2005-08-18 Wataru Kondo Network system, terminal setting method, address resolving server, and client terminal
US8554889B2 (en) * 2004-04-21 2013-10-08 Microsoft Corporation Method, system and apparatus for managing computer identity
US20050256973A1 (en) * 2004-04-21 2005-11-17 Microsoft Corporation Method, system and apparatus for managing computer identity
US20060209736A1 (en) * 2005-03-18 2006-09-21 Barnhart Randy C Data handling in a distributed communication network
US7701891B2 (en) * 2005-03-18 2010-04-20 Raytheon Company Data handling in a distributed communication network
AU2006227658B2 (en) * 2005-03-18 2011-07-07 Raytheon Company Data handling in a distributed communication network
US20070157016A1 (en) * 2005-12-29 2007-07-05 Dayan Richard A Apparatus, system, and method for autonomously preserving high-availability network boot services
US20090196290A1 (en) * 2008-02-01 2009-08-06 Microsoft Corporation On-demand mac address lookup
US7778203B2 (en) * 2008-02-01 2010-08-17 Microsoft Corporation On-demand MAC address lookup
CN102347899A (en) * 2011-07-28 2012-02-08 中国船舶重工集团公司第七一六研究所 Intelligent dual-redundant gigabit Ethernet processing board
US9063939B2 (en) * 2011-11-03 2015-06-23 Zettaset, Inc. Distributed storage medium management for heterogeneous storage media in high availability clusters
US20130117225A1 (en) * 2011-11-03 2013-05-09 Michael W. Dalton Distributed storage medium management for heterogeneous storage media in high availability clusters
US20130198408A1 (en) * 2012-01-26 2013-08-01 Schneider Electric Industries Sas IP Parameter Determination and Configuration
US20140371883A1 (en) * 2013-06-13 2014-12-18 Dell Products L.P. System and method for switch management
US9477276B2 (en) * 2013-06-13 2016-10-25 Dell Products L.P. System and method for switch management
US10318315B2 (en) 2013-06-13 2019-06-11 Dell Products L.P. System and method for switch management
US9419861B1 (en) * 2013-10-25 2016-08-16 Ca, Inc. Management information base table creation and use to map unique device interface identities to common identities
CN112859578A (en) * 2021-01-13 2021-05-28 北京铁科时代科技有限公司 Backup redundant industrial personal computer based on Ethernet bus scheme

Also Published As

Publication number Publication date
US20060080469A1 (en) 2006-04-13
US6854072B1 (en) 2005-02-08

Similar Documents

Publication Publication Date Title
US20050010695A1 (en) High availability/high density system and method
US6728780B1 (en) High availability networking with warm standby interface failover
US6954436B1 (en) Method and apparatus for selecting redundant routers using tracking
US6760859B1 (en) Fault tolerant local area network connectivity
US6732186B1 (en) High availability networking with quad trunking failover
US6763479B1 (en) High availability networking with alternate pathing failover
US8842518B2 (en) System and method for supporting management network interface card port failover in a middleware machine environment
US7656788B2 (en) High-reliability cluster management
US7055173B1 (en) Firewall pooling in a network flowswitch
US6470013B1 (en) Use of enhanced ethernet link—loop packets to automate configuration of intelligent linecards attached to a router
EP1048145B1 (en) Cross-platform server clustering using a network flow switch
US7152179B1 (en) IP redundancy with improved failover notification
US6535990B1 (en) Method and apparatus for providing fault-tolerant addresses for nodes in a clustered system
US8891358B2 (en) Method for application broadcast forwarding for routers running redundancy protocols
US20070002883A1 (en) Methods and devices for networking blade servers
US20080215910A1 (en) High-Availability Networking with Intelligent Failover
US11855809B2 (en) Resilient zero touch provisioning
US9384102B2 (en) Redundant, fault-tolerant management fabric for multipartition servers
JP2006129446A (en) Fault tolerant network architecture
WO2005044887A9 (en) Telecommunications device and method
KR102569484B1 (en) Systems and methods for supporting inter-chassis manageability of nvme over fabrics based systems
US7769862B2 (en) Method and system for efficiently failing over interfaces in a network
US7039922B1 (en) Cluster with multiple paths between hosts and I/O controllers
US7451208B1 (en) Systems and methods for network address failover
US7660234B2 (en) Fault-tolerant medium access control (MAC) address assignment in network elements

Legal Events

Date Code Title Description
AS Assignment

Owner name: COMERICA BANK, CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:CONTINUOUS COMPUTING CORPORATION;REEL/FRAME:016735/0602

Effective date: 20050708

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: CONTINUOUS COMPUTING CORPORATION, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:COMERICA BANK;REEL/FRAME:023094/0094

Effective date: 20090811

AS Assignment

Owner name: RADISYS INTERNATIONAL LLC, DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:CONTINUOUS COMPUTING CORPORATION;REEL/FRAME:029548/0520

Effective date: 20120330

AS Assignment

Owner name: RADISYS CORPORATION, OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RADISYS INTERNATIONAL LLC;REEL/FRAME:029539/0822

Effective date: 20121206