US20060072555A1 - Defining logical trunk groups in a packet-based network - Google Patents

Defining logical trunk groups in a packet-based network

Info

Publication number
US20060072555A1
US20060072555A1 (application US 11/238,663)
Authority
US
United States
Prior art keywords
trunk group
logical trunk
data
packet
network
Prior art date
Legal status
Abandoned
Application number
US11/238,663
Inventor
Kenneth St. Hilaire
Ronald Grippo
Fardad Farahmand
Wassim Matragi
Sunil Menon
James Pasco-Anderson
Glenn Stewart
William Templeton
Current Assignee
Sonus Networks Inc
Original Assignee
Sonus Networks Inc
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (source: Darts-ip global patent litigation dataset)
Application filed by Sonus Networks Inc
Priority to US11/238,663
Assigned to Sonus Networks, Inc. Assignors: Pasco-Anderson, James A.; Matragi, Wassim; St. Hilaire, Kenneth R.; Menon, Sunil K.; Grippo, Ronald V.; Stewart, Glenn W.; Templeton, William C.; Farahmand, Fardad
Publication of US20060072555A1
Legal status: Abandoned

Classifications

    • H04L12/66 Arrangements for connecting between networks having differing types of switching systems, e.g. gateways
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/0836 Configuration setting to enhance reliability, e.g. reduce downtime
    • H04L41/0893 Assignment of logical groups to network elements
    • H04L41/12 Discovery or management of network topologies
    • H04L41/22 Management arrangements comprising specially adapted graphical user interfaces [GUI]
    • H04L41/344 Out-of-band transfers of network management communication
    • H04L41/5012 Determining service availability, e.g. which services are available at a certain point in time
    • H04L41/5032 Generating service level reports
    • H04L41/5087 Network service management wherein the managed service relates to voice services
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0829 Packet loss
    • H04L45/22 Alternate routing
    • H04L45/245 Link aggregation, e.g. trunking
    • H04L45/28 Routing or path finding of packets using route fault recovery
    • H04L47/10 Flow control; Congestion control
    • H04L47/15 Flow control; Congestion control in relation to multipoint traffic
    • H04L47/2416 Real-time traffic
    • H04L47/2441 Traffic characterised by specific attributes relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/41 Flow control; Congestion control by acting on aggregated flows or links
    • H04L47/43 Assembling or disassembling of packets, e.g. segmentation and reassembly [SAR]
    • H04L47/70 Admission control; Resource allocation
    • H04L47/724 Resource allocation using reservation actions at intermediate nodes, e.g. resource reservation protocol [RSVP]
    • H04L47/801 Real time traffic
    • H04L47/822 Collecting or measuring resource availability data
    • H04L47/825 Involving tunnels, e.g. MPLS
    • H04L47/828 Allocation of resources per group of connections, e.g. per group of users
    • H04L49/606 Hybrid ATM switches, e.g. ATM&STM, ATM&Frame Relay or ATM&IP
    • H04L65/103 Media gateways in the network
    • H04L65/104 Signalling gateways in the network
    • H04L65/1043 Gateway controllers, e.g. media gateway control protocol [MGCP] controllers
    • H04L65/1101 Session protocols
    • H04M7/1205 Interconnection between switching centres where the switching equipment comprises PSTN/ISDN equipment and switching equipment of networks other than PSTN/ISDN, e.g. Internet Protocol networks
    • H04M7/1245 Interconnection where a network other than PSTN/ISDN interconnects two PSTN/ISDN networks
    • H04L2012/5663 Support of N-ISDN
    • H04L2012/5671 Support of voice
    • H04L2012/6443 Network Node Interface, e.g. Routing, Path finding
    • Y02D30/50 Reducing energy consumption in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the description describes defining logical trunk groups in a packet-based network.
  • a “trunk” or “trunk circuit” is a connection between distributed switches—a trunk may be a physical wire between the switches or any other way of transmitting data.
  • a “trunk group” is a group of common or similar circuits (i.e., trunks) that originate from the same physical location, e.g., a switchboard. Trunk groups are used to route calls through the traditional network using the telephone number as the routing key that provides routing instructions to distributed switches.
  • trunks impose physical limitations on the amount of data (and hence the number of calls) that may be transmitted over the trunk group. Such limits are based on the capacity of the circuit to transmit data. As physical limits of a trunk group are approached, the number of additional calls that can be routed over that particular trunk group decreases.
  • One known solution to increase the call capacity of a trunk group is to add more trunk circuits to the trunk group.
  • a packet-based telephone network employs packet-switches (also referred to as gateways, media gateways, media gateway controllers, switching components, softswitches, data sources, or call processors).
  • a packet assembler can convert a signal received from a traditional telephone network call into a set of data packets for transmission through the IP network.
  • Packet-based networks do not require dedicated circuits for transmitting data associated with a call (sometimes referred to as a call, a call session, a set of data packets, or data packets), and as such, do not encounter the same physical limitations as circuit-switched networks.
  • Packet-based networks include components with an interface to the packet-based network, for example, an IP address. Packet-switches are analogous to circuit-based switches, and data links are analogous to trunk circuits. However, unlike circuit-based network calls, packet-based network calls employ an IP address as the routing key.
  • a chokepoint can develop in a packet-based network when data packets arrive at a packet switch faster than the packet switch can process the data packets. The chokepoint can result in lost or delayed transmission of data packets that affects existing calls.
  • when a packet-switch is a router, the router generally lacks the processor capacity and signaling capability to reroute incoming data packets to prevent further network slowdown due to the chokepoint.
  • Previous attempts to avoid chokepoints in a network and associated slowdowns have included providing a server to identify an available IP network route based on available bandwidth associated with the route.
  • the route (composed of undefined, ad hoc route segments between IP routers known as “path links”) defines a bandwidth capability for data transmission that consists of the sum of bandwidth available on each path link. Therefore, a route can change as available bandwidth of the constituent path links fluctuates.
  • the server, also called a Virtual Provisioning Server (“VPS”), communicates the route having the most available bandwidth to a signaling gateway that can transmit data over the route defined by the VPS.
  • the VPS has knowledge of the topology of the IP network to be able to route the traffic. For example, the VPS obtains available bandwidth and other routing type information from the routers making up the portion of the IP network through which the VPS routes its traffic.
  • the description describes methods and apparatus, including computer program products, for defining logical trunk groups in a packet-based network.
  • the method involves managing calls through a packet-based network without knowledge of the topology of the packet-based network.
  • the method involves defining a plurality of logical trunk groups for a first media gateway in communication with a packet-based network.
  • Each of the logical trunk groups is associated with one or more media gateways in communication with the first media gateway over the packet-based network.
  • the method involves associating packet data with a particular call.
  • the packet data is received or transmitted by the first media gateway.
  • the method involves associating the packet data with a first logical trunk group of the plurality of logical trunk groups.
  • the method also involves collecting statistical data associated with the first logical trunk group.
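The core method above (define logical trunk groups for a media gateway, associate packet data with a call and a group, collect per-group statistics) can be sketched as follows. The class names, fields, and group-selection rule are illustrative assumptions, not a data model the patent specifies:

```python
from collections import defaultdict

class LogicalTrunkGroup:
    """A logical trunk group: a named grouping of calls between media
    gateways over the packet network (illustrative data model)."""
    def __init__(self, name, peer_gateways):
        self.name = name
        self.peer_gateways = set(peer_gateways)  # gateways reachable via this group
        self.stats = defaultdict(int)            # statistical data for this group

class MediaGateway:
    def __init__(self, name):
        self.name = name
        self.trunk_groups = []   # the plurality of logical trunk groups
        self.call_to_group = {}  # call id -> associated logical trunk group

    def define_trunk_group(self, name, peer_gateways):
        group = LogicalTrunkGroup(name, peer_gateways)
        self.trunk_groups.append(group)
        return group

    def handle_packet(self, call_id, dest_gateway, size_bytes):
        # Associate the packet data with a particular call, then associate
        # that call with a logical trunk group whose peers include the
        # destination gateway.
        group = self.call_to_group.get(call_id)
        if group is None:
            group = next(g for g in self.trunk_groups
                         if dest_gateway in g.peer_gateways)
            self.call_to_group[call_id] = group
        # Collect statistical data associated with the group.
        group.stats["packets"] += 1
        group.stats["bytes"] += size_bytes
        return group
```

Note that the gateway needs no knowledge of the network topology between itself and its peers; it only tracks which peer a call is destined for.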
  • the method involves associating call data that is received from a data source or transmitted to a data destination through a packet-based network with a logical trunk group, based at least in part on a characteristic or identifier of the call data, the data source, the data destination, or any combination of these.
  • the characteristic or identifier includes a name, an Internet Protocol (IP) address, a signaling protocol, a transport service access point, a port, a virtual local area network (VLAN) identifier, or any combination of these.
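One way to realize this characteristic-based association is a first-match rule table; the field names (ip, vlan, signaling) and the example rules below are hypothetical:

```python
def match_trunk_group(rules, call):
    """Return the logical trunk group name for a call, using the first
    rule whose criteria all match the call's characteristics.
    The characteristic keys (ip, vlan, signaling, port) are
    hypothetical examples of identifiers the description lists."""
    for criteria, group in rules:
        if all(call.get(key) == value for key, value in criteria.items()):
            return group
    return None  # no rule matched; the caller may fall back to a default group

# Example rule table: most specific rules first.
rules = [
    ({"signaling": "SIP", "vlan": 100}, "ltg-sip-vlan100"),
    ({"ip": "10.0.0.5"}, "ltg-peer-a"),
]
```

For example, a call described as `{"signaling": "SIP", "vlan": 100, "port": 5060}` would be associated with `ltg-sip-vlan100` by the first rule.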
  • the method involves defining a first logical trunk group for a first media gateway in communication with a packet-based network.
  • the first logical trunk group is associated with a second media gateway in communication with the first media gateway over the packet-based network.
  • the method involves associating packets corresponding to a call that is being routed to the second media gateway with the first logical trunk group.
  • the method involves generating a first set of statistics associated with the first logical trunk group.
  • the method involves defining a second logical trunk group for the second media gateway in communication with the packet-based network.
  • the second logical trunk group is associated with the first media gateway in communication with the second media gateway over the packet network.
  • the method involves associating packets corresponding to a call being routed from the first media gateway with the second logical trunk group and generating a second set of statistics associated with the second logical trunk group.
  • the method involves associating a network quality with the packet-based network based in part on the first or second set of statistics or both.
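Deriving a network quality from the two per-direction statistics sets could, for instance, compare an aggregate packet-loss rate against a threshold. The field names and the 1% threshold here are illustrative assumptions:

```python
def network_quality(stats_a, stats_b, loss_threshold=0.01):
    """Rate the packet-based network between two media gateways from the
    statistics each gateway collected for its logicalal trunk group toward
    the other. Combines both directions into one loss rate."""
    sent = stats_a["packets_sent"] + stats_b["packets_sent"]
    lost = stats_a["packets_lost"] + stats_b["packets_lost"]
    loss_rate = lost / sent if sent else 0.0
    return "good" if loss_rate <= loss_threshold else "degraded"
```

A quality of "degraded" could then drive call management, e.g. routing new calls over a different logical trunk group.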
  • the system includes a packet-based network and a first media gateway in communication with the packet-based network.
  • the first media gateway includes a plurality of logical trunk groups, and each of the plurality of logical trunk groups is associated with one or more additional media gateways over the packet network.
  • the system includes a first module adapted to associate packet data associated with a call with a first logical trunk group selected from the plurality of logical trunk groups for transmission through the packet-based network.
  • the system includes a collection module adapted to collect statistical data associated with the first logical trunk group.
  • the computer program product is tangibly embodied in an information carrier, the computer program product including instructions being operable to cause data processing apparatus to define a plurality of logical trunk groups for a first media gateway in communication with a packet-based network. Each of the plurality of logical trunk groups is associated with one or more media gateways in communication with the first media gateway over the packet-based network.
  • the computer program product includes instructions operable to cause data processing apparatus to associate packet data that is received or transmitted by the first media gateway with a particular call and to associate the packet data with a first logical trunk group selected from the plurality of logical trunk groups.
  • the computer program product includes instructions operable to cause data processing apparatus to collect statistics associated with the first logical trunk group.
  • any of the aspects above can include one or more of the following features.
  • associating the packet data with a first logical trunk group of the plurality of logical trunk groups includes associating the packet data with a first logical trunk group based in part on a characteristic or identifier.
  • the characteristic or identifier can include a name, an IP address, a signaling protocol, a transport service access point, a port, a VLAN identifier, or any combination of these.
  • Some embodiments include selecting a logical trunk group for association with the packet data based in part on a network address or a network mask or both associated with the packet-based network, a second packet-based network, or both.
  • selecting a logical trunk group is based in part on a most-specific address match algorithm.
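A most-specific address match can be implemented as a longest-prefix match over the configured network addresses and masks, e.g. with Python's standard ipaddress module. The prefix-to-group mapping below is hypothetical:

```python
import ipaddress

def most_specific_group(groups, dest_ip):
    """groups: mapping of network prefix -> logical trunk group name.
    Returns the group whose prefix contains dest_ip with the longest
    mask (the most-specific address match), or None if nothing matches."""
    ip = ipaddress.ip_address(dest_ip)
    best, best_len = None, -1
    for prefix, group in groups.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and net.prefixlen > best_len:
            best, best_len = group, net.prefixlen
    return best

groups = {
    "10.0.0.0/8": "ltg-core",
    "10.1.0.0/16": "ltg-region-1",
    "10.1.2.0/24": "ltg-site-a",
}
# 10.1.2.9 falls inside all three prefixes; the /24 is most specific.
```

Here `most_specific_group(groups, "10.1.2.9")` selects `ltg-site-a` even though the broader prefixes also match.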
  • the characteristic or identifier is associated with a data source, a data destination, a logical trunk group, or any combination thereof. In some embodiments, the characteristic or identifier is associated with a data source and a data destination.
  • the data source or data destination includes a gateway, a call processor, a switch, a trunk group, or any combination of these.
  • a resource parameter is associated with the logical trunk group.
  • the resource parameter can include a scalar parameter, a vector parameter, an operational state parameter, or any combination of these.
  • more than one resource parameter can be associated with the logical trunk group.
  • an operational state parameter and a scalar parameter can be associated with the logical trunk group, or more than one scalar parameter can be associated with the logical trunk group.
  • a scalar parameter can include a call capacity, signal processing resources, a data packet volume, a bandwidth, or any combination of these.
  • a vector parameter can include a directional characteristic.
  • An operational state parameter can include an in-service or an out-of-service operational state.
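The three kinds of resource parameter (scalar, vector, operational state) might be modeled as a single structure per logical trunk group; the fields and the admission check below are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class OperationalState(Enum):
    IN_SERVICE = "in-service"
    OUT_OF_SERVICE = "out-of-service"

@dataclass
class ResourceParameters:
    """Hypothetical resource parameters for a logical trunk group."""
    call_capacity: int = 0              # scalar: maximum simultaneous calls
    bandwidth_kbps: int = 0             # scalar: available bandwidth
    direction: str = "bidirectional"    # vector: directional characteristic
    state: OperationalState = OperationalState.IN_SERVICE  # operational state

    def can_admit(self, active_calls: int) -> bool:
        # A new call is admitted only when the group is in service and
        # below its call capacity.
        return (self.state is OperationalState.IN_SERVICE
                and active_calls < self.call_capacity)
```

An allocation module (as described for the system aspect) would attach such a structure to each logical trunk group and consult it on call setup.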
  • a first resource parameter is associated with the first logical trunk group.
  • a second resource parameter is associated with a second logical trunk group.
  • the first logical trunk group and the second logical trunk group can be associated with a hierarchical group, and the hierarchical group can be associated with at least a portion of the first resource parameter, the second resource parameter or both.
  • a first hierarchical group associated with the first logical trunk group is associated with a hierarchical combination.
  • a trunk resource parameter (e.g., the first resource parameter) can be determined based at least in part on a combination resource parameter associated with the hierarchical combination, a combination resource parameter associated with the hierarchical group, or both.
  • a second set of call data received from a second data source or transmitted to a second data destination through the packet-based network is associated with a second logical trunk group.
  • a network node includes the data source, the second data source, the data destination, the second data destination, or any combination of these.
  • the first logical trunk group and the second logical trunk group can be associated with a combined communication channel in communication with the network node.
  • the combined communication channel is in communication with a second packet-based network.
  • calls between the first media gateway and the second media gateway are managed based on a network quality that is based in part on the first set of statistics, the second set of statistics, or both.
  • an allocation module is adapted to associate the resource parameter with the first logical trunk group.
  • the resource parameter can include a scalar parameter, a vector parameter, an operational state parameter, or any combination of these.
  • Implementations can realize one or more of the following advantages.
  • the requirement of centralized control over IP routes (e.g., by a VPS) is eliminated.
  • knowledge of the network topology is not required to control time-sensitive data transmission through the IP network.
  • Implementations realize increased scalability because knowledge of network topology is not required.
  • Further advantages include controlling data through an IP network based on characteristics or parameters other than bandwidth capacity. Faster processing and more efficient network resource management are realized due to decreased data communications from a centralized control.
  • Another advantage includes increased visibility from the perspective of, for example, a network administrator into the IP network backbone.
  • the increased visibility improves network management functions by allowing a network administrator to configure or manipulate call traffic through the IP network based on performance statistics.
  • the performance statistics can provide visibility by reporting on the performance associated with various sets of pathways or calls.
  • the performance statistics also relate to the quality of the IP network and devices used by the network.
  • the performance statistics relate to call transmission or data transmission associated with a set of pathways. More particularly, the performance statistics allow a network administrator to track packet transmission through an IP network by only monitoring the devices that transmit and receive data packets. For example, the performance statistics can indicate whether a certain network (e.g., PSTN or IP network) is difficult to reach, which enables a network administrator to route calls around or away from the network that is difficult to reach.
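The route-around behavior described above can be sketched in code. This is a minimal illustration, not part of the patent: the function name, thresholds, and statistics fields are all assumptions, and Python is used only for readability.

```python
# Hypothetical sketch: using per-trunk-group performance statistics to steer
# calls away from a hard-to-reach network. All names and thresholds are
# illustrative; the patent does not specify an API.

def reachable_trunk_groups(stats, max_loss=0.05, max_delay_ms=150):
    """Return the trunk group names whose statistics meet quality thresholds."""
    return [
        name
        for name, s in stats.items()
        if s["packet_loss"] <= max_loss and s["delay_ms"] <= max_delay_ms
    ]

stats = {
    "TG-A": {"packet_loss": 0.01, "delay_ms": 40},   # healthy pathway set
    "TG-B": {"packet_loss": 0.20, "delay_ms": 400},  # hard-to-reach network
}
print(reachable_trunk_groups(stats))  # ['TG-A']
```

A network administrator could then route new calls only over the trunk groups this check returns, which is the "route calls around or away from the network that is difficult to reach" behavior described above.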
  • the performance statistics can be industry networking or traffic standard statistics such as Telcordia GR-477 or Telcordia TR-746 standard statistics that are used by PSTN administrators or individual statistics developed independently to determine network performance in IP networks. Knowledge of the packet-based network's performance allows an administrator to narrow or tailor troubleshooting efforts to improve performance to those areas within the packet-based network that are experiencing difficulty.
  • Another advantage associated with increased visibility includes improved ease of migration from circuit-switched telephony networks (e.g., the PSTN) to packet-switched telephony networks (e.g., IP networks) by allowing an administrator to employ similar tools for managing the IP networks as are available for managing the PSTN.
  • FIGS. 1-3 are block diagrams showing exemplary networks and devices related to routing data associated with a call through a packet-based network.
  • FIG. 4 is an exemplary graphical user interface for configuring control features.
  • FIGS. 5-6 depict exemplary networks and devices involved with distributed control features.
  • FIGS. 7-8 are block diagrams illustrating a hierarchical configuration of call controls and related implementations.
  • FIG. 9 is a block diagram illustrating exemplary networks and devices for call processing.
  • FIG. 1 depicts a system 100 that includes exemplary networks and devices associated with routing data associated with a call through a packet-based network.
  • Data associated with a call can include one or more sets of data packets and may be referred to herein as data packets, a set or sets of data packets, a call or calls, a call leg, or some combination of these terms.
  • although the call data described in this description references media data (e.g., voice, video), the call data may also include signaling data without departing from the scope of the invention.
  • the system 100 includes a PSTN 105 that is in communication with a media gateway 110 over communication channel 115 .
  • the communication channel 115 includes PSTN trunk groups.
  • the gateway 110 can be, for example, a GSX9000 sold by Sonus Networks, Inc., of Chelmsford, Mass.
  • the gateway 110 is in communication with a first packet network 120 over a communication channel 125 .
  • the gateway 110 is also in communication with a second packet network 130 over a communication channel 135 .
  • the first packet network 120 and the second packet network 130 can be separate packet networks, for example, one being a public packet network (e.g., the Internet) and one being a private packet network (e.g., an intranet).
  • the first packet network 120 and the second packet network 130 can be the same packet network, e.g., the Internet.
  • the separation is shown to illustrate egress and ingress call data for the gateway 110 .
  • the gateway 110 uses the packet network 120 to communicate with a media gateway 140 and a media gateway 145 (e.g., by transmitting packet data through the packet network 120 ).
  • the gateway 110 uses the packet network 130 to communicate with a media gateway 150 and a media gateway 155 .
  • the media gateways 140 , 145 , 150 , 155 may be located in different geographical areas to allow the service provider to provide national service at reduced costs.
  • the gateway 110 can be in Texas
  • the gateway 140 can be in Oregon
  • the gateway 145 can be in California
  • the gateway 150 can be in Massachusetts
  • the gateway 155 can be in New Jersey.
  • a call is received from the PSTN 105 at the gateway 110 and transformed from a circuit-based call into a packet-based call (e.g., by a packet assembler module or a packet assembler disassembler module).
  • the packet data associated with the call is transmitted from the gateway 110 to the appropriate gateway, for example gateway 145 , and converted back to a circuit-based call at the gateway 145 , if the called party is connected to another portion of the PSTN, or can be left in packet form if the called party is connected directly to a packet-based system (e.g., IP-based telephony).
  • a service provider that manages the gateway 110 does not always have knowledge of the topology of the packet networks 120 and 130 , particularly if the packet networks 120 and 130 represent a public packet network such as the Internet.
  • a service provider obtains access to a packet network through agreements with a third party (e.g., a service-level or quality agreement).
  • One advantageous feature of the described configuration allows the service provider to evaluate or understand the underlying packet network inferentially (e.g., by monitoring traffic or performance statistics), which enables the service provider to verify that the third party is meeting the quality guarantees in the agreement.
  • the gateway 110 transmits the packets associated with the call to the packet network 120 through communication channel 125 and the packet network 120 takes responsibility for routing the packets to the gateway 145 .
  • because the packets are associated with a call, the packets are time-sensitive. As such, problems within the packet network 120 can affect whether and how fast the packets are transported to the gateway 145 . Delays and lost packets lead to a loss of quality of the call.
  • the gateway 110 advantageously includes logical trunk groups TG-A 160 , TG-B 165 , TG-C 170 , and TG-D 175 .
  • logical trunk groups represent a virtual communication channel through a packet network.
  • logical trunk groups can be represented as objects in an object-oriented data processing paradigm.
  • the service provider managing the gateway 110 can define the logical trunk groups 160 , 165 , 170 , and 175 to be associated with the gateways 140 , 145 , 150 , and 155 , respectively.
  • as the gateway 110 receives calls from the PSTN 105 , transforms the call data into packets, and transmits those packets to the appropriate gateway (e.g., the gateways 140 , 145 , 150 , and 155 ), it associates the packets with the appropriate or corresponding logical trunk group. For example, as a call is received and routed to the gateway 145 , the packets associated with that call are associated with the logical trunk group TG-B 165 .
  • As additional calls come into the gateway 110 and are routed to the gateway 145 , they are also associated with the logical trunk group TG-B 165 . After packet data has been associated with a logical trunk group, statistics about that packet data can be collected and tracked. In some examples, these statistics are aggregated to provide statistics at the call level. In some examples, statistics aggregated at the call level can be aggregated to provide statistics about the logical trunk group (e.g., statistics associated with a group of one or more calls).
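The aggregation just described (per-call statistics rolled up into per-logical-trunk-group statistics) might be sketched as follows; the class, field, and method names are illustrative assumptions, not from the patent.

```python
# Hedged sketch: accumulate per-call statistics into a per-trunk-group total,
# mirroring the call-level-to-trunk-group aggregation described above.
from collections import defaultdict

class TrunkGroupStats:
    def __init__(self):
        self.calls = 0
        self.packets_sent = 0
        self.packets_lost = 0

    def add_call(self, packets_sent, packets_lost):
        """Fold one call's statistics into the trunk group's totals."""
        self.calls += 1
        self.packets_sent += packets_sent
        self.packets_lost += packets_lost

    def loss_rate(self):
        return self.packets_lost / self.packets_sent if self.packets_sent else 0.0

groups = defaultdict(TrunkGroupStats)
# Two calls routed to gateway 145 are both associated with TG-B.
groups["TG-B"].add_call(packets_sent=1000, packets_lost=30)
groups["TG-B"].add_call(packets_sent=500, packets_lost=15)
print(groups["TG-B"].loss_rate())  # 0.03
```

The resulting per-trunk-group figures are what let the service provider infer trouble on the pathways to a particular gateway without knowing the packet network's topology.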
  • the service provider managing the gateway 110 has knowledge that packet network 120 has some issues in the set of pathways from the gateway 110 to the gateway 145 , even though the service provider does not necessarily know the topology of the packet network 120 .
  • the data associated with each logical trunk group models characteristics (e.g., capacity, hardware failures, bandwidth, etc.) of the set of pathways between the gateway 110 and the other gateway(s) associated with the particular logical trunk group.
  • references to the set of pathways between two devices refer to any combination of path links through the packet network (e.g., the packet network 120 ) from one device (e.g., the media gateway 110 ) to another device (e.g., the media gateway 145 ).
  • information associated with the performance of a network can be inferred from statistics associated with “edge devices” such as the gateway 110 and a media gateway 140 , 145 , 150 , 155 .
  • the logical trunk groups can be associated with more than one media gateway.
  • the service provider managing the gateway 110 can define the logical trunk group TG-A 160 to be associated with the gateways 140 and 145 (e.g., the gateways in communication with the gateway 110 via packet network 120 ).
  • the service provider managing the gateway 110 can define the logical trunk group TG-B 165 to be associated with the gateways 150 and 155 (e.g., the gateways in communication with the gateway 110 via packet network 130 ).
  • the packets associated with that call are associated with the logical trunk group TG-A 160 .
  • Service providers whose networks include a portion of the PSTN typically have managers who have managed the PSTN using statistics collected about PSTN trunk groups. For example, the Telcordia GR-477 or the Telcordia TR-746 standard on network traffic management deals with PSTN trunk group reservation and PSTN trunk group controls. By establishing logical trunk groups for the packet-based traffic, management techniques analogous to those used for the PSTN trunk groups can advantageously be applied to the packet-based traffic. Managers can quickly adapt to managing the packet-based traffic using their PSTN trunk group skill set.
  • the logical trunk groups can be used to allocate resources and to provision services.
  • the logical trunk group TG-A 160 can represent the network used for calls provided on a retail basis and the logical trunk group TG-B 165 can represent the network used for calls provided on a wholesale basis. Because the retail calls provide a higher margin, limited resources, such as DSPs, can be allocated in higher percentages, or on a higher priority basis, to the calls associated with the logical trunk group TG-A 160 , regardless of the final destination of the call data. Similarly, services can be provisioned to a call according to the logical trunk group with which that call is associated.
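The allocation idea above (a larger share of a limited resource such as DSPs for the higher-margin trunk group) can be sketched as a simple proportional split; the function name and percentages are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch: split a limited DSP pool across logical trunk groups
# according to configured shares, favoring the higher-margin (retail) group.

def allocate_dsps(total_dsps, shares):
    """Split a DSP pool across trunk groups by fractional share."""
    return {tg: int(total_dsps * share) for tg, share in shares.items()}

# Retail traffic (TG-A) gets a larger share than wholesale traffic (TG-B),
# regardless of the final destination of the call data.
print(allocate_dsps(100, {"TG-A": 0.7, "TG-B": 0.3}))  # {'TG-A': 70, 'TG-B': 30}
```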
  • the telephone call has certain data associated with that call (e.g., video, voice, signaling).
  • the PSTN 105 communicates the data to a gateway 110 , which includes at least one network card with an IP address (e.g., a network interface).
  • calls are received by the gateway 110 at a physical location or port (e.g., a DS0 or a DS1 port) from the PSTN 105 over the communication channel 115 .
  • the gateway 110 processes the data and transmits the data according to a characteristic of the device receiving the data.
  • the device(s) receiving the packetized call data can be the gateway 140 , the gateway 145 , the gateway 150 , and/or the gateway 155 .
  • the characteristic can include a name, an IP address, a signaling protocol, a virtual local area network (“VLAN”) identifier or any combination of these.
  • a “set of pathways” is not necessarily a fixed physical concept but relates to the concept of communications between various network equipment (e.g., the gateway 110 and the media gateway 140 ) through the packet network 120 .
  • the logical trunk groups TG-A 160 , TG-B 165 , TG-C 170 , and TG-D 175 are referred to as “IP trunk groups” when the networks 120 and 130 are IP-based networks.
  • Individual pathways are logically associated to form the sets of pathways from one call processing device (e.g., gateway 110 ) to another call processing device (e.g., gateway 140 ).
  • One species of logical association involves associating individual pathways based on IP addresses associated with the call processing devices (also referred to herein as call processors) in communication with the pathways (e.g., gateways 110 , 140 , 145 , 150 , 155 ).
  • an IP trunk group is a named object that represents one or more call processing devices and the data communication paths that connect the network elements.
  • By associating a logical trunk group with the sets of pathways between two call processing devices, the amount of data transmitted over any of the sets of pathways can be controlled by the gateway 110 , as discussed in more detail below with respect to call admission controls (e.g., whether calls are transmitted as determined by statistics associated with a given IP trunk group).
  • Transmission of a call from the gateway 110 over the IP network 120 to the destination gateway 140 is referred to as an “egress call leg” with respect to gateway 110 .
  • the egress call leg is associated with an IP trunk group (sometimes referred to as an “egress IP trunk group”).
  • Reception of the call at the destination 140 is referred to as an “ingress call leg” with respect to the destination 140 .
  • the ingress call leg is associated with an IP trunk group defined for the gateway 140 (sometimes referred to as an “ingress IP trunk group”).
  • the egress call leg can be associated with the same IP trunk group as the ingress call leg, but it is not required to be.
  • the gateway 110 can be a data or call source with respect to the destination 140 and can include a characteristic that may be used for future call routing (e.g., where the destination 140 provides subsequent call processing and is not the final destination of the call).
  • a data source or a data destination can include any gateway, PSTN trunk group, or any set of pathways (e.g., represented by logical trunk groups) with associated characteristics or identifiers.
  • a media path is associated with a pathway or set of pathways.
  • the media path transmits out-of-band non-voice data associated with the set of pathways (e.g., for videoconferencing) and can be used in association with IP trunk groups.
  • data associated with a call is received from a data source, in this example the gateway 140 , at the gateway 110 over a particular pathway included in the set of pathways between the gateway 110 and the gateway 140 through the packet network 120 .
  • This set of pathways can be associated with the logical trunk group TG-A 160 (e.g., an IP trunk group when the packet-based network 120 is an IP network).
  • the gateway 110 processes the data and associates the data with the logical trunk group TG-A 160 based in part on a characteristic of the data source, including a name (e.g., FROM_Oregon), the IP address of the gateway 140 , a transport service access point (“TSAP”) associated with the gateway 140 , the signaling protocol between the gateways 140 and 110 , a VLAN identifier associated with one of the gateways 110 , 140 , or a combination of these.
  • the call data is associated with the logical trunk group TG-A 160 based in part on a characteristic of a data destination, and the characteristic can include the name of the destination, the IP address of the destination, a TSAP associated with the destination, the signaling protocol between the destination and the gateway 110 , a VLAN identifier associated with the destination or the gateway 110 , or any combination of these. Because data is received from the gateway 140 at the gateway 110 , the selected logical trunk group is the logical trunk group association for ingress call data.
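The association step described above might be sketched as a table lookup keyed on source or destination characteristics. The table contents, the IP address, and the function name are all hypothetical; the patent names the characteristic types but not a concrete mechanism.

```python
# Hedged sketch: associate call data with a logical trunk group by matching
# a characteristic of the data source or destination (name, IP address,
# TSAP, signaling protocol, or VLAN identifier). Table entries are illustrative.

TRUNK_GROUP_TABLE = [
    # (characteristic key, characteristic value, logical trunk group)
    ("name", "FROM_Oregon", "TG-A"),
    ("ip_address", "10.0.0.140", "TG-A"),  # hypothetical address for gateway 140
    ("vlan_id", 200, "TG-B"),
]

def associate(call_characteristics):
    """Return the first trunk group whose table entry matches the call data."""
    for key, value, trunk_group in TRUNK_GROUP_TABLE:
        if call_characteristics.get(key) == value:
            return trunk_group
    return None

print(associate({"ip_address": "10.0.0.140"}))  # TG-A
```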
  • the gateway 110 can be a data source with respect to gateway 140 (e.g., for return data traffic).
  • the same trunk group or a different trunk group can be used.
  • the trunk group TG-A 160 can be used.
  • where IP addresses are the characteristics used to determine the association, the IP address of the data source can be used for ingress calls and the IP address of the data destination can be used for egress calls, which in both cases is the IP address of the gateway 140 .
  • the IP address of either the data source or the data destination can be used for either ingress call legs or egress call legs.
  • the IP addresses of the data source and the destination can be used to associate the call with a logical trunk group.
  • the association with the logical trunk group TG-A 160 can be based on the IP address of the data source (e.g., when the data source is the gateway 140 ) or the data destination.
  • the association with the logical trunk group TG-B 165 can be based on the IP address of the data destination (e.g., when the data destination is the gateway 140 ) or the data source. In such examples, for ingress calls from the gateway 140 , the data is associated with the logical trunk group TG-A 160 , and for egress calls to the gateway 140 , the data is associated with the logical trunk group TG-B 165 .
  • although the network elements 110 , 140 , 145 , 150 , and 155 are referred to repeatedly as gateways, they can also represent groups of signaling peers, points of presence, central offices, network nodes, other telephony equipment and/or the like in communication with the networks 120 and 130 without departing from the scope of the invention.
  • Calls and associated data received by the gateway 110 from the PSTN 105 can be transmitted through the packet networks 120 and 130 to different destinations 140 , 145 , 150 , and/or 155 .
  • calls and associated data received by the gateway 110 over the packet networks 120 and 130 from data sources 140 , 145 , 150 , and/or 155 can be transmitted to the PSTN 105 over PSTN trunk group(s) included in the communications channel 115 .
  • the gateway 110 is an interface between circuit-switched networks like the PSTN 105 and packet-based networks 120 and 130 . Call data between the PSTN 105 and the gateway 110 is controlled based on circuit availability (e.g., PSTN trunk group management).
  • Call data between the gateway 110 and the other gateways 140 , 145 , 150 , and/or 155 does not have such limitations.
  • a network administrator can configure the operation of the gateway 110 to impose such limitations and manage the call data transmitted across the packet-based networks 120 and 130 based on performance statistics associated with the logical trunk groups analogous to those techniques and performance statistics used to manage the PSTN trunk groups included in the communication channel 115 .
  • FIG. 2 depicts a system 200 that includes exemplary networks and devices for call routing and control in connection with packet-to-packet call processing.
  • a data source 202 , representing a group of call processing devices, transmits data associated with a call over a first set of pathways 204 , and the data are received by a switch 206 .
  • the switch 206 can include a packet-peering switch for peer-to-peer data transmission.
  • the switch 206 can be, for example, a GSX9000 sold by Sonus Networks, Inc. of Chelmsford, Mass.
  • the switch 206 can then select a second set of pathways 208 to transmit the data to a destination 210 , also representing a group of call processing devices.
  • the first set of pathways 204 and the second set of pathways 208 are implemented in an IP-based packet network, so the logical trunk groups are referred to as IP trunk groups.
  • the first set of pathways 204 and the second set of pathways 208 are each associated with a distinct logical trunk group. Specifically, the first set of pathways 204 is associated with a logical trunk group “IP-A” and the second set of pathways 208 is associated with a logical trunk group “IP-B”.
  • the first set of pathways 204 (e.g., the logical trunk group IP-A) is associated with an ingress call leg
  • the second set of pathways 208 (e.g., the logical trunk group IP-B) is associated with an egress call leg.
  • the switch 206 associates call data with the logical trunk group IP-A because the call arrives from the data source 202 .
  • the switch 206 associates call data with the logical trunk group IP-B because the call is being transmitted to the data destination 210 .
  • the switch 206 operates in a packet-based environment (e.g., a packet-based network), so the data transmitted by the first set of pathways 204 is not required to correspond one-to-one to the data packets transmitted by the second set of pathways 208 . More specifically, some members of the set of data packets can be transmitted by the second set of pathways 208 , and some members can be transmitted over a third set of pathways (not shown).
  • the data packets are reassembled at a remote switch (not shown) that is in communication with the final destination of the call.
  • the data source 202 or the destination 210 or both can be a network node that includes one or more pieces of telephone equipment (e.g., the switch 206 or the gateway 110 of FIG. 1 ).
  • the network node can include an interface with a packet-based network via the first or second sets of pathways 204 , 208 , or via a connection to the network node rather than to individual telephony equipment within the node.
  • FIG. 3 shows a system 300 including exemplary networks and devices associated with data routing in a packet-based network core.
  • data associated with a call is routed through a packet-based network 302 in which substantially all of the network equipment is operated by a single entity or controller, for example a VOIP network administrator.
  • the network 302 includes a first switching component 303 , a second switching component 304 , and a control component 306 .
  • Switching components 303 , 304 can be gateways as described above with respect to FIG. 1 (e.g., a GSX9000 sold by Sonus Networks of Chelmsford, Mass.) and control component 306 can be a policy server, controller, monitor, or other network component for storing and implementing control features.
  • the control component 306 can be a PSX policy server for use in the Insignus Softswitch™ architecture, both sold by Sonus Networks of Chelmsford, Mass.
  • the switching components 303 , 304 are “edge devices” (e.g., components that provide an entry point into the core network 302 , such as that of a telephone carrier or internet service provider (“ISP”)).
  • a first network 308 (e.g., a PSTN or a portion of the general PSTN) is in communication with switching component 303 via a first connection A 310 (e.g., a PSTN trunk group).
  • the first network 308 can be a packet-based network (e.g., an IP network), and the first connection A 310 can be a logical trunk group (e.g., an IP trunk group) without departing from the scope of the invention.
  • the network 308 may also be referred to as PSTN 308
  • the connection A 310 may also be referred to as PSTN connection A 310 .
  • the first PSTN connection A 310 can be a PSTN trunk group that is in communication with switching component 303 (e.g., over wire, fiber optic cable, or other data transmission media). Data associated with a telephone call can originate from a caller in the PSTN 308 and is transmitted to the first switching component 303 for routing through the network 302 by the first PSTN connection A 310 .
  • Switching component 303 can communicate with other network equipment (e.g., switching component 304 ) via one or more sets of pathways that are associated with a logical trunk group B 312 .
  • Switching component 303 communicates data packets associated with the call to the switching component 304 via a set of pathways associated with the logical trunk group B 312 .
  • Data packets are received by switching component 304 via the set of pathways between the components 303 and 304 and the switching component 304 associates this data with a logical trunk group C 314 .
  • the data packets transmitted over the first set of pathways 312 may not directly correspond to the data packets received over the second set of pathways 314 . More particularly, the same set of pathways can have a different name with respect to different components.
  • the set of pathways between the components 303 and 304 are associated with the egress logical trunk group B 312 with respect to one component (e.g., the switching component 303 ) and associated with the ingress logical trunk group C 314 with respect to another component (e.g., the switching component 304 ).
  • the first set of pathways (e.g., the set of pathways actually taken for the data traveling from the component 303 to the component 304 ) does not correspond directly (e.g., hop for hop through the packet-based network 302 ) with the second set of pathways (e.g., the set of pathways actually taken by the data going from the component 304 to the component 303 ).
  • the set of data packets associated with the original call received by the component 304 is reassembled into a signal by a packet assembler/disassembler that can be co-located with switching component 304 .
  • a second connection D 316 (e.g., a PSTN trunk group) transmits the reassembled data from switching component 304 to a second network 318 (e.g., a PSTN network or a portion of the general PSTN) for further processing or communication to the intended call recipient.
  • the second network 318 can be a packet-based network (e.g., an IP network), and the second connection D 316 can be a logical trunk group (e.g., an IP trunk group) without departing from the scope of the invention.
  • the network 318 may also be referred to as the PSTN 318
  • the connection D 316 may also be referred to as PSTN connection D 316 .
  • the network 302 can appear as a single distributed switch.
  • the control component 306 provides routing instructions to switching component 303 based on the characteristic of the data source (e.g., the PSTN 308 or the first PSTN connection A 310 ) or the data destination (e.g., the switching component 304 , the second PSTN connection D 316 , or the PSTN 318 ), where the characteristic includes name, signaling protocol, TSAP, IP address or any combination of these. More particularly, the control component 306 indicates to the switching component 303 routing data (e.g., the IP address of the component 304 ) to employ for transmitting data through the core network 302 based in part on the characteristic.
  • data associated with a call is received by the switching component 303 over the PSTN connection 310 , and the switching component 303 transmits a signal (e.g., a policy request) to the control component 306 indicating that the data was received or requesting routing or transmission instructions.
  • the control component 306 can provide to the switching component 303 routing information through the network 302 for the switching component 304 to ultimately transmit the data to the PSTN connection D 316 —the control component 306 can provide routing information based on a characteristic associated with the data or a characteristic of a data source or destination. More particularly, the control component 306 provides routing information through the network 302 without knowledge of the particular topology of the network 302 by, for example, using the name of the logical trunk group with which a set of pathways is associated (e.g., logical trunk group B 312 ) to select the route based in part on, for example, the port (e.g., a DS0 or a DS1 port) on switching component 303 at which the data arrived from PSTN connection 310 . Other characteristics can be used by control component 306 to select and provide routing information. From the point of view of the PSTNs 308 , 318 , the network 302 routes the data from the first PSTN connection A 310 to the second PSTN connection D 316 .
  • the components 303 and 304 associate the logical trunk groups B 312 and C 314 with the call.
  • the component 303 associates the logical trunk group B 312 with the call as the call egresses the component 303 .
  • the component 304 associates the logical trunk group C 314 with the call.
  • This procedure for associating a logical trunk group in the network core (e.g., the network 302 ) with a call is sometimes referred to as logical trunk group selection (or IP trunk group selection in the case where the core network is an IP-based network).
  • control component 306 is employed in the logical trunk group selection.
  • information regarding the route through the network is available to the switching component 303 without communication to the control component 306 (e.g., the information can be stored on or available to the switching component 303 ).
  • data associated with a call is processed by the network 302 .
  • the data can be processed by a call processor having an interface to the network 302 (e.g., a network card with an IP address).
  • a call processor generally refers to a module for implementing functionality associated with call transmission through a network, for example, by providing signaling information, routing information, or implementing other functionalities with respect to data associated with a call such as compression or decompression or signal processing.
  • a call processor can include the switching components 303 , 304 and/or the control component 306 or other devices or modules not illustrated.
  • data associated with a call includes information relating to the characteristic (e.g., information related to the characteristic forms part of a data packet).
  • the characteristic can include a name, an IP address, a signaling protocol, a transport service access point, or any combination of these.
  • the characteristic can be associated with a data source, a call processor, a set of pathways, a logical trunk group, or combinations of such elements.
  • a selector selects a logical trunk group associated with a set of pathways over which a call processor (e.g., the switching component 303 ) can transmit the data based at least in part on the characteristic.
  • the selector can be a module operable within the network 302 .
  • the selector can be co-located with the call processor (e.g., the switching component 303 ).
  • the selector is located remotely from a call processor (e.g., co-location with control component 306 ).
  • a user can prioritize characteristics such that the selector considers first the highest-priority characteristic for routing (e.g., name) and then considers a second-highest-priority characteristic (e.g., IP address) only if routing is not possible using the highest-priority characteristic.
  • a default logical trunk group associated with a default set of pathways is available for selection by the selector if the lowest-priority set is unavailable.
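The prioritized-characteristic selection with a default fallback described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; all function and variable names (e.g., `select_trunk_group`, `routes`) are assumptions for illustration.

```python
# Illustrative sketch: a selector that tries characteristics in
# user-assigned priority order (e.g., name before IP address) and falls
# back to a default logical trunk group when no characteristic matches.

def select_trunk_group(call_data, prioritized_characteristics, routes,
                       default_group="DEFAULT_LTG"):
    """Return the logical trunk group selected for a call.

    call_data: dict of characteristic -> value for this call
               (e.g., {"name": ..., "ip_address": ...})
    prioritized_characteristics: list of characteristics, highest priority first
    routes: dict mapping (characteristic, value) -> logical trunk group name
    """
    for characteristic in prioritized_characteristics:
        value = call_data.get(characteristic)
        group = routes.get((characteristic, value))
        if group is not None:
            return group          # routing is possible at this priority level
    return default_group          # no characteristic matched: default group

routes = {("name", "LTG_B"): "LTG_B",
          ("ip_address", "209.131.17.43"): "LTG_C"}

# Name is considered first; IP address is used only if routing by name fails.
print(select_trunk_group({"ip_address": "209.131.17.43"},
                         ["name", "ip_address"], routes))   # -> LTG_C
```

If the call data carried a matching name as well, the name would win because it sits earlier in the priority list.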
  • “route data” refers to data associated with a route through the network 302 .
  • the route data can be used to associate the call data (e.g., the packets associated with the call) with a logical trunk group (e.g., the logical trunk group B 312 or the logical trunk group C 314 ) representing a set of pathways through the packet network 302 .
  • the logical trunk group associated with the route can be denoted with a name (e.g., “Logical Trunk Group B 312” or “Logical Trunk Group C 314”).
  • a telephony device with a network interface can be configured to transmit the data through the network based on the name of the associated logical trunk group (e.g., the logical trunk group B 312 ) or the destination of the route (e.g., the second PSTN connection D 316 ).
  • the route through the network 302 is chosen by the selector based on the name of the logical trunk group, which allows an administrator to control the route or pathways of data transfer in a manner similar to that employed by an administrator with respect to PSTN trunk groups.
  • the entries associated with “Route Entry 1” can correspond to switching component 303 . More specifically, the entries can refer to transmitting data across switching component 303 .
  • “Data Destination” can correspond to the characteristic “name,” and “Destination IP Address” can correspond to the characteristic “IP address.”
  • Switching element 303 can communicate with or transmit data associated with a call to the set of pathways associated with the logical trunk group B 312 .
  • “Route Signaling Protocol” identifies the signaling protocol used to communicate between switching elements (e.g., between switching element 303 and switching element 304 ).
  • “Route Signaling Protocol” refers to the signaling protocol associated with a set of pathways (e.g., the set of pathways associated with the logical trunk group B 312 ). “Route Signaling Protocol” can identify a physical location with respect to the network or call processors in communication with the network. For example, the null entry associated with Route Entry 1 indicates that the call is transmitted from one location on the switching component 303 to another location (e.g., the set of pathways associated with the logical trunk group B 312 or the TSAP associated with the set of pathways associated with the logical trunk group B 312 )—the null entry is the signaling protocol associated with transmitting data within or across a switching component 303 and generally is not configured by an administrator.
  • the entries associated with “Route Entry 2” can correspond to switching component 304 , e.g., the destination of the data after the call egresses switching component 303 .
  • the entry associated with “Destination Trunk Group” can refer to the logical trunk group associated with the egress call leg with respect to the switching component 303 (e.g., the logical trunk group C 314 ).
  • the “Route Signaling Protocol” can indicate that the call is to be transmitted to switching component 304 using SIP signaling protocol. More particularly, a network administrator can configure call routing based on the desired communication protocol to be used between two switching components. For example, the Route Signaling Protocol associated with Route Entry 3 is SIP-T.
  • SIP or SIP-T signaling protocol can indicate transmission to a switching component logically remote from switching component 303 but still operating within the network core (e.g., switching component 304 ).
  • the signaling protocols of Table 1 are merely illustrative and other signaling protocols may be used, for example H.323.
  • a “Route Signaling Protocol” entry that is not a null entry (e.g., an entry of SIP, SIP-T, or H.323) indicates, for example, data transmission to a call processor or switching component (not shown) logically remote from switching component 303 or operating outside the network 302 core.
  • the signaling protocol is an industry-standard signaling protocol or a proprietary signaling protocol between edge devices, such as a GW-GW protocol, which refers generally to the signaling protocol between two gateways.
  • the characteristic that controls the logical trunk group chosen by the selector for associating the transmitted or received data is the IP address or signaling protocol associated with a set of pathways or with the call processor.
  • the characteristic is a VLAN.
  • the call processor can maintain a mapping of IP network addresses that are associated with the logical trunk groups. In some embodiments, the mapping is contained in an object or a table that is available to the call processor (e.g., Table 1).
  • a separate mapping is maintained for data that is received by a call processor (e.g., ingress call legs) and data that is transmitted by a call processor (e.g., egress call legs).
  • the characteristic that is associated with the logical trunk group e.g., a name, IP address, signaling protocol, VLAN identifier, or a combination of these
  • the characteristic is associated generally with a data source, a data destination or a trunk group.
  • the characteristic includes more than one type of characteristic from some combination of data sources, data destinations, or trunk groups.
  • a logical trunk group can be associated with or assigned to the call. More particularly, data associated with a call that is transmitted to switching component 303 from PSTN connection 310 can be associated with a first logical trunk group. For example, the ingress call leg established at the switching component 303 from the PSTN connection A 310 can be associated with a logical trunk group. As the switching component 303 processes the call, data associated with the call is associated with the logical trunk group B 312 based on the destination of the call. In such a configuration, the PSTN connection A 310 and the set of pathways associated with the logical trunk group B 312 are each in communication with the call processor (e.g., switching component 303 ). An egress call leg is associated with the logical trunk group B 312 .
  • the logical trunk group B 312 is selected based on the source of the call (e.g., the PSTN 308 or PSTN connection A 310 ) or the signaling protocol in which the signaling data associated with the call was received (e.g., SS7).
  • each route through the packet-based network includes route specific data associated with the logical trunk group chosen by the call processor to associate with the transmitted or received call data.
  • the route specific data can include a list of peer-signaling addresses, peer-signaling protocol, and other configuration data that can be used to determine characteristics of a particular call.
  • Specific examples of the route specific data include the codec (e.g., the compression/decompression standard used to assemble the packet), diffserv code point, packet size, codec options (e.g., silence suppression), fax/modem/data call handling, and DTMF tone handling.
  • the specific data can include data associated with signaling to determine the content of signaling messages and the sequence or order of signaling messages.
  • the route specific data can be invoked or accessed by a switching component (e.g., the switching component 303 ) to provide insight into the behavior (e.g., the signaling behavior) associated with a particular signaling peer.
  • the route specific data can be associated with network equipment that forms at least a part of the set of pathways associated with that logical trunk group.
  • the logical trunk group is selected based in part on the IP network to be traversed by the set of pathways.
  • the selector is also referred to herein as an IP network selector.
  • the selector in effect, selects a set of pathways, and thus the associated logical trunk group, by selecting the IP network.
  • the selector implicitly selects the logical trunk groups associated with the sets of pathways associated with or passing through network 302 (e.g., the logical trunk group B 312 and the logical trunk group C 314 ) by routing calls to the switching component 303 .
  • An exemplary embodiment of a table for selecting an IP network or the logical trunk groups associated with that network is depicted in Table 2.
  • Table 2. Ingress IP Trunk Group Selection Table (IP Network Selector)

        Entry   Network Number   Network Mask      Signaling Protocol Selector   Logical Trunk Group
        1       209.131.0.0      255.255.0.0       SIP-T                         FROM_LA
        2       209.131.16.0     255.255.240.0     SIP-T                         FROM_MA
        3       171.1.0.0        255.255.0.0       SIP                           FROM_UK
        4       172.4.0.0        255.255.0.0       SIP                           FROM_UK
  • Entry 1 of Table 2 represents a range of IP host addresses from 209.131.0.0 to 209.131.255.255, and Entry 2 represents the range of IP host addresses from 209.131.16.0 to 209.131.31.255. In such an embodiment, both Entry 1 and Entry 2 indicate that SIP-T signaling protocol is being used (e.g., for data transmissions within a network core).
  • Entry 1 of Table 2 is associated with a logical trunk group named FROM_LA, and Entry 2 is associated with a logical trunk group named FROM_MA.
  • the range of host addresses defined by Entry 2 is defined within the range specified by Entry 1 (e.g., the range specified by Entry 2 is a subset of the range specified by Entry 1).
  • the characteristic that is used to select a logical trunk group is an IP address.
  • Data arriving from a call processor having an IP address of, for example, 209.131.17.43 and employing SIP-T protocol matches both Entry 1 and Entry 2 of Table 2.
  • Entry 2 provides a more specific match as a subset of the range of IP addresses of Entry 1, and thus, Entry 2 is selected as an egress call leg (e.g., by a selector).
  • the associated logical trunk group with which the data is associated for that call is the logical trunk group named “FROM_MA”.
  • the selector can look up Table 2 and determine that Entry 2 is a more specific address match when the characteristic is an IP address.
  • the selector can then provide information about Entry 2 to a call processor (e.g., the switching component 303 ) that egresses the call over the network over the set of pathways associated with the logical trunk group named FROM_MA in part in response to the information returned by the selector.
  • the information returned by the selector after consultation with Table 2 can change as the characteristic changes, for example, in response to a user-provided input or configuration.
  • Table 2 includes characteristics associated with a name of a logical trunk group (e.g., FROM_UK), a signaling protocol associated with a set of pathways associated with that trunk group (e.g., SIP), an IP address associated with a set of pathways associated with that trunk group (e.g., 172.4.0.0), or a combination of these characteristics. Any of the characteristics of Table 2 can be used by the selector for determining which set of pathways over which to transmit the data and thus the logical trunk group with which to associate the data.
  • the list of IP networks available to the IP network selector is not contiguous.
  • Entry 2 and Entry 3 in Table 2 represent two non-contiguous IP networks, as illustrated by the “Network Number” associated with each entry.
  • Entry 2 is associated with a network located in the United States
  • Entry 3 is associated with a network located in the United Kingdom.
  • Such a configuration can provide flexibility in the network design to aggregate a number of networks, represented by a range of host IP addresses, into one configuration element or object (e.g., Table 2) at the application level and for promoting network growth.
  • a second packet-based network 172.4.0.0/16 can be added to the network associated with a set of pathways originating from the United Kingdom.
  • both 171.1.0.0/16 and 172.4.0.0/16 are associated with the logical trunk group named FROM_UK.
  • the IP network address associated with an IP network or set of pathways through the IP network can include an IP network number and an IP network mask.
  • Such a configuration allows transparent communication between the network in which the data originates (e.g., the PSTN 308 ) and the network over which data is transmitted (e.g., network 302 ) because data is transmitted to a call processor having an IP address (e.g., a network card with an IP address) associated with the network mask (e.g., the network mask 255.255.0.0 of Entry 1 of Table 2).
  • the IP address of the network that actually transmits the data (e.g., IP address 209.131.0.0 of Entry 1 of Table 2) can change without requiring reconfiguration of Table 2 or the PSTN connection 310 . More particularly, the IP address associated with Entry 1 can be changed (e.g., by replacing the network card of the switching component 303 or the switching component 303 itself) without affecting the address to which the PSTN 308 transmits the data.
  • a logical trunk group can be associated with the IP network and can include a characteristic as described above.
  • the logical trunk group is chosen using the longest prefix match algorithm (e.g., the most specific entry from, for example Table 2, is chosen).
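The longest-prefix-match selection against Table 2 can be sketched with Python's standard `ipaddress` module. This is an illustrative sketch under the Table 2 entries above; the function name and data layout are assumptions, not the patented implementation.

```python
# Sketch of longest-prefix-match logical trunk group selection over the
# Table 2 entries: the most specific matching network (longest mask) wins.
import ipaddress

TABLE_2 = [  # (network number/mask, signaling protocol selector, logical trunk group)
    (ipaddress.ip_network("209.131.0.0/255.255.0.0"),    "SIP-T", "FROM_LA"),
    (ipaddress.ip_network("209.131.16.0/255.255.240.0"), "SIP-T", "FROM_MA"),
    (ipaddress.ip_network("171.1.0.0/255.255.0.0"),      "SIP",   "FROM_UK"),
    (ipaddress.ip_network("172.4.0.0/255.255.0.0"),      "SIP",   "FROM_UK"),
]

def select_by_longest_prefix(ip, protocol):
    """Return the logical trunk group of the most specific matching entry."""
    addr = ipaddress.ip_address(ip)
    matches = [(net, ltg) for net, proto, ltg in TABLE_2
               if addr in net and proto == protocol]
    if not matches:
        return None
    # The largest prefix length corresponds to the most specific match.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# 209.131.17.43 falls within both Entry 1 (/16) and Entry 2 (/20);
# Entry 2 is more specific, so FROM_MA is selected.
print(select_by_longest_prefix("209.131.17.43", "SIP-T"))  # FROM_MA
```

Note that the two non-contiguous FROM_UK networks (171.1.0.0/16 and 172.4.0.0/16) aggregate naturally in this scheme: both entries resolve to the same logical trunk group.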
  • different methods for selecting an IP network can be employed to arrive at the association with a logical trunk group.
  • each logical trunk group (e.g., the logical trunk group B 312 and the logical trunk group C 314 ) can be configured by using an IP network selector rather than a particular set of nearest neighbors (e.g., a particular topology).
  • An IP network selector generally selects a network node (e.g., the data source 202 of FIG. 2 ) rather than an individual gateway, switch or call processor (e.g., switching component 304 ) for connecting and transmitting data associated with a call.
  • network equipment that can be added to a particular network node is automatically associated with that network node when the network node is selected for data transmission by the IP network selector. Because the network node is selected for data transmission rather than individual network equipment or sets of pathways, the selector does not require knowledge of the network node's topology or composition. More particularly, a logical trunk group is selected when the network node is selected to transmit data associated with a call, even if a particular set of pathways or piece of network equipment associated with the set of pathways was added to the network node after the creation of the object (e.g., Table 2).
  • FIG. 4 illustrates an exemplary graphical user interface for implementing control features in a packet-based network.
  • the graphical user interface (“GUI”) 400 includes a plurality of fields associated with resource parameters for controlling data in an IP network, particularly data packets associated with a logical trunk group through an IP network (e.g., set of pathways associated with the logical trunk group B 312 of FIG. 3 ).
  • the GUI 400 can be displayed on a display means (e.g., a computer monitor) in communication with elements of an IP network (e.g., switching component 303 or control component 306 ).
  • a module is adapted to generate the GUI 400 .
  • the GUI 400 allows a user to determine or control various configurable variables in connection with a logical trunk group (e.g., displayed in the GUI 400 as field 402 ).
  • the configurable variable is associated with a control feature or resource parameter that is associated with a logical trunk group.
  • the appearance of the GUI 400 differs depending on the value of the field 402 .
  • the values of the resource parameters (e.g., the fields 404 , 406 , 408 , and 410 and the sub-fields associated with those fields 404 , 406 , 408 , 410 ) can also differ depending on the value of the field 402 .
  • field 402 can refer to a group of logical trunk groups or a combination of groups as discussed below with respect to FIGS. 7 and 8 .
  • the resource parameter can include scalar quantities (e.g., bandwidth capacity, call capacity, signal processing speed, or data packet volume of the field 404 ), vector quantities (e.g., a directional characteristic of field 406 ), or an operational state or toggle-like quantity (e.g., in-service or out-of-service operational states of field 408 ), or a combination of any of these.
  • more than one user interface can be used to implement control functions. Exemplary resource parameters and control features will be described with reference to the exemplary networks and devices depicted in FIG. 3 , but the resource parameters and control features can be implemented with respect to other networks or devices (e.g., the exemplary configuration of FIGS. 1 ).
  • the illustrated scalar parameters involving configurable variables include fields for number of calls 404 a, DSP resources 404 b, data packet volume 404 c, bandwidth 404 d, or other resources (not shown) associated with a signaling peer.
  • the scalar parameters can be configured by inputting a data value into the sub-field or through, for example, a drop-down menu.
  • when a configured resource limit is exceeded, a trigger event occurs (e.g., the data packets are not transmitted over the set of pathways associated with the logical trunk group with which the trigger event is associated).
  • a particular call or call session can be more “expensive” in terms of scalar parameters required to guarantee a minimum quality of service.
  • An administrator can configure the scalar parameters to limit the resources used for each call, which allows the administrator to limit the number of calls from a particular signaling peer (e.g., switching component 303 of FIG. 3 ).
  • Vector parameters can include a directional characteristic 406 a such as, for example, a “one-directional” characteristic 406 b or a “bi-directional” characteristic 406 c.
  • the vector parameters 406 can be configured by check-off boxes. Other, more complicated arrangements of vector parameters will be appreciated.
  • the directional characteristic 406 can be configured to allow bi-directional call traffic (e.g., by enabling bi-directional characteristic 406 c ).
  • the administrator can configure the directional characteristic 406 a to allocate a certain amount of resources for calls in a first direction (e.g., incoming calls) and a certain, not necessarily equal, amount of resources for calls in a second direction (e.g., outgoing calls).
  • Another field 408 involves the operational state associated with a logical trunk group (e.g., “in-service” 408 a or “out-of-service” 408 b ) that allows a user to control whether the particular set of pathways associated with the logical trunk group indicated in field 402 is accessible.
  • a logical trunk group can be in-service for calls in one direction and out-of-service for calls in the other direction.
  • the packet outage detection field 410 permits a user to determine or define whether a trigger event occurs with respect to the state of a set of data packets in the set of pathways associated with a particular logical trunk group. More particularly, the packet outage detection field 410 allows the user to determine or define parameters associated with various states of sets of data packets (e.g., a transmitted, received, lost, or queued state).
  • a logical trunk group (e.g., the logical trunk group B 312 ) includes a vector parameter 406 or directional characteristic 406 a.
  • the directional characteristic 406 a can refer to the direction of a call leg with respect to a particular switching component (e.g., the switching component 303 ).
  • ingress call legs are referred to as “inbound” calls and egress call legs are referred to as “outbound” calls.
  • the two endpoints of a set of pathways can include gateways, call processors, data sources, switching components, or other network equipment (e.g., between the switching components 303 , 304 or, more generally, between the PSTN 308 and the PSTN 318 ).
  • the endpoints include a monitor or controller (e.g., the control component 306 ) and a switching component (e.g., the switching component 303 ) that can apply access control to data packets associated with the logical trunk group (e.g., the logical trunk group B 312 ).
  • One advantage achieved by this embodiment includes preventing further congestion of data traffic downstream of the switching component.
  • a “one-way” directional characteristic 406 b includes “inbound” or “outbound.”
  • a logical trunk group having the “inbound” associated directional characteristic can have control measures imposed (e.g., resource limitations) on the data packets received at a call processor (e.g., the switching component 303 ).
  • the controller does not allow data packets associated with an egress (e.g., outbound) call leg to access a set of pathways associated with that logical trunk group—only data packets associated with ingress call legs are allowed to potentially access the set of pathways.
  • data associated with an ingress call leg can gain access to a set of pathways associated with the logical trunk group if the set includes sufficient resources to process the call (e.g., non-congested data traffic).
  • a directional characteristic can impose a control in addition to scalar parameters described above.
  • a vector characteristic can be imposed on a set of pathways associated with a logical trunk group without a scalar quantity for the resource parameter.
  • a resource parameter can be shared by incoming call traffic and outgoing call traffic associated with the same logical trunk group, or each logical trunk group can include its own resource parameter.
  • Data packets and/or calls are permitted access to the set of pathways associated with a logical trunk group provided the logical trunk group has sufficient resources available to transmit the packets (e.g., the data packet resource requirements do not exceed the resource parameter associated with the logical trunk group).
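The admission rule described above, combining scalar parameters (fields 404), the directional characteristic (field 406), and the operational state (field 408), can be sketched as follows. The class, field names, and limits are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: a call is admitted to a logical trunk group only if the
# group is in service, the call's direction is permitted, and the call's
# resource requirements do not exceed the configured scalar parameters.

class LogicalTrunkGroup:
    def __init__(self, max_calls, max_bandwidth_kbps,
                 direction="bi-directional", in_service=True):
        self.max_calls = max_calls                    # scalar parameter (cf. 404a)
        self.max_bandwidth_kbps = max_bandwidth_kbps  # scalar parameter (cf. 404d)
        self.direction = direction                    # vector parameter (cf. 406)
        self.in_service = in_service                  # operational state (cf. 408)
        self.active_calls = 0
        self.used_bandwidth_kbps = 0

    def admit(self, call_direction, bandwidth_kbps):
        """Return True and reserve resources if the call may use this group."""
        if not self.in_service:
            return False                  # out-of-service: no access at all
        if self.direction != "bi-directional" and self.direction != call_direction:
            return False                  # directional control rejects the leg
        if self.active_calls + 1 > self.max_calls:
            return False                  # call-count limit would be exceeded
        if self.used_bandwidth_kbps + bandwidth_kbps > self.max_bandwidth_kbps:
            return False                  # bandwidth limit would be exceeded
        self.active_calls += 1
        self.used_bandwidth_kbps += bandwidth_kbps
        return True

ltg_b = LogicalTrunkGroup(max_calls=2, max_bandwidth_kbps=128, direction="inbound")
print(ltg_b.admit("inbound", 64))   # True: within limits
print(ltg_b.admit("outbound", 64))  # False: egress legs not permitted
```

An "inbound"-only group like `ltg_b` rejects egress call legs outright, while ingress legs are still subject to the scalar limits.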
  • a call can be connected across a given IP network (e.g., the logical trunk group is selected based on whether the call is an incoming call or an outgoing call with respect to a particular switching component).
  • a two-way directional characteristic involves resource reservation.
  • resources can be reserved for outgoing calls from a particular switching component to ensure that, for example, an increased number of incoming calls does not consume all of the resources of the switching component.
  • Such a configuration can provide a quality of service assurance because an increase in data traffic associated with that logical trunk group does not affect the resources already reserved, which can be “shielded” from the increased traffic to prevent a call from being disconnected or dropped.
  • such a configuration reduces loss of data packets associated with an increase in call traffic over a data link associated with that logical trunk group (e.g., the set of pathways associated with the logical trunk group B 312 ).
  • a logical trunk group can include an operational state parameter 408 such as “in-service” 408 a or “out-of-service” 408 b.
  • the operational state parameter 408 can be controlled by a central user, e.g., a network administrator, to add or remove available pathways associated with the logical trunk group for data transmission.
  • the user changes the operational state parameter 408 of a logical trunk group during an existing call session. The change to the operational state parameter does not affect existing call sessions, only future call sessions.
  • An operational state parameter 408 can add a level of control (in addition to scalar parameters 404 or vector parameters 406 ) to a set of pathways associated with a logical trunk group.
  • a logical trunk group associated with the out-of-service operational state 408 b is not available for handling call sessions (e.g., data packets associated with a call are not able to access that set of pathways associated with the logical trunk group, regardless of the scalar parameters 404 or vector parameters 406 associated with that logical trunk group).
  • resource parameters and access to a set of pathways associated with a logical trunk group can be controlled either statically or dynamically.
  • the process of implementing control features can be referred to as “enforcing limits,” “applying controls,” “implementing control features,” or “comparing data to resource parameters” with respect to a logical trunk group.
  • Other expressions for implementing control features with respect to data traffic can be used.
  • the state of the data packets with respect to a logical trunk group can be monitored.
  • a monitor module can provide the monitoring of the state associated with the data packets.
  • the monitor when the state is lost or queued, provides information about the state to the selector.
  • the selector in turn can select a set of pathways associated with a logical trunk group that is not associated with a lost or queued state (e.g., associated with a transmitted or received state) to transmit the additional data packets.
  • the monitor can also observe and monitor data requirements associated with the data packets and compare those requirements with the resource parameter associated with the logical trunk group. If the data requirements exceed the resource parameter, the data can be rerouted, and the process is iterated until a logical trunk group (and its associated set of pathways) is found that can transmit the data packets.
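The monitor/selector iteration described above can be sketched as a simple loop over candidate trunk groups. The data model (dicts with `state` and `available_capacity` keys) is a hypothetical simplification for illustration.

```python
# Illustrative sketch: try logical trunk groups in turn, skipping any whose
# packet state is lost or queued, or whose resource parameter cannot cover
# the data requirements, until a usable group is found.

def find_trunk_group(groups, required_capacity):
    """Return the name of the first group that can transmit, else None.

    groups: list of dicts with 'name', 'state', and 'available_capacity'.
    """
    for group in groups:
        if group["state"] in ("lost", "queued"):
            continue  # monitor reported a bad packet state; selector skips it
        if required_capacity > group["available_capacity"]:
            continue  # data requirements exceed the resource parameter
        return group["name"]
    return None       # no group found; a controller may adjust parameters

groups = [
    {"name": "LTG_B", "state": "queued",      "available_capacity": 100},
    {"name": "LTG_C", "state": "transmitted", "available_capacity": 10},
    {"name": "LTG_D", "state": "received",    "available_capacity": 50},
]
print(find_trunk_group(groups, 20))  # LTG_D: first group passing both checks
```

When no group qualifies, the `None` result corresponds to the case where a controller would relax a resource parameter so a later attempt can succeed.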
  • a monitor module provides monitoring of a state associated with a set of data packets. When the state is lost or queued, the monitor can provide information about the state to the selector or a controller.
  • the controller (e.g., a user or control component 306 ) can adjust or configure the resource parameter associated with the logical trunk group, which in turn can allow a second set of data packets to be transmitted by the set of pathways associated with the logical trunk group.
  • the controller can increase the value of a scalar parameter 404 or change the value of the vector parameter 406 (e.g., from a “one-directional” 406 b characteristic to a “bi-directional” 406 c characteristic) or the operational state parameter 408 (e.g., from an “out-of-service” 408 b operational state to an “in-service” 408 a operational state).
  • a set of pathways associated with a logical trunk group can be selected regardless of the state associated with data packets associated with that logical trunk group. More particularly, the set of pathways selected to transmit data associated with a call can be the set of pathways associated with a logical trunk group without optimal resource capacity. In some embodiments, the size of the set of data packets associated with a logical trunk group increases until a lost state arises (e.g., some packets in the set are not transmitted across the set of pathways associated with that logical trunk group as determined by the packet data received at a downstream call processor), at which point the selector no longer routes data packets to the set of pathways associated with that logical trunk group. In addition to limiting or selecting a set of pathways based on a state of the logical trunk group, the selection can be based in part on delay or jitter (e.g., variations in delay) measurements associated with a logical trunk group.
  • controlling access dynamically can be accomplished by three cooperating algorithms implemented at three layers of operation in the controller: a set layer, a pathway layer, and a data admission layer.
  • the algorithm for the set layer determines a state, also called a congestion state, for the logical trunk group that can be communicated to the selector, controller, or the pathway layer (e.g., control component 306 ).
  • the algorithm for the pathway layer determines the resource capacity of the logical trunk group (e.g., by determining the resource capacities of each pathway that is a member of the set of pathways associated with that logical trunk group).
  • the congestion state as determined by the set layer provides an input for the pathway layer algorithm.
  • a capacity associated with a resource parameter of a logical trunk group can be an output.
  • the output of the pathway layer can be an input to the data admission layer algorithm.
  • the data admission layer can compare the number of calls 404 a (e.g., the input from the pathway layer and the configurable resource parameter of the GUI 400 related to number of calls 404 a ) to a maximum number of calls for reliable transmission. If the number of calls being processed by the logical trunk group exceeds the configurable variable 404 a, the data admission layer requests additional resources (e.g., an increased maximum number of calls) for that logical trunk group.
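The three cooperating layers described above can be sketched as a pipeline: the set layer produces a congestion state, the pathway layer turns member-pathway capacities into a group capacity, and the data admission layer compares current load against that capacity. The interfaces, thresholds, and the 50% congestion discount below are assumptions for illustration, not the patented algorithms.

```python
# Minimal sketch of the set layer -> pathway layer -> data admission layer
# chain for dynamic access control on a logical trunk group.

def set_layer(lost_ratio):
    """Determine a congestion state for the logical trunk group."""
    return "congested" if lost_ratio > 0.05 else "clear"

def pathway_layer(pathway_capacities, congestion_state):
    """Aggregate member-pathway capacities, discounted when congested."""
    capacity = sum(pathway_capacities)        # e.g., max calls per pathway
    return capacity // 2 if congestion_state == "congested" else capacity

def data_admission_layer(current_calls, capacity):
    """Admit a new call only if the group stays within its capacity."""
    if current_calls + 1 > capacity:
        return False   # limit would be exceeded; request more resources
    return True

state = set_layer(lost_ratio=0.02)                    # "clear"
capacity = pathway_layer([10, 10, 5], state)          # 25 calls total
print(data_admission_layer(current_calls=24, capacity=capacity))  # True
print(data_admission_layer(current_calls=25, capacity=capacity))  # False
```

The congestion state flows downward exactly as the text describes: the set layer's output is an input to the pathway layer, and the pathway layer's capacity is an input to the admission decision.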
  • the algorithms for controlling access can be implemented at a call processor or media gateway (e.g., the switching component 110 of FIG. 1 ).
  • Dynamic resource allocation at the gateway or switch level permits decentralized resource management and reduces processing demands on a centralized resource manager (e.g., control component 306 of FIG. 3 ).
  • a monitor module monitors the quality and performance of call traffic (e.g., various parameters) through a packet network at the packet level (e.g., by monitoring lost packets, jitter, or delay) and can extrapolate or aggregate the quality and performance to the call or logical trunk group level.
  • the monitor module can invoke a trigger action to prevent additional calls from being associated with the particular logical trunk group that is experiencing reduced quality or performance.
  • a network administrator can adjust call admission policies associated with the logical trunk groups in response to the trigger action or various other performance statistics.
  • a specific implementation of traffic-level control includes a packet outage detection (“POD”) algorithm associated with various trigger events.
  • the POD algorithm is implemented for use with a logical trunk group, particularly by digital signal processors associated with the logical trunk group.
  • a flag associated with the state can be transmitted with the data packets so that the flag is received with the transmission. The flag can indicate the size of the packet outage (e.g., the ratio of “lost” or “queued” packets to “transmitted” or “received” packets) or the location in the set of data packets at which the outage occurred.
  • the packet outage is determined from the ratio of received packets to transmitted packets, or other combinations of states.
  • the state is detected by network equipment such as switching elements or gateways (e.g., switching component 303 or 304 ).
  • real-time control protocol also known as “RTCP” statistics or implementations can detect the state.
  • Various trigger events can prompt a packet outage alert to be associated with a logical trunk group.
  • a user can adjust a resource parameter (e.g., on the GUI 400 ) to respond to the alert.
  • an administrator can operationally remove a logical trunk group from service or can adjust the trunk group's resource allocation in response to the outage alert.
  • trigger events can be manually configured by a user to allow some control over network performance (e.g., by configuring the packet outage detection fields 410 a - d ). More particularly, a user-provided configuration can define the trigger event. For example, the user can provide certain minimum or maximum parameters, or default parameters can be used. The user-provided configuration can be referred to as a configurable variable. Trigger events or configurable variables can include the following: packet outage minimum duration 410 a, outage detection intervals 410 b, amount of data detecting packet outage 410 c, minimum number of calls 410 d, or a combination of any of these.
  • the configurable variable can allow an administrator to specify criteria for adding a flag to the set of data packets (e.g., by inputting a value associated with the criteria into the GUI 400 , associated with packet outage detection).
  • a sustained packet outage can be detrimental to call quality and can indicate a failure condition somewhere in the packet-based network (e.g., some piece or pieces of network equipment associated with the set of pathways associated with the logical trunk group).
  • a burst loss of packets can perceptibly affect the quality of a phone call but without significant effect on the end-users (e.g., the call participants).
  • a momentary loss of packets can be considered “normal”; for example, temporary router congestion can result in momentary loss of packets without disconnecting the call.
  • One advantageous feature of the packet outage minimum duration is that it allows the administrator (e.g., the user providing the configuration to control component 306 via the GUI 400 ) to customize the trigger to match the conditions of a particular packet-based network.
  • the minimum packet outage duration 410 a is configurable in units of milliseconds, but any suitable interval is contemplated.
  • Some embodiments involve a configurable outage detection interval variable 410 b.
  • a user or an administrator can ignore the state of a set of data packets for data that is older than a specified time value.
  • One advantageous feature of this configuration is that it allows the administrator to address transient fault conditions.
  • Packet outage events can be accumulated within a time interval.
  • the detection algorithm can be based on the sum of all of the packet outages that occurred during the interval.
  • the time interval could further be divided into, for example, three sub-intervals.
  • the packet outage detection interval is specified in seconds, but any suitable interval is contemplated.
  • the configurable variable can allow a user or an administrator to specify when to trigger a flag based on the effect of packet outage on the rest of the network (e.g., switching component 314 , the second PSTN 318 or the call recipient). For example, if packet outage is detected in a small number of calls (e.g., less than the number specified by the configurable variable 410 c, 410 d in the GUI 400 ), the flag is not triggered.
  • if packet outage is detected in more calls than the specified number, the flag can be triggered.
  • the configurable variable can be defined, for example, as a number of calls or as a percentage of calls processed that are associated with a logical trunk group (e.g., the calls processed having data that travel through the logical trunk group B 312 ).
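The trigger logic built from the configurable variables above (minimum outage duration 410a, detection interval 410b, minimum number of calls 410d) could look roughly like the following sketch. The class shape, field names, and units are assumptions for illustration; the patent does not prescribe this structure.

```python
# Illustrative packet-outage trigger combining the configurable variables
# described above (410a, 410b, 410d). Names and units are assumptions.
from dataclasses import dataclass, field

@dataclass
class PodConfig:
    min_outage_ms: int = 200      # 410a: ignore outages shorter than this
    interval_s: int = 60          # 410b: length of the detection window
    min_affected_calls: int = 5   # 410d: do not flag isolated problems

@dataclass
class PodDetector:
    cfg: PodConfig
    affected_calls: set = field(default_factory=set)

    def report_outage(self, call_id: str, outage_ms: int) -> None:
        # Outages below the configured minimum duration are treated as
        # "normal" (e.g., momentary router congestion) and ignored.
        if outage_ms >= self.cfg.min_outage_ms:
            self.affected_calls.add(call_id)

    def flag_triggered(self) -> bool:
        # Trigger only when enough distinct calls saw a qualifying outage
        # within the current detection interval.
        return len(self.affected_calls) >= self.cfg.min_affected_calls

    def end_interval(self) -> None:
        # At the end of each detection interval, forget old events so that
        # transient fault conditions age out (cf. 410b).
        self.affected_calls.clear()

det = PodDetector(PodConfig(min_outage_ms=200, min_affected_calls=2))
det.report_outage("call-1", 50)    # below minimum duration: ignored
det.report_outage("call-2", 400)
det.report_outage("call-3", 300)
print(det.flag_triggered())  # True: two qualifying calls in the interval
```

A production detector would also sum outage durations per interval (and possibly per sub-interval, as described above) rather than only counting affected calls.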
  • Traffic data can be measured and made available to call processors for more effective transmission. Additionally, traffic data can be displayed on the GUI 400 and available to a user. In some embodiments, the traffic data can be communicated to a resource manager (e.g., a controller such as control component 306 ) or to various gateways (e.g., switching components 303 , 304 ). The data can be reformulated, manipulated, or calculated into statistics, or it can be stored and archived without processing. In some embodiments, the traffic data can be used by a network management system (e.g., control component 306 ) to facilitate call routing, data transmission, or data traffic engineering.
  • the user can view and understand how various network equipment is performing with respect to call processing.
  • the user can then adjust or manipulate the configurable variables and parameters in fields 402 , 404 , 406 , 408 , 410 of the GUI 400 to improve network performance, in part in response to problems perceived or observed in the network.
  • data associated with a call can be used to assemble call performance statistics with respect to a particular logical trunk group (e.g., the logical trunk group B 312 ), and thus give a user an insight into the performance of the packet network (e.g., the set of pathways through the network 302 associated with the logical trunk group B 312 , even if there is no direct knowledge of the topology of the packet network 302 ).
  • the statistics of multiple logical trunk groups can also give the user additional insights into the packet network when the overall topology is not known.
  • the performance statistics can be observed or adjusted by a user or utilized by call processors (e.g., switching component 303 ).
  • the network management system adjusts the control features without requiring the input of a user. It is worth noting that for controlling quality of PSTN calls and managing a PSTN, the Telcordia GR-477 standard on network traffic management deals with PSTN trunk group reservation and PSTN trunk group controls. Through the use of logical trunk groups, the control variables and performance statistics collected can mirror those used in the PSTN, for example, those defined by the GR-477 standard, to provide analogous management of calls going through packet-based networks.
  • Referring to FIG. 5 , a block diagram of a system 500 is depicted, including exemplary networks and devices associated with distributed control features that govern access to the sets of pathways associated with logical trunk groups.
  • the exemplary configuration of the system 500 includes several call processors 502 a - c, which combine to form a logical network node 504 .
  • Each call processor 502 a - c has a defined logical trunk group 506 a - c, respectively, that is associated with call data transmitted through a packet-based communication channel 508 .
  • each call processor 502 a - c or logical trunk group 506 a - c can be associated with call admissions controls.
  • The admissions controls of each component are combined to form an aggregated admissions control for the combined communications channel 508 of the node 504 .
  • the admissions controls are not required to be equally distributed; more particularly, each individual component (e.g., the call processors 502 a - c, the logical trunk groups 506 a - c, or combinations of them) can be associated with an individual set of admissions controls. In some embodiments, other types of data sources are in communication with the logical trunk groups 506 a - c.
  • the combined communications channel 508 is associated with a resource parameter or limitation, and the parameter or limitation is distributed to the call processors 502 a - c.
  • resource parameters of the communications channel 508 can be determined either by combining the resource parameters associated with the individual call processors 502 a - c or trunk groups 506 a - c, or by a centralized configuration, e.g., by an administrator (not shown).
  • the resource parameter associated with the channel 508 can include additional features or different resource parameters than the result of combining the parameters or features of the call processors 502 a - c or trunk groups 506 a - c.
  • resources or parameters can be enhanced by allowing crank-back.
  • Crank-back can involve selecting a new logical trunk group (e.g., the logical trunk group 506 b ) when admission to a selected trunk group (e.g., the logical trunk group 506 a ) fails.
  • Call admission fails when data associated with a call setup cannot be reliably transmitted over the set of pathways associated with the selected trunk group (e.g., the logical trunk group 506 a ).
  • the control channel 508 can include the logical trunk groups 506 a - c associated with the node 504 , and can interface with a remote packet-based network 510 .
  • the packet-based network 510 can be physically and logically remote from the packet-based network (not shown) that includes the logical trunk groups 506 a - c (e.g., the communications channel 508 communicates with a component having an interface to the packet-based network 510 ).
  • the communications channel 508 is referred to as a group.
  • Referring to FIG. 6 , a block diagram of a system 600 is depicted, including exemplary networks and devices and a component for centralized control of distributed network components within a logical network node.
  • the system 600 of FIG. 6 includes the exemplary components depicted with respect to FIG. 5 with the addition of a resource manager 602 .
  • the resource manager 602 provides monitoring and controlling functions for the logical trunk groups 506 a - c.
  • the resource manager 602 can be, for example, a module for managing and allocating resources among call processors in a network node (e.g., network node 504 of FIG. 5 ).
  • the resource manager 602 can be in communication with the call processors 502 a - c.
  • the call processors 502 a - c request resources or resource parameters from the resource manager 602 as data associated with calls are processed.
  • the resource manager 602 employs a failure-minimizing algorithm to determine resource allocation among the call processors 502 a - c.
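The patent does not specify the failure-minimizing algorithm, so the sketch below assumes one plausible policy: grant a call processor's request only while the node-wide resource pool has headroom, so that no processor's allocation can cause failures elsewhere. All names are illustrative.

```python
# Hypothetical resource manager for a node-wide pool of call slots.
# The allocation policy shown (reject requests that would oversubscribe
# the shared pool) is an assumption, not the patent's algorithm.
class ResourceManager:
    def __init__(self, total_calls: int):
        self.total = total_calls
        self.granted = {}   # call processor name -> allocated call slots

    def request(self, processor: str, n: int) -> bool:
        in_use = sum(self.granted.values())
        if in_use + n > self.total:
            return False    # would oversubscribe the node's shared pool
        self.granted[processor] = self.granted.get(processor, 0) + n
        return True

rm = ResourceManager(total_calls=100)
print(rm.request("502a", 60))   # True: 60 of 100 slots granted
print(rm.request("502b", 50))   # False: only 40 slots remain
print(rm.request("502b", 40))   # True: pool is now fully allocated
```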
  • One advantageous feature of the configuration of FIG. 6 is that no special routing requirements are needed to distribute the call control features or access control to the call processors 502 a - c.
  • the resource manager 602 can also monitor and/or maintain statistics associated with data traffic associated with the logical trunk groups 506 a - c. Such information can be accessible to other network components (not shown).
  • resource manager 602 can be analogous to the control component 306 depicted above with respect to FIG. 3 .
  • FIG. 7 depicts an exemplary hierarchical configuration 700 associated with call control.
  • Three levels 702 , 704 , 706 of hierarchy are depicted.
  • the lowest level 702 includes trunk groups 708 a - e.
  • the trunk groups 708 a - e include associated control features as described above (e.g., resource parameters and admission controls).
  • the second level 704 of the hierarchical structure includes two hierarchical groups 710 a - b including the logical trunk groups 708 a - e.
  • the first hierarchical group 710 a includes three logical trunk groups 708 a - c
  • the second hierarchical group 710 b includes two logical trunk groups 708 d - e.
  • control features associated with logical trunk groups 708 a - c are included in the hierarchical group 710 a (e.g., amalgamated or combined to define the resource parameters of the group 710 a ). Additional control features may be associated with group 710 a that are not included individually in any of trunk groups 708 a - c. Likewise, the control features associated with logical trunk groups 708 d - e are included in hierarchical group 710 b, and additional control features may be associated with hierarchical group 710 b that are not included individually in any of the logical trunk groups 708 d - e.
  • the third level 706 of the hierarchical structure includes a hierarchical combination 712 of the hierarchical groups 710 a - b.
  • Hierarchical combination 712 can include all of the control features associated with the hierarchical groups 710 a - b and additional control features that are not included in either hierarchical group 710 a - b or in any logical trunk groups 708 a - e.
  • the resource parameter associated with the hierarchical combination 712 or hierarchical groups 710 a - b can include scalar, vector, operational state parameters, or any combination of them, as described above with respect to FIGS. 3 and 4 . If an administrator knows about a portion of the topology of the packet-based network (e.g., the topology of the node 504 ), then the hierarchical trunk group relationships (e.g., the hierarchical configuration 700 ) can advantageously be used for a logical representation of that network architecture or topology.
  • the resource manager 602 of FIG. 6 includes the hierarchical configuration 700 .
  • the resource manager 602 can configure the resources among the call processors 502 a - c either directly by allocating resources for the trunk groups 506 a - c or by allocating resources to the control channel 508 to manage and control call traffic.
  • Allocating resources to the control channel 508 is analogous to providing a configuration at the second level of hierarchy 704 or the third level of hierarchy 706 .
  • the hierarchical configuration 700 affects call processing. For example, to use logical trunk group 708 a for call processing (e.g., the resources and set of pathways associated with that logical trunk group), both the hierarchical group 710 a and the hierarchical combination 712 should be in service (e.g., the operational state parameter associated with the hierarchical group 710 a and the hierarchical combination 712 is not set to “out-of-service”). Similarly, individual logical trunk groups 708 a - e can be removed from operational service by removing a hierarchical group 710 a - b or the hierarchical combination 712 from operational service (e.g., associating an operational state of “out-of-service” with the hierarchical combination 712 ).
  • a user can associate or configure the hierarchical group 710 a or hierarchical combination 712 with an operational state of “out-of-service” to prevent routing of data to pathways associated with logical trunk groups in subhierarchical levels (e.g., by using the GUI 400 of FIG. 4 ).
  • This concept is embodied in a peg counter.
  • the peg counter can be incremented in the logical trunk group 708 a and the hierarchical group 710 a, for example indicating a failure to allocate resources up the hierarchical configuration 700 (e.g., by the hierarchical combination 712 ).
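The hierarchical in-service check and peg counting described above can be sketched as a walk up the hierarchy. The three levels mirror FIG. 7 (trunk group 708a under group 710a under combination 712), but the class shape is an assumption.

```python
# Sketch of the hierarchical operational-state check with peg counters.
# The node structure is an illustrative assumption modeled on FIG. 7.
class HierNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.in_service = True
        self.peg = 0        # counts allocation failures caused higher up

def can_use(trunk_group: HierNode) -> bool:
    """Usable only if the trunk group and every ancestor is in service."""
    node = trunk_group
    while node is not None:
        if not node.in_service:
            # Peg the counter at every level below the blocking node.
            n = trunk_group
            while n is not node:
                n.peg += 1
                n = n.parent
            return False
        node = node.parent
    return True

combo712 = HierNode("712")
group710a = HierNode("710a", parent=combo712)
tg708a = HierNode("708a", parent=group710a)

combo712.in_service = False       # remove the whole combination from service
print(can_use(tg708a))            # False: blocked at the top level
print(tg708a.peg, group710a.peg)  # 1 1: pegged at both lower levels
```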
  • FIG. 8 depicts a system 800 including exemplary networks and devices employing a hierarchical configuration of call controls and related implementations.
  • the configuration includes four network nodes 802 a - d, each of which is a group including three media gateways, although in general a network node can include varying amounts of telephony equipment.
  • a network node (sometimes referred to as a point of presence (“POP”)) can include a configuration in which network services are provided to subscribers that are connected to the network node (e.g., the network node 802 a ).
  • the four network nodes 802 a - d can each include interfaces to a packet-based network 804 .
  • the interfaces are depicted in FIG. 8 as multiplexers 806 a - d, respectively.
  • one or more of the multiplexers 806 a - d can include the resource manager functionality of the resource manager 602 of FIG. 6 and can implement the hierarchical control features described above with respect to FIG. 7 .
  • the network equipment can be, for example, a GSX9000 sold by Sonus Networks, Inc. of Chelmsford, Mass.
  • the configuration of FIG. 8 includes the resource manager 602 of FIG. 6 (not shown) to manage hierarchical control features.
  • each network node (e.g., the network node 802 a ) includes a set of logical trunk groups 808 b - d configured for transmitting data to each of the other network nodes (e.g., the network nodes 802 b - d ) through the packet-based network 804 .
  • the network node 802 a can include a logical trunk group 808 b “dedicated” to network node 802 b (e.g., all calls routed between node 802 a and node 802 b are associated with the logical trunk group 808 b ), a logical trunk group 808 c “dedicated” to network node 802 c (e.g., all calls routed between node 802 a and node 802 c are associated with the logical trunk group 808 c ), and a logical trunk group 808 d “dedicated” to network node 802 d (e.g., all calls routed between node 802 a and node 802 d are associated with the logical trunk group 808 d ).
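From node 802a's point of view, the dedicated-trunk-group arrangement above amounts to a per-destination mapping, which a multiplexer could consult when routing a call. The table below is an illustrative stand-in, not a structure from the patent.

```python
# Hypothetical per-destination mapping at network node 802a: all calls
# toward a given remote node are associated with one logical trunk group.
DEDICATED = {
    "802b": "808b",
    "802c": "808c",
    "802d": "808d",
}

def trunk_group_for(destination_node: str) -> str:
    """Return the logical trunk group dedicated to the destination node."""
    return DEDICATED[destination_node]

print(trunk_group_for("802d"))  # calls from 802a to 802d use group 808d
```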
  • the logical trunk groups can be associated with a call or a call session. In some embodiments, the logical trunk groups are associated with call traffic.
  • the logical trunk groups 808 b - d can be combined to form a group or a combination as discussed above and represented in FIG. 8 by reference numeral 810 .
  • Block 812 illustrates an exemplary hierarchical configuration used by various of the network equipment for implementing hierarchical control features.
  • One advantage of a configuration including hierarchical control features is that an administrator associated with the network node 802 a can control data transmission through the network 804 to the other network nodes 802 b - d by allocating resources associated with the group 810 or with the logical trunk groups 808 b - d in that group.
  • Block 812 illustrates that the principle of a hierarchical configuration (e.g., the hierarchical configuration 700 of FIG. 7 ) can be implemented in the IP network 804 , or the configuration can be implemented by other network equipment, for example the multiplexer 806 a or a control component (not shown).
  • a hierarchical resource allocation allows multiple trunk groups to share common resources without interfering with other trunk groups. For example, an administrator can track resources that are needed by a network node (e.g., the network node 802 a ) or between network nodes in the network 804 .
  • a call and data associated with the call can originate in the network node 802 a and terminate in the network node 802 d.
  • the multiplexer 806 a selects the set of pathways associated with the logical trunk group 808 d to transmit the data through the network 804 .
  • Any data control features associated with the logical trunk group 808 d, for example resource allocation or data admission, can then be implemented with respect to the data.
  • data control features associated with the group 810 can then be implemented with respect to the data.
  • the data can be transmitted to the packet-based network 804 for delivery to the network node 802 d.
  • the multiplexer 806 d receives the data from the network 804 , determines which trunk group with which to associate the data (e.g., the logical trunk group 808 d ), and routes the data and the call according to control features associated with the logical trunk group 808 d. Similar processing occurs for data transmitted from the network node 802 d to the network node 802 a (e.g., data associated with return call transmission).
  • the performance of logical trunk groups can be monitored by employing or analyzing performance statistics related to data associated with a call. Such statistics can be monitored, reported and recorded for a given time interval that can be configured by a user.
  • the performance statistics can relate to the performance of various network equipment including IP trunk groups.
  • the performance statistics allow an administrator to monitor network performance and adjust or configure resource parameters to improve network performance (e.g., to provide a quality of service guarantee or a higher quality of service with respect to calls).
  • the call performance statistics in Table 3 include industry standard statistics as they relate to management of PSTN trunk groups, for example the Telcordia GR-477 standard described above.
  • One advantage is that statistics associated with PSTN trunk group management can be applied to the management of packet-based networks, allowing PSTN administrators to configure and administer packet-based networks with minimal retraining.
  • Another advantage is that minimal retooling of network management operation tools is needed to implement the features described herein with respect to logical trunk groups.
  • TABLE 3 (Performance Statistic: Associated Calculation)
  • Inbound Usage: The sum of call-seconds for every inbound call associated with a logical trunk group. This statistic relates to the period between resource allocation and release.
  • Outbound Usage: The sum of call-seconds for every outbound call associated with a logical trunk group.
  • Inbound Completed Calls: The sum of normal call completions (answered calls) for every inbound call associated with a logical trunk group.
  • Outbound Completed Calls: The sum of normal call completions (answered calls) for every outbound call associated with a logical trunk group.
  • Inbound Call Attempts: The sum of call initiations for every inbound call associated with a logical trunk group.
  • Outbound Call Attempts: The sum of call initiations for every outbound call associated with a logical trunk group.
  • Maximum Active Calls: The maximum number of calls in either direction associated with a logical trunk group. This value can include an upper limit of the total number of calls allowed by the logical trunk group.
  • Call Setup Time: The sum of call-setup time in hundredths of seconds on every call associated with a logical trunk group.
  • Calls Setup: The number of calls set up in both directions using a logical trunk group.
  • Inbound & Outbound Call Failures due to No Routes: The current number of inbound or outbound failed calls because no route associated with a logical trunk group was available.
  • Inbound & Outbound Call Failures due to No Resources: The current number of inbound or outbound failed calls because no available resource associated with a logical trunk group was available.
  • Inbound & Outbound Call Failures due to No Service: The current number of inbound or outbound failed calls because there was no available service associated with a logical trunk group.
  • Inbound & Outbound Call Failures due to Invalid Call: The current number of inbound or outbound failed calls because an invalid call attempt was associated with the logical trunk group.
  • Inbound & Outbound Call Failures due to Network Failure: The current number of inbound or outbound failed calls because a network failure was associated with a logical trunk group.
  • Inbound & Outbound Call Failures due to Protocol Error: The current number of inbound or outbound failed calls because a protocol error was associated with the logical trunk group.
  • Inbound & Outbound Call Failures due to Unspecified: The current number of inbound or outbound failed calls for an unknown reason associated with a logical trunk group.
  • Routing Attempts: The number of routing requests associated with a logical trunk group.
  • various performance statistics can be combined (e.g., determining a performance percentage associated with IRR routing from the number of attempts at rerouting and the number of successful reroutes).
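As an example of combining the raw counters in Table 3 into derived statistics, the sketch below computes a call answer ratio and a reroute success percentage. The formulas are illustrative assumptions; the text names the idea but not specific formulas.

```python
# Illustrative derived statistics built from Table 3 counters; the exact
# formulas are assumptions, not mandated by the text.
def answer_ratio(completed_calls: int, call_attempts: int) -> float:
    """Fraction of call attempts that completed normally (answered)."""
    return completed_calls / call_attempts if call_attempts else 0.0

def reroute_success_pct(successful_reroutes: int, reroute_attempts: int) -> float:
    """Percentage of rerouting attempts that succeeded."""
    return 100.0 * successful_reroutes / reroute_attempts if reroute_attempts else 0.0

print(answer_ratio(completed_calls=450, call_attempts=600))              # 0.75
print(reroute_success_pct(successful_reroutes=18, reroute_attempts=20))  # 90.0
```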
  • the performance statistics can be detected or processed by a monitor, for example, a call processor (e.g., switching component 303 of FIG. 3 ) that is in communication with the logical trunk group.
  • the performance statistics can be detected or processed by a monitor remote from the logical trunk group, for example, a provisioning server or resource manager that is in communication with various network components (e.g., control component 306 of FIG. 3 ).
  • an administrator can monitor any of the statistics in Table 3 and adjust network or resource parameters based in part on the statistics to optimize network performance and quality of telephone calls.
  • Logical trunk groups and associated call processors can use statistics generated by the real-time control protocol (“RTCP”) to determine call quality (e.g., the quality of data transmission).
  • Statistics and call quality information allow a user to configure or change various resource parameters to maximize network efficiency and reliability.
  • Packet-based telephony networks can employ various signaling protocols in connection with data transmission.
  • a packet-based network can employ a signaling protocol (e.g., generically gateway-to-gateway signaling protocol) within the packet network 302 (e.g., the core packet-based network).
  • Telephony networks external to the packet-based network (e.g., PSTN 308 and PSTN 318 of FIG. 3 ) can use other signaling protocols, for example, signaling system 7 (“SS7”), session initiation protocol (“SIP”), session initiation protocol for telephones (“SIP-T”), or H.323, or any combination of these.
  • FIG. 9 depicts a system 900 including exemplary networks and devices for call processing.
  • a network 902 includes a first call processor 904 , a second call processor 906 , and a policy processor 908 .
  • Either of the call processors 904 , 906 can be, for example, a GSX9000, and the policy processor 908 can be, for example, a PSX policy server, both sold by Sonus Networks, Inc., of Chelmsford, Mass.
  • the policy processor 908 can communicate with processor 904 or processor 906 .
  • the call processor 904 communicates data associated with a call to the call processor 906 using SIP signaling protocol.
  • a call arrives from a PSTN trunk group (not shown) at the call processor 904 , and the call processor 904 signals a request to the policy processor 908 .
  • a request includes information (e.g., a characteristic) including ingress source (e.g., the PSTN or a PSTN trunk group), ingress gateway (e.g., the call processor 904 ), ingress trunk group (e.g., the ingress PSTN trunk group TG 1 ), calling party number, called party number, among other information.
  • Based in part on the characteristic, the policy processor 908 provides to the call processor 904 information associated with a set of pathways 910 through the network 902 that is associated with a logical trunk group.
  • the information can include a destination processor (e.g., the call processor 906 ), an IP address of a destination processor, a destination trunk group (e.g., the PSTN trunk group that delivers the call to the recipient), information about the route, or any combination of these.
  • the policy processor 908 can specify the destination by name, an IP address, or the destination trunk group. Based on one or more of these characteristics, the call processor 904 associates the call with an IP trunk group.
  • the call processor 904 can select an IP trunk group with which to associate the call data, based in part on the information provided by the policy processor 908 , by using a most-specific address-matching algorithm against a selection table (e.g., as described above).
  • control features associated with the selected trunk group (e.g., a limit on the number of calls, or other scalar, vector, or operational state parameters) can be enforced with respect to the data, using the resources associated with the IP trunk group (including the set of pathways).
  • information associated with a call setup can be communicated to the call processor 906 , which determines an ingress logical trunk group associated with the data (e.g., IP trunk group IPTG 2 ), for example by using the signaling IP address of the incoming setup.
  • if the control features associated with that IP trunk group (e.g., a bandwidth limit) are satisfied, the call processor 906 can admit the data for processing and attempt to transmit the data to a destination, e.g., on a PSTN trunk group (not shown) associated with egress PSTN trunk group TG 2 .
  • the call processor 906 can select the destination based on a characteristic associated with the destination, as described above. Performance measurements or traffic statistics can be tracked using the associated IP trunk groups.
  • call processor 904 can receive ingress call legs from an IP network, and/or call processor 906 can transmit egress call legs to an IP network.
  • Call processors 904 and 906 can also communicate with logical trunk groups associated with SIP signaling. While the invention has been described with respect to packetized data associated with a telephone call, principles and concepts herein described can apply more generally to any time-sensitive data.
  • the above-described techniques can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them.
  • the implementation can be as a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • Method steps can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor receives instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Data transmission and instructions can also occur over a communications network.
  • Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in special purpose logic circuitry.
  • “module” and “function,” as used herein, mean, but are not limited to, a software or hardware component that performs certain tasks.
  • a module may advantageously be configured to reside on an addressable storage medium and to execute on one or more processors.
  • a module may be fully or partially implemented with a general purpose integrated circuit (“IC”), FPGA, or ASIC.
  • a module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • the functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules.
  • the components and modules may advantageously be implemented on many different platforms, including computers, computer servers, data communications infrastructure equipment such as application-enabled switches or routers, or telecommunications infrastructure equipment, such as public or private telephone switches or private branch exchanges (“PBX”).
  • the above described techniques can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer (e.g., interact with a user interface element).
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • the above described techniques can be implemented in a distributed computing system that includes a back-end component, e.g., as a data server, and/or a middleware component, e.g., an application server, and/or a front-end component, e.g., a client computer having a graphical user interface and/or a Web browser through which a user can interact with an example implementation, or any combination of such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communications, e.g., a communications network.
  • communications networks (also referred to as communications channels) include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet, and include both wired and wireless networks.
  • communications networks can feature virtual networks or sub-networks such as a virtual local area network (“VLAN”).
  • communications networks can also include all or a portion of the PSTN, for example, a portion owned by a specific carrier.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communications network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Abstract

Described are methods and apparatus, including computer program products, for defining logical trunk groups in a packet-based network. A plurality of logical trunk groups are defined for a first media gateway in communication with a packet-based network. Each of the plurality of logical trunk groups is associated with one or more media gateways in communication over the packet-based network with the first media gateway. Data associated with a call that is received or transmitted by the media gateway is associated with a first logical trunk group of the plurality of logical trunk groups.

Description

    CROSS REFERENCES TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 60/614,182, filed on Sep. 29, 2004, the disclosure of which is hereby incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present description relates to defining logical trunk groups in a packet-based network.
  • Acronyms
  • The written description employs various acronyms to refer to various services and system components, as follows:
    • Digital Signal Processor or Digital Signal Processing (“DSP”)
    • Gateway-to-Gateway (“GW-GW”)
    • Graphical User Interface (“GUI”)
    • Internet Protocol (“IP”)
    • Internet Service Provider (“ISP”)
    • Point of Presence (“POP”)
    • Public Switched Telephone Network (“PSTN”)
    • Session Initiation Protocol (“SIP”)
    • Signaling System 7 (“SS7”)
    • Transport Service Access Point (“TSAP”)
    • Virtual Local Area Network (“VLAN”)
    • Voice over Internet Protocol (“VOIP”)
    BACKGROUND
  • In general, traditional telephone networks, such as the public switched telephone network (“PSTN”), employ circuitry and switches to connect telephone users across the network to facilitate communication. In such a network, a “trunk” or “trunk circuit” is a connection between distributed switches—a trunk may be a physical wire between the switches or any other way of transmitting data. A “trunk group” is a group of common or similar circuits (i.e., trunks) that originate from the same physical location, e.g., a switchboard. Trunk groups are used to route calls through the traditional network using the telephone number as the routing key that provides routing instructions to distributed switches. After a call link has been established between two switches over a trunk, that trunk is dedicated or “locked up” to that call for the duration of the call, e.g., another call link cannot use that trunk until the call has ended, for example by a hang-up or disconnection. In such a configuration, the trunks impose physical limitations on the amount of data (and hence the number of calls) that may be transmitted over the trunk group. Such limits are based on the capacity of the circuit to transmit data. As the physical limits of a trunk group are approached, the number of additional calls that can be routed over that particular trunk group decreases. One known solution to increase the call capacity of a trunk group is to add more trunk circuits to the trunk group.
  • An emerging alternative to traditional phone networks uses packetized data to transmit content of telephone communications (e.g., voice or videoconferencing data) through a packet-based network such as an Internet Protocol (“IP”) network. Such a configuration is commonly referred to as a Voice over Internet Protocol (“VOIP”) network and can support voice, data, and video content. A packet-based telephone network employs packet-switches (also referred to as gateways, media gateways, media gateway controllers, switching components, softswitches, data sources, or call processors). A packet assembler can convert a signal received from a traditional telephone network call into a set of data packets for transmission through the IP network.
  • In contrast to the circuit-based architecture of traditional telephone networks, packet-based networks do not require dedicated circuits for transmitting data associated with a call (sometimes referred to as a call, a call session, a set of data packets, or data packets), and as such, do not encounter the same physical limitations as circuit-switched networks. Packet-based networks include components with an interface to the packet-based network, for example, an IP address. Packet-switches are analogous to circuit-based switches, and data links are analogous to trunk circuits. However, unlike circuit-based network calls, packet-based network calls employ an IP address as the routing key. As the data traffic over a particular data link increases (e.g., up to and exceeding the bandwidth capacity of the data link), existing calls that utilize the particular data link can be affected. For example, the call may be disconnected or problems such as jitter or delay in voice transmission can occur for existing calls due to the increased data traffic.
  • Although the strict limit on the number of additional calls that a particular trunk can transmit in circuit-based telephony is not present in packet-based telephony, the network itself and the topology of the network can present practical or physical limits on data transmission (e.g., call routing or quality). A chokepoint can develop in a packet-based network when data packets arrive at a packet switch faster than the packet switch can process them. The chokepoint can result in lost or delayed transmission of data packets, which affects existing calls. When a packet-switch is a router, the router generally lacks the processor capacity and signaling capability to reroute incoming data packets to prevent further network slowdown due to the chokepoint.
  • Previous attempts to avoid chokepoints in a network and the associated slowdowns have included providing a server to identify an available IP network route based on the bandwidth available along the route. The route (composed of undefined, ad hoc route segments between IP routers known as “path links”) defines a bandwidth capability for data transmission that consists of the sum of bandwidth available on each path link. Therefore, a route can change as the available bandwidth of the constituent path links fluctuates. In this approach, the server, also called a Virtual Provisioning Server (“VPS”), communicates the route having the most available bandwidth to a signaling gateway that can transmit data over the route defined by the VPS. In such a system, the VPS has knowledge of the topology of the IP network so that it is able to route the traffic. For example, the VPS obtains available bandwidth and other routing-type information from the routers making up the portion of the IP network through which the VPS routes its traffic.
  • SUMMARY OF THE INVENTION
  • The description describes methods and apparatus, including computer program products, for defining logical trunk groups in a packet-based network. In general, in one aspect, there is a method. The method involves managing calls through a packet-based network without knowledge of the topology of the packet-based network. The method involves defining a plurality of logical trunk groups for a first media gateway in communication with a packet-based network. Each of the logical trunk groups is associated with one or more media gateways in communication with the first media gateway over the packet-based network. The method involves associating packet data with a particular call. The packet data is received or transmitted by the first media gateway. The method involves associating the packet data with a first logical trunk group of the plurality of logical trunk groups. The method also involves collecting statistical data associated with the first logical trunk group.
  • In another aspect, there is a method. The method involves associating call data through a packet-based network that is received from a data source or transmitted to a data destination with a logical trunk group based at least in part on a characteristic or identifier of the call data, the data source, the data destination, or any combination of these. The characteristic or identifier includes a name, an Internet Protocol (IP) address, a signaling protocol, a transport service access point, a port, a virtual local area network (VLAN) identifier, or any combination of these.
  • In another aspect, there is a method for managing calls through a packet-based network without knowledge of the topology of the packet-based network. The method involves defining a first logical trunk group for a first media gateway in communication with a packet-based network. The first logical trunk group is associated with a second media gateway in communication with the first media gateway over the packet-based network. The method involves associating packets corresponding to a call that is being routed to the second media gateway with a first logical trunk group. The method involves generating a first set of statistics associated with the first logical trunk group. The method involves defining a second logical trunk group for the second media gateway in communication with the packet-based network. The second logical trunk group is associated with the first media gateway in communication with the second media gateway over the packet network. The method involves associating packets corresponding to a call being routed from the first media gateway with the second logical trunk group and generating a second set of statistics associated with the second logical trunk group. The method involves associating a network quality with the packet-based network based in part on the first or second set of statistics or both.
  • In another aspect, there is a system. The system includes a packet-based network and a first media gateway in communication with the packet-based network. The first media gateway includes a plurality of logical trunk groups, and each of the plurality of logical trunk groups is associated with one or more additional media gateways over the packet network. The system includes a first module adapted to associate packet data associated with a call with a first logical trunk group selected from the plurality of logical trunk groups for transmission through the packet-based network. The system includes a collection module adapted to collect statistical data associated with the first logical trunk group.
  • In another aspect, there is a computer program product. The computer program product is tangibly embodied in an information carrier, the computer program product including instructions being operable to cause data processing apparatus to define a plurality of logical trunk groups for a first media gateway in communication with a packet-based network. Each of the plurality of logical trunk groups is associated with one or more media gateways in communication with the first media gateway over the packet-based network. The computer program product includes instructions operable to cause data processing apparatus to associate packet data that is received or transmitted by the first media gateway with a particular call and to associate the packet data with a first logical trunk group selected from the plurality of logical trunk groups. The computer program product includes instructions operable to cause data processing apparatus to collect statistics associated with the first logical trunk group.
  • In other examples, any of the aspects above can include one or more of the following features. In some embodiments, associating the packet data with a first logical trunk group of the plurality of logical trunk groups includes associating the packet data with a first logical trunk group based in part on a characteristic or identifier. The characteristic or identifier can include a name, an IP address, a signaling protocol, a transport service access point, a port, a VLAN identifier, or any combination of these. Some embodiments include selecting a logical trunk group for association with the packet data based in part on a network address or a network mask or both associated with the packet-based network, a second packet-based network, or both. In some embodiments, selecting a logical trunk group is based in part on a most-specific address match algorithm. In some embodiments, the characteristic or identifier is associated with a data source, a data destination, a logical trunk group, or any combination thereof. In some embodiments, the characteristic or identifier is associated with a data source and a data destination.
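The most-specific address match mentioned above can be pictured as a longest-prefix lookup over per-trunk-group network prefixes. The following is a minimal sketch only; the prefixes, trunk group names, and function names are hypothetical illustrations and not part of this description:

```python
import ipaddress
from typing import Optional

# Hypothetical trunk-group table: each logical trunk group is keyed by a
# network prefix; a longer (more specific) prefix wins, mirroring the
# most-specific address match algorithm described above.
TRUNK_GROUP_PREFIXES = {
    ipaddress.ip_network("10.0.0.0/8"): "TG-A",
    ipaddress.ip_network("10.1.0.0/16"): "TG-B",
    ipaddress.ip_network("10.1.2.0/24"): "TG-C",
}

def select_trunk_group(destination_ip: str) -> Optional[str]:
    """Return the trunk group whose prefix most specifically matches the
    destination address of the packet data, or None if nothing matches."""
    addr = ipaddress.ip_address(destination_ip)
    candidates = [net for net in TRUNK_GROUP_PREFIXES if addr in net]
    if not candidates:
        return None
    best = max(candidates, key=lambda net: net.prefixlen)  # longest prefix
    return TRUNK_GROUP_PREFIXES[best]

print(select_trunk_group("10.1.2.7"))   # matches all three prefixes; /24 wins -> TG-C
print(select_trunk_group("10.2.0.1"))   # only the /8 matches -> TG-A
```

A network mask, VLAN identifier, or port could serve as the lookup key in the same way; the prefix table here simply illustrates the selection step.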
  • In some embodiments, the data source or data destination includes a gateway, a call processor, a switch, a trunk group, or any combination of these. In some embodiments, a resource parameter is associated with the logical trunk group. The resource parameter can include a scalar parameter, a vector parameter, an operational state parameter, or any combination of these. In some embodiments, more than one resource parameter can be associated with the logical trunk group. For example, an operational state parameter and a scalar parameter can be associated with the logical trunk group, or more than one scalar parameter can be associated with the logical trunk group. A scalar parameter can include a call capacity, signal processing resources, a data packet volume, a bandwidth, or any combination of these. A vector parameter can include a directional characteristic. An operational state parameter can include an in-service or an out-of-service operational state.
  • In some embodiments, a first resource parameter is associated with the first logical trunk group, and a second resource parameter is associated with a second logical trunk group. The first logical trunk group and the second logical trunk group can be associated with a hierarchical group, and the hierarchical group can be associated with at least a portion of the first resource parameter, the second resource parameter, or both. In some embodiments, a first hierarchical group associated with the first logical trunk group is associated with a hierarchical combination. A trunk resource parameter (e.g., the first resource parameter) can be determined at least in part based on a combination resource parameter associated with the hierarchical combination, a combination resource parameter associated with the hierarchical group, or both.
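One way to picture a scalar call-capacity parameter and an operational-state parameter combined under a hierarchical group is the following sketch. The class names, capacities, and admission rule are assumptions for illustration, not an implementation from this description:

```python
from dataclasses import dataclass, field

@dataclass
class TrunkGroup:
    name: str
    call_capacity: int          # scalar resource parameter
    active_calls: int = 0
    in_service: bool = True     # operational-state parameter

@dataclass
class HierarchicalGroup:
    name: str
    combined_capacity: int      # combination resource parameter across members
    members: list = field(default_factory=list)

    def total_active(self) -> int:
        return sum(tg.active_calls for tg in self.members)

    def admit_call(self, tg: TrunkGroup) -> bool:
        """Admit a call only if the trunk group is in service and neither the
        per-trunk-group nor the combined hierarchical limit is exceeded."""
        if not tg.in_service or tg not in self.members:
            return False
        if tg.active_calls >= tg.call_capacity:
            return False
        if self.total_active() >= self.combined_capacity:
            return False
        tg.active_calls += 1
        return True

tg_a = TrunkGroup("TG-A", call_capacity=2)
tg_b = TrunkGroup("TG-B", call_capacity=2)
group = HierarchicalGroup("east-coast", combined_capacity=3, members=[tg_a, tg_b])

print([group.admit_call(tg_a) for _ in range(2)])  # both fit: [True, True]
print(group.admit_call(tg_b))                      # third call fits the combined limit
print(group.admit_call(tg_b))                      # combined limit of 3 reached -> False
```

The same pattern extends upward: a hierarchical combination of groups could impose a further limit above the per-group `combined_capacity`.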
  • In some embodiments, a second set of call data received from a second data source or transmitted to a second data destination through the packet-based network is associated with a second logical trunk group. In some embodiments, a network node includes the data source, the second data source, the data destination, the second data destination, or any combination of these. The first logical trunk group and the second logical trunk group can be associated with a combined communication channel in communication with the network node. In some embodiments, the combined communication channel is in communication with a second packet-based network.
  • In some embodiments, calls between the first media gateway and the second media gateway are managed based on the network quality, which is based in part on the first set of statistics, the second set of statistics, or both. In some embodiments, an allocation module is adapted to associate the resource parameter with the first logical trunk group. The resource parameter can include a scalar parameter, a vector parameter, an operational state parameter, or any combination of these.
  • Implementations can realize one or more of the following advantages. The requirement of centralized control over IP routes (e.g., by a VPS) is eliminated. Additionally, knowledge of the network topology is not required to control time-sensitive data transmission through the IP network. Implementations realize increased scalability because knowledge of the network topology is not required. Further advantages include controlling data through an IP network based on characteristics or parameters other than bandwidth capacity. Faster processing and more efficient network resource management are realized due to decreased data communications from a centralized control.
  • Another advantage includes increased visibility from the perspective of, for example, a network administrator into the IP network backbone. The increased visibility improves network management functions by allowing a network administrator to configure or manipulate call traffic through the IP network based on performance statistics. The performance statistics can provide visibility by reporting on the performance associated with various sets of pathways or calls. The performance statistics also relate to the quality of the IP network and devices used by the network.
  • In some embodiments, the performance statistics relate to call transmission or data transmission associated with a set of pathways. More particularly, the performance statistics allow a network administrator to track packet transmission through an IP network by monitoring only the devices that transmit and receive data packets. For example, the performance statistics can indicate whether a certain network (e.g., PSTN or IP network) is difficult to reach, which enables a network administrator to route calls around or away from that network. The performance statistics can be industry networking or traffic standard statistics, such as Telcordia GR-477 or Telcordia TR-746 standard statistics that are used by PSTN administrators, or individual statistics developed independently to determine network performance in IP networks. Knowledge of the packet-based network's performance allows an administrator to narrow or tailor troubleshooting efforts to those areas within the packet-based network that are experiencing difficulty.
  • Another advantage associated with increased visibility includes improved ease of migration from circuit-switched telephony networks (e.g., the PSTN) to packet-switched telephony networks (e.g., IP networks) by allowing an administrator to employ similar tools for managing the IP networks as are available for managing the PSTN. One implementation of the invention can provide all of the above advantages.
  • The details of one or more examples are set forth in the accompanying drawings and the description below. Further features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1-3 are block diagrams showing exemplary networks and devices related to routing data associated with a call through a packet-based network.
  • FIG. 4 is an exemplary graphical user interface for configuring control features.
  • FIGS. 5-6 depict exemplary networks and devices involved with distributed control features.
  • FIGS. 7-8 are block diagrams illustrating a hierarchical configuration of call controls and related implementations.
  • FIG. 9 is a block diagram illustrating exemplary networks and devices for call processing.
  • DETAILED DESCRIPTION
  • FIG. 1 depicts a system 100 that includes exemplary networks and devices associated with routing data associated with a call through a packet-based network. Data associated with a call can include one or more sets of data packets and may be referred to herein as data packets, a set or sets of data packets, a call or calls, a call leg, or some combination of these terms. Although in general the call data described in this description references media data (e.g., voice, video), the call data may also include signaling data without departing from the scope of the invention. The system 100 includes a PSTN 105 that is in communication with a media gateway 110 over communication channel 115. In some examples, the communication channel 115 includes PSTN trunk groups. The gateway 110 can be, for example, a GSX9000 sold by Sonus Networks, Inc., of Chelmsford, Mass.
  • The gateway 110 is in communication with a first packet network 120 over a communication channel 125. The gateway 110 is also in communication with a second packet network 130 over a communication channel 135. The first packet network 120 and the second packet network 130 can be separate packet networks, for example, one being a public packet network (e.g., the Internet) and one being a private packet network (e.g., an intranet). In other examples, the first packet network 120 and the second packet network 130 can be the same packet network, e.g., the Internet. In such examples, the separation is shown to illustrate egress and ingress call data for the gateway 110.
  • Using the packet network 120, the gateway 110 communicates with a media gateway 140 and a media gateway 145 (e.g., by transmitting packet data through the packet network 120). Similarly, using the packet network 130, the gateway 110 communicates with a media gateway 150 and a media gateway 155. For example, the media gateways 140, 145, 150, 155 may be located in different geographical areas to allow the service provider to provide national service at reduced costs. For example, the gateway 110 can be in Texas, the gateway 140 can be in Oregon, the gateway 145 can be in California, the gateway 150 can be in Massachusetts, and the gateway 155 can be in New Jersey. As an operational example, a call is received from the PSTN 105 at the gateway 110 and transformed from a circuit-based call into a packet-based call (e.g., by a packet assembler module or a packet assembler disassembler module). The packet data associated with the call is transmitted from the gateway 110 to the appropriate gateway, for example gateway 145, and converted back to a circuit-based call at the gateway 145, if the called party is connected to another portion of the PSTN, or can be left in packet form if the called party is connected directly to a packet-based system (e.g., IP-based telephony).
  • It is noteworthy that a service provider that manages the gateway 110 does not always have knowledge of the topology of the packet networks 120 and 130, particularly if the packet networks 120 and 130 represent a public packet network such as the Internet. In some embodiments, a service provider obtains access to a packet network through agreements with a third party (e.g., a service-level or quality agreement). One advantageous feature of the described configuration allows the service provider to evaluate or understand the underlying packet network inferentially (e.g., by monitoring traffic or performance statistics), which enables the service provider to verify that the third party is meeting the quality guarantees in the agreement. The gateway 110 transmits the packets associated with the call to the packet network 120 through communication channel 125, and the packet network 120 takes responsibility for routing the packets to the gateway 145. Because the packets are associated with a call, the packets are time-sensitive. As such, problems within the packet network 120 can affect whether and how quickly the packets are transported to the gateway 145. Delays and lost packets lead to a loss of quality of the call.
  • The gateway 110 advantageously includes logical trunk groups TG-A 160, TG-B 165, TG-C 170, and TG-D 175. Unlike PSTN trunk groups, which correspond to physical communication channels (e.g., wires, fiber optic cables), logical trunk groups represent a virtual communication channel through a packet network. As an exemplary implementation, logical trunk groups can be represented as objects in an object-oriented data processing paradigm.
  • As an illustrative example, the service provider managing the gateway 110 can define the logical trunk groups 160, 165, 170, and 175 to be associated with the gateways 140, 145, 150, and 155, respectively. In such an example, as the gateway 110 receives calls from the PSTN 105, transforms the call data into packets, and transmits those packets to the appropriate gateway (e.g., the gateways 140, 145, 150, and 155), the gateway 110 associates the packets with the appropriate or corresponding logical trunk group. For example, as a call is received and routed to the gateway 145, the packets associated with that call are associated with the logical trunk group TG-B 165. As additional calls come into gateway 110 and are routed to gateway 145, they are also associated with the logical trunk group TG-B 165. After packet data has been associated with a logical trunk group, statistics about that packet data can be collected and tracked. In some examples, these statistics are aggregated to provide statistics at the call level. In some examples, statistics aggregated at the call level can be aggregated to provide statistics about the logical trunk group (e.g., statistics associated with a group of one or more calls).
  • For example, for a particular call being routed through the gateway 110 to the gateway 145, statistics are collected on packet delays and lost packets (e.g., based on the number of packets transmitted, received, queued, or lost). These statistics are associated with the logical trunk group TG-B 165. As the quality decreases (e.g., packets lost and delayed increases for each call) for this logical trunk group TG-B 165, the service provider managing the gateway 110 has knowledge that packet network 120 has some issues in the set of pathways from the gateway 110 to the gateway 145, even though the service provider does not necessarily know the topology of the packet network 120. In such examples, the data associated with each logical trunk group models characteristics (e.g., capacity, hardware failures, bandwidth, etc.) of the set of pathways between the gateway 110 and the other gateway(s) associated with the particular logical trunk group. In general, references to the set of pathways between two devices refer to any combination of path links through the packet network (e.g., the packet network 120) from one device (e.g., the media gateway 110) to another device (e.g., the media gateway 145). Advantageously, information associated with the performance of a network (e.g., the packet network 120) can be inferred from statistics associated with “edge devices” such as the gateway 110 and a media gateway 140, 145, 150, 155.
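The two-level aggregation described above, per-packet counters rolling up into per-call statistics and per-call statistics rolling up into per-trunk-group statistics, can be sketched as follows. The class and method names are illustrative assumptions, not from this description:

```python
from collections import defaultdict

class TrunkGroupStats:
    """Collects per-call packet statistics and aggregates them to the
    logical-trunk-group level, as an edge device such as the gateway 110
    might do for the set of pathways toward another gateway."""

    def __init__(self):
        # call_id -> {"sent": count, "lost": count, "delay_ms_total": sum}
        self.calls = defaultdict(
            lambda: {"sent": 0, "lost": 0, "delay_ms_total": 0.0}
        )

    def record_packet(self, call_id, lost=False, delay_ms=0.0):
        c = self.calls[call_id]
        c["sent"] += 1
        if lost:
            c["lost"] += 1
        else:
            c["delay_ms_total"] += delay_ms

    def loss_rate(self):
        """Trunk-group loss rate aggregated over all associated calls."""
        sent = sum(c["sent"] for c in self.calls.values())
        lost = sum(c["lost"] for c in self.calls.values())
        return lost / sent if sent else 0.0

# Hypothetical trunk group toward a remote gateway (e.g., TG-B toward 145).
stats_tg_b = TrunkGroupStats()
stats_tg_b.record_packet("call-1", delay_ms=20.0)
stats_tg_b.record_packet("call-1", lost=True)
stats_tg_b.record_packet("call-2", delay_ms=35.0)
print(round(stats_tg_b.loss_rate(), 3))  # 1 lost of 3 sent -> 0.333
```

A rising `loss_rate` on such an object would signal, without any knowledge of the packet network's topology, that the set of pathways represented by the trunk group is degrading.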
  • As an alternative to associating each trunk group with a different media gateway, a logical trunk group can be associated with more than one media gateway. For example, the service provider managing the gateway 110 can define the logical trunk group TG-A 160 to be associated with the gateways 140 and 145 (e.g., the gateways in communication with the gateway 110 via packet network 120). Likewise, the service provider managing the gateway 110 can define the logical trunk group TG-B 165 to be associated with the gateways 150 and 155 (e.g., the gateways in communication with the gateway 110 via packet network 130). In this example, as a call is received and routed to either the gateway 140 or 145, the packets associated with that call are associated with the logical trunk group TG-A 160.
  • Service providers whose networks include a portion of the PSTN typically have managers who have managed the PSTN using statistics collected about PSTN trunk groups. For example, the Telcordia GR-477 and Telcordia TR-746 standards on network traffic management deal with PSTN trunk group reservation and PSTN trunk group controls. By establishing logical trunk groups for the packet-based traffic, management techniques analogous to those used for PSTN trunk groups can advantageously be applied to the packet-based traffic. Managers can quickly adapt to managing the packet-based traffic using their PSTN trunk group skill set.
  • There are also other advantageous uses for the logical trunk groups. The logical trunk groups can be used to allocate resources and to provision services. For example, the logical trunk group TG-A 160 can represent the network used for calls provided on a retail basis and the logical trunk group TG-B 165 can represent the network used for calls provided on a wholesale basis. Because the retail calls provide a higher margin, limited resources, such as DSPs, can be allocated in higher percentages, or on a higher priority basis, to the calls associated with the logical trunk group TG-A 160, regardless of the final destination of the call data. Similarly, services can be provisioned to a call according to the logical trunk group with which that call is associated.
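The retail/wholesale allocation above can be sketched as granting a limited resource pool (e.g., DSPs) preferentially by trunk group. The pool size, shares, and names below are assumptions for illustration only:

```python
# Hypothetical shared pool of 10 DSP resources, split so that the
# higher-margin retail trunk group (TG-A) gets the larger share,
# regardless of the final destination of the call data.
DSP_POOL = 10
ALLOCATION_SHARE = {"TG-A": 0.8, "TG-B": 0.2}   # retail vs. wholesale

allocated = {tg: 0 for tg in ALLOCATION_SHARE}

def grant_dsp(trunk_group: str) -> bool:
    """Grant a DSP only while the trunk group is under its configured
    share of the pool; otherwise refuse the resource."""
    limit = int(DSP_POOL * ALLOCATION_SHARE[trunk_group])
    if allocated[trunk_group] < limit:
        allocated[trunk_group] += 1
        return True
    return False

# Retail calls can take up to 8 DSPs, wholesale calls only 2.
print(sum(grant_dsp("TG-A") for _ in range(10)))  # 8
print(sum(grant_dsp("TG-B") for _ in range(10)))  # 2
```

Service provisioning by trunk group could follow the same pattern, keying any per-call policy off the logical trunk group with which the call is associated.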
  • In some operational examples, the telephone call has certain data associated with that call (e.g., video, voice, signaling). The PSTN 105 communicates the data to a gateway 110, which includes at least one network card with an IP address (e.g., a network interface). In system 100, calls are received by the gateway 110 at a physical location or port (e.g., a DS0 or a DS1 port) from the PSTN 105 over the communication channel 115. The gateway 110 processes the data and transmits the data according to a characteristic of the device receiving the data. For example, in FIG. 1, the device(s) receiving the packetized call data can be the gateway 140, the gateway 145, the gateway 150, and/or the gateway 155. The characteristic can include a name, an IP address, a signaling protocol, a virtual local area network (“VLAN”) identifier or any combination of these.
  • In the illustrated embodiment of FIG. 1, there are four sets of pathways through the packet networks 120 and 130 between the gateway 110 and the four other gateways 140, 145, 150, 155 over which call data can travel. As described above, these four sets of pathways can be associated with the logical trunk groups TG-A 160, TG-B 165, TG-C 170, and TG-D 175. As referred to herein, a “set of pathways” is not necessarily a fixed physical concept but relates to the concept of communications between various network equipment (e.g., the gateway 110 and the media gateway 140) through the packet network 120. In some embodiments, the logical trunk groups TG-A 160, TG-B 165, TG-C 170, and TG-D 175 are referred to as “IP trunk groups” when the networks 120 and 130 are IP-based networks.
  • Individual pathways (e.g., path links and parallel pathways) are logically associated to form the sets of pathways from one call processing device (e.g., gateway 110) to another call processing device (e.g., gateway 140). One species of logical association involves associating individual pathways based on IP addresses associated with the call processing devices (also referred to herein as call processors) in communication with the pathways (e.g., gateways 110, 140, 145, 150, 155 ). In some embodiments, an IP trunk group is a named object that represents one or more call processing devices and the data communication paths that connect the network elements.
  • By associating a logical trunk group with the sets of pathways between two call processing devices, the amount of data transmitted over any of the sets of pathways can be controlled by the gateway 110 as discussed in more detail below with respect to call admission controls (e.g., whether calls are transmitted as determined by statistics associated with a given IP trunk group). Transmission of a call from the gateway 110 over the IP network 120 to the destination gateway 140 (e.g., the set of pathways between the gateway 110 and the gateway 140) is referred to as an “egress call leg” with respect to gateway 110. The egress call leg is associated with an IP trunk group (sometimes referred to as an “egress IP trunk group”). Reception of the call at the destination 140 is referred to as an “ingress call leg” with respect to the destination 140. The ingress call leg is associated with an IP trunk group defined for the gateway 140 (sometimes referred to as an “ingress IP trunk group”).
  • In some embodiments, with respect to the gateway 110, the egress call leg is associated with the same IP trunk group as the ingress call leg, but the egress call leg is not required to be associated with the same IP trunk group as the ingress call leg. In such a configuration, the gateway 110 can be a data or call source with respect to the destination 140 and can include a characteristic that may be used for future call routing (e.g., where the destination 140 provides subsequent call processing and is not the final destination of the call). As described herein, a data source or a data destination can include any gateway, PSTN trunk group, or any set of pathways (e.g., represented by logical trunk groups) with associated characteristics or identifiers.
  • In some embodiments, a media path is associated with a pathway or set of pathways. The media path transmits out-of-band non-voice data associated with the set of pathways (e.g., for videoconferencing) and can be used in association with IP trunk groups.
  • Still referring to FIG. 1, data associated with a call is received from a data source, in this example the gateway 140, at the gateway 110 over a particular pathway included in the set of pathways between the gateway 110 and the gateway 140 through the packet network 120. This set of pathways can be associated with the logical trunk group TG-A 160 (e.g., an IP trunk group when the packet-based network 120 is an IP network). The gateway 110 processes the data and associates the data with the logical trunk group TG-A 160 based in part on a characteristic of the data source, including a name (e.g., FROM_Oregon), the IP address of the gateway 140, a transport service access point (“TSAP”) associated with the gateway 140, the signaling protocol between the gateways 140 and 110, a VLAN identifier associated with one of the gateways 110, 140, or a combination of these.
  • In some embodiments, the call data is associated with the logical trunk group TG-A 160 based in part on a characteristic of a data destination, and the characteristic can include the name of the destination, the IP address of the destination, a TSAP associated with the destination, the signaling protocol between the destination and the gateway 110, a VLAN identifier associated with the destination or the gateway 110, or any combination of these. Because data is received from the gateway 140 at the gateway 110, the selected logical trunk group is the logical trunk group association for ingress call data.
  • In some embodiments, data is transmitted in the reverse direction. More particularly, the gateway 110 can be a data source with respect to gateway 140 (e.g., for return data traffic). With respect to the gateway 110, for the egress call data (e.g., call data traveling from the gateway 110 to the gateway 140), the same trunk group or a different trunk group can be used. For example, the trunk group TG-A 160 can be used for all call data, ingress and egress, between the gateways 110 and 140. When IP addresses are the characteristics used to determine the association, the IP address of the data source can be used for ingress calls and the IP address of the data destination can be used for egress calls, which in both cases is the IP address of the gateway 140. In some embodiments, the IP address of either the data source or the data destination can be used for either ingress call legs or egress call legs. In some embodiments, the IP addresses of both the data source and the data destination can be used to associate the call with a logical trunk group.
  • In some examples, the association with the logical trunk group TG-A 160 can be based on the IP address of the data source (e.g., when the data source is the gateway 140) or the data destination. The association with the logical trunk group TG-B 165 can be based on the IP address of the data destination (e.g., when the data destination is the gateway 140) or the data source. In such examples, for ingress calls from the gateway 140, the data is associated with the logical trunk group TG-A 160, and for egress calls to the gateway 140, the data is associated with the logical trunk group TG-B 165. Although the network elements 110, 140, 145, 150, and 155 are referred to repeatedly as gateways, they can also represent groups of signaling peers, points of presence, central offices, network nodes, other telephony equipment and/or the like in communication with the networks 120 and 130 without departing from the scope of the invention.
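The direction-dependent association described above can be sketched as a simple lookup. The addresses, map names, and helper function below are hypothetical illustrations, not identifiers from the specification:

```python
# Sketch of associating call legs with logical trunk groups by peer IP.
# The gateway address and map contents are invented for illustration.

INGRESS_MAP = {"192.0.2.140": "TG-A 160"}  # keyed by data-source IP
EGRESS_MAP = {"192.0.2.140": "TG-B 165"}   # keyed by data-destination IP

def associate_trunk_group(peer_ip, direction):
    """Return the logical trunk group for a call leg.

    Ingress legs match on the source IP; egress legs match on the
    destination IP, as in the TG-A/TG-B example above.
    """
    table = INGRESS_MAP if direction == "ingress" else EGRESS_MAP
    return table[peer_ip]

# A call arriving from the gateway at 192.0.2.140 joins TG-A 160;
# the return (egress) leg toward the same gateway joins TG-B 165.
```

Under this sketch, the same peer address yields different trunk groups depending solely on the direction of the call leg.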
  • Calls and associated data received by the gateway 110 from the PSTN 105 can be transmitted through the packet networks 120 and 130 to different destinations 140, 145, 150, and/or 155. Similarly, calls and associated data received by the gateway 110 over the packet networks 120 and 130 from data sources 140, 145, 150, and/or 155 can be transmitted to the PSTN 105 over PSTN trunk group(s) included in the communications channel 115. More particularly, the gateway 110 is an interface between circuit-switched networks like the PSTN 105 and packet-based networks 120 and 130. Call data between the PSTN 105 and the gateway 110 is controlled based on circuit availability (e.g., PSTN trunk group management). Call data between the gateway 110 and the other gateways 140, 145, 150, and/or 155 does not have such limitations. There are, of course, limitations on the packets based on the performance capabilities of the networks 120 and 130. Advantageously, a network administrator can configure the operation of the gateway 110 to impose such limitations and manage the call data transmitted across the packet-based networks 120 and 130 based on performance statistics associated with the logical trunk groups, analogous to the techniques and performance statistics used to manage the PSTN trunk groups included in the communication channel 115.
  • FIG. 2 depicts a system 200 that includes exemplary networks and devices for call routing and control in connection with packet-to-packet call processing. A data source 202, representing a group of call processing devices, transmits data associated with a call over a first set of pathways 204, and the data are received by a switch 206. The switch 206 can include a packet-peering switch for peer-to-peer data transmission. The switch 206 can be, for example, a GSX9000 sold by Sonus Networks, Inc. of Chelmsford, Mass. The switch 206 can then select a second set of pathways 208 to transmit the data to a destination 210, also representing a group of call processing devices.
  • In system 200, the first set of pathways 204 and the second set of pathways 208 are implemented in an IP-based packet network, so the logical trunk groups are referred to as IP trunk groups. For the switch 206, the first set of pathways 204 and the second set of pathways 208 are each associated with a distinct logical trunk group. Specifically, the first set of pathways 204 is associated with a logical trunk group “IP-A” and the second set of pathways 208 is associated with a logical trunk group “IP-B”. With respect to switch 206 and following the direction indicated by the arrows, the first set of pathways 204 (e.g., the logical trunk group IP-A) is associated with an ingress call leg, and the second set of pathways (e.g., the logical trunk group IP-B) is associated with an egress call leg. In this example, the switch 206 associates call data with the logical trunk group IP-A because the call arrives from the data source 202. Similarly, the switch 206 associates call data with the logical trunk group IP-B because the call is being transmitted to the data destination 210.
  • The switch 206 operates in a packet-based environment (e.g., a packet-based network), so the data transmitted by the first set of pathways 204 is not required to correspond one-to-one to the data packets transmitted by the second set of pathways 208. More specifically, some members of the set of data packets can be transmitted by the second set of pathways 208, and some members can be transmitted over a third set of pathways (not shown). The data packets are reassembled at a remote switch (not shown) that is in communication with the final destination of the call. The data source 202, the destination 210, or both can be a network node that includes one or more pieces of telephony equipment (e.g., the switch 206 or the gateway 110 of FIG. 1). The network node can include an interface with a packet-based network via the first or second sets of pathways 204, 208, or via a connection to the network node rather than to individual telephony equipment within the node.
  • FIG. 3 shows a system 300 including exemplary networks and devices associated with data routing in a packet-based network core. In some embodiments, data associated with a call is routed through a packet-based network 302 in which substantially all of the network equipment is operated by a single entity or controller, for example a VOIP network administrator.
  • The network 302 includes a first switching component 303, a second switching component 304, and a control component 306. Switching components 303, 304 can be gateways as described above with respect to FIG. 1 (e.g., a GSX9000 sold by Sonus Networks of Chelmsford, Mass.) and control component 306 can be a policy server, controller, monitor, or other network component for storing and implementing control features. For example, the control component 306 can be a PSX policy server for use in the Insignus Softswitch™ architecture, both sold by Sonus Networks of Chelmsford Mass. In system 300, the switching components 303, 304 are “edge devices” (e.g., components that provide an entry point into the core network 302, such as that of a telephone carrier or internet service provider (“ISP”)).
  • A first network 308 (e.g., a PSTN or a portion of the general PSTN) is in communication with switching component 303 via a first connection A 310 (e.g., a PSTN trunk group). In some embodiments, the first network 308 can be a packet-based network (e.g., an IP network), and the first connection A 310 can be a logical trunk group (e.g., an IP trunk group) without departing from the scope of the invention. In the illustrative example depicted in FIG. 3, the network 308 may also be referred to as PSTN 308, and the connection A 310 may also be referred to as PSTN connection A 310.
  • The first PSTN connection A 310 can be a PSTN trunk group that is in communication with switching component 303 (e.g., over wire, fiber optic cable, or other data transmission media). Data associated with a telephone call can originate from a caller in the PSTN 308 and is transmitted to the first switching component 303 for routing through the network 302 by the first PSTN connection A 310. Switching component 303 can communicate with other network equipment (e.g., switching component 304) via one or more sets of pathways that are associated with a logical trunk group B 312.
  • Switching component 303 communicates data packets associated with the call to the switching component 304 via a set of pathways associated with the logical trunk group B 312. Data packets are received by switching component 304 via the set of pathways between the components 303 and 304 and the switching component 304 associates this data with a logical trunk group C 314. As discussed above with respect to FIG. 2, the data packets transmitted over the first set of pathways 312 may not directly correspond to the data packets received over the second set of pathways 314. More particularly, the same set of pathways can have a different name with respect to different components. For example in system 300, the set of pathways between the components 303 and 304 are associated with the egress logical trunk group B 312 with respect to one component (e.g., the switching component 303) and associated with the ingress logical trunk group C 314 with respect to another component (e.g., the switching component 304).
  • In some embodiments, the first set of pathways (e.g., the set of pathways actually taken for the data traveling from the component 303 to the component 304) does not correspond directly (e.g., hop for hop through the packet-based network 302) with the second set of pathways (e.g., the set of pathways actually taken by the data going from the component 304 to the component 303). The set of data packets associated with the original call received by the component 304 is reassembled into a signal by a packet assembler/disassembler that can be co-located with switching component 304. A second connection D 316 (e.g., a PSTN trunk group) transmits the reassembled data from switching component 304 to a second network 318 (e.g., a PSTN network or a portion of the general PSTN) for further processing or communication to the intended call recipient.
  • In some embodiments, the second network 318 can be a packet-based network (e.g., an IP network), and the second connection D 316 can be a logical trunk group (e.g., an IP trunk group) without departing from the scope of the invention. In the illustrative example depicted in FIG. 3, the network 318 may also be referred to as the PSTN 318, and the connection D 316 may also be referred to as PSTN connection D 316.
  • From the point of view of the PSTNs 308, 318, the network 302 can appear as a single distributed switch. In some embodiments, the control component 306 provides routing instructions to switching component 303 based on the characteristic of the data source (e.g., the PSTN 308 or the first PSTN connection A 310) or the data destination (e.g., the switching component 304, the second PSTN connection D 316, or the PSTN 318), where the characteristic includes name, signaling protocol, TSAP, IP address or any combination of these. More particularly, the control component 306 indicates to the switching component 303 routing data (e.g., the IP address of the component 304) to employ for transmitting data through the core network 302 based in part on the characteristic. In some embodiments, data associated with a call is received by the switching component 303 over the PSTN connection 310, and the switching component 303 transmits a signal (e.g., a policy request) to the control component 306 indicating that the data was received or requesting routing or transmission instructions.
  • The control component 306 can provide to the switching component 303 routing information through the network 302 for the switching component 304 to ultimately transmit the data to the PSTN connection D 316—the control component 306 can provide routing information based on a characteristic associated with the data or a characteristic of a data source or destination. More particularly, the control component 306 provides routing information through the network 302 without knowledge of the particular topology of the network 302 by, for example, using the name of the logical trunk group with which a set of pathways is associated (e.g., logical trunk group B 312) to select the route based in part on, for example, the port (e.g., a DS0 or a DS1 port) on switching component 303 at which the data arrived from PSTN connection 310. Other characteristics can be used by control component 306 to select and provide routing information. From the point of view of the PSTNs 308, 318, the network 302 routes the data from the first PSTN connection A 310 to the second PSTN connection D 316.
  • For example, as a call is set up from PSTN connection A 310 to PSTN connection D 316 by the components 303 and 304, the components 303 and 304 associate the logical trunk groups B 312 and C 314 with the call. The component 303 associates the logical trunk group B 312 with the call as the call egresses the component 303. As the call ingresses the component 304, the component 304 associates the logical trunk group C 314 with the call. This procedure for associating a logical trunk group in the network core (e.g., the network 302) with a call is sometimes referred to as logical trunk group selection (or IP trunk group selection in the case where the core network is an IP-based network). In some embodiments, the control component 306 is employed in the logical trunk group selection. In some embodiments, information regarding the route through the network is available to the switching component 303 without communication to the control component 306 (e.g., the information can be stored on or available to the switching component 303).
  • In some embodiments, data associated with a call is processed by the network 302. The data can be processed by a call processor having an interface to the network 302 (e.g., a network card with an IP address). A call processor generally refers to a module for implementing functionality associated with call transmission through a network, for example, by providing signaling information, routing information, or implementing other functionalities with respect to data associated with a call such as compression or decompression or signal processing. In some embodiments a call processor can include the switching components 303, 304 and/or the control component 306 or other devices or modules not illustrated.
  • In some embodiments, data associated with a call includes information relating to the characteristic (e.g., information related to the characteristic forms part of a data packet). The characteristic can include a name, an IP address, a signaling protocol, a transport service access point, or any combination of these. The characteristic can be associated with a data source, a call processor, a set of pathways, a logical trunk group, or combinations of such elements.
  • In some embodiments, a selector selects a logical trunk group associated with a set of pathways over which a call processor (e.g., the switching component 303) can transmit the data based at least in part on the characteristic. The selector can be a module operable within the network 302. In some embodiments, the call processor (e.g., the switching component 303) includes the selector. In other embodiments, the selector is located remotely from a call processor (e.g., co-location with control component 306). A user can prioritize characteristics such that the selector considers first the highest-priority characteristic for routing (e.g., name) and then considers a second-highest-priority characteristic (e.g., IP address) only if routing is not possible using the highest-priority characteristic. In some embodiments, a default logical trunk group associated with a default set of pathways is available for selection by the selector if the lowest-priority set is unavailable.
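A prioritized selector of this kind might be sketched as follows; the characteristic names, table contents, and default group name are all invented for illustration:

```python
# Sketch of prioritized characteristic matching with a default fallback.
# Highest-priority characteristic (here, "name") is tried first; the
# default logical trunk group is used when nothing matches.

PRIORITY = ["name", "ip_address"]

TABLES = {
    "name": {"FROM_Oregon": "TG-A 160"},
    "ip_address": {"192.0.2.150": "TG-B 165"},
}

DEFAULT_TRUNK_GROUP = "TG-DEFAULT"

def select_trunk_group(call_characteristics):
    """Try each characteristic in priority order; fall back to a
    default logical trunk group when no characteristic yields a match."""
    for characteristic in PRIORITY:
        value = call_characteristics.get(characteristic)
        group = TABLES[characteristic].get(value)
        if group is not None:
            return group
    return DEFAULT_TRUNK_GROUP
```

Here a call carrying a recognized name routes on the name alone; a call with only a known IP address falls through to the second table; a call matching neither receives the default group.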
  • By way of example, using the name of a data source as the characteristic, data associated with a route (“route data”) is provided to the selector, such that the route data can be used to associate the call data (e.g., the packets associated with the call) with a logical trunk group (e.g., the logical trunk group B 312 or the logical trunk group C 314) representing a set of pathways through the packet network 302. The logical trunk group associated with the route can be denoted with a name (e.g., “Logical Trunk Group B 312” or “Logical Trunk Group C 314”). In one embodiment, a telephony device with a network interface (e.g., switching component 303), can be configured to transmit the data through the network based on the name of the associated logical trunk group (e.g., the logical trunk group B 312) or the destination of the route (e.g., the second PSTN connection D 316). In such an embodiment, the route through the network 302 is chosen by the selector based on the name of the logical trunk group, which allows an administrator to control the route or pathways of data transfer in a manner similar to that employed by an administrator with respect to PSTN trunk groups. An exemplary configuration of route data is depicted in Table 1 below:
    TABLE 1
    Route Data

    Route  Data                   Destination     Destination                Route      Route
    Entry  Destination            IP Address      Trunk Group                Signaling  Specific
                                                                             Protocol   Data
    1      Switching element 303  10.160.100.101  Logical Trunk Group B 312  (null)     *
    2      Switching element 304  10.160.101.101  Logical Trunk Group C 314  SIP        *
    3      Switching element      10.160.255.255  Logical Trunk Group E      SIP-T      *
           (not shown)                            (not shown)
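The route data of Table 1 could be represented programmatically as a list of records. The addresses and trunk group names below mirror Table 1, while the record layout and lookup helper are hypothetical; a signaling value of None stands for the null entry used for intra-component transfers:

```python
# Table 1 expressed as route-data records (field names are assumptions).

ROUTE_DATA = [
    {"entry": 1, "destination": "Switching element 303",
     "ip": "10.160.100.101", "trunk_group": "Logical Trunk Group B 312",
     "signaling": None},       # null entry: intra-component transfer
    {"entry": 2, "destination": "Switching element 304",
     "ip": "10.160.101.101", "trunk_group": "Logical Trunk Group C 314",
     "signaling": "SIP"},
    {"entry": 3, "destination": "Switching element (not shown)",
     "ip": "10.160.255.255", "trunk_group": "Logical Trunk Group E (not shown)",
     "signaling": "SIP-T"},
]

def route_for(destination_name):
    """Return the first route entry whose data destination matches."""
    for record in ROUTE_DATA:
        if record["destination"] == destination_name:
            return record
    raise KeyError(destination_name)
```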
  • Referring to Table 1, the entries associated with “Route Entry 1” can correspond to switching component 303. More specifically, the entries can refer to transmitting data across switching component 303. “Data Destination” can correspond to the characteristic “name,” and “Destination IP Address” can correspond to the characteristic “IP address.” Switching element 303 can communicate with or transmit data associated with a call to the set of pathways associated with the logical trunk group B 312. “Route Signaling Protocol” identifies the signaling protocol used to communicate between switching elements (e.g., between switching element 303 and switching element 304). In some embodiments, “Route Signaling Protocol” refers to the signaling protocol associated with a set of pathways (e.g., the set of pathways associated with the logical trunk group B 312). “Route Signaling Protocol” can identify a physical location with respect to the network or call processors in communication with the network. For example, the null entry associated with Route Entry 1 indicates that the call is transmitted from one location on the switching component 303 to another location (e.g., the set of pathways associated with the logical trunk group B 312 or the TSAP associated with that set of pathways). The null entry is the signaling protocol associated with transmitting data within or across the switching component 303 and generally is not configured by an administrator.
  • The entries associated with “Route Entry 2” can correspond to switching component 304, e.g., the destination of the data after the call egresses switching component 303. The entry associated with “Destination Trunk Group” can refer to the logical trunk group associated with the egress call leg with respect to the switching component 303 (e.g., the logical trunk group C 314). The “Route Signaling Protocol” can indicate that the call is to be transmitted to switching component 304 using the SIP signaling protocol. More particularly, a network administrator can configure call routing based on the desired communication protocol to be used between two switching components. For example, the Route Signaling Protocol associated with Route Entry 3 is SIP-T. A network administrator could select the signaling protocol SIP-T and hence the route and logical trunk group that will handle the call. In some embodiments, SIP or SIP-T signaling protocol can indicate transmission to a switching component logically remote from switching component 303 but still operating within the network core (e.g., switching component 304). The signaling protocols of Table 1 are merely illustrative and other signaling protocols may be used, for example H.323.
  • In some embodiments, a “Route Signaling Protocol” entry that is not a null entry (e.g., an entry of SIP, SIP-T, or H.323) indicates, for example, data transmission to a call processor or switching component (not shown) logically remote from switching component 303 or operating outside the network 302 core. In some embodiments, the signaling protocol is an industry-standard signaling protocol or a proprietary signaling protocol between edge devices, such as a GW-GW protocol, which refers generally to the signaling protocol between two gateways.
  • In some embodiments, the characteristic that controls the logical trunk group chosen by the selector for associating the transmitted or received data is the IP address or signaling protocol associated with a set of pathways or with the call processor. In some embodiments, the characteristic is a VLAN identifier. In general, network equipment (e.g., call processors or routers) can be grouped according to a packet-based network address (e.g., IP network address) associated with each respective processor. Such groupings can be assigned to a logical trunk group. The call processor can maintain a mapping of IP network addresses that are associated with the logical trunk groups. In some embodiments, the mapping is contained in an object or a table that is available to the call processor (e.g., Table 1). In a particular embodiment, a separate mapping is maintained for data that is received by a call processor (e.g., ingress call legs) and data that is transmitted by a call processor (e.g., egress call legs). In some embodiments, the characteristic that is associated with the logical trunk group (e.g., a name, IP address, signaling protocol, VLAN identifier, or a combination of these) is associated generally with a data source, a data destination, or a trunk group. In some embodiments, the characteristic includes more than one type of characteristic from some combination of data sources, data destinations, or trunk groups.
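Maintaining separate ingress and egress mappings from IP network groupings to logical trunk groups, as described above, might look like the following sketch; the subnets and group names are invented for illustration:

```python
import ipaddress

# Sketch of per-direction mappings from IP network groupings to logical
# trunk groups. One table serves received data (ingress call legs), the
# other serves transmitted data (egress call legs).

MAPPINGS = {
    "ingress": {ipaddress.ip_network("10.1.0.0/16"): "TG-A 160"},
    "egress": {ipaddress.ip_network("10.2.0.0/16"): "TG-B 165"},
}

def trunk_group_for(direction, address):
    """Map a call leg's peer address to a logical trunk group, using
    the table for the given direction ("ingress" or "egress")."""
    addr = ipaddress.ip_address(address)
    for network, group in MAPPINGS[direction].items():
        if addr in network:
            return group
    return None  # no grouping covers this address
```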
  • When data associated with a call are processed by a call processor, a logical trunk group can be associated with or assigned to the call. More particularly, data associated with a call that is transmitted to switching component 303 from PSTN connection 310 can be associated with a first logical trunk group. For example, the ingress call leg established at the switching component 303 from the PSTN connection A 310 can be associated with a logical trunk group. As the switching component 303 processes the call, data associated with the call is associated with the logical trunk group B 312 based on the destination of the call. In such a configuration, the PSTN connection A 310 and the set of pathways associated with the logical trunk group B 312 are each in communication with the call processor (e.g., switching component 303). An egress call leg is associated with the logical trunk group B 312.
  • In some embodiments, the logical trunk group B 312 is selected based on the source of the call (e.g., the PSTN 308 or PSTN connection A 310) or the signaling protocol in which the signaling data associated with the call was received (e.g., SS7).
  • In some embodiments, each route through the packet-based network includes route specific data associated with the logical trunk group chosen by the call processor to associate with the transmitted or received call data. The route specific data can include a list of peer-signaling addresses, a peer-signaling protocol, and other configuration data that can be used to determine characteristics of a particular call. Some specific examples of the route specific data include the codec (e.g., the compression/decompression standard used to assemble the packet), DiffServ code point, packet size, codec options (e.g., silence suppression), fax/modem/data call handling, and DTMF tone handling. The route specific data can include data associated with signaling to determine the content of signaling messages and the sequence or order of signaling messages. The route specific data can be invoked or accessed by a switching component (e.g., the switching component 303) to provide insight into the behavior (e.g., the signaling behavior) associated with a particular signaling peer. The route specific data can be associated with network equipment that forms at least a part of the set of pathways associated with that logical trunk group.
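Route specific data of this kind could be held as a per-trunk-group record. Every value below is an invented example of the categories the text names (codec, packet size, DTMF handling, and so on), not configuration from the specification:

```python
# Sketch of per-route specific data keyed by logical trunk group name.
# All field values are hypothetical illustrations.

ROUTE_SPECIFIC_DATA = {
    "Logical Trunk Group B 312": {
        "codec": "G.711u",              # packet assembly standard
        "packet_size_ms": 20,
        "silence_suppression": True,    # a codec option
        "dscp": 46,                     # DiffServ code point
        "fax_modem_handling": "T.38",
        "dtmf_handling": "RFC 2833",
        "peer_signaling_addresses": ["10.160.101.101"],
        "peer_signaling_protocol": "SIP",
    },
}

def codec_for(trunk_group):
    """Return the codec a switching component would apply on this route."""
    return ROUTE_SPECIFIC_DATA[trunk_group]["codec"]
```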
  • In still other embodiments, the logical trunk group is selected based in part on the IP network to be traversed by the set of pathways. In such an embodiment, the selector (also referred to herein as an IP network selector) selects the IP network to transmit the data instead of or in addition to selecting the set of pathways through that network. The selector, in effect, selects a set of pathways, and thus the associated logical trunk group, by selecting the IP network. For example, the selector implicitly selects the logical trunk groups associated with the sets of pathways associated with or passing through network 302 (e.g., the logical trunk group B 312 and the logical trunk group C 314) by routing calls to the switching component 303. An exemplary embodiment of a table for selecting an IP network or logical trunk groups associated with that network is depicted in Table 2.
    TABLE 2
    Ingress IP Trunk Group Selection Table

           IP Network Selector           Signaling
                                         Protocol   Logical
    Entry  Network        Network Mask   Selector   trunk group
    1      209.131.0.0    255.255.0.0    SIP-T      FROM_LA
    2      209.131.16.0   255.255.240.0  SIP-T      FROM_MA
    3      171.1.0.0      255.255.0.0    SIP        FROM_UK
    4      172.4.0.0      255.255.0.0    SIP        FROM_UK
  • Entry 1 of Table 2 represents a range of IP host addresses from 209.131.0.0 to 209.131.255.255, and Entry 2 represents the range of IP host addresses from 209.131.16.0 to 209.131.31.255. In such an embodiment, both Entry 1 and Entry 2 indicate that the SIP-T signaling protocol is being used (e.g., for data transmissions within a network core). Entry 1 of Table 2 is associated with a logical trunk group named FROM_LA, and Entry 2 is associated with a logical trunk group named FROM_MA.
  • As depicted, the range of host addresses defined by Entry 2 falls within the range specified by Entry 1 (e.g., the range specified by Entry 2 is a subset of the range specified by Entry 1). In some embodiments, the characteristic that is used to select a logical trunk group is an IP address. Data arriving from a call processor having an IP address of, for example, 209.131.17.43 and employing the SIP-T protocol matches both Entry 1 and Entry 2 of Table 2. In such an example, Entry 2 provides a more specific match because its range of IP addresses is a subset of the range of Entry 1, and thus Entry 2 is selected (e.g., by a selector). Thus, the logical trunk group with which the data is associated for that call is the logical trunk group named "FROM_MA".
  • The selector can look up Table 2 and determine that Entry 2 is the more specific address match when the characteristic is an IP address. The selector can then provide information about Entry 2 to a call processor (e.g., the switching component 303) that egresses the call over the network via the set of pathways associated with the logical trunk group named FROM_MA in part in response to the information returned by the selector. The information returned by the selector after consultation with Table 2 can change as the characteristic changes, for example, in response to a user-provided input or configuration. For example, Table 2 includes characteristics associated with a name of a logical trunk group (e.g., FROM_UK), a signaling protocol associated with a set of pathways associated with that trunk group (e.g., SIP), an IP address associated with a set of pathways associated with that trunk group (e.g., 172.4.0.0), or a combination of these characteristics. Any of the characteristics of Table 2 can be used by the selector to determine the set of pathways over which to transmit the data and thus the logical trunk group with which to associate the data.
  • In some embodiments, the list of IP networks available to the IP network selector is not contiguous. For example, Entry 2 and Entry 3 in Table 2 represent two non-contiguous IP networks, as illustrated by the “Network Number” associated with each entry. As depicted, Entry 2 is associated with a network located in the United States, and Entry 3 is associated with a network located in the United Kingdom. Such a configuration can provide flexibility in the network design to aggregate a number of networks, represented by a range of host IP addresses, into one configuration element or object (e.g., Table 2) at the application level and for promoting network growth.
  • As the size of a network or node expands beyond the capabilities of a specified range of host IP addresses, additional ranges of IP addresses can be added to represent the same set of pathways and thus are associated with the same logical trunk group. By way of nonlimiting example, a second packet-based network 172.4.0.0/16 can be added to the network associated with a set of pathways originating from the United Kingdom. In such an example, both 171.1.0.0/16 and 172.4.0.0/16 are associated with the logical trunk group named FROM_UK.
  • In some embodiments, the IP network address associated with an IP network or set of pathways through the IP network can include an IP network number and an IP network mask. Such a configuration allows transparent communication between the network in which the data originates (e.g., the PSTN 308) and the network over which data is transmitted (e.g., network 302) because data is transmitted to a call processor having an IP address (e.g., a network card with an IP address) associated with the network mask (e.g., the mask 255.255.0.0 of Entry 1 of Table 2). The IP address of the network that actually transmits the data (e.g., the network number 209.131.0.0 of Entry 1 of Table 2) can change without requiring reconfiguration of Table 2 or the PSTN connection 310. More particularly, the IP address associated with Entry 1 can be changed (e.g., by replacing the network card of the switching component 303 or the switching component 303 itself) without affecting the address to which the PSTN 308 transmits the data.
  • A logical trunk group can be associated with the IP network and can include a characteristic as described above. In some embodiments, the logical trunk group is chosen using the longest prefix match algorithm (e.g., the most specific entry from, for example, Table 2 is chosen). In other embodiments, different methods for selecting an IP network can be employed to arrive at the association with a logical trunk group.
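By way of a nonlimiting illustration, the longest prefix match selection over Table 2 can be sketched as follows. The network ranges mirror Table 2 above; the function name and list structure are illustrative assumptions, not part of the disclosure.

```python
import ipaddress
from typing import Optional

# Table 2 as (network, logical trunk group) pairs, using the network
# number and mask from each entry.
TABLE_2 = [
    (ipaddress.ip_network("209.131.0.0/255.255.0.0"), "FROM_LA"),
    (ipaddress.ip_network("209.131.16.0/255.255.240.0"), "FROM_MA"),
    (ipaddress.ip_network("171.1.0.0/255.255.0.0"), "FROM_UK"),
    (ipaddress.ip_network("172.4.0.0/255.255.0.0"), "FROM_UK"),
]

def select_trunk_group(host_ip: str) -> Optional[str]:
    """Return the trunk group of the most specific (longest-prefix) match."""
    addr = ipaddress.ip_address(host_ip)
    matches = [(net, tg) for net, tg in TABLE_2 if addr in net]
    if not matches:
        return None
    # The entry with the longest prefix (largest prefixlen) is the most
    # specific match, per the longest prefix match algorithm.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# 209.131.17.43 falls inside both Entry 1 (/16) and Entry 2 (/20);
# the more specific /20 entry wins, yielding FROM_MA.
```

A host such as 209.131.200.1 matches only Entry 1 and would instead be associated with FROM_LA.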
  • In an advantageous configuration, the architecture described with respect to selecting a logical trunk group associated with a set of pathways through the IP network (e.g., network 302) is scalable. For example, each logical trunk group (e.g., the logical trunk group B 312 and the logical trunk group C 314) can be configured by using an IP network selector rather than a particular set of nearest neighbors (e.g., a particular topology). An IP network selector generally selects a network node (e.g., the data source 202 of FIG. 2) rather than an individual gateway, switch or call processor (e.g., switching component 304) for connecting and transmitting data associated with a call. More specifically, network equipment that can be added to a particular network node is automatically associated with that network node when the network node is selected for data transmission by the IP network selector. Because the network node is selected for data transmission rather than individual network equipment or sets of pathways, the selector does not require knowledge of the network node's topology or composition. More particularly, a logical trunk group is selected when the network node is selected to transmit data associated with a call, even if a particular set of pathways or piece of network equipment associated with the set of pathways was added to the network node after the creation of the object (e.g., Table 2).
  • FIG. 4 illustrates an exemplary graphical user interface for implementing control features in a packet-based network. The graphical user interface ("GUI") 400 includes a plurality of fields associated with resource parameters for controlling data in an IP network, particularly data packets associated with a logical trunk group through an IP network (e.g., the set of pathways associated with the logical trunk group B 312 of FIG. 3). The GUI 400 can be displayed on a display means (e.g., a computer monitor) in communication with elements of an IP network (e.g., switching component 303 or control component 306). In some embodiments, a module is adapted to generate the GUI 400. The GUI 400 allows a user to determine or control various configurable variables in connection with a logical trunk group (e.g., displayed in the GUI 400 as field 402). The configurable variable is associated with a control feature or resource parameter that is associated with a logical trunk group. In some embodiments, the appearance of the GUI 400 differs depending on the value of the field 402. Additionally, the values of the resource parameters (e.g., the fields 404, 406, 408, and 410 and the sub-fields associated with those fields) can differ depending on the value of the field 402. In other embodiments, field 402 can refer to a group of logical trunk groups or a combination of groups as discussed below with respect to FIGS. 7 and 8.
  • The resource parameter can include scalar quantities (e.g., bandwidth capacity, call capacity, signal processing speed, or data packet volume of the field 404), vector quantities (e.g., a directional characteristic of field 406), operational states or toggle-like quantities (e.g., in-service or out-of-service operational states of field 408), or a combination of any of these. In other embodiments, more than one user interface can be used to implement control functions. Exemplary resource parameters and control features will be described with reference to the exemplary networks and devices depicted in FIG. 3, but the resource parameters and control features can be implemented with respect to other networks or devices (e.g., the exemplary configuration of FIGS. 1 or 2 or other configurations).
  • The illustrated scalar parameters involving configurable variables include fields for the number of calls 404 a, DSP resources 404 b, data packet volume 404 c, bandwidth 404 d, or other resources (not shown) associated with a signaling peer. The scalar parameters can be configured by inputting a data value into the sub-field or through, for example, a drop-down menu. When data associated with a call exceeds any of the configured resource parameters, a trigger event occurs (e.g., the data packets are not transmitted over the set of pathways associated with the logical trunk group with which the trigger event is associated). In some embodiments, a particular call or call session can be more "expensive" in terms of the scalar parameters required to guarantee a minimum quality of service. An administrator can configure the scalar parameters to limit the resources that are used for each call, which allows the administrator to limit the number of calls from a particular signaling peer (e.g., switching component 303 of FIG. 3).
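By way of a nonlimiting illustration, the scalar-parameter trigger described above can be sketched as an admission check. The parameter names and limit values below are illustrative assumptions that loosely mirror GUI fields 404 a-d; they are not taken from the disclosure.

```python
# Hypothetical configured scalar limits for one logical trunk group,
# loosely mirroring GUI fields 404a-d.
LIMITS = {
    "calls": 100,
    "dsp_channels": 64,
    "packets_per_sec": 50_000,
    "bandwidth_kbps": 8_000,
}

def admit(usage: dict, cost: dict) -> bool:
    """Admit a call only if adding its resource cost exceeds no limit.

    A trigger event (rejection) occurs when any configured scalar
    parameter would be exceeded, keeping "expensive" calls off the
    trunk group's set of pathways.
    """
    for resource, limit in LIMITS.items():
        if usage.get(resource, 0) + cost.get(resource, 0) > limit:
            return False  # trigger event: do not transmit on this trunk group
    return True

usage = {"calls": 99, "bandwidth_kbps": 7_950}
ok = admit(usage, {"calls": 1, "bandwidth_kbps": 40})       # within all limits
rejected = admit(usage, {"calls": 1, "bandwidth_kbps": 80}) # bandwidth exceeded
```

An administrator tightening any entry of `LIMITS` correspondingly limits the calls admitted from the associated signaling peer.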
  • Vector parameters can include a directional characteristic 406 a such as, for example, a “one-directional” characteristic 406 b or a “bi-directional” characteristic 406 c. In some embodiments, the vector parameters 406 can be configured by check-off boxes. Other, more complicated arrangements of vector parameters will be appreciated. For example, the directional characteristic 406 can be configured to allow bi-directional call traffic (e.g., by enabling bi-directional characteristic 406 c). The administrator can configure the directional characteristic 406 a to allocate a certain amount of resources for calls in a first direction (e.g., incoming calls) and a certain, not necessarily equal, amount of resources for calls in a second direction (e.g., outgoing calls).
  • Another field 408 involves the operational state associated with a logical trunk group (e.g., “in-service” 408 a or “out-of-service” 408 b) that allows a user to control whether the particular set of pathways associated with the logical trunk group indicated in field 402 is accessible. In some embodiments, a logical trunk group can be in-service for calls in one direction and out-of-service for calls in the other direction. The packet outage detection field 410 permits a user to determine or define whether a trigger event occurs with respect to the state of a set of data packets in the set of pathways associated with a particular logical trunk group. More particularly, the packet outage detection field 410 allows the user to determine or define parameters associated with various states of sets of data packets (e.g., a transmitted, received, lost, or queued state).
  • In some embodiments, a logical trunk group (e.g., the logical trunk group B 312) includes a vector parameter 406 or directional characteristic 406 a. The directional characteristic 406 a can refer to the direction of a call leg with respect to a particular switching component (e.g., the switching component 303). In some embodiments, ingress call legs are referred to as "inbound" calls and egress call legs are referred to as "outbound" calls. A call leg extends between two endpoints, which can include gateways, call processors, data sources, switching components, or other network equipment (e.g., between the switching components 303, 304 or, more generally, between the PSTN 308 and the PSTN 318). In some embodiments, the endpoints include a monitor or controller (e.g., the control component 306) and a switching component (e.g., the switching component 303) that can apply access control to data packets associated with the logical trunk group (e.g., the logical trunk group B 312). One advantage achieved by this embodiment includes preventing further congestion of data traffic downstream of the switching component.
  • In embodiments involving a one-way directional characteristic, the characteristic includes "inbound" or "outbound." For example, a logical trunk group having the associated "inbound" directional characteristic can have control measures imposed (e.g., resource limitations) on the data packets received at a call processor (e.g., the switching component 303). The controller does not allow data packets associated with an egress (e.g., outbound) call leg to access a set of pathways associated with that logical trunk group; only data packets associated with ingress call legs are allowed to potentially access the set of pathways. Moreover, data associated with an ingress call leg can gain access to a set of pathways associated with the logical trunk group if the set includes sufficient resources to process the call (e.g., non-congested data traffic). A directional characteristic can impose a control in addition to the scalar parameters described above. In other embodiments, a vector characteristic can be imposed on a set of pathways associated with a logical trunk group without a scalar quantity for the resource parameter.
  • In embodiments employing a two-way directional characteristic, a resource parameter can be shared by incoming call traffic and outgoing call traffic associated with the same logical trunk group, or each direction can include its own resource parameter. Data packets and/or calls are permitted access to the set of pathways associated with a logical trunk group provided the logical trunk group has sufficient resources available to transmit the packets (e.g., the data packet resource requirements do not exceed the resource parameter associated with the logical trunk group). More particularly, a call can be connected across a given IP network (e.g., the network 302), and the logical trunk group is selected based on whether the call is an incoming call or an outgoing call with respect to a particular switching component.
  • Another advantage of a two-way directional characteristic involves resource reservation. In some embodiments, when data packets (e.g., associated with a call) ingress a call processor (e.g., the switching component 303) and are then transmitted over a set of pathways associated with an IP trunk group having a two-way directional characteristic, a resource parameter (e.g., a scalar quantity such as number of calls or a vector parameter such as a directional characteristic) can be reserved for unrelated outgoing calls. In this way, resources can be reserved for outgoing calls from a particular switching component to ensure that, for example, an increased number of incoming calls does not consume all of the resources of the switching component. Such a configuration can provide a quality of service assurance because an increase in data traffic associated with that logical trunk group does not affect the resources already reserved, which can be "shielded" from the increased traffic to prevent a call from being disconnected or dropped. In the telephony field, such a configuration reduces loss of data packets associated with an increase in call traffic over a data link associated with that logical trunk group (e.g., the set of pathways associated with the logical trunk group B 312).
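By way of a nonlimiting illustration, the outgoing-call reservation described above can be sketched as follows. The class name, capacities, and reservation size are illustrative assumptions, not part of the disclosure.

```python
class TwoWayTrunkGroup:
    """Sketch of two-way resource sharing with a reservation for
    outgoing calls, so that a surge of incoming traffic cannot consume
    every resource of the switching component."""

    def __init__(self, total_calls: int, reserved_outgoing: int):
        self.total = total_calls
        self.reserved_out = reserved_outgoing  # shielded from inbound traffic
        self.inbound = 0
        self.outbound = 0

    def admit(self, direction: str) -> bool:
        used = self.inbound + self.outbound
        if direction == "outbound":
            admitted = used < self.total
        else:
            # Inbound calls may not dip into the outgoing reservation.
            admitted = used < self.total - self.reserved_out
        if admitted:
            if direction == "outbound":
                self.outbound += 1
            else:
                self.inbound += 1
        return admitted

tg = TwoWayTrunkGroup(total_calls=10, reserved_outgoing=2)
for _ in range(10):
    tg.admit("inbound")   # only 8 of the 10 attempts are admitted
tg.admit("outbound")      # the reservation still admits an outgoing call
```

The reserved slice is what "shields" outgoing capacity from an inbound traffic surge.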
  • In another advantageous configuration, a logical trunk group can include an operational state parameter 408 such as “in-service” 408 a or “out-of-service” 408 b. The operational state parameter 408 can be controlled by a central user, e.g., a network administrator, to add or remove available pathways associated with the logical trunk group for data transmission. In some embodiments, the user changes the operational state parameter 408 of a logical trunk group during an existing call session. The change to the operational state parameter does not affect existing call sessions, only future call sessions.
  • An operational state parameter 408 can add a level of control (in addition to scalar parameters 404 or vector parameters 406) to a set of pathways associated with a logical trunk group. For example, a logical trunk group associated with the out-of-service operational state 408 b is not available for handling call sessions (e.g., data packets associated with a call are not able to access that set of pathways associated with the logical trunk group, regardless of the scalar parameters 404 or vector parameters 406 associated with that logical trunk group).
  • In general, resource parameters and access to a set of pathways associated with a logical trunk group can be controlled either statically or dynamically. The process of implementing control features can be referred to as “enforcing limits,” “applying controls,” “implementing control features,” or “comparing data to resource parameters” with respect to a logical trunk group. Other expressions for implementing control features with respect to data traffic can be used. In a statically controlled situation, the state of the data packets with respect to a logical trunk group can be monitored. A monitor module can provide the monitoring of the state associated with the data packets.
  • In an illustrative embodiment, when the state is lost or queued, the monitor provides information about the state to the selector. The selector in turn can select a set of pathways associated with a logical trunk group that is not associated with a lost or queued state (e.g., associated with a transmitted or received state) to transmit the additional data packets. For example, on call setup, the monitor can also observe and monitor data requirements associated with the data packets and compare those requirements with the resource parameter associated with the logical trunk group. If the data requirements exceed the resource parameter, the data can be rerouted, and the process is iterated until a logical trunk group (and its associated set of pathways) is found that can transmit the data packets.
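By way of a nonlimiting illustration, the static control loop described above, in which the selector skips trunk groups in a lost or queued state and iterates until one can carry the data, can be sketched as follows. The trunk group names, capacities, and tuple layout are illustrative assumptions.

```python
# Sketch of the static control loop: on call setup, compare the call's
# data requirements with each trunk group's available resources and
# iterate until a set of pathways that can transmit the data is found.

def route_call(requirement: int, trunk_groups: list) -> str:
    """trunk_groups: list of (name, available_capacity, state) tuples,
    where state reflects the monitored state of the group's packets."""
    for name, capacity, state in trunk_groups:
        if state in ("lost", "queued"):
            continue  # the monitor reports this group as congested
        if requirement <= capacity:
            return name  # requirements fit; transmit on this trunk group
    raise RuntimeError("no logical trunk group can carry this call")

groups = [
    ("TG-A", 10, "queued"),       # skipped: lost/queued state
    ("TG-B", 5, "transmitted"),   # skipped: insufficient capacity
    ("TG-C", 50, "received"),     # selected
]
chosen = route_call(requirement=20, trunk_groups=groups)
```

Here the rerouting "iteration" is the loop over candidate groups; a fuller implementation would re-consult the monitor between attempts.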
  • In other embodiments, dynamic control features are implemented with respect to data attempting to access a set of pathways associated with a particular logical trunk group. In such an embodiment, a monitor module provides monitoring of a state associated with a set of data packets. When the state is lost or queued, the monitor can provide information about the state to the selector or a controller. The controller (e.g., a user or control component 306) can adjust or configure the resource parameter associated with the logical trunk group, which in turn can allow a second set of data packets to be transmitted by the set of pathways associated with the logical trunk group. More particularly, the controller can increase the value of a scalar parameter 404 or change the value of the vector parameter 406 (e.g., from a "one-directional" 406 b characteristic to a "bi-directional" 406 c characteristic) or the operational state parameter 408 (e.g., from an "out-of-service" 408 b operational state to an "in-service" 408 a operational state).
  • In some embodiments, a set of pathways associated with a logical trunk group can be selected regardless of the state associated with data packets associated with that logical trunk group. More particularly, the set of pathways selected to transmit data associated with a call can be the set of pathways associated with a logical trunk group without optimal resource capacity. In some embodiments, the size of the set of data packets associated with a logical trunk group increases until a lost state arises (e.g., some packets in the set are not transmitted across the set of pathways associated with that logical trunk group as determined by the packet data received at a downstream call processor), at which point the selector no longer routes data packets to the set of pathways associated with that logical trunk group. In addition to limiting or selecting a set of pathways based on a state of the logical trunk group, the selection can be based in part on delay or jitter (e.g., variations in delay) measurements associated with a logical trunk group.
  • In a particular embodiment, controlling access dynamically can be accomplished by three cooperating algorithms implemented at three layers of operation in the controller: a set layer, a pathway layer, and a data admission layer. The algorithm for the set layer determines a state, also called a congestion state, for the logical trunk group that can be communicated to the selector, controller, or the pathway layer (e.g., control component 306). The algorithm for the pathway layer determines the resource capacity of the logical trunk group (e.g., by determining the resource capacities of each pathway that is a member of the set of pathways associated with that logical trunk group). The congestion state as determined by the set layer provides an input for the pathway layer algorithm. A capacity associated with a resource parameter of a logical trunk group can be an output. The output of the pathway layer (e.g., resources available with respect to a given set of pathways associated with the logical trunk group) can be an input to the data admission layer algorithm. For example, the data admission layer can compare the number of calls 404 a (e.g., the input from the pathway layer and the configurable resource parameter of the GUI 400 related to number of calls 404 a) to a maximum number of calls for reliable transmission. If the number of calls being processed by the logical trunk group exceeds the configurable variable 404 a, the data admission layer requests additional resources (e.g., an increased maximum number of calls) for that logical trunk group. A call processor or media gateway (e.g., the switching component 110 of FIG. 1) can implement three cooperating algorithms associated with dynamic access control. Dynamic resource allocation at the gateway or switch level permits decentralized resource management and reduces processing demands on a centralized resource manager (e.g., control component 306 of FIG. 3).
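By way of a nonlimiting illustration, the three cooperating layers described above can be sketched as three chained functions: the set layer's congestion state feeds the pathway layer, whose available capacity feeds the data admission layer. The thresholds, the 50% congestion discount, and the function names are illustrative assumptions, not part of the disclosure.

```python
def set_layer(loss_ratio: float) -> str:
    """Set layer: derive a congestion state for the logical trunk group
    (communicated to the selector, controller, or pathway layer)."""
    return "congested" if loss_ratio > 0.05 else "normal"

def pathway_layer(pathway_capacities: list, congestion: str) -> int:
    """Pathway layer: combine member-pathway capacities into the trunk
    group's capacity, discounted here when congestion is reported."""
    total = sum(pathway_capacities)
    return total // 2 if congestion == "congested" else total

def admission_layer(active_calls: int, max_calls: int, available: int) -> bool:
    """Data admission layer: admit while under both the configured
    maximum (e.g., GUI field 404a) and the pathway layer's capacity."""
    return active_calls < min(max_calls, available)

congestion = set_layer(loss_ratio=0.01)           # "normal"
available = pathway_layer([40, 60], congestion)   # 100 calls of capacity
decision = admission_layer(active_calls=80, max_calls=90, available=available)
```

When `admission_layer` returns False against the configured maximum, the sketch's analogue of "requesting additional resources" would be to raise `max_calls` for that trunk group.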
  • Advantageously, dynamic resource allocation permits call traffic-level control. A monitor module monitors the quality and performance of call traffic (e.g., various parameters) through a packet network at the packet level (e.g., by monitoring lost packets, jitter, or delay) and can extrapolate or aggregate the quality and performance to the call or logical trunk group level. When the parameters associated with network quality and performance fall below a threshold level, the monitor module can invoke a trigger action to prevent additional calls from being associated with the particular logical trunk group that is experiencing reduced quality or performance.
  • A network administrator can adjust call admission policies associated with the logical trunk groups in response to the trigger action or various other performance statistics. A specific implementation of traffic-level control includes a packet outage detection ("POD") algorithm associated with various trigger events. In some embodiments, the POD algorithm is implemented for use with a logical trunk group, particularly by digital signal processors associated with the logical trunk group. A flag associated with the state can be transmitted with the data packets and received with the transmission. The flag can indicate the size of the packet outage (e.g., the ratio of "lost" or "queued" packets to "transmitted" or "received" packets) or the location in the set of data packets at which the outage occurred.
  • In some embodiments, the packet outage is determined from the ratio of received packets to transmitted packets, or other combinations of states. In other embodiments, the state is detected by network equipment such as switching elements or gateways (e.g., switching component 303 or 304). In still other embodiments, real-time control protocol (also known as “RTCP”) statistics or implementations can detect the state.
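By way of a nonlimiting illustration, the outage-size ratio described above (lost/queued packets relative to transmitted/received packets) can be computed as follows. The function name and state labels are illustrative; a deployment could instead derive comparable figures from RTCP statistics.

```python
from collections import Counter

def outage_ratio(packet_states: list) -> float:
    """Ratio of 'lost'/'queued' packets to 'transmitted'/'received'
    packets, one way the size of a packet outage could be expressed."""
    counts = Counter(packet_states)
    bad = counts["lost"] + counts["queued"]
    good = counts["transmitted"] + counts["received"]
    return bad / good if good else float("inf")

states = ["received"] * 90 + ["lost"] * 8 + ["queued"] * 2
ratio = outage_ratio(states)  # 10 bad packets against 90 good packets
```

The alternative mentioned in the text, the ratio of received to transmitted packets, would be a one-line variant over the same counts.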
  • Various trigger events can prompt a packet outage alert to be associated with a logical trunk group. A user can adjust a resource parameter (e.g., on the GUI 400) to respond to the alert. For example, an administrator can operationally remove a logical trunk group from service or can adjust the trunk group's resource allocation in response to the outage alert.
  • In some embodiments, trigger events can be manually configured by a user to allow some control over network performance (e.g., by configuring the packet outage detection fields 410 a-d). More particularly, a user-provided configuration can define the trigger event. For example, the user can provide certain minimum or maximum parameters, or default parameters can be used. The user-provided configuration can be referred to as a configurable variable. Trigger events or configurable variables can include the following: packet outage minimum duration 410 a, outage detection intervals 410 b, amount of data detecting packet outage 410 c, minimum number of calls 410 d, or a combination of any of these.
  • With respect to packet outage minimum duration 410 a, the configurable variable can allow an administrator to specify criteria for adding a flag to the set of data packets (e.g., by inputting a value associated with the criteria into the GUI 400 in connection with packet outage detection). A sustained packet outage can be detrimental to call quality and can indicate a failure condition somewhere in the packet-based network (e.g., in some piece or pieces of network equipment associated with the set of pathways associated with the logical trunk group). In some embodiments, a burst loss of packets can perceptibly affect the quality of a phone call but without a significant effect on the end-users (e.g., the call participants). A momentary loss of packets can be considered "normal"; for example, temporary router congestion can result in a momentary loss of packets without disconnecting the call.
  • One advantageous feature of packet outage minimum duration can allow the administrator (e.g., the user providing the configuration to control component 306 via the GUI 400) to customize the trigger to match the conditions of a particular packet-based network. In some embodiments, the minimum packet outage duration 410 a is configurable in units of milliseconds, but any suitable interval is contemplated.
  • Some embodiments involve a configurable outage detection interval variable 410 b. In such a configuration, a user or an administrator can ignore the state of a set of data packets for data that is older than a specified time value. One advantageous feature of this configuration allows the administrator to address transient fault conditions. Packet outage events can be accumulated within a time interval. The detection algorithm can be based on the sum of all of the packet outages that occurred during the interval. The time interval could further be divided into, for example, three sub-intervals. In some embodiments, the packet outage detection interval is specified in seconds, but any suitable interval is contemplated.
  • With respect to the amount of data packets detecting packet outage 410 c and the minimum number of calls detecting packet outage 410 d, the configurable variable can allow a user or an administrator to specify when to trigger a flag based on the effect of packet outage on the rest of the network (e.g., the switching component 304, the second PSTN 318, or the call recipient). For example, if packet outage is detected in a small number of calls (e.g., fewer than the number specified by the configurable variable 410 c, 410 d in the GUI 400), the flag is not triggered. Conversely, if the packet outage is detected in a large number of calls (e.g., more than or equal to the number specified by the variable 410 c, 410 d), the flag can be triggered. The configurable variable can be defined, for example, as a number of calls or as a percentage of calls processed that are associated with a logical trunk group (e.g., the calls processed having data that travel through the logical trunk group B 312).
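By way of a nonlimiting illustration, a POD trigger combining the four configurable variables of field 410 might be sketched as follows. The configuration keys, thresholds, and function signature are illustrative assumptions about how such a check could be wired together.

```python
# Hypothetical POD configuration mirroring GUI fields 410a-d.
CONFIG = {
    "min_outage_ms": 500,           # 410a: packet outage minimum duration
    "interval_s": 30,               # 410b: outage detection interval
    "min_affected_fraction": 0.10,  # 410c: amount of data detecting outage
    "min_calls": 5,                 # 410d: minimum number of calls
}

def pod_triggered(outage_ms: int, outage_age_s: int,
                  affected_calls: int, total_calls: int) -> bool:
    if outage_age_s > CONFIG["interval_s"]:
        return False  # outage older than the detection interval: ignored
    if outage_ms < CONFIG["min_outage_ms"]:
        return False  # momentary burst loss is treated as "normal"
    if affected_calls < CONFIG["min_calls"]:
        return False  # too few calls affected to flag the trunk group
    return affected_calls / total_calls >= CONFIG["min_affected_fraction"]

flagged = pod_triggered(outage_ms=800, outage_age_s=10,
                        affected_calls=12, total_calls=100)
brief = pod_triggered(outage_ms=120, outage_age_s=10,
                      affected_calls=12, total_calls=100)
```

An administrator responding to a raised flag could then take the trunk group out of service or adjust its resource allocation, as described above.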
  • Another advantageous feature involves traffic monitoring with sets of data packets providing the traffic. Traffic data can be measured and made available to call processors for more effective transmission. Additionally, traffic data can be displayed on the GUI 400 and available to a user. In some embodiments, the traffic data can be communicated to a resource manager (e.g., a controller such as control component 306) or to various gateways (e.g., switching components 303, 304). The data can be reformulated, manipulated, or calculated into statistics, or it can be stored and archived without processing. In some embodiments, the traffic data can be used by a network management system (e.g., control component 306) to facilitate call routing, data transmission, or data traffic engineering.
  • When the data is available to a user (e.g., via the GUI 400), the user can view and understand how various network equipment performs or is performing with respect to call processing. The user can then adjust or manipulate the configurable variables and parameters of fields 402, 404, 406, 408, and 410 of the GUI 400 to improve network performance, in part in response to problems that are perceived or observed with respect to the network. For example, data associated with a call can be used to assemble call performance statistics with respect to a particular logical trunk group (e.g., the logical trunk group B 312) and thus give a user insight into the performance of the packet network (e.g., the set of pathways through the network 302 associated with the logical trunk group B 312), even if there is no direct knowledge of the topology of the packet network 302.
  • The statistics of multiple logical trunk groups can also give the user additional insights into the packet network when the overall topology is not known. Consider the system 100 in an example where the logical trunk group TG-A 160 is associated with the gateway 140 and the logical trunk group TG-B 165 is associated with the gateway 145; comparing their statistics can provide insight into the network 120. If the call performance statistics indicate that call quality is degrading with respect to calls between the gateways 110 and 140 but do not indicate that call quality is degrading with respect to calls between the gateways 110 and 145, then the problem lies in pathways specific to the gateway 140. If, however, call quality is degrading for both connections, then the problem lies in a pathway common to both or possibly in the packet network 120. The performance statistics can be observed or adjusted by a user or utilized by call processors (e.g., switching component 303).
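By way of a nonlimiting illustration, the fault-localization reasoning above, comparing degradation across trunk groups to infer whether the problem is specific or shared, can be sketched as follows. The quality scores, threshold, and returned strings are illustrative assumptions.

```python
def localize(quality: dict, threshold: float = 0.8) -> str:
    """Infer a likely fault location from per-trunk-group quality scores
    (higher is better). Illustrative sketch only."""
    degraded = [tg for tg, q in quality.items() if q < threshold]
    if quality and len(degraded) == len(quality):
        # Every trunk group degrades: suspect a common pathway/network.
        return "shared pathway or packet network"
    if degraded:
        # Only some trunk groups degrade: suspect their specific pathways.
        return "pathways specific to " + ", ".join(sorted(degraded))
    return "no degradation observed"

localize({"TG-A": 0.6, "TG-B": 0.95})  # only TG-A degrades
localize({"TG-A": 0.6, "TG-B": 0.55})  # both degrade: common cause
```

This mirrors the TG-A 160 / TG-B 165 comparison in the text: asymmetric degradation implicates the gateway 140 side, symmetric degradation implicates the packet network 120.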
  • Multiple user inputs can be used to configure parameters using the GUI 400. Examples of such inputs include buttons, radio buttons, icons, check boxes, combo boxes, menus, text boxes, tooltips, toggle switches, scroll bars, toolbars, status bars, windows, or other suitable icons or widgets associated with a GUI. In other embodiments, the network management system adjusts the control features without requiring the input of a user. It is worth noting that, for controlling the quality of PSTN calls and managing a PSTN, the Telcordia GR-477 standard on network traffic management deals with PSTN trunk group reservation and PSTN trunk group controls. Through the use of logical trunk groups, the control variables and performance statistics collected can mirror those used in the PSTN, for example, those defined by the GR-477 standard, to provide analogous management of calls going through packet-based networks.
  • Referring now to FIG. 5, a block diagram of a system 500 including exemplary networks and devices associated with distributed control features associated with access to sets of pathways associated with logical trunk groups is depicted. The exemplary configuration of the system 500 includes several call processors 502 a-c, which combine to form a logical network node 504. Each call processor 502 a-c has a defined logical trunk group 506 a-c, respectively, that is associated with call data transmitted through a packet-based communication channel 508. As discussed above, each call processor 502 a-c or logical trunk group 506 a-c can be associated with call admissions controls. The admissions controls of each component are combined to form an aggregated admissions control for the combined communications channel 508 for the node 504. The admissions controls are not required to be equally distributed; more particularly, each individual component (e.g., the call processors 502 a-c, the logical trunk groups 506 a-c, or combinations of them) can be associated with an individual set of admissions controls. In some embodiments, other types of data sources are in communication with the logical trunk groups 506 a-c.
  • In some embodiments, the combined communications channel 508 is associated with a resource parameter or limitation, and the parameter or limitation is distributed to the call processors 502 a-c. In this way, resource parameters of the communications channel 508 can be determined either by combining the resource parameters associated with the individual call processors 502 a-c or trunk groups 506 a-c, or by a centralized configuration, e.g., by an administrator (not shown). The resource parameter associated with the channel 508 can include additional features or different resource parameters than the result of combining the parameters or features of the call processors 502 a-c or trunk groups 506 a-c.
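  • One way a channel-level limit could be distributed to the call processors can be sketched as follows. This is a minimal illustration under assumptions: the proportional-split policy, weights, and names are hypothetical and not prescribed by the description above:

```python
def distribute_limit(channel_limit, processor_weights):
    """Split a resource limit of the combined communications channel
    (e.g., a maximum concurrent-call count) across call processors in
    proportion to per-processor weights, so that the per-processor
    shares sum exactly to the channel limit."""
    total_weight = sum(processor_weights.values())
    shares = {cp: (channel_limit * w) // total_weight
              for cp, w in processor_weights.items()}
    # Hand leftover units (lost to integer division) to the
    # heaviest-weighted processors first.
    leftover = channel_limit - sum(shares.values())
    heaviest = sorted(processor_weights, key=processor_weights.get,
                      reverse=True)
    for cp in heaviest[:leftover]:
        shares[cp] += 1
    return shares
```

  As noted above, the channel-level parameter need not equal the sum of the components' own parameters; a centralized configuration can override the combined value, which this split models by taking the channel limit as the input.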
  • In some embodiments, resources or parameters can be enhanced by allowing crank-back. Crank-back can involve selecting a new logical trunk group (e.g., the logical trunk group 506 b) when admission to a selected trunk group (e.g., the logical trunk group 506 a) fails. Call admission fails when data associated with a call setup cannot be reliably transmitted over the set of pathways associated with the selected trunk group (e.g., the logical trunk group 506 a). The communications channel 508 can include the logical trunk groups 506 a-c associated with the node 504, and can interface with a remote packet-based network 510. The packet-based network 510 can be physically and logically remote from the packet-based network (not shown) that includes the logical trunk groups 506 a-c (e.g., the communications channel 508 communicates with a component having an interface to the packet-based network 510). In some embodiments, the communications channel 508 is referred to as a group.
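  • The crank-back behavior described above can be sketched as follows. This is an illustrative reduction only; the capacity-based admission check and the class and field names are assumptions, as real admission controls can involve scalar, vector, and operational state parameters:

```python
class LogicalTrunkGroup:
    """Toy logical trunk group with a simple capacity-based
    admission control (illustrative only)."""

    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.active_calls = 0

    def admit(self):
        if self.active_calls < self.capacity:
            self.active_calls += 1
            return True
        # Call-setup data cannot be carried on this set of pathways.
        return False


def setup_with_crankback(trunk_groups):
    """Try trunk groups in preference order; on admission failure,
    crank back and select the next group. Returns the name of the
    admitting group, or None if every group refuses the call."""
    for tg in trunk_groups:
        if tg.admit():
            return tg.name
    return None
```

  For example, if admission to the group modeling 506 a fails because it is at capacity, the setup cranks back and is admitted by the group modeling 506 b.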
  • Referring now to FIG. 6, a block diagram of a system 600 including exemplary networks and devices including a component for centralized control of distributed network components within a logical network node is depicted. The system 600 of FIG. 6 includes the exemplary components depicted with respect to FIG. 5 with the addition of a resource manager 602. In one embodiment, the resource manager 602 provides monitoring and controlling functions for the logical trunk groups 506 a-c. The resource manager 602 can be, for example, a module for managing and allocating resources among call processors in a network node (e.g., network node 504 of FIG. 5).
  • The resource manager 602 can be in communication with the call processors 502 a-c. In some embodiments, the call processors 502 a-c request resources or resource parameters from the resource manager 602 as data associated with calls is processed. In the illustrated embodiment, the resource manager 602 employs a failure-minimizing algorithm to determine resource allocation among the call processors 502 a-c. One advantageous feature of the configuration of FIG. 6 is that no special routing is required to distribute the call control features or access control to the call processors 502 a-c. The resource manager 602 can also monitor and/or maintain statistics associated with data traffic on the logical trunk groups 506 a-c. Such information can be accessible to other network components (not shown). In some embodiments, the resource manager 602 can be analogous to the control component 306 depicted above with respect to FIG. 3.
  • FIG. 7 depicts an exemplary hierarchical configuration 700 associated with call control. Three levels 702, 704, 706 of hierarchy are depicted. The lowest level 702 includes trunk groups 708 a-e. In some embodiments, the trunk groups 708 a-e include associated control features as described above (e.g., resource parameters and admission controls). The second level 704 of the hierarchical structure includes two hierarchical groups 710 a-b including the logical trunk groups 708 a-e. The first hierarchical group 710 a includes three logical trunk groups 708 a-c, and the second hierarchical group 710 b includes two logical trunk groups 708 d-e.
  • In some embodiments, the control features associated with logical trunk groups 708 a-c are included in the hierarchical group 710 a (e.g., amalgamated or combined to define the resource parameters of the group 710 a). Additional control features may be associated with group 710 a that are not included individually in any of trunk groups 708 a-c. Likewise, the control features associated with logical trunk groups 708 d-e are included in hierarchical group 710 b, and additional control features may be associated with hierarchical group 710 b that are not included individually in any of the logical trunk groups 708 d-e. The third level 706 of the hierarchical structure includes a hierarchical combination 712 of the hierarchical groups 710 a-b. Hierarchical combination 712 can include all of the control features associated with the hierarchical groups 710 a-b and additional control features that are not included in either hierarchical group 710 a-b or in any logical trunk groups 708 a-e.
  • The resource parameter associated with the hierarchical combination 712 or the hierarchical groups 710 a-b can include scalar, vector, or operational state parameters, or any combination of them, as described above with respect to FIGS. 3 and 4. If an administrator knows about a portion of the topology of the packet-based network (e.g., the topology of the node 504), then the hierarchical trunk group relationships (e.g., the hierarchical configuration 700) can advantageously be used as a logical representation of that network architecture or topology.
  • In some embodiments, the resource manager 602 of FIG. 6 includes the hierarchical configuration 700. The resource manager 602 can allocate resources among the call processors 502 a-c either directly, by allocating resources for the trunk groups 506 a-c, or by allocating resources to the communications channel 508 to manage and control call traffic. Allocating resources to the communications channel 508 is analogous to providing a configuration at the second level of hierarchy 704 or the third level of hierarchy 706.
  • The hierarchical configuration 700 affects call processing. For example, to use the logical trunk group 708 a for call processing (e.g., the resources and set of pathways associated with that logical trunk group), both the hierarchical group 710 a and the hierarchical combination 712 should be in service (e.g., the operational state parameter associated with the hierarchical group 710 a and the hierarchical combination 712 is not set to “out-of-service”). Similarly, individual logical trunk groups 708 a-e can be removed from operational service by removing a hierarchical group 710 a-b or the hierarchical combination 712 from operational service (e.g., associating an operational state of “out-of-service” with the hierarchical combination 712). More specifically, to prevent data transmission using the logical trunk groups 708 a-c, a user can configure the hierarchical group 710 a or the hierarchical combination 712 with an operational state of “out-of-service” to prevent routing of data to pathways associated with logical trunk groups in subhierarchical levels (e.g., by using the GUI 400 of FIG. 4). One implementation of this concept is embodied in a peg counter. For example, when data is routed to the logical trunk group 708 a and resources are not available in the hierarchical combination 712 for the data, a peg counter can be incremented in the logical trunk group 708 a and in the hierarchical group 710 a, indicating a failure to allocate resources higher up the hierarchical configuration 700 (e.g., by the hierarchical combination 712).
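  • The hierarchical admission check and peg-counter behavior described above can be sketched as follows. This is a minimal model under assumptions: the class, field names, and the capacity-only resource check are illustrative, not prescribed by the description:

```python
class HierarchyLevel:
    """One level of the hierarchy: a logical trunk group (e.g., 708a),
    a hierarchical group (e.g., 710a), or the hierarchical
    combination (e.g., 712). Names and fields are illustrative."""

    def __init__(self, name, capacity, in_service=True):
        self.name = name
        self.capacity = capacity
        self.in_service = in_service
        self.active = 0
        self.peg_count = 0  # failed allocations higher up the hierarchy


def admit_call(levels):
    """Admit a call only if every level, ordered leaf to root (trunk
    group, group, combination), is in service and has capacity.

    On a failure at some level, peg counters are incremented at the
    levels below it, recording that resources were unavailable
    farther up the hierarchical configuration."""
    for i, level in enumerate(levels):
        if not level.in_service or level.active >= level.capacity:
            for lower in levels[:i]:
                lower.peg_count += 1
            return False
    for level in levels:
        level.active += 1
    return True
```

  In this sketch, marking the group modeling 710 a out of service blocks admission for its subordinate trunk groups, mirroring the “out-of-service” behavior described above.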
  • FIG. 8 depicts a system 800 including exemplary networks and devices employing a hierarchical configuration of call controls and related implementations. In the illustrated system 800, the configuration includes four network nodes 802 a-d, each a group including three media gateways, although in general a network node can include varying amounts of telephony equipment.
  • Generally, a network node (sometimes referred to as a point of presence (“POP”)) can include a configuration in which network services are provided to subscribers that are connected to the network node (e.g., the network node 802 a). The four network nodes 802 a-d can each include interfaces to a packet-based network 804. The interfaces are depicted in FIG. 8 as multiplexers 806 a-d, respectively. In some embodiments, one or more of the multiplexers 806 a-d include the resource manager functionality of the resource manager 602 of FIG. 6 and can implement the hierarchical control features described above with respect to FIG. 7. In some embodiments, the network equipment (e.g., a GSX9000 sold by Sonus Networks, Inc. of Chelmsford, Mass.) associated with each of the four network nodes 802 a-d implements resource manager functionality. In still other embodiments, the configuration of FIG. 8 includes the resource manager 602 of FIG. 6 (not shown) to manage hierarchical control features.
  • In some embodiments, the multiplexers 806 a-d are switching components. In a particular embodiment, each network node (e.g., the network node 802 a) includes a set of logical trunk groups 808 b-d configured for transmitting data to each of the other network nodes (e.g., the network nodes 802 b-d), through the packet-based network 804. More specifically, the network node 802 a can include a logical trunk group 808 b “dedicated” to network node 802 b (e.g., all calls routed between node 802 a and node 802 b are associated with the logical trunk group 808 b), a logical trunk group 808 c “dedicated” to network node 802 c (e.g., all calls routed between node 802 a and node 802 c are associated with the logical trunk group 808 c), and a logical trunk group 808 d “dedicated” to network node 802 d (e.g., all calls routed between node 802 a and node 802 d are associated with the logical trunk group 808 d).
  • The logical trunk groups can be associated with a call or a call session. In some embodiments, the logical trunk groups are associated with call traffic. The logical trunk groups 808 b-d can be combined to form a group or a combination as discussed above, represented in FIG. 8 by reference numeral 810. Block 812 illustrates an exemplary hierarchical configuration used by various network equipment for implementing hierarchical control features.
  • One advantage realized by a configuration including hierarchical control features is the ability of an administrator associated with the network node 802 a to control data transmission through the network 804 and to the other network nodes 802 b-d by allocating resources associated with the group 810 or the logical trunk groups 808 b-d associated with the group 810. Block 812 illustrates that the principle of a hierarchical configuration (e.g., the hierarchical configuration 700 of FIG. 7) can be implemented in the IP network 804, or the configuration can be implemented by other network equipment, for example the multiplexer 806 a or a control component (not shown). A hierarchical resource allocation allows multiple trunk groups to share common resources without interfering with other trunk groups. For example, an administrator can track the resources that are needed by a network node (e.g., the network node 802 a) or between network nodes in the network 804.
  • As a specific example of control implementations associated with FIG. 8, a call and data associated with the call can originate in the network node 802 a and terminate in the network node 802 d. In such a configuration, the multiplexer 806 a selects the set of pathways associated with the logical trunk group 808 d to transmit the data through the network 804. Any data control features associated with the logical trunk group 808 d, for example resource allocation or data admission, can then be implemented with respect to the data. Further, data control features associated with the group 810 can then be implemented with respect to the data.
  • When the control features have been implemented, the data can be transmitted to the packet-based network 804 for delivery to the network node 802 d. In some embodiments, the multiplexer 806 d receives the data from the network 804, determines the trunk group with which to associate the data (e.g., the logical trunk group 808 d), and routes the data and the call according to the control features associated with the logical trunk group 808 d. Similar processing occurs for data transmitted from the network node 802 d to the network node 802 a (e.g., data associated with return call transmission).
  • In some embodiments, the performance of logical trunk groups (e.g., the individual logical trunk groups 808 b-d and/or the hierarchical group 810) can be monitored by employing or analyzing performance statistics related to data associated with a call. Such statistics can be monitored, reported, and recorded for a given time interval that can be configured by a user. The performance statistics can relate to the performance of various network equipment, including IP trunk groups. The performance statistics allow an administrator to monitor network performance and adjust or configure resource parameters to improve network performance (e.g., to provide a quality-of-service guarantee or a higher quality of service with respect to calls). Some of the exemplary statistics and a method for calculating each statistic are listed below in Table 3. The call performance statistics in Table 3 include industry-standard statistics as they relate to management of PSTN trunk groups, for example the Telcordia GR-477 standard described above. One advantage realized is the application of statistics associated with PSTN trunk group management to management of packet-based networks, allowing PSTN administrators to configure and administer packet-based networks with minimal retraining. Another advantage is minimal retooling of network management operation tools for implementation of the features described herein with respect to logical trunk groups.
    TABLE 3
    Performance Statistic (Associated Calculation)
    Inbound Usage: The sum of call-seconds for every inbound call associated with a logical trunk group. This statistic relates to the period between resource allocation and release.
    Outbound Usage: The sum of call-seconds for every outbound call associated with a logical trunk group. This statistic relates to the period between resource allocation and release.
    Inbound Completed Calls: The sum of normal call completions (answered calls) for every inbound call associated with a logical trunk group.
    Outbound Completed Calls: The sum of normal call completions (answered calls) for every outbound call associated with a logical trunk group.
    Inbound Call Attempts: The sum of call initiations for every inbound call associated with a logical trunk group.
    Outbound Call Attempts: The sum of call initiations for every outbound call associated with a logical trunk group.
    Maximum Active Calls: The maximum number of calls in either direction associated with a logical trunk group. This value can include an upper limit on the total number of calls allowed by the logical trunk group.
    Call Setup Time: The sum of call-setup time, in hundredths of seconds, on every call associated with a logical trunk group.
    Calls Setup: The number of calls set up in both directions using a logical trunk group.
    Inbound & Outbound Call Failures due to No Routes: The current number of inbound or outbound failed calls because no route associated with a logical trunk group was available.
    Inbound & Outbound Call Failures due to No Resources: The current number of inbound or outbound failed calls because no resource associated with a logical trunk group was available.
    Inbound & Outbound Call Failures due to No Service: The current number of inbound or outbound failed calls because there was no available service associated with a logical trunk group.
    Inbound & Outbound Call Failures due to Invalid Call: The current number of inbound or outbound failed calls because an invalid call attempt was associated with the logical trunk group.
    Inbound & Outbound Call Failures due to Network Failure: The current number of inbound or outbound failed calls because a network failure was associated with a logical trunk group.
    Inbound & Outbound Call Failures due to Protocol Error: The current number of inbound or outbound failed calls because a protocol error was associated with the logical trunk group.
    Inbound & Outbound Call Failures due to Unspecified: The current number of inbound or outbound failed calls for an unknown reason associated with a logical trunk group.
    Routing Attempts: The number of routing requests associated with a logical trunk group.
    Failures due to No Unreserved Circuits: The current number of routing failures because no unreserved routes were available in association with a logical trunk group.
    SILC Count: The current number of calls cancelled due to selective incoming load control (“SILC”) associated with a logical trunk group.
    STRCANT Count: The current number of calls cancelled due to selective trunk reservation (“STR”) associated with a logical trunk group.
    STRSKIP Count: The current number of calls skipped (i.e., a call unloaded without being cancelled) due to STR associated with a logical trunk group.
    SKIP Count: The current number of calls skipped due to SKIP traffic control for a logical trunk group.
    CANT Count: The current number of calls cancelled due to CANT (i.e., cancel-to control, a control that cancels a percentage of all new data processing for an overloaded or impaired set of egress pathways) associated with a logical trunk group.
    CANF Count: The current number of calls cancelled due to CANF (i.e., cancel-from control, a control that cancels a percentage of all new data processing for an overloaded or impaired set of ingress pathways) associated with a logical trunk group.
    ACCCANT Count: The current number of calls cancelled due to automatic congestion control (“ACC”) associated with a logical trunk group.
    ACCSKIP Count: The current number of calls skipped due to ACC in association with a logical trunk group.
    Route Attempts IRR Count: The current number of reroute attempts due to immediate rerouting (“IRR”) associated with a logical trunk group.
    Route Attempts SIRR Count: The current number of reroute attempts due to immediate spray rerouting (“SIRR”) associated with a logical trunk group.
    Route Attempts ORR Count: The current number of reroute attempts due to overflow rerouting (“ORR”) associated with the logical trunk group.
    Route Attempts SORR Count: The current number of reroute attempts due to overflow spray rerouting (“SORR”) associated with a logical trunk group.
    Successful IRR Count: The current number of successful reroutes due to IRR associated with the logical trunk group.
    Successful SIRR Count: The current number of successful reroutes due to SIRR associated with a logical trunk group.
    Successful ORR Count: The current number of successful reroutes due to ORR associated with a logical trunk group.
    Successful SORR Count: The current number of successful reroutes due to SORR associated with a logical trunk group.
  • As can be seen with respect to Table 3, various performance statistics can be combined (e.g., determining a performance percentage associated with IRR routing from the number of attempts at rerouting and the number of successful reroutes). In some embodiments, the performance statistics can be detected or processed by a monitor, for example, a call processor (e.g., switching component 303 of FIG. 3) that is in communication with the logical trunk group. In other embodiments, the performance statistics can be detected or processed by a monitor remote from the logical trunk group, for example, a provisioning server or resource manager that is in communication with various network components (e.g., control component 306 of FIG. 3). In some embodiments, an administrator can monitor any of the statistics in Table 3 and adjust network or resource parameters based in part on the statistics to optimize network performance and quality of telephone calls.
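  • The combination of counters mentioned above, such as deriving an IRR performance percentage from reroute attempts and successful reroutes, can be sketched as follows (an illustrative helper; the function name and the None convention for an idle interval are assumptions):

```python
def reroute_success_percentage(route_attempts, successful_reroutes):
    """Combine two Table 3 counters for a logical trunk group (e.g.,
    Route Attempts IRR Count and Successful IRR Count) into a derived
    performance percentage over the configured monitoring interval."""
    if route_attempts == 0:
        return None  # no reroutes attempted in this interval
    return 100.0 * successful_reroutes / route_attempts
```

  An administrator or monitor could compute such derived percentages per interval and adjust resource parameters when, for example, the IRR success rate falls below a chosen threshold.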
  • In embodiments employing the real-time transport control protocol (“RTCP”) for data transmission control, statistics associated with media data of a call (e.g., teleconferencing or video data) are employed. Logical trunk groups and associated call processors can use the generated statistics to determine call quality (e.g., the quality of data transmission). Statistics and call quality information allow a user to configure or change various resource parameters to maximize network efficiency and reliability.
  • Packet-based telephony networks can employ various signaling protocols in connection with data transmission. For example, a packet-based network can employ a signaling protocol (e.g., generically gateway-to-gateway signaling protocol) within the packet network 302 (e.g., the core packet-based network). Telephony networks external to the packet-based network (e.g., PSTN 308 and PSTN 318 of FIG. 3) can use other signaling protocols, for example, signaling system 7 (“SS7”), session initiation protocol (“SIP”), session initiation protocol for telephones (“SIP-T”), or H.323 signaling protocols or any combination of these.
  • FIG. 9 depicts a system 900 including exemplary networks and devices for call processing. A network 902 includes a first call processor 904, a second call processor 906, and a policy processor 908. Either of the call processors 904, 906 can be, for example, a GSX9000, and the policy processor 908 can be, for example, a PSX policy server, both sold by Sonus Networks, Inc., of Chelmsford, Mass. The policy processor 908 can communicate with processor 904 or processor 906.
  • In some embodiments, the call processor 904 communicates data associated with a call to the call processor 906 using SIP signaling protocol. In an exemplary configuration for a call setup, a call arrives from a PSTN trunk group (not shown) at the call processor 904, and the call processor 904 signals a request to the policy processor 908. Such a request includes information (e.g., a characteristic) including ingress source (e.g., the PSTN or a PSTN trunk group), ingress gateway (e.g., the call processor 904), ingress trunk group (e.g., the ingress PSTN trunk group TG1), calling party number, called party number, among other information. Based in part on the characteristic, the policy processor 908 provides information associated with a set of pathways 910 associated with a logical trunk group through the network 902 to the call processor 904.
  • The information can include a destination processor (e.g., the call processor 906), an IP address of a destination processor, a destination trunk group (e.g., the PSTN trunk group that delivers the call to the recipient), information about the route, or any combination of these. In some embodiments, the policy processor 908 can specify the destination by name, an IP address, or the destination trunk group. Based on one or more of these characteristics, the call processor 904 associates the call with an IP trunk group.
  • For example, the call processor 904 can select an IP trunk group with which to associate the call data, based in part on the information provided by the policy processor 908, by using a most-specific address-matching algorithm on a selection table (e.g., as described above). After the IP trunk group has been selected (e.g., the IP trunk group IPTG1), control features associated with the selected trunk group (e.g., a limit on the number of calls) can be enforced with respect to the data (e.g., scalar, vector, or operational state parameters), as described above. If the IP trunk group (e.g., the resources associated with the IP trunk group, including the set of pathways) can accommodate the data, information associated with a call setup can be communicated to the call processor 906, which determines an ingress logical trunk group associated with the data (e.g., the IP trunk group IPTG2), for example by using the signaling IP address of the incoming setup.
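  • A most-specific address match of the kind described above can be sketched with a longest-prefix lookup. This is an illustrative sketch only; the selection-table contents and the use of CIDR prefixes are assumptions about how such a table might be represented:

```python
import ipaddress


def select_ip_trunk_group(signaling_ip, selection_table):
    """Pick the IP trunk group whose prefix most specifically matches
    the signaling address (the destination address on egress, or the
    incoming setup's address on ingress): a longest-prefix match over
    a table mapping CIDR prefixes to trunk group names."""
    address = ipaddress.ip_address(signaling_ip)
    best_net, best_tg = None, None
    for prefix, trunk_group in selection_table.items():
        network = ipaddress.ip_network(prefix)
        if address in network and (best_net is None
                                   or network.prefixlen > best_net.prefixlen):
            best_net, best_tg = network, trunk_group
    return best_tg
```

  For instance, with entries for both a broad prefix and a narrower one containing the same address, the narrower (more specific) prefix's trunk group wins.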
  • After the call processor 906 selects the IP trunk group (e.g., the IP trunk group IPTG2), control features associated with that IP trunk group (e.g., a bandwidth limit) are implemented with respect to the data, as described above. At this point, two sets of control features have been enforced with respect to the data. If the ingress IP trunk group can accommodate the data, the processor 906 can admit the data for processing and attempt to transmit the data to a destination, e.g., on a PSTN trunk group (not shown) associated with the egress PSTN trunk group TG2. The call processor 906 can select the destination based on a characteristic associated with the destination, as described above. Performance measurements or traffic statistics can be tracked using the associated IP trunk groups.
  • While this embodiment has been described with respect to PSTN trunk groups, the features of the invention also apply to SIP signaling or H.323 signaling with an “Invite” to a call processor or SIP server rather than a direct communication as discussed with respect to a PSTN trunk group (the call processor 904 can receive ingress call legs from an IP network and/or the call processor 906 can transmit egress call legs to an IP network). The call processors 904 and 906 can also communicate with logical trunk groups associated with SIP signaling. While the invention has been described with respect to packetized data associated with a telephone call, the principles and concepts described herein can apply more generally to any time-sensitive data.
  • The above-described techniques can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The implementation can be as a computer program product, e.g., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
  • Method steps can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus can be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). Modules can refer to portions of the computer program and/or the processor/special circuitry that implements that functionality.
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. Data transmission and instructions can also occur over a communications network. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • The terms “module” and “function,” as used herein, mean, but are not limited to, a software or hardware component which performs certain tasks. A module may advantageously be configured to reside on an addressable storage medium and configured to execute on one or more processors. A module may be fully or partially implemented with a general purpose integrated circuit (“IC”), FPGA, or ASIC. Thus, a module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and modules may be combined into fewer components and modules or further separated into additional components and modules. Additionally, the components and modules may advantageously be implemented on many different platforms, including computers, computer servers, data communications infrastructure equipment such as application-enabled switches or routers, or telecommunications infrastructure equipment, such as public or private telephone switches or private branch exchanges (“PBX”). In any of these cases, implementation may be achieved either by writing applications that are native to the chosen platform, or by interfacing the platform to one or more external application engines.
  • To provide for interaction with a user, the above described techniques can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer (e.g., interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • The above described techniques can be implemented in a distributed computing system that includes a back-end component, e.g., a data server, and/or a middleware component, e.g., an application server, and/or a front-end component, e.g., a client computer having a graphical user interface and/or a Web browser through which a user can interact with an example implementation, or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communications, e.g., a communications network. Examples of communications networks, also referred to as communications channels, include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet, and include both wired and wireless networks. In some examples, communications networks can feature virtual networks or sub-networks such as a virtual local area network (“VLAN”). Unless clearly indicated otherwise, communications networks can also include all or a portion of the PSTN, for example, a portion owned by a specific carrier.
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communications network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • The invention has been described in terms of particular embodiments. The alternatives described herein are examples for illustration only and are not intended to be limiting. The steps of the invention can be performed in a different order and still achieve desirable results. Other embodiments are within the scope of the following claims.

Claims (21)

1. A method for managing calls through a packet-based network without knowledge of the topology of the packet-based network, the method comprising:
defining a plurality of logical trunk groups for a first media gateway in communication with a packet-based network, each of the logical trunk groups being associated with one or more media gateways in communication with the first media gateway over the packet-based network;
associating packet data with a particular call, the packet data being received or transmitted by the first media gateway;
associating the packet data with a first logical trunk group of the plurality of logical trunk groups; and
collecting statistical data associated with the first logical trunk group.
2. The method of claim 1, wherein associating the packet data with a first logical trunk group of the plurality of logical trunk groups comprises associating the packet data with a first logical trunk group based at least in part on a characteristic, the characteristic comprising a name, an Internet Protocol (IP) address, a signaling protocol, a transport service access point, a VLAN identifier, or any combination thereof.
3. The method of claim 2, wherein the characteristic is associated with a data source, a data destination, a logical trunk group, or any combination thereof.
4. The method of claim 2, wherein the characteristic is associated with a data source and a data destination.
5. The method of claim 1, further comprising selecting the first logical trunk group for association with the packet data based in part on a network address or a network mask associated with the packet-based network, a second packet-based network, or both.
6. The method of claim 5, wherein selecting the logical trunk group is based on a most specific match algorithm.
7. The method of claim 1, further comprising associating a resource parameter with the first logical trunk group, wherein the resource parameter comprises a scalar parameter including a call capacity, signal processing resources, a data packet volume, a bandwidth, or any combination thereof.
8. The method of claim 1, further comprising associating a resource parameter with the first logical trunk group, wherein the resource parameter comprises a vector parameter including a directional characteristic.
9. The method of claim 1, further comprising associating a resource parameter with the first logical trunk group, wherein the resource parameter comprises an operational state parameter including in-service or out-of-service, or any combination thereof.
10. The method of claim 1, further comprising associating a resource parameter with the first logical trunk group, wherein the resource parameter includes at least one of a scalar parameter, a vector parameter, an operational state parameter, or any combination thereof.
11. The method of claim 1, wherein a first resource parameter is associated with the first logical trunk group and a second resource parameter is associated with a second logical trunk group, further comprising associating the first logical trunk group and the second logical trunk group with a hierarchical group, the hierarchical group being associated with at least a portion of the first resource parameter, the second resource parameter, or both.
12. The method of claim 1, further comprising:
associating a first hierarchical group comprising the first logical trunk group with a hierarchical combination, wherein a trunk resource parameter associated with the first logical trunk group is determined at least in part based on a combination resource parameter associated with the hierarchical combination or a group resource parameter associated with the first hierarchical group, or both.
13. The method of claim 1, further comprising:
associating a second set of call data, received from a second data source or transmitted to a second data destination, through the packet-based network, with a second logical trunk group, wherein a network node comprises one of the data source, the second data source, the data destination, the second data destination, or any combination thereof; and
associating the first logical trunk group and the second logical trunk group with a combined communication channel in communication with the network node.
14. The method of claim 13, wherein the combined communication channel is in communication with a second packet-based network.
15. A method comprising:
associating call data, received from a data source or transmitted to a data destination, through a packet-based network, with a logical trunk group based at least in part on a characteristic of the call data, the data source, the data destination, or any combination thereof, the characteristic comprising a name, an Internet Protocol (IP) address, a signaling protocol, a transport service access point, a VLAN identifier, or any combination thereof.
16. The method of claim 15, wherein the data source or the data destination comprises a gateway, a call processor, a switch, a trunk group, or any combination thereof.
17. A method for managing calls through a packet-based network without knowledge of the topology of the packet-based network, comprising:
defining a first logical trunk group for a first media gateway in communication with the packet-based network, the first logical trunk group being associated with a second media gateway in communication with the first media gateway over the packet-based network;
associating packets corresponding to a call being routed to the second media gateway with the first logical trunk group;
generating a first set of statistics associated with the first logical trunk group;
defining a second logical trunk group for the second media gateway in communication with the packet-based network, the second logical trunk group being associated with the first media gateway in communication with the second media gateway over the packet network;
associating packets corresponding to a call being routed from the first media gateway with the second logical trunk group;
generating a second set of statistics associated with the second logical trunk group; and
associating a network quality with the packet-based network based in part on the first or second set of statistics or both.
18. The method of claim 17, further comprising:
managing calls between the first media gateway and the second media gateway based on the network quality.
19. A system comprising:
a packet-based network;
a first media gateway in communication with the packet-based network, the first media gateway including a plurality of logical trunk groups, wherein each of the plurality of logical trunk groups is associated with one or more additional media gateways over the packet-based network;
a first module adapted to associate packet data associated with a call with a first logical trunk group selected from the plurality of logical trunk groups for transmission through the packet-based network; and
a collection module adapted to collect statistical data associated with the first logical trunk group.
20. The system of claim 19, further comprising:
an allocation module adapted to associate a resource parameter with the first logical trunk group, wherein the resource parameter comprises a scalar parameter, a vector parameter, an operational state parameter or any combination thereof.
21. A computer program product, tangibly embodied in an information carrier, the computer program product including instructions being operable to cause data processing apparatus to:
define a plurality of logical trunk groups for a first media gateway in communication with a packet-based network, each of the plurality of logical trunk groups being associated with one or more media gateways in communication with the first media gateway over the packet-based network;
associate packet data with a particular call, the packet data received or transmitted by the first media gateway;
associate the packet data with a first logical trunk group selected from the plurality of logical trunk groups; and
collect statistics associated with the first logical trunk group.
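The per-call bookkeeping of claims 1, 5, and 6 — associating packet data with the logical trunk group whose configured network most specifically matches the peer address, then collecting statistics for that group — can be illustrated with a minimal Python sketch. The class and function names (`LogicalTrunkGroup`, `select_trunk_group`, `account_packet`) are hypothetical, not taken from the patent, and the "most specific match" is modeled as a longest-prefix match on IP networks:

```python
import ipaddress
from collections import defaultdict

class LogicalTrunkGroup:
    """Hypothetical model of a logical trunk group: a name, the network
    (address/mask) of its peer media gateways, and collected statistics."""
    def __init__(self, name, network):
        self.name = name
        self.network = ipaddress.ip_network(network)
        self.stats = defaultdict(int)

def select_trunk_group(groups, peer_ip):
    """Select the group whose network contains peer_ip, preferring the
    most specific (longest prefix) match, per claims 5-6."""
    addr = ipaddress.ip_address(peer_ip)
    candidates = [g for g in groups if addr in g.network]
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.network.prefixlen)

def account_packet(groups, peer_ip, nbytes):
    """Associate a packet with a trunk group and update its statistics."""
    group = select_trunk_group(groups, peer_ip)
    if group is not None:
        group.stats["packets"] += 1
        group.stats["bytes"] += nbytes
    return group

groups = [
    LogicalTrunkGroup("TG-EAST", "10.1.0.0/16"),
    LogicalTrunkGroup("TG-EAST-CORE", "10.1.2.0/24"),  # more specific
]
g = account_packet(groups, "10.1.2.7", 200)
```

Here a packet to 10.1.2.7 matches both configured networks, and the /24 group wins as the more specific match; no knowledge of the packet network's internal topology is needed.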
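Claims 7 through 10 associate resource parameters with a logical trunk group: scalar parameters such as call capacity, vector parameters carrying a directional characteristic, and an operational state parameter. A small sketch, with hypothetical names (`TrunkGroupResources`, `admit_call`) and a simple admission check as an assumed use of these parameters:

```python
from dataclasses import dataclass, field

@dataclass
class TrunkGroupResources:
    """Hypothetical resource parameters for a logical trunk group:
    a scalar call capacity, a per-direction bandwidth vector, and an
    operational state (in-service / out-of-service)."""
    call_capacity: int                                       # scalar parameter
    bandwidth_kbps: dict = field(
        default_factory=lambda: {"ingress": 0, "egress": 0}  # vector (directional)
    )
    in_service: bool = True                                  # operational state
    active_calls: int = 0

    def admit_call(self) -> bool:
        """Admit a new call only if the group is in service and under capacity."""
        if self.in_service and self.active_calls < self.call_capacity:
            self.active_calls += 1
            return True
        return False

res = TrunkGroupResources(call_capacity=2,
                          bandwidth_kbps={"ingress": 1024, "egress": 512})
admitted = [res.admit_call() for _ in range(3)]  # third call exceeds capacity
```

Marking the group out of service (`in_service = False`) rejects all further calls regardless of remaining capacity, mirroring the operational state parameter of claim 9.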
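Claims 17 and 18 derive a network quality from the statistics generated at trunk groups on each side of the packet network and then manage calls based on that quality. One way this could look, as a sketch only — the quality metric (delivered-packet fraction) and the function names (`network_quality`, `route_call`) are assumptions, not the patent's method:

```python
def network_quality(stats_a, stats_b):
    """Hypothetical quality score for the packet network, computed from
    statistics collected at the two trunk groups: the fraction of
    packets sent by one gateway that arrived at the other."""
    sent = stats_a.get("packets_sent", 0)
    received = stats_b.get("packets_received", 0)
    if sent == 0:
        return 1.0  # no traffic observed; assume nominal quality
    return min(received / sent, 1.0)

def route_call(quality, threshold=0.95):
    """Manage calls based on network quality: use the packet network
    only when quality meets a threshold, else pick an alternate route."""
    return "packet-network" if quality >= threshold else "alternate-route"

q = network_quality({"packets_sent": 1000}, {"packets_received": 990})
```

In practice the statistics could also feed jitter, delay, or loss measures; the point is that both directions are observed via the paired logical trunk groups defined at each gateway.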
US11/238,663 2004-09-29 2005-09-29 Defining logical trunk groups in a packet-based network Abandoned US20060072555A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/238,663 US20060072555A1 (en) 2004-09-29 2005-09-29 Defining logical trunk groups in a packet-based network

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61418204P 2004-09-29 2004-09-29
US11/238,663 US20060072555A1 (en) 2004-09-29 2005-09-29 Defining logical trunk groups in a packet-based network

Publications (1)

Publication Number Publication Date
US20060072555A1 true US20060072555A1 (en) 2006-04-06

Family

ID=36143022

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/238,639 Abandoned US20060072554A1 (en) 2004-09-29 2005-09-29 Hierarchically organizing logical trunk groups in a packet-based network
US11/238,682 Active 2028-08-13 US7602710B2 (en) 2004-09-29 2005-09-29 Controlling time-sensitive data in a packet-based network
US11/238,663 Abandoned US20060072555A1 (en) 2004-09-29 2005-09-29 Defining logical trunk groups in a packet-based network

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/238,639 Abandoned US20060072554A1 (en) 2004-09-29 2005-09-29 Hierarchically organizing logical trunk groups in a packet-based network
US11/238,682 Active 2028-08-13 US7602710B2 (en) 2004-09-29 2005-09-29 Controlling time-sensitive data in a packet-based network

Country Status (6)

Country Link
US (3) US20060072554A1 (en)
EP (3) EP1794959A4 (en)
JP (1) JP2008515348A (en)
AT (1) ATE523018T1 (en)
CA (1) CA2581189A1 (en)
WO (1) WO2006039344A2 (en)

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040252681A1 (en) * 2003-02-21 2004-12-16 Rafi Rabipour Data communication apparatus and method
US20050195741A1 (en) * 2004-03-03 2005-09-08 Doshi Bharat T. Network quality of service management
US20060072554A1 (en) * 2004-09-29 2006-04-06 Fardad Farahmand Hierarchically organizing logical trunk groups in a packet-based network
US20070104114A1 (en) * 2004-03-19 2007-05-10 Nortel Networks Limited Providing a capability list of a predefined format in a communications network
US20070116018A1 (en) * 2005-11-18 2007-05-24 Santera Systems, Inc. Methods, systems, and computer program products for distributed resource allocation among clustered media gateways in a communications network
US20080002576A1 (en) * 2006-06-30 2008-01-03 Bugenhagen Michael K System and method for resetting counters counting network performance information at network communications devices on a packet network
US20080005156A1 (en) * 2006-06-30 2008-01-03 Edwards Stephen K System and method for managing subscriber usage of a communications network
WO2008013649A2 (en) * 2006-06-30 2008-01-31 Embarq Holdings Company Llc System and method for call routing based on transmission performance of a packet network
US20080049615A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for dynamically shaping network traffic
US20080049641A1 (en) * 2006-08-22 2008-02-28 Edwards Stephen K System and method for displaying a graph representative of network performance over a time period
US20080052401A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K Pin-hole firewall for communicating data packets on a packet network
US20080049753A1 (en) * 2006-08-22 2008-02-28 Heinze John M System and method for load balancing network resources using a connection admission control engine
US20080049630A1 (en) * 2006-08-22 2008-02-28 Kozisek Steven E System and method for monitoring and optimizing network performance to a wireless device
US20080049776A1 (en) * 2006-08-22 2008-02-28 Wiley William L System and method for using centralized network performance tables to manage network communications
US20080049775A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for monitoring and optimizing network performance with vector performance tables and engines
US20080049625A1 (en) * 2006-08-22 2008-02-28 Edwards Stephen K System and method for collecting and managing network performance information
US20080049629A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for monitoring data link layer devices and optimizing interlayer network performance
US20080049746A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for routing data on a packet network
US20080052628A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US20080049640A1 (en) * 2006-08-22 2008-02-28 Heinz John M System and method for provisioning resources of a packet network based on collected network performance information
US20080049632A1 (en) * 2006-08-22 2008-02-28 Ray Amar N System and method for adjusting the window size of a TCP packet through remote network elements
US20080049649A1 (en) * 2006-08-22 2008-02-28 Kozisek Steven E System and method for selecting an access point
US20080049757A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for synchronizing counters on an asynchronous packet communications network
US20080056244A1 (en) * 2006-08-30 2008-03-06 Level 3 Communications, Llc Internet protocol trunk groups
US20080095049A1 (en) * 2006-10-19 2008-04-24 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US20080137649A1 (en) * 2006-12-07 2008-06-12 Nortel Networks Limited Techniques for implementing logical trunk groups with session initiation protocol (sip)
US20080167846A1 (en) * 2006-10-25 2008-07-10 Embarq Holdings Company, Llc System and method for regulating messages between networks
US20080240079A1 (en) * 2004-03-19 2008-10-02 Nortel Networks Limited Communicating Processing Capabilities Along a Communications Path
WO2009024079A1 (en) * 2007-08-19 2009-02-26 Huawei Technologies Co., Ltd. Method and system for realizing communication called and release, and equipment thereof
US20090129580A1 (en) * 2007-11-19 2009-05-21 Level 3 Communications Llc Geographic trunk groups
US20090257350A1 (en) * 2008-04-09 2009-10-15 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US20100299605A1 (en) * 2002-11-05 2010-11-25 At&T Intellectual Property I, L.P. (Formerly Known As Sbc Properties, L.P.) User Interface Design for Telecommunications Systems
US20110032927A1 (en) * 2009-08-04 2011-02-10 Weisheng Chen Methods, systems, and computer readable media for intelligent optimization of digital signal processor (dsp) resource utilization in a media gateway
US20110122773A1 (en) * 2009-11-24 2011-05-26 At&T Intellectual Property Corporation Method, system, and computer program product, for correlating special service impacting events
US8111692B2 (en) 2007-05-31 2012-02-07 Embarq Holdings Company Llc System and method for modifying network traffic
US8125897B2 (en) 2006-08-22 2012-02-28 Embarq Holdings Company Lp System and method for monitoring and optimizing network performance with user datagram protocol network performance information packets
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US8144586B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for controlling network bandwidth with a connection admission control engine
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US8194555B2 (en) 2006-08-22 2012-06-05 Embarq Holdings Company, Llc System and method for using distributed network performance information tables to manage network communications
US8199653B2 (en) 2006-08-22 2012-06-12 Embarq Holdings Company, Llc System and method for communicating network performance information over a packet network
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US8228791B2 (en) 2006-08-22 2012-07-24 Embarq Holdings Company, Llc System and method for routing communications between packet networks based on intercarrier agreements
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US8346239B2 (en) 2006-12-28 2013-01-01 Genband Us Llc Methods, systems, and computer program products for silence insertion descriptor (SID) conversion
US8407765B2 (en) 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8576722B2 (en) 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US9003021B2 (en) 2011-12-27 2015-04-07 Solidfire, Inc. Management of storage system access based on client performance and cluster health
US9054992B2 (en) * 2011-12-27 2015-06-09 Solidfire, Inc. Quality of service policy sets
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US9671960B2 (en) 2014-09-12 2017-06-06 Netapp, Inc. Rate matching technique for balancing segment cleaning and I/O workload
US9710317B2 (en) 2015-03-30 2017-07-18 Netapp, Inc. Methods to identify, handle and recover from suspect SSDS in a clustered flash array
US9720601B2 (en) 2015-02-11 2017-08-01 Netapp, Inc. Load balancing technique for a storage array
US9740566B2 (en) 2015-07-31 2017-08-22 Netapp, Inc. Snapshot creation workflow
US9762460B2 (en) 2015-03-24 2017-09-12 Netapp, Inc. Providing continuous context for operational information of a storage system
US9798728B2 (en) 2014-07-24 2017-10-24 Netapp, Inc. System performing data deduplication using a dense tree data structure
US9836229B2 (en) 2014-11-18 2017-12-05 Netapp, Inc. N-way merge technique for updating volume metadata in a storage I/O stack
US9838269B2 (en) 2011-12-27 2017-12-05 Netapp, Inc. Proportional quality of service based on client usage and system metrics
US10133511B2 (en) 2014-09-12 2018-11-20 Netapp, Inc Optimized segment cleaning technique
US10929022B2 (en) 2016-04-25 2021-02-23 Netapp, Inc. Space savings reporting for storage system supporting snapshot and clones
US10997098B2 (en) 2016-09-20 2021-05-04 Netapp, Inc. Quality of service policy sets
US11379119B2 (en) 2010-03-05 2022-07-05 Netapp, Inc. Writing data in a distributed data storage system
US11386120B2 (en) 2014-02-21 2022-07-12 Netapp, Inc. Data syncing in a distributed system

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7796524B1 (en) * 2000-10-06 2010-09-14 O'connell David Monitoring quality of service in packet-based communications
US20150341812A1 (en) 2003-08-29 2015-11-26 Ineoquest Technologies, Inc. Video quality monitoring
US9367614B2 (en) * 2009-07-30 2016-06-14 Ineoquest Technologies, Inc. System and method of collecting video content information
US7570601B2 (en) * 2005-04-06 2009-08-04 Broadcom Corporation High speed autotrunking
JP2007207328A (en) * 2006-01-31 2007-08-16 Toshiba Corp Information storage medium, program, information reproducing method, information reproducing device, data transfer method, and data processing method
US7995498B2 (en) * 2006-02-13 2011-08-09 Cisco Technology, Inc. Method and system for providing configuration of network elements through hierarchical inheritance
US7826365B2 (en) * 2006-09-12 2010-11-02 International Business Machines Corporation Method and apparatus for resource allocation for stream data processing
CN100563212C (en) * 2006-12-13 2009-11-25 华为技术有限公司 The methods, devices and systems of routing and Flow Control
EP1944922A1 (en) * 2007-01-10 2008-07-16 Alcatel Lucent Method of providing QoS
US7924853B1 (en) * 2007-10-04 2011-04-12 Sprint Communications Company L.P. Method of maintaining a communication network
JP2011509567A (en) * 2007-12-18 2011-03-24 ゼットティーイー (ユーエスエー) インコーポレイテッド Efficient radio resource allocation method and system
US8396053B2 (en) * 2008-04-24 2013-03-12 International Business Machines Corporation Method and apparatus for VLAN-based selective path routing
US8428609B2 (en) * 2008-05-02 2013-04-23 Pine Valley Investments, Inc. System and method for managing communications in cells within a cellular communication system
US8040796B2 (en) * 2008-07-31 2011-10-18 Alcatel Lucent Voice over IP system recovery apparatus for service and packet groups based on failure detection thresholds
US8942217B2 (en) * 2009-10-12 2015-01-27 Dell Products L.P. System and method for hierarchical link aggregation
US8966291B2 (en) * 2010-12-23 2015-02-24 Qualcomm Incorporated Method and apparatus for reducing dynamic power within system-on-a-chip routing resources
US9210046B2 (en) 2011-03-14 2015-12-08 Hewlett Packard Enterprise Development Lp Zone-based network traffic analysis
US20140258509A1 (en) * 2013-03-05 2014-09-11 Aerohive Networks, Inc. Systems and methods for context-based network data analysis and monitoring
US9769014B2 (en) * 2014-08-05 2017-09-19 Cisco Technology, Inc. Network link use determination based on network error detection
CN106331997B (en) * 2015-07-03 2019-06-14 普天信息技术有限公司 A method of group membership registers to a group main control server when classification networking
US10447606B2 (en) 2017-04-12 2019-10-15 General Electric Company Time-sensitive networking differentiation of traffic based upon content
US10814893B2 (en) 2016-03-21 2020-10-27 Ge Global Sourcing Llc Vehicle control system
US11072356B2 (en) 2016-06-30 2021-07-27 Transportation Ip Holdings, Llc Vehicle control system
CN111740903B (en) * 2017-04-11 2022-10-11 华为技术有限公司 Data transmission method and device
EP4121856A4 (en) * 2020-03-20 2023-09-20 Section.IO Incorporated Systems, methods, computing platforms, and storage media for administering a distributed edge computing system utilizing an adaptive edge engine


Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2015248C (en) * 1989-06-30 1996-12-17 Gerald R. Ash Fully shared communications network
CN1338170A (en) 1998-12-29 2002-02-27 尤尼斯菲亚撒鲁森公司 Method and apparatus for provisioning inter-machine trunks
US6744768B2 (en) 1999-07-14 2004-06-01 Telefonaktiebolaget Lm Ericsson Combining narrowband applications with broadband transport
US6914120B2 (en) * 2002-11-13 2005-07-05 Eastman Chemical Company Method for making isosorbide containing polyesters
SE0203362D0 (en) * 2002-11-13 2002-11-13 Reddo Networks Ab A method and apparatus for transferring data packets in a router
US20060187822A1 (en) * 2003-07-03 2006-08-24 Zohar Peleg Method and apparatus for partitioning allocation and management of jitter buffer memory for tdm circuit emulation applications
US7706290B2 (en) * 2004-09-14 2010-04-27 Genband Inc. Object-based operation and maintenance (OAM) systems and related methods and computer program products

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6275493B1 (en) * 1998-04-02 2001-08-14 Nortel Networks Limited Method and apparatus for caching switched virtual circuits in an ATM network
US20020093947A1 (en) * 1998-04-30 2002-07-18 Sbc Technology Resources, Inc. ATM-based distributed virtual tandem switching system
US6529499B1 (en) * 1998-09-22 2003-03-04 Lucent Technologies Inc. Method for providing quality of service for delay sensitive traffic over IP networks
US6141342A (en) * 1998-12-02 2000-10-31 Nortel Networks Corporation Apparatus and method for completing inter-switch calls using large trunk groups
US6873689B1 (en) * 1999-06-26 2005-03-29 International Business Machines Corporation Voice processing system
US20020101860A1 (en) * 1999-11-10 2002-08-01 Thornton Timothy R. Application for a voice over IP (VoIP) telephony gateway and methods for use therein
US6765866B1 (en) * 2000-02-29 2004-07-20 Mosaid Technologies, Inc. Link aggregation
US20020120759A1 (en) * 2001-02-23 2002-08-29 Stefano Faccin IP based service architecture
US20020141386A1 (en) * 2001-03-29 2002-10-03 Minert Brian D. System, apparatus and method for voice over internet protocol telephone calling using enhanced signaling packets and localized time slot interchanging
US20030156583A1 (en) * 2002-02-13 2003-08-21 Walter Prager System and method for parallel connection selection in a communication network
US7944817B1 (en) * 2002-03-26 2011-05-17 Nortel Networks Limited Hierarchical virtual trunking over packet networks
US7248565B1 (en) * 2002-05-16 2007-07-24 Cisco Technology, Inc. Arrangement for managing multiple gateway trunk groups for voice over IP networks
US6914973B2 (en) * 2002-06-25 2005-07-05 Tekelec Methods and systems for improving trunk utilization for calls to ported numbers
US20040032860A1 (en) * 2002-08-19 2004-02-19 Satish Mundra Quality of voice calls through voice over IP gateways
US7639664B2 (en) * 2003-05-30 2009-12-29 Alcatel-Lucent Usa Inc. Dynamic management of trunk group members
US20050052996A1 (en) * 2003-09-09 2005-03-10 Lucent Technologies Inc. Method and apparatus for management of voice-over IP communications
US6977933B2 (en) * 2003-10-06 2005-12-20 Tekelec Methods and systems for providing session initiation protocol (SIP) trunk groups
US20050094623A1 (en) * 2003-10-31 2005-05-05 D'eletto Robert Apparatus and method for voice over IP traffic separation and factor determination
US7619974B2 (en) * 2003-10-31 2009-11-17 Brocade Communication Systems, Inc. Frame traffic balancing across trunk groups
US7496184B2 (en) * 2003-11-07 2009-02-24 Telarus, Inc. System and method to determine and deliver quotes for distance-sensitive communication links from multiple service providers
US7307985B1 (en) * 2003-12-04 2007-12-11 Sprint Communications Company L.P. Method and system for automatically routing a dial plan through a communications network
US20050232251A1 (en) * 2004-04-14 2005-10-20 Nortel Networks Limited Personal communication device having multiple user IDs
US20060072554A1 (en) * 2004-09-29 2006-04-06 Fardad Farahmand Hierarchically organizing logical trunk groups in a packet-based network
US20060072593A1 (en) * 2004-09-29 2006-04-06 Grippo Ronald V Controlling time-sensitive data in a packet-based network

Cited By (178)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100299605A1 (en) * 2002-11-05 2010-11-25 At&T Intellectual Property I, L.P. (Formerly Known As Sbc Properties, L.P.) User Interface Design for Telecommunications Systems
US8254372B2 (en) 2003-02-21 2012-08-28 Genband Us Llc Data communication apparatus and method
US20040252681A1 (en) * 2003-02-21 2004-12-16 Rafi Rabipour Data communication apparatus and method
US7609637B2 (en) * 2004-03-03 2009-10-27 Alcatel-Lucent Usa Inc. Network quality of service management
US20050195741A1 (en) * 2004-03-03 2005-09-08 Doshi Bharat T. Network quality of service management
US20070104114A1 (en) * 2004-03-19 2007-05-10 Nortel Networks Limited Providing a capability list of a predefined format in a communications network
US8027265B2 (en) 2004-03-19 2011-09-27 Genband Us Llc Providing a capability list of a predefined format in a communications network
US20080240079A1 (en) * 2004-03-19 2008-10-02 Nortel Networks Limited Communicating Processing Capabilities Along a Communications Path
US7990865B2 (en) 2004-03-19 2011-08-02 Genband Us Llc Communicating processing capabilities along a communications path
US20060072593A1 (en) * 2004-09-29 2006-04-06 Grippo Ronald V Controlling time-sensitive data in a packet-based network
US7602710B2 (en) 2004-09-29 2009-10-13 Sonus Networks, Inc. Controlling time-sensitive data in a packet-based network
US20060072554A1 (en) * 2004-09-29 2006-04-06 Fardad Farahmand Hierarchically organizing logical trunk groups in a packet-based network
US7792096B2 (en) * 2005-11-18 2010-09-07 Genband Us Llc Methods, systems, and computer program products for distributed resource allocation among clustered media gateways in a communications network
US20070116018A1 (en) * 2005-11-18 2007-05-24 Santera Systems, Inc. Methods, systems, and computer program products for distributed resource allocation among clustered media gateways in a communications network
US10560494B2 (en) 2006-06-30 2020-02-11 Centurylink Intellectual Property Llc Managing voice over internet protocol (VoIP) communications
WO2008013649A3 (en) * 2006-06-30 2008-04-03 Embarq Holdings Co Llc System and method for call routing based on transmission performance of a packet network
US9094257B2 (en) 2006-06-30 2015-07-28 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US9054915B2 (en) 2006-06-30 2015-06-09 Centurylink Intellectual Property Llc System and method for adjusting CODEC speed in a transmission path during call set-up due to reduced transmission performance
US8976665B2 (en) 2006-06-30 2015-03-10 Centurylink Intellectual Property Llc System and method for re-routing calls
US8717911B2 (en) 2006-06-30 2014-05-06 Centurylink Intellectual Property Llc System and method for collecting network performance information
US8570872B2 (en) 2006-06-30 2013-10-29 Centurylink Intellectual Property Llc System and method for selecting network ingress and egress
US8488447B2 (en) 2006-06-30 2013-07-16 Centurylink Intellectual Property Llc System and method for adjusting code speed in a transmission path during call set-up due to reduced transmission performance
US8477614B2 (en) 2006-06-30 2013-07-02 Centurylink Intellectual Property Llc System and method for routing calls if potential call paths are impaired or congested
US9154634B2 (en) 2006-06-30 2015-10-06 Centurylink Intellectual Property Llc System and method for managing network communications
US9549004B2 (en) 2006-06-30 2017-01-17 Centurylink Intellectual Property Llc System and method for re-routing calls
US9118583B2 (en) 2006-06-30 2015-08-25 Centurylink Intellectual Property Llc System and method for re-routing calls
US8184549B2 (en) 2006-06-30 2012-05-22 Embarq Holdings Company, LLP System and method for selecting network egress
US9749399B2 (en) 2006-06-30 2017-08-29 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US8000318B2 (en) * 2006-06-30 2011-08-16 Embarq Holdings Company, Llc System and method for call routing based on transmission performance of a packet network
US9838440B2 (en) 2006-06-30 2017-12-05 Centurylink Intellectual Property Llc Managing voice over internet protocol (VoIP) communications
US10230788B2 (en) 2006-06-30 2019-03-12 Centurylink Intellectual Property Llc System and method for selecting a content delivery network
US20080279183A1 (en) * 2006-06-30 2008-11-13 Wiley William L System and method for call routing based on transmission performance of a packet network
US7948909B2 (en) 2006-06-30 2011-05-24 Embarq Holdings Company, Llc System and method for resetting counters counting network performance information at network communications devices on a packet network
WO2008013649A2 (en) * 2006-06-30 2008-01-31 Embarq Holdings Company Llc System and method for call routing based on transmission performance of a packet network
US20080005156A1 (en) * 2006-06-30 2008-01-03 Edwards Stephen K System and method for managing subscriber usage of a communications network
US20080002576A1 (en) * 2006-06-30 2008-01-03 Bugenhagen Michael K System and method for resetting counters counting network performance information at network communications devices on a packet network
US7765294B2 (en) 2006-06-30 2010-07-27 Embarq Holdings Company, Llc System and method for managing subscriber usage of a communications network
US8407765B2 (en) 2006-08-22 2013-03-26 Centurylink Intellectual Property Llc System and method for restricting access to network performance information tables
US20080052628A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US20080049615A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for dynamically shaping network traffic
US10469385B2 (en) 2006-08-22 2019-11-05 Centurylink Intellectual Property Llc System and method for improving network performance using a connection admission control engine
US10298476B2 (en) 2006-08-22 2019-05-21 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US7808918B2 (en) 2006-08-22 2010-10-05 Embarq Holdings Company, Llc System and method for dynamically shaping network traffic
US20080049641A1 (en) * 2006-08-22 2008-02-28 Edwards Stephen K System and method for displaying a graph representative of network performance over a time period
US10075351B2 (en) 2006-08-22 2018-09-11 Centurylink Intellectual Property Llc System and method for improving network performance
US7843831B2 (en) 2006-08-22 2010-11-30 Embarq Holdings Company Llc System and method for routing data on a packet network
US9992348B2 (en) 2006-08-22 2018-06-05 Centurylink Intellectual Property Llc System and method for establishing a call on a packet network
US9929923B2 (en) 2006-08-22 2018-03-27 Centurylink Intellectual Property Llc System and method for provisioning resources of a packet network based on collected network performance information
US7889660B2 (en) 2006-08-22 2011-02-15 Embarq Holdings Company, Llc System and method for synchronizing counters on an asynchronous packet communications network
US7940735B2 (en) 2006-08-22 2011-05-10 Embarq Holdings Company, Llc System and method for selecting an access point
US20080052401A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K Pin-hole firewall for communicating data packets on a packet network
US9832090B2 (en) 2006-08-22 2017-11-28 Centurylink Intellectual Property Llc System and method for compiling network performance information for communications with customer premise equipment
US9813320B2 (en) 2006-08-22 2017-11-07 Centurylink Intellectual Property Llc System and method for generating a graphical user interface representative of network performance
US9806972B2 (en) 2006-08-22 2017-10-31 Centurylink Intellectual Property Llc System and method for monitoring and altering performance of a packet network
US20080049753A1 (en) * 2006-08-22 2008-02-28 Heinze John M System and method for load balancing network resources using a connection admission control engine
US9712445B2 (en) 2006-08-22 2017-07-18 Centurylink Intellectual Property Llc System and method for routing data on a packet network
US8015294B2 (en) 2006-08-22 2011-09-06 Embarq Holdings Company, LP Pin-hole firewall for communicating data packets on a packet network
US9660917B2 (en) 2006-08-22 2017-05-23 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US9661514B2 (en) 2006-08-22 2017-05-23 Centurylink Intellectual Property Llc System and method for adjusting communication parameters
US8040811B2 (en) 2006-08-22 2011-10-18 Embarq Holdings Company, Llc System and method for collecting and managing network performance information
US8064391B2 (en) 2006-08-22 2011-11-22 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US9621361B2 (en) 2006-08-22 2017-04-11 Centurylink Intellectual Property Llc Pin-hole firewall for communicating data packets on a packet network
US8098579B2 (en) 2006-08-22 2012-01-17 Embarq Holdings Company, LP System and method for adjusting the window size of a TCP packet through remote network elements
US8102770B2 (en) 2006-08-22 2012-01-24 Embarq Holdings Company, LP System and method for monitoring and optimizing network performance with vector performance tables and engines
US8107366B2 (en) 2006-08-22 2012-01-31 Embarq Holdings Company, LP System and method for using centralized network performance tables to manage network communications
US9602265B2 (en) 2006-08-22 2017-03-21 Centurylink Intellectual Property Llc System and method for handling communications requests
US8125897B2 (en) 2006-08-22 2012-02-28 Embarq Holdings Company Lp System and method for monitoring and optimizing network performance with user datagram protocol network performance information packets
US8130793B2 (en) 2006-08-22 2012-03-06 Embarq Holdings Company, Llc System and method for enabling reciprocal billing for different types of communications over a packet network
US8144586B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for controlling network bandwidth with a connection admission control engine
US8144587B2 (en) 2006-08-22 2012-03-27 Embarq Holdings Company, Llc System and method for load balancing network resources using a connection admission control engine
US20080049630A1 (en) * 2006-08-22 2008-02-28 Kozisek Steven E System and method for monitoring and optimizing network performance to a wireless device
US9479341B2 (en) 2006-08-22 2016-10-25 Centurylink Intellectual Property Llc System and method for initiating diagnostics on a packet network node
US9253661B2 (en) 2006-08-22 2016-02-02 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8194555B2 (en) 2006-08-22 2012-06-05 Embarq Holdings Company, Llc System and method for using distributed network performance information tables to manage network communications
US8199653B2 (en) 2006-08-22 2012-06-12 Embarq Holdings Company, Llc System and method for communicating network performance information over a packet network
US8213366B2 (en) 2006-08-22 2012-07-03 Embarq Holdings Company, Llc System and method for monitoring and optimizing network performance to a wireless device
US8223655B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for provisioning resources of a packet network based on collected network performance information
US8223654B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc Application-specific integrated circuit for monitoring and optimizing interlayer network performance
US8224255B2 (en) 2006-08-22 2012-07-17 Embarq Holdings Company, Llc System and method for managing radio frequency windows
US8228791B2 (en) 2006-08-22 2012-07-24 Embarq Holdings Company, Llc System and method for routing communications between packet networks based on intercarrier agreements
US8238253B2 (en) 2006-08-22 2012-08-07 Embarq Holdings Company, Llc System and method for monitoring interlayer devices and optimizing network performance
US9241277B2 (en) 2006-08-22 2016-01-19 Centurylink Intellectual Property Llc System and method for monitoring and optimizing network performance to a wireless device
US8274905B2 (en) 2006-08-22 2012-09-25 Embarq Holdings Company, Llc System and method for displaying a graph representative of network performance over a time period
US9241271B2 (en) 2006-08-22 2016-01-19 Centurylink Intellectual Property Llc System and method for restricting access to network performance information
US8307065B2 (en) 2006-08-22 2012-11-06 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US9240906B2 (en) 2006-08-22 2016-01-19 Centurylink Intellectual Property Llc System and method for monitoring and altering performance of a packet network
US8358580B2 (en) 2006-08-22 2013-01-22 Centurylink Intellectual Property Llc System and method for adjusting the window size of a TCP packet through network elements
US8374090B2 (en) 2006-08-22 2013-02-12 Centurylink Intellectual Property Llc System and method for routing data on a packet network
US20080049757A1 (en) * 2006-08-22 2008-02-28 Bugenhagen Michael K System and method for synchronizing counters on an asynchronous packet communications network
US8472326B2 (en) 2006-08-22 2013-06-25 Centurylink Intellectual Property Llc System and method for monitoring interlayer devices and optimizing network performance
US20080049649A1 (en) * 2006-08-22 2008-02-28 Kozisek Steven E System and method for selecting an access point
US20080049632A1 (en) * 2006-08-22 2008-02-28 Ray Amar N System and method for adjusting the window size of a TCP packet through remote network elements
US8488495B2 (en) 2006-08-22 2013-07-16 Centurylink Intellectual Property Llc System and method for routing communications between packet networks based on real time pricing
US8509082B2 (en) 2006-08-22 2013-08-13 Centurylink Intellectual Property Llc System and method for load balancing network resources using a connection admission control engine
US9225609B2 (en) 2006-08-22 2015-12-29 Centurylink Intellectual Property Llc System and method for remotely controlling network operators
US9225646B2 (en) 2006-08-22 2015-12-29 Centurylink Intellectual Property Llc System and method for improving network performance using a connection admission control engine
US8520603B2 (en) 2006-08-22 2013-08-27 Centurylink Intellectual Property Llc System and method for monitoring and optimizing network performance to a wireless device
US8531954B2 (en) 2006-08-22 2013-09-10 Centurylink Intellectual Property Llc System and method for handling reservation requests with a connection admission control engine
US8537695B2 (en) 2006-08-22 2013-09-17 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US8549405B2 (en) * 2006-08-22 2013-10-01 Centurylink Intellectual Property Llc System and method for displaying a graphical representation of a network to identify nodes and node segments on the network that are not operating normally
US20080049640A1 (en) * 2006-08-22 2008-02-28 Heinz John M System and method for provisioning resources of a packet network based on collected network performance information
US20080049776A1 (en) * 2006-08-22 2008-02-28 Wiley William L System and method for using centralized network performance tables to manage network communications
US8576722B2 (en) 2006-08-22 2013-11-05 Centurylink Intellectual Property Llc System and method for modifying connectivity fault management packets
US8619596B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for using centralized network performance tables to manage network communications
US8619600B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US8619820B2 (en) 2006-08-22 2013-12-31 Centurylink Intellectual Property Llc System and method for enabling communications over a number of packet networks
US8670313B2 (en) 2006-08-22 2014-03-11 Centurylink Intellectual Property Llc System and method for adjusting the window size of a TCP packet through network elements
US8687614B2 (en) 2006-08-22 2014-04-01 Centurylink Intellectual Property Llc System and method for adjusting radio frequency parameters
US7684332B2 (en) 2006-08-22 2010-03-23 Embarq Holdings Company, Llc System and method for adjusting the window size of a TCP packet through network elements
US8743700B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for provisioning resources of a packet network based on collected network performance information
US8743703B2 (en) 2006-08-22 2014-06-03 Centurylink Intellectual Property Llc System and method for tracking application resource usage
US8750158B2 (en) 2006-08-22 2014-06-10 Centurylink Intellectual Property Llc System and method for differentiated billing
US8811160B2 (en) 2006-08-22 2014-08-19 Centurylink Intellectual Property Llc System and method for routing data on a packet network
US20080049775A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for monitoring and optimizing network performance with vector performance tables and engines
US9112734B2 (en) 2006-08-22 2015-08-18 Centurylink Intellectual Property Llc System and method for generating a graphical user interface representative of network performance
US20080049625A1 (en) * 2006-08-22 2008-02-28 Edwards Stephen K System and method for collecting and managing network performance information
US20080049746A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for routing data on a packet network
US9094261B2 (en) 2006-08-22 2015-07-28 Centurylink Intellectual Property Llc System and method for establishing a call being received by a trunk on a packet network
US9014204B2 (en) 2006-08-22 2015-04-21 Centurylink Intellectual Property Llc System and method for managing network communications
US9042370B2 (en) 2006-08-22 2015-05-26 Centurylink Intellectual Property Llc System and method for establishing calls over a call path having best path metrics
US20080049629A1 (en) * 2006-08-22 2008-02-28 Morrill Robert J System and method for monitoring data link layer devices and optimizing interlayer network performance
US9054986B2 (en) 2006-08-22 2015-06-09 Centurylink Intellectual Property Llc System and method for enabling communications over a number of packet networks
WO2008027983A3 (en) * 2006-08-30 2008-10-30 Level 3 Communications Llc Internet protocol trunk groups
US20110007736A1 (en) * 2006-08-30 2011-01-13 Richard Terpstra Internet protocol trunk groups
US7813335B2 (en) * 2006-08-30 2010-10-12 Level 3 Communications, Llc Internet protocol trunk groups
US8520667B2 (en) * 2006-08-30 2013-08-27 Level 3 Communications, Llc Internet protocol trunk groups
US20080056244A1 (en) * 2006-08-30 2008-03-06 Level 3 Communications, Llc Internet protocol trunk groups
US8194643B2 (en) 2006-10-19 2012-06-05 Embarq Holdings Company, Llc System and method for monitoring the connection of an end-user to a remote network
US20080095049A1 (en) * 2006-10-19 2008-04-24 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US8289965B2 (en) 2006-10-19 2012-10-16 Embarq Holdings Company, Llc System and method for establishing a communications session with an end-user based on the state of a network connection
US9521150B2 (en) 2006-10-25 2016-12-13 Centurylink Intellectual Property Llc System and method for automatically regulating messages between networks
US20080167846A1 (en) * 2006-10-25 2008-07-10 Embarq Holdings Company, Llc System and method for regulating messages between networks
US8189468B2 (en) 2006-10-25 2012-05-29 Embarq Holdings, Company, LLC System and method for regulating messages between networks
US7995561B2 (en) * 2006-12-07 2011-08-09 Nortel Networks Limited Techniques for implementing logical trunk groups with session initiation protocol (SIP)
US20080137649A1 (en) * 2006-12-07 2008-06-12 Nortel Networks Limited Techniques for implementing logical trunk groups with session initiation protocol (sip)
US8346239B2 (en) 2006-12-28 2013-01-01 Genband Us Llc Methods, systems, and computer program products for silence insertion descriptor (SID) conversion
US8111692B2 (en) 2007-05-31 2012-02-07 Embarq Holdings Company Llc System and method for modifying network traffic
WO2009024079A1 (en) * 2007-08-19 2009-02-26 Huawei Technologies Co., Ltd. Method and system for realizing communication called and release, and equipment thereof
EP2213051A2 (en) * 2007-11-19 2010-08-04 Level 3 Communications, LLC Geographic trunk groups
EP2213051A4 (en) * 2007-11-19 2014-11-26 Level 3 Communications Llc Geographic trunk groups
US20110235517A1 (en) * 2007-11-19 2011-09-29 Level 3 Communications, Llc Geographic Trunk Groups
US8520668B2 (en) 2007-11-19 2013-08-27 Level 3 Communications, Llc Geographic trunk groups
US7961720B2 (en) * 2007-11-19 2011-06-14 Level 3 Communications, Llc Geographic trunk groups
US20090129580A1 (en) * 2007-11-19 2009-05-21 Level 3 Communications Llc Geographic trunk groups
WO2009067443A3 (en) * 2007-11-19 2009-07-09 Level 3 Communications Llc Geographic trunk groups
US8879391B2 (en) 2008-04-09 2014-11-04 Centurylink Intellectual Property Llc System and method for using network derivations to determine path states
US8068425B2 (en) 2008-04-09 2011-11-29 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US20090257350A1 (en) * 2008-04-09 2009-10-15 Embarq Holdings Company, Llc System and method for using network performance information to determine improved measures of path states
US9559978B2 (en) 2009-08-04 2017-01-31 Genband Us Llc Methods, systems, and computer readable media for intelligent optimization of digital signal processor (DSP) resource utilization in a media gateway
US8908541B2 (en) 2009-08-04 2014-12-09 Genband Us Llc Methods, systems, and computer readable media for intelligent optimization of digital signal processor (DSP) resource utilization in a media gateway
US20110032927A1 (en) * 2009-08-04 2011-02-10 Weisheng Chen Methods, systems, and computer readable media for intelligent optimization of digital signal processor (dsp) resource utilization in a media gateway
US8576724B2 (en) 2009-11-24 2013-11-05 At&T Intellectual Property I, L.P. Method, system, and computer program product, for correlating special service impacting events
US20110122773A1 (en) * 2009-11-24 2011-05-26 At&T Intellectual Property Corporation Method, system, and computer program product, for correlating special service impacting events
US11379119B2 (en) 2010-03-05 2022-07-05 Netapp, Inc. Writing data in a distributed data storage system
US10439900B2 (en) 2011-12-27 2019-10-08 Netapp, Inc. Quality of service policy based load adaption
US9054992B2 (en) * 2011-12-27 2015-06-09 Solidfire, Inc. Quality of service policy sets
US9003021B2 (en) 2011-12-27 2015-04-07 Solidfire, Inc. Management of storage system access based on client performance and cluster health
US11212196B2 (en) 2011-12-27 2021-12-28 Netapp, Inc. Proportional quality of service based on client impact on an overload condition
US9838269B2 (en) 2011-12-27 2017-12-05 Netapp, Inc. Proportional quality of service based on client usage and system metrics
US10951488B2 (en) 2011-12-27 2021-03-16 Netapp, Inc. Rule-based performance class access management for storage cluster performance guarantees
US10911328B2 (en) 2011-12-27 2021-02-02 Netapp, Inc. Quality of service policy based load adaption
US9712401B2 (en) 2011-12-27 2017-07-18 Netapp, Inc. Quality of service policy sets
US10516582B2 (en) 2011-12-27 2019-12-24 Netapp, Inc. Managing client access for storage cluster performance guarantees
US11386120B2 (en) 2014-02-21 2022-07-12 Netapp, Inc. Data syncing in a distributed system
US9798728B2 (en) 2014-07-24 2017-10-24 Netapp, Inc. System performing data deduplication using a dense tree data structure
US10133511B2 (en) 2014-09-12 2018-11-20 Netapp, Inc Optimized segment cleaning technique
US9671960B2 (en) 2014-09-12 2017-06-06 Netapp, Inc. Rate matching technique for balancing segment cleaning and I/O workload
US10210082B2 (en) 2014-09-12 2019-02-19 Netapp, Inc. Rate matching technique for balancing segment cleaning and I/O workload
US10365838B2 (en) 2014-11-18 2019-07-30 Netapp, Inc. N-way merge technique for updating volume metadata in a storage I/O stack
US9836229B2 (en) 2014-11-18 2017-12-05 Netapp, Inc. N-way merge technique for updating volume metadata in a storage I/O stack
US9720601B2 (en) 2015-02-11 2017-08-01 Netapp, Inc. Load balancing technique for a storage array
US9762460B2 (en) 2015-03-24 2017-09-12 Netapp, Inc. Providing continuous context for operational information of a storage system
US9710317B2 (en) 2015-03-30 2017-07-18 Netapp, Inc. Methods to identify, handle and recover from suspect SSDS in a clustered flash array
US9740566B2 (en) 2015-07-31 2017-08-22 Netapp, Inc. Snapshot creation workflow
US10929022B2 (en) 2016-04-25 2021-02-23 Netapp, Inc. Space savings reporting for storage system supporting snapshot and clones
US10997098B2 (en) 2016-09-20 2021-05-04 Netapp, Inc. Quality of service policy sets
US11327910B2 (en) 2016-09-20 2022-05-10 Netapp, Inc. Quality of service policy sets
US11886363B2 (en) 2016-09-20 2024-01-30 Netapp, Inc. Quality of service policy sets

Also Published As

Publication number Publication date
WO2006039344A3 (en) 2006-06-01
WO2006039344A2 (en) 2006-04-13
EP1794959A4 (en) 2008-07-02
US20060072554A1 (en) 2006-04-06
JP2008515348A (en) 2008-05-08
CA2581189A1 (en) 2006-04-13
EP1794959A2 (en) 2007-06-13
US20060072593A1 (en) 2006-04-06
EP1850541A2 (en) 2007-10-31
EP1850542A2 (en) 2007-10-31
EP1850541A3 (en) 2008-07-02
EP1850542A3 (en) 2008-07-02
EP1850541B1 (en) 2011-08-31
US7602710B2 (en) 2009-10-13
ATE523018T1 (en) 2011-09-15

Similar Documents

Publication Publication Date Title
US7602710B2 (en) Controlling time-sensitive data in a packet-based network
US10523554B2 (en) System and method of routing calls on a packet network
CA2656409C (en) System and method for managing subscriber usage of a communications network
US7050424B2 (en) Method and system for automatic proxy server workload shifting for load balancing
US9054915B2 (en) System and method for adjusting CODEC speed in a transmission path during call set-up due to reduced transmission performance
US8619596B2 (en) System and method for using centralized network performance tables to manage network communications
US9432292B2 (en) Overload call control in a VoIP network
US7613111B2 (en) Methods, systems, and computer program products for dynamic blocking an unblocking of media over packet resources
US9106511B1 (en) Systems and methods for optimizing application data delivery over third party networks
US9491302B2 (en) Telephone call processing method and apparatus
US11296947B2 (en) SD-WAN device, system, and network
US10116709B1 (en) Systems and methods for optimizing application data delivery over third party networks
US7639793B1 (en) Method and apparatus for reconfiguring network routes
US20090274040A1 (en) Mid-call Redirection of Traffic Through Application-Layer Gateways
US10230679B1 (en) Systems and methods for optimizing application data delivery over third party networks
US7453803B1 (en) Congestion control for packet communications
US20090092125A1 (en) Method and apparatus for providing customer controlled traffic redistribution
Noro et al. QoS support for VoIP traffic to prepare emergency
WO2024049554A1 (en) Alternative route propogation
Tamboli et al. Comparative study of IP & MPLS technology
Headquarters Cisco Hosted/Managed Unified Communications Services

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONUS NETWORKS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ST. HILAIRE, KENNETH R.;GRIPPO, RONALD V.;FARAHMAND, FARDAD;AND OTHERS;REEL/FRAME:016957/0053;SIGNING DATES FROM 20051103 TO 20051212

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION