WO2001050687A1 - Method and apparatus for multi-tiered data multicasting - Google Patents


Info

Publication number
WO2001050687A1
Authority
WO
WIPO (PCT)
Application number
PCT/US2000/035721
Other languages
French (fr)
Inventor
Scott R. Anderson
Gary W. Longsine
Original Assignee
Computer Associates Think, Inc.
Application filed by Computer Associates Think, Inc.
Priority to AU26124/01A (AU2612401A)
Publication of WO2001050687A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/18 — Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1863 — comprising mechanisms for improved reliability, e.g. status reports
    • H04L 12/1868 — Measures taken after transmission, e.g. acknowledgments
    • H04L 12/1881 — with schedule organisation, e.g. priority, sequence management
    • H04L 45/00 — Routing or path finding of packets in data switching networks
    • H04L 45/04 — Interdomain routing, e.g. hierarchical routing
    • H04L 45/16 — Multipoint routing


Abstract

A method for multicasting data on a network (1), the network (1) including a plurality of nodes (10, 12, 14), the nodes (10, 12, 14) including connections forming a characteristic distribution tree. The method comprises receiving at a storing node (12), data sent from a higher level node (10) on the network (1), storing the received data in memory at the storing node (12) and transmitting the data stored at the storing node (12) to lower level nodes (14) on the network (1), wherein any communication required for the transmission of such data to lower level nodes (14) on the network is not passed further up the distribution tree.

Description

METHOD AND APPARATUS FOR MULTI-TIERED
DATA MULTICASTING
Reference to Related Application
The present application claims the benefit of provisional Application Serial No.
60/173,872 filed on December 30, 1999, which is hereby incorporated herein by reference.
BACKGROUND
Field of the Disclosure
The present disclosure relates to multicasting of data, and in particular to a method and
apparatus for multi-tiered data multicasting.
Description of the Related Art
There are several fundamental types of methods for transmitting data on a network,
including IP Unicast designed to transmit data (or a packet) from a sender to a single receiver and
IP Broadcast designed to transmit data from a sender to an entire subnetwork. A third method,
known generally as multicasting, involves the transmission of a message from one node in a
network to a plurality of other nodes in such a way that the sending node does not need to send
the data individually to each of the receiving nodes.
Multicasting has been carried out on local area networks. In Ethernet type single
transmission line networks, multicasting is carried out by addressing the message to the appropriate receiving nodes. All the receiving nodes receive the package at the same time, as all
the nodes can listen to the single transmission line contemporaneously. Multicasting may also
be implemented in token ring based networks. In token ring based networks, multicast packages
travel around the ring and are retrieved by the intended recipients, and passed on.
It is impractical for the traditional multicast type systems mentioned above to be used on
wide area networks such as the Internet due to the huge amount of bandwidth required.
Accordingly, more recently, systems have been developed for multicasting over wide area
networks, including the Internet. One system which attempts to deal with multicasting over
wide area networks is referred to as IP Multicast.
In an IP Multicast type system, all receivers are configured as members of the same
multicast group. A sender sends an IP packet to a multicast address and lets the network
forward a copy of the packet to each host in the group. The group member node then informs the
next network node along the shortest route to the multicasting node, which may then elect to join
the multicast group, and so on up the chain to the multicasting node. These group member
nodes then provide a tree distribution structure which allows a package to be multicast to all
destination nodes.
However, the IP Multicast type system does not take into account the different
bandwidths available on the different links. IP Multicast is designed for real-time transmissions,
and if a node cannot receive at the rate of transmission of the sending node, the receiving node
must be selective in the packets it receives. That is, in IP Multicast, the sender sends data to
multiple receivers with the User Datagram Protocol (UDP). UDP, unlike TCP, only makes a best
effort to deliver data. If a transmission error occurs, the packet is discarded. This may work fine for sound or picture transmissions, as the packet structure can be set up so that a lower
quality version of the transmission can be received when receiving only a subset of the packets.
For example, this might be done by putting lower frequency components of the transmission in
certain packets and higher frequency components in other packets which can be discarded if
necessary. However, this system may not be adequate in all instances, including those in which
a higher quality transmission is required. Accordingly, although a system such as IP Multicast
can be used for non-critical transmissions, wherein a receiving node can compensate for lost
packets, it may not be suitable in situations where a package is to be multicast, and each
receiving node is to receive each packet constituting the transmitted package.
Various other problems may occur with the above-mentioned systems, particularly when
different receiving nodes have different bandwidth connections to the transmitting node.
Transmitting a package at the bandwidth of the lowest link on the network is highly inefficient,
as nodes with high bandwidth connections have to spend a much longer period receiving the
package, unnecessarily tying up resources on these nodes. Transmitting the package at a higher
data rate than can be received by a receiving node with the lower bandwidth connection will
lead to many of the packets not reaching some destination nodes. Different nodes in the
network might discard different packets, and therefore if the nodes need to receive all packets,
the transmitting node might have to retransmit a substantial portion of the package several
times.
Attempts can be made to overcome these problems by partitioning the receiving nodes
into groups according to bandwidth capability. Certain packets may then be designated to be
received by the lower bandwidth groups, and other packets designated as ones which can be discarded. Ignoring packets which might be lost for other reasons, such as local network
congestion, when packets need to be resent to the nodes with lower bandwidth connections, each
of these nodes will be missing the same designated packets. The sending node therefore only
has to retransmit these missing packets. Although this may be more efficient than a system
where different packets are resent to different nodes, when a packet is resent, it still has to travel
all the links in the network from the sending node to the receiving node. This requires the use of
a large amount of bandwidth and processor time in the routing nodes, and in particular, in the
transmitting node which has to handle all the "negative acknowledgments" (NAcks) requesting
retransmission of packets.
In most networks, generally nodes on the periphery have low bandwidth connections,
while links between higher level nodes, which will often form the higher level branch nodes in a
multicasting distribution tree, have higher bandwidth. The systems presently in place do not
make efficient use of this generally hierarchical pattern of bandwidth which exists in most
networks.
SUMMARY
A method for multicasting data on a network, the network including a plurality of nodes
for routing data, the nodes including connections forming a distribution tree. The method
comprises receiving at a storing node, data sent from a higher level node on the network,
storing the received data in memory at the storing node and transmitting the data stored at the
storing node to lower level nodes on the network, wherein any communication required for the
transmission of such data to the lower level nodes on the network is not passed further up the distribution tree. The method may further comprise creating a list at the storing node
identifying the data transmitted to the lower level nodes on the network. The method may
further comprise transmitting the list to the lower level nodes on the network. Each lower level
node may compare data actually received, to the data identified in the list to determine if all data
arrived, and if a lower level node determines that at least a part of the data did not arrive, a
negative acknowledgment may be sent back up the tree. When the node that sent the data
receives the negative acknowledgment, it may intercept the negative acknowledgment and not
send the negative acknowledgment any further up the tree. When the node that sent the data
receives the negative acknowledgment, it may determine what data was not received by the
node below and resend the at least part of the data not received back down the tree to the node
that sent the negative acknowledgment. Each storing node may maintain a list of the lower
level nodes to which it is responsible for sending the data. Each lower level node may send a
positive acknowledgment back up the tree to the node responsible for sending the data to it,
when the data has been received. After a storing node receives positive acknowledgments from
each of the nodes on its list of the lower level nodes to which it is responsible for sending the
data, the storing node may delete the data from memory. After the storing node receives
positive acknowledgments from each of the nodes on its list of the lower level nodes to which it
is responsible for sending the data, the storing node may send a positive acknowledgment up
the tree to a storing node responsible for sending it the data.
An apparatus is also disclosed for serving as a node on a network of nodes for routing
data, the network nodes including connections forming a characteristic distribution tree. The
apparatus comprises an input for receiving data from a higher level node on the network, at least one output for outputting the data to lower level nodes on the network and storage for
storing the data, wherein the apparatus is responsible for transmitting all the data it has stored to
the lower level nodes, and wherein any communication required for the transmission of such
data is not passed further up the distribution tree.
A computer readable medium is disclosed including computer executable code for
multicasting data on a network, the network including a plurality of nodes for routing data, the
nodes including connections forming a characteristic distribution tree. The computer readable
medium comprises code for receiving at a storing node, data sent from a higher level node on
the network, code for storing the received data in memory at the storing node and code for
transmitting the data stored at the storing node to lower level nodes on the network, wherein
any communication required for the transmission of such data to the lower level nodes on the
network is not passed further up the distribution tree.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete appreciation of the present disclosure and many of the attendant
advantages thereof will be readily obtained as the same becomes better understood by
reference to the following detailed description when considered in connection with the
accompanying drawings, wherein:
Figure 1 shows an example of a network upon which the present disclosure may be
applied;
Figure 2 shows a spanning tree superimposed upon the network shown in Figure 1;
Figure 3 is a diagram for explaining the hierarchical nature of a spanning tree; Figure 4 shows an example of a branch of a multicast tree;
Figure 5 is a flow chart for describing a process that may be performed by a storing
node according to the present disclosure;
Figure 6 is a flow chart for describing another process that may be performed by a
storing node according to the present disclosure;
Figure 7 is a flow chart for describing steps performed at a distribution node if the
spanning tree should change;
Figures 8A and 8B are flow charts describing processes performed by a storing node
according to another embodiment;
Figure 9 is a flow chart for describing processes performed by a storing node according
to another embodiment; and
Figure 10 is a block diagram showing exemplary elements of a storing node according
to an embodiment.
DETAILED DESCRIPTION
In describing the embodiments illustrated in the drawings, specific terminology is
employed for sake of clarity. However, the present disclosure is not intended to be limited
to the specific terminology so selected and it is to be understood that each specific element
includes all technical equivalents which operate in a similar manner.
The method and system for multi-tiered multicasting as described in the present
disclosure are ideally suited to operate on any network on which the bandwidth for a particular
set of packages might vary over different network links. Most networks that consist of more than a single unbridged LAN will have links of varying bandwidth. An example of such a
network is shown in Figure 1 and is referred to generally as network system 1. Distribution
server node (DS) 10 is capable of multicasting packets of data which together form a package,
to one or more nodes in the network. The nodes designated to receive the multicast packets of
data for a particular transmission are also identified as target nodes (T) 14. Nodes between the
distribution node 10 and the target nodes (T) 14 used for relaying the packets for a particular
transmission, are referred to herein as relay nodes (R) 12. It should be noted that the relay
nodes 12 and the target nodes 14 may change for particular transmissions. A node may be a
workstation, laptop, server, router, etc. Network 1 has appropriate routing set up so that any
particular transmission of a set of packets between a first node and a second node will follow
the same non-looping route. Such routing mechanisms are well known in the art for various
different network architectures, such as Local Area Network (LAN) bridging arrangements,
Asynchronous Transfer Mode (ATM) networks and X.25 networks, and such routing issues will
therefore not be described in further detail herein.
When a single non-looping route is normally used between any two nodes, any set of
routes from a single distribution node to any set of target nodes will form a tree (spanning tree)
with the distribution node at the top of the tree, and the target nodes forming other nodes in the
tree. An example of such a tree is depicted in Figure 2, which shows a spanning tree
superimposed on the network of Figure 1. The solid lines with arrows identify the routes taken
when packets are sent from distribution node 10 to one or more target nodes 14. Such a tree
does not have to be defined explicitly if the routing mechanism used allows only a single route
between any two nodes. The spanning tree can be generated for a particular distribution node and set of target nodes before implementing the distribution mechanism of the present system
to be described below. However, the routing might change dynamically because of network
problems, congestion or the like. The present system and method can easily be designed to
handle such dynamic changes, as will become apparent.
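By way of illustration, the following minimal Python sketch shows how such a spanning tree falls out of fixed non-looping routing. It assumes a hypothetical route(src, dst) function returning the node sequence the network would use between two nodes; none of these names come from the disclosure itself.

```python
def build_spanning_tree(route, distribution_node, targets):
    """Derive the distribution tree implied by fixed non-looping routes.

    Because any two nodes always communicate over the same non-looping
    route, the union of the routes from one distribution node to a set
    of target nodes forms a tree rooted at the distribution node.
    """
    children = {}  # parent node -> set of child nodes
    for target in targets:
        path = route(distribution_node, target)
        for parent, child in zip(path, path[1:]):
            children.setdefault(parent, set()).add(child)
    return children

# Toy routing table loosely modeled on Figures 2 and 3 (hypothetical).
ROUTES = {
    ("DS", "T26"): ["DS", "R20", "T24", "T26"],
    ("DS", "T27"): ["DS", "R20", "T24", "T27"],
    ("DS", "T28"): ["DS", "T21", "T25", "T28"],
}
tree = build_spanning_tree(lambda s, d: ROUTES[(s, d)], "DS",
                           ["T26", "T27", "T28"])
print(tree)  # {'DS': {'R20', 'T21'}, 'R20': {'T24'}, ...}
```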
This embodiment of the present system sends packages of data as multiple packets of
data across a network from a distribution server (DS) 10 to one or more destination or target
nodes (T) 14 on network 1. It will be appreciated that the distribution server 10 does not need
to be located at a particular node in the network, but might be different for different packages.
In this case, a spanning tree will be present, or can be created, for any set of nodes required.
For example, if a node 12 is acting as the distribution server, a spanning tree could be generated
for node 12 and the appropriate set of target nodes before implementing the distribution
mechanism of the present disclosure. As will be described below, the distribution server is
capable of instructing a node that it is part of a distribution group.
In most network arrangements, the server acting as the distribution node will have a high
bandwidth connection to its nearest neighbors, and will often be in a fairly central location in the
network in direct connection with the highest bandwidth links of the network. The target nodes
will often be at the periphery of the network, and may, for example, be at remote locations with
modem connections to the overall network. Therefore, the spanning tree formed between the
distribution server and the target nodes will typically include higher bandwidth links at the top
of the tree and lower bandwidth links at the bottom of the tree. However, although the network
arrangement need not necessarily fit this model, the closer the tree is to fitting this model, the
higher the bandwidth savings associated therewith will be for the present system and method. If the distribution server is on a low bandwidth connection to the rest of the network, packages to
be delivered will generally have to be distributed over low bandwidth links to the higher
bandwidth links before being passed back down to other lower bandwidth links associated with
the target nodes. This will lead to a tree with low bandwidth links at the very top of the tree, but
the lower parts of the tree will still have a decreasing bandwidth structure as described above,
and the present method and system will accordingly improve bandwidth usage over these parts
of the network.
In addition, if the distribution server is on a low bandwidth link, the system may also be
arranged such that the distribution server is capable of looking for a node to transmit the data to
that has higher bandwidth links to its nodes, thereby improving bandwidth usage higher in the tree.
The hierarchical nature of a spanning tree, for use in explaining an embodiment of the
present disclosure, is shown more clearly in Fig. 3 and is referred to generally as tree 100. Tree
100 includes DS 10 at the top of the tree, and target nodes 24-31. Each node is connected to a
link 4. Although links 4 are described as providing a connection between nodes, it should be
clear that this need not necessarily refer to a physical connection. Nodes 21, 24 and 25 function
as storing nodes as will be described later below.
According to this embodiment, a package is multicast from the DS 10 to each of its child
nodes, which in this embodiment are relay node 20 and target node 21 in the tree, as a sequence
of packets. The addresses of the child nodes which are to receive the package are incorporated
in the addressing of the package. For example, such addressing may be achieved by assigning a
recipient group address to the package, using a similar mechanism to that used by IP Multicast.
However unlike IP Multicast, in which a user's host application requests membership in a multicast host group associated with a particular multicast, in the present system, intermediate
nodes are instructed that they are group members by distribution node 10, rather than choosing
to become group members themselves as occurs in IP Multicast. The nodes can be instructed
that they form part of the multicast group by unicasting an initialization message to each target
node. Each intermediate node that the unicast message passes through is instructed that it forms
part of the distribution group. Each intermediate node may maintain a list of its own child
nodes, so that it is able to assist in the multicasting of a packet to the associated group. It will be
appreciated that a large number of methods may be used to provide intermediate routing nodes
with the necessary information to forward a packet addressed to a certain group of nodes to the
appropriate child nodes. Routing information could even be encapsulated with each packet,
although this may not be the most bandwidth efficient manner of accomplishing the task.
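One plausible shape for this bookkeeping is sketched below in Python: an intermediate node that relays a unicast initialization message records that it belongs to the group and notes the next hop as one of its child nodes. The message layout and method names are assumptions, not the disclosed protocol.

```python
class IntermediateNode:
    """Sketch of group-membership setup via unicast initialization."""

    def __init__(self, name):
        self.name = name
        self.children = {}  # group_id -> set of child nodes for the group

    def handle_init(self, group_id, remaining_path):
        # Merely seeing the unicast tells this node it is a group member.
        members = self.children.setdefault(group_id, set())
        if remaining_path:                       # not yet at the target
            next_hop = remaining_path[0]
            members.add(next_hop)                # remember this child node
            return next_hop, remaining_path[1:]  # forward toward the target
        return None, []                          # this node is the target

node = IntermediateNode("R20")
node.handle_init("package-1", ["T24", "T26"])
print(node.children)  # {'package-1': {'T24'}}
```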
According to the present embodiment, when a package is multicast from the distribution
node 10, it will be multicast as a sequence of packets. The rate at which the packets can be sent
along each of the links 4 from the distribution node 10 to its child nodes 20, 21 may vary
depending on the overall bandwidth of the links 4 and the amount of other traffic passing along
the links. There are other methods by which the distribution node 10 could transmit the packets
to the child nodes, and the most efficient of these will depend on various hardware and protocol
considerations. For example, if the transmission is taking place over a standard packet-
switching router, the packets may be placed in a queue associated with each link along with
other traffic which is also being transmitted along the links. The packets may then be sent using
a "fair queuing" type algorithm, by which packets in a particular package being sent between a
particular source and destination are sent in fairly regularly spaced time-multiplexed slots. The rate of packet transmission between nodes is based on various weighting factors and
the amount of network traffic passing along the links 4. If packets constituting a package are
added to the queue at a greater rate than they can be sent with the bandwidth allocated to them,
packets may be discarded. Accordingly, if this were to occur in an intermediate node, the
percentage of packets that will make it to each of its child nodes may vary depending on the
bandwidth and traffic flow on each of the links. If, for example, the child nodes 20, 21 are all
located on the same unbridged LAN, the package could be multicast from the distribution node
to each child node simultaneously. However, there is still the possibility that packets could be
lost and not reach any of the child nodes, depending on the queuing algorithm and the protocol
being used.
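The following sketch illustrates, under an assumed queue size and a simple round-robin scheduler, how a bounded per-link queue drained in "fair queuing" fashion can discard packets when they arrive faster than the allocated bandwidth drains them:

```python
from collections import deque

class Link:
    """A link with a bounded outbound queue; packets added faster than
    they can be sent are simply discarded."""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False              # queue full: the packet is lost
        self.queue.append(packet)
        return True

def fair_send(links):
    """Send one packet per link per time slot, approximating the
    regularly spaced, time-multiplexed 'fair queuing' transmission."""
    return [link.queue.popleft() for link in links if link.queue]

link = Link(capacity=2)
print([link.enqueue(p) for p in ("p1", "p2", "p3")])  # [True, True, False]
print(fair_send([link]))                              # ['p1']
```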
To avoid packets being lost, according to an embodiment of the present system and
method, some intermediate nodes are enabled to store packets. These nodes are referred to
herein as storing nodes. Storing nodes may perform some or all of the functions typically
associated with the other nodes on the network, including relaying data to the target nodes on
the network. A storing node may itself function as a target node. When a storing node is also a
target node, it is referred to herein as a storing target node. When packets forming part of a
multicast are received by a storing node, the storing node is programmed to retain in memory a
copy of each packet forming the package. The storing node then forwards the packet to each of
its child nodes using whatever mechanism is most appropriate. For example, as described
above, the packets can be added to the appropriate queues for the appropriate network links and
distributed using the "fair queuing" type algorithm.
A block diagram of a storing node is shown in Fig. 10. In addition to including the hardware and software necessary for routing data to the appropriate destination, the storing
nodes include one or more software applications 42 for performing one or more of the processes
described below. The storing node also includes memory 44. Memory 44 may include memory
for storing applications performed by the storing node, work area memory, memory for
providing the queues used for storing data being routed to the appropriate destination nodes, etc.
Although shown separately, applications 42 may be stored in memory 44. Storing node 25
includes input 40 for receiving data communicated via link 4. Storing node 25 may also include
one or more outputs 46 for routing the data via one or more links 4 to destination nodes.
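Reduced to code, the elements of Fig. 10 might be modeled as follows; this is only a sketch, with the retained-copy behavior described below folded into a single receive method (the attribute names mirror the reference numerals, not any disclosed API):

```python
class StoringNode:
    """Sketch of the Fig. 10 elements of a storing node."""

    def __init__(self):
        self.memory = {}    # memory 44: package_id -> {seq: payload}
        self.outputs = []   # outputs 46: one queue per outbound link 4

    def receive(self, package_id, seq, payload):  # arrives via input 40
        # Retain a copy of every packet forming the package ...
        self.memory.setdefault(package_id, {})[seq] = payload
        # ... and queue it for forwarding to each child node.
        for queue in self.outputs:
            queue.append((package_id, seq, payload))

node = StoringNode()
child_link = []
node.outputs.append(child_link)
node.receive("pkg", 0, b"first packet")
print(node.memory, child_link)
```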
In the example shown in Fig. 3, nodes 20 and 21 are child nodes of distribution node 10,
with node 21 being a storing node. Nodes 24 and 25 are storing nodes and target nodes (storing
target nodes). Nodes 26-31 are target nodes, and nodes 20 and 22 in this example, serve as relay
nodes.
If a packet is not received, for example, by node 25, and it has been determined by node
25 that it is likely that the packet should have been received, node 25 will send a Negative
Acknowledgment (NAck) back up the tree requesting that the packet be resent. However,
instead of sending the NAck up the tree to distribution node 10, the parent storing node 21
(parent with respect to node 25), which has maintained a copy of the packet, will receive the
NAck from node 25, preventing it from going further up the tree, identify the packet not received,
and re-queue the packet for sending to node 25 without any communication passing further up
the tree. Thus, once a packet has reached a storing node, the packet does not need to be resent
over any links in the tree which are higher than that node in the tree. This greatly reduces
bandwidth requirements, and substantially eliminates the overloading of the distribution node 10 that would occur if each target node communicated directly with it. It should be noted that
not all branching nodes in the tree need to store the package, and non-storing branching nodes
(e.g., nodes 20, 22) would just pass a NAck on up the tree until the distribution node 10 or a
storing node is reached which can deal with the NAck.
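The distinction between storing and non-storing branching nodes can be sketched as follows (a hypothetical model, not the disclosed packet format): a storing node that still holds the requested packet absorbs the NAck and re-queues the packet, while a relay simply passes the NAck up.

```python
class BranchingNode:
    """Sketch of NAck handling at an intermediate node."""

    def __init__(self, stored=None):
        self.stored = stored or {}   # package_id -> {seq: payload}
        self.requeued = []           # packets re-queued for resending

    def handle_nack(self, nack):
        payload = self.stored.get(nack["package"], {}).get(nack["seq"])
        if payload is not None:      # storing node: intercept the NAck here
            self.requeued.append((nack["child"], payload))
            return None              # nothing passes further up the tree
        return nack                  # non-storing relay: pass the NAck up

storing = BranchingNode({"pkg": {3: b"payload"}})
assert storing.handle_nack({"package": "pkg", "seq": 3, "child": 25}) is None
relay = BranchingNode()
assert relay.handle_nack({"package": "pkg", "seq": 3, "child": 25}) is not None
```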
As mentioned above, a node should be capable of determining whether a packet that was
sent to it has been received. This can be accomplished in several ways. For example, a package
forming a "sent packets list" can be transmitted periodically by the parent node to its child
node(s) enclosing a list of the packets it has queued, identifying packets sent and/or packets that
will be sent. The child node can then ascertain whether any packets have been lost, by
comparing a list of packets which have been received with the packets identified in the "sent
packets list". The child node can then send NAcks for those packets not received. The "sent
packets list" may be sent by the parent node at substantially different times to the packets
referenced, so that it would be less likely that any congestion causing packets to be missed
would interfere with the transmission of the packets forming the "sent packets list".
Fig. 5 is a flow diagram illustrating a technique capable of being performed by storing
nodes in the system according to an embodiment. A packet is received from a parent node and
stored (Step S10). In Step S12, a determination is made whether the received packet forms part
of a "sent packets list" (Step S12). If the packet is not part of a "sent packets list" (No, Step
S12), a determination is made whether the packet is for this node (e.g., whether this node is a
target node for the packet). If the data packet is not for this node (No, Step S26), information
identifying the packet is added to this node's own "sent packets list" (Step S28). The packet is
then queued to be sent to the node(s) lower in the tree (Step S30). The packets in the queue will
process then returns to Step S10. If the packet is for this node (Yes, Step S26) (e.g., this node is
a target node), the packet is processed by the node in a normal manner (Step S32). Information
identifying the packet is then added to a "received packets list" (Step S34). A determination is
then made whether the packet is also to be forwarded to nodes below. If the packet is not to be
forwarded to other nodes (No, Step S36), the process then returns to Step S10. If the packet is
to be forwarded to other nodes, (Yes, Step S36), information identifying the packet is added to
this node's "sent packets list" (Step S28). The packet is then queued (Step S30) and the process
returns to Step S10.
If the received packet is part of a "sent packets list" (Yes, Step S12), a determination is
made whether all packets for the "sent packets list" have been received (Step S14). If not all
packets for the "sent packets list" have been received (No, Step S14), the process returns to Step
S10 to wait for the next packet. If all packets for the "sent packets list" have been received from
the parent node (Yes, Step S14), a comparison is made between the received packets list created
in this node in Step S34 and the packets sent to this node as identified in the "sent packets list"
received from the node above (Step S16). If all packets sent to this node from the node above
have been received (Yes, Step S18), the process ends and the "received packet list" can be
purged if desired (Step S20). If all packets have not been received (No, Step S18), a negative
acknowledgment (NAck) is sent up the tree (Step S22) to indicate that one or more packets have
not been received. Information may also be sent along with the NAck identifying the packets
not received. This enables the parent node to then resend only those packets which were not
received at this node. Periodically, a package containing the "sent packets list" created at this node in Step S28, indicating the packets sent from this node to any nodes below, will be sent,
typically in the form of a series of packets to the nodes below. The package is a list of packets
the parent has queued up to that point.
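The Fig. 5 flow can be condensed into the following Python sketch. It simplifies in one respect: the "sent packets list" is assumed to arrive in a single packet, so Step S14 is implicit; all field and attribute names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Packet:
    seq: int
    is_list: bool = False            # True if this packet carries a
    listed: frozenset = frozenset()  # "sent packets list"

@dataclass
class NodeState:
    is_target: bool = True
    has_children: bool = True
    received: set = field(default_factory=set)   # "received packets list"
    sent_list: set = field(default_factory=set)  # this node's own list
    queue: list = field(default_factory=list)    # packets queued downward
    nacks: list = field(default_factory=list)    # NAcks sent up the tree

    def on_packet(self, p):                      # Step S10: receive, store
        if p.is_list:                            # Step S12
            missing = set(p.listed) - self.received   # Step S16
            if missing:                          # Step S18: packets lost
                self.nacks.append(missing)       # Step S22: NAck up the tree
            else:
                self.received.clear()            # Step S20: purge the list
            return
        forward = True
        if self.is_target:                       # Step S26
            self.received.add(p.seq)             # Steps S32/S34
            forward = self.has_children          # Step S36
        if forward:
            self.sent_list.add(p.seq)            # Step S28
            self.queue.append(p)                 # Step S30: queue downward

n = NodeState()
n.on_packet(Packet(seq=1)); n.on_packet(Packet(seq=3))
n.on_packet(Packet(seq=0, is_list=True, listed=frozenset({1, 2, 3})))
print(n.nacks)  # [{2}] -> a NAck naming the lost packet
```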
As shown in Fig. 6, when a storing node receives a negative acknowledgment (NAck)
from a node below (Step S50), the NAck is examined, using the accompanying information
identifying the packets not received, to determine which packets were lost (Step S52).
The packets that were not received can then be requeued and resent (Step S54).
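A few lines suffice to sketch this resend path, assuming the storing node keeps its copy of the package keyed by sequence number (a hypothetical structure):

```python
def on_nack(stored, queue, missing_seqs):
    """Fig. 6 sketch: requeue the packets named in a received NAck."""
    for seq in missing_seqs:           # Step S52: identify the lost packets
        payload = stored.get(seq)
        if payload is not None:
            queue.append(payload)      # Step S54: requeue for resending

stored = {1: b"p1", 2: b"p2", 3: b"p3"}
queue = []
on_nack(stored, queue, {2, 3})         # Step S50: NAck arrives from below
print(queue)  # [b'p2', b'p3']
```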
Another method of determining whether all packets have been received at a child node is
to attach to each packet being queued a reference to the previously queued packet, so that any break
in the chain of queued packets reaching the child node can easily be spotted.
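For example, if each packet carries the sequence number of the previously queued packet (field names assumed), a break in the chain is detectable as soon as a packet references a predecessor that never arrived:

```python
def chain_gaps(packets):
    """Return predecessors referenced by received packets but never seen."""
    seen = {p["seq"] for p in packets}
    return {p["prev"] for p in packets
            if p["prev"] is not None and p["prev"] not in seen}

received = [{"seq": 1, "prev": None}, {"seq": 3, "prev": 2}]
print(chain_gaps(received))  # {2} -> packet 2 was lost in transit
```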
Fig. 7 will now be used to describe the steps performed if the routing tree changes. If a
node becomes inoperative or another problem occurs which results in a new spanning tree
between the distribution node and the target nodes, each node with a changed routing table
could be provided with functionality to send a message to the distribution node, alerting it to the
change in spanning tree configuration. When such a message is received at the distribution node
(Step S60), the distribution node sends out another set of unicasts to the target nodes (Step S62)
as described above, whereby branching nodes involved in the multicast can be made aware of
their new child nodes. The multicast can then continue thereafter. The new multicast routing
could be weighted to take advantage of those nodes which have part of a package already stored,
wherever possible. For example, the system can be arranged so that as many portions of the
original tree as possible remain the same so that nodes which already have a part of the package
do not need to receive the entire package again. Another embodiment is described with reference to Figure 4, which shows a branch of a
spanning tree. According to this embodiment, each storing node (e.g., nodes 82, 84 and 85) is
able to delete its copy of the stored package (assuming it is not also a target node) as soon as it is
no longer required by any nodes below it in the tree. In this embodiment, each storing node is
capable of determining when all the storing nodes and target nodes to which it is responsible for
sending packets have received those packets. The nodes to which a storing node is responsible
for sending packets may be referred to herein as dependent nodes. In the example shown in
Figure 4, the dependent nodes of storing node 80 are storing node 82, storing target node 84 and
target node 87 with nodes 81, 83 being intermediate relay nodes. Target nodes 88 and 89 are
dependent on storing target node 84. Storing target node 85 and target node 86 are dependent
on storing node 82. Target nodes 90 and 91 are dependent on storing target node 85. Nodes 81
and 83 do not take part in the node dependency as they simply relay the packets.
In this embodiment, when a package is being sent, knowledge of receipt of the complete
package by all target nodes and storing nodes in the tree is propagated recursively back up the
tree. That is, when target nodes have received the complete package they send positive
acknowledgments back up the tree. In addition, when a storing node receives positive
acknowledgments from the nodes to which it is responsible for sending the package, the storing
node sends a positive acknowledgment back up the tree. Once a storing node has received a
positive acknowledgment from all its dependent nodes, it knows it can erase its copy of the
package (assuming it is not a target node) and send a positive acknowledgment to this effect
back up the tree where it is received by the storing node above and prevented from going further
up the tree. Likewise, once the storing node above has received a positive acknowledgment
target node) and send a positive acknowledgment back up the tree, and so forth. Once the
distribution node has received acknowledgments from all its dependent nodes, the package has
been received by all the target nodes.
One method for accomplishing this task will now be described by reference to the flow
charts of Figs. 8A, 8B. According to this embodiment, the distribution node performs a
provisional set of unicasts of an initialization message that are relayed to all target nodes. This
could be the same set of unicast initialization message packets used to set up the multicast
groups as discussed above. As shown in Fig. 8A, the initialization message is unicast by the
distribution node to all target nodes (Step S80). When this initialization message is received at a
storing node (Step S82), the storing node forwards the unicast initialization message on to the
target node(s) (Step S84). In addition, in response to receipt of the initialization message, each
storing node and target node sends an identification packet identifying itself, back up the tree
(Step S86) and the process ends. The first storing node that receives this identification message
being sent back up the tree adds the node identified therein to its list of dependent nodes. For
example, as shown in Fig. 8B, when the identification packet from a storing node below is
received (Step S88) a storing node will add the node identified in the identification packet to its
list of dependent nodes (Step S90) and the process ends. The identification packet is not passed
any further back up the tree. Of course, the node making a record of its dependent nodes will
itself have already sent a packet identifying itself to the node on which it is dependent, and so
on until each node is aware of its dependent nodes. Each storing node needs only send
such an identifying message once, no matter how many test messages pass through it. Now, when a storing node sends the packets of data forming an actual package down the
tree, the storing node can determine whether positive acknowledgments have been received
from each of its dependent nodes acknowledging receipt of the package. As shown in Fig. 9,
after a storing node receives a positive acknowledgment (Step S92) a determination is made
whether positive acknowledgments have been received from all dependent nodes (Step S94). If
not all positive acknowledgments have been received (No, Step S94), the process returns to Step
S92. If all dependent nodes responded with positive acknowledgments (Yes, Step S94), a
determination is made whether this node is a target node. If the node is a target node (Yes, Step
S96), the positive acknowledgment is sent up the tree (Step S100) without deleting the package.
If not a target node (No, Step S96), the package can be deleted from memory (Step S98) prior
to the positive acknowledgment being sent up the tree (Step S100).
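Figures 8B and 9 together amount to the bookkeeping sketched below: identification packets populate a dependent-node list, and positive acknowledgments are aggregated against it before the node deletes its copy (unless it is itself a target) and acknowledges upward. Names and return values are assumptions.

```python
class DependentTracker:
    """Sketch of dependent-node bookkeeping and acknowledgment handling."""

    def __init__(self, is_target=False):
        self.is_target = is_target
        self.dependents = set()   # built from identification packets
        self.acked = set()
        self.package = {}         # this node's stored copy of the package

    def on_identification(self, node_id):   # Steps S88/S90
        self.dependents.add(node_id)        # not passed further up the tree

    def on_positive_ack(self, node_id):     # Step S92
        self.acked.add(node_id)
        if self.acked != self.dependents:   # Step S94: still waiting
            return None
        if not self.is_target:              # Step S96
            self.package.clear()            # Step S98: copy no longer needed
        return "ack"                        # Step S100: acknowledge upward

node = DependentTracker()
node.on_identification("T90")
node.on_identification("T91")
node.package = {0: b"pkt"}
assert node.on_positive_ack("T90") is None
assert node.on_positive_ack("T91") == "ack" and not node.package
```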
An example of this process will now be described by reference to Fig. 4. This example
assumes that the test message has been successfully distributed and that each storing node is
aware of its dependent nodes. This example also assumes a package consisting of several
packets is being sent to target nodes 85, 86, 90 and 91. When the packets are received at storing
node 80, they are stored, queued and forwarded to storing node 82. Storing node 82 stores and
queues the packets, forwarding them to storing target node 85 and target node 86. Storing target
node 85 stores and queues the packets, forwarding them to target nodes 90, 91. Upon receipt of
the complete package, target nodes 90 and 91 each send positive acknowledgments to storing
target node 85. Storing target node 85, also being a target node, will not delete its copy of the
package upon receipt of the positive acknowledgments. Storing target node 85 will then send a
positive acknowledgment to storing node 82, indicating that its dependent nodes successfully received the sent packets. Upon receipt of the complete package, target node 86 will also send a
positive acknowledgment to storing node 82. Upon receipt of the positive acknowledgments
from its dependent nodes, storing node 82 can delete its copy of the package. Storing node 82
will then send a positive acknowledgment up the tree to storing node 80 indicating that its
dependent nodes successfully received the sent packets. This continues up the tree to the
distribution node. When the distribution node receives the last positive acknowledgment from
its dependent nodes, the distribution node determines that the package has been received at each
target node.
Nodes receiving positive acknowledgments may also send acknowledgments of receipt
back down the tree to the dependent nodes in question, so that the dependent nodes are aware
the positive acknowledgments have been received. In this way, the dependent nodes do not
need to keep re-sending positive acknowledgments up the tree. It should be noted that this
technique can be used on the packet level, rather than the package level, or even a "sub-package
level" somewhere in between.
The present disclosure may be conveniently implemented using one or more
conventional general purpose digital computers and/or servers programmed according to the
teachings of the present specification. Appropriate software coding can readily be prepared
by skilled programmers based on the teachings of the present disclosure. The present
disclosure may also be implemented by the preparation of application specific integrated
circuits or by interconnecting an appropriate network of conventional components.
Numerous additional modifications and variations of the present disclosure are
possible in view of the above teachings. It is therefore to be understood that within the scope of
described herein.

Claims

WHAT IS CLAIMED IS:
1. A method for multicasting data on a network, the network including a plurality of nodes
for routing data, the nodes including connections forming a distribution tree, said method
comprising:
receiving at a storing node, data sent from a higher level node on the network;
storing the received data in memory at the storing node; and
transmitting the data stored at the storing node to lower level nodes on the network,
wherein any communication required for the transmission of such data to the lower level nodes
on the network is not passed further up the distribution tree.
2. A method as recited in claim 1, further comprising creating a list at the storing node
identifying the data transmitted to the lower level nodes on the network.
3. A method as recited in claim 2, further comprising transmitting the list to the lower level
nodes on the network.
4. A method as recited in claim 3, wherein each said lower level node compares data actually
received, to the data identified in the list to determine if all data arrived.
5. A method as recited in claim 4, wherein if a lower level node determines that at least a part
of the data did not arrive, a negative acknowledgment is sent back up the tree.
6. A method as recited in claim 5, wherein when the node that sent the data receives the
negative acknowledgment, it intercepts the negative acknowledgment and does not send the
negative acknowledgment any further up the tree.
7. A method as recited in claim 6, wherein when the node that sent the data receives the
negative acknowledgment, it determines what data was not received by the node below and
resends the at least part of the data not received back down the tree to the node that sent the
negative acknowledgment.
8. A method as recited in claim 1, wherein each storing node maintains a list of the lower
level nodes to which it is responsible for sending the data.
9. A method as recited in claim 8, wherein each lower level node sends a positive
acknowledgment back up the tree to the node responsible for sending the data to it, when the
data has been received.
10. A method as recited in claim 9, wherein after a storing node receives positive
acknowledgments from each of the nodes on its list of the lower level nodes to which it is
responsible for sending the data, the storing node deletes the data from memory.
11. A method as recited in claim 9, wherein after the storing node receives positive
acknowledgments from each of the nodes on its list of the lower level nodes to which it is responsible for sending the data, the storing node sends a positive acknowledgment up the tree
to a storing node responsible for sending it the data.
12. An apparatus for serving as a node on a network of nodes for routing data, the network
nodes including connections forming a distribution tree, said apparatus comprising:
an input for receiving data from a higher level node on the network;
at least one output for outputting the data to lower level nodes on the network; and
storage for storing the data, wherein the apparatus is responsible for transmitting all the
data it has stored to the lower level nodes, and wherein any communication required for the
transmission of such data is not passed further up the distribution tree.
13. A computer readable medium including computer executable code for multicasting data
on a network, the network including a plurality of nodes for routing data, the nodes including
connections forming a distribution tree, said computer readable medium comprising:
code for receiving, at a storing node, data sent from a higher level node on the network;
code for storing the received data in memory at the storing node; and
code for transmitting the data stored at the storing node to lower level nodes on the
network, wherein any communication required for the transmission of such data to the lower
level nodes on the network is not passed further up the distribution tree.
14. A networking system including a plurality of devices for serving as nodes on a network of
nodes for routing data, the network of nodes including connections forming a distribution tree, said networking system comprising:
a distribution server for sending data; and
at least one storage node, said storage node comprising,
an input for receiving data sent from the distribution server, from a higher level node on
the network,
at least one output for outputting the data to lower level nodes on the network, and
storage for storing the data, wherein the storage node is responsible for transmitting all
the data it has stored to the lower level nodes, and wherein any communication required for the
transmission of such data is not passed further up the distribution tree.
15. A system for multicasting data on a network, the network including a plurality of nodes
for routing data, the nodes including connections forming a distribution tree, said system
comprising:
means for receiving, at a storing node, data sent from a higher level node on the network;
means for storing the received data in memory at the storing node; and
means for transmitting the data stored at the storing node to lower level nodes on the
network, wherein any communication required for the transmission of such data to the lower
level nodes on the network is not passed further up the distribution tree.
16. An apparatus for serving as a node on a network of nodes for routing data, the network
nodes including connections forming a distribution tree, said apparatus comprising:
means for receiving data from a higher level node on the network;
means for outputting the data to lower level nodes on the network; and
means for storing the data, wherein the apparatus is responsible for transmitting all the
data it has stored to the lower level nodes, and wherein any communication required for the
transmission of such data is not passed further up the distribution tree.
PCT/US2000/035721 1999-12-30 2000-12-29 Method and apparatus for multi-tiered data multicasting WO2001050687A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU26124/01A AU2612401A (en) 1999-12-30 2000-12-29 Method and apparatus for multi-tiered data multicasting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17387299P 1999-12-30 1999-12-30
US60/173,872 1999-12-30

Publications (1)

Publication Number Publication Date
WO2001050687A1 true WO2001050687A1 (en) 2001-07-12

Family

ID=22633872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/035721 WO2001050687A1 (en) 1999-12-30 2000-12-29 Method and apparatus for multi-tiered data multicasting

Country Status (2)

Country Link
AU (1) AU2612401A (en)
WO (1) WO2001050687A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5355371A (en) * 1982-06-18 1994-10-11 International Business Machines Corp. Multicast communication tree creation and control method and apparatus
US5517494A (en) * 1994-09-30 1996-05-14 Apple Computer, Inc. Method and system of multicast routing for groups with a single transmitter
US5881246A (en) * 1996-06-12 1999-03-09 Bay Networks, Inc. System for generating explicit routing advertisements to specify a selected path through a connectionless network to a destination by a specific router
US5748736A (en) * 1996-06-14 1998-05-05 Mittra; Suvo System and method for secure group communications via multicast or broadcast

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1430655A1 (en) * 2001-09-06 2004-06-23 Ghizi Soft Co., Ltd. Method for generating casting path among participants for multicasting
EP1430655A4 (en) * 2001-09-06 2007-08-01 Ghizi Soft Co Ltd Method for generating casting path among participants for multicasting
EP1802049A1 (en) * 2004-10-28 2007-06-27 Huawei Technologies Co., Ltd. A method and system for controlling multimedia broadcast/multicast service session
EP1802049A4 (en) * 2004-10-28 2007-10-10 Huawei Tech Co Ltd A method and system for controlling multimedia broadcast/multicast service session

Also Published As

Publication number Publication date
AU2612401A (en) 2001-07-16

Similar Documents

Publication Publication Date Title
US11910037B2 (en) Layered multicast and fair bandwidth allocation and packet prioritization
JP2825120B2 (en) Method and communication network for multicast transmission
Kasera et al. Scalable fair reliable multicast using active services
CA2151072C (en) Method of multicasting
US5519704A (en) Reliable transport protocol for internetwork routing
US20050243722A1 (en) Method and apparatus for group communication with end-to-end reliability
Jones et al. Protocol design for large group multicasting: the message distribution protocol
EP1139602A1 (en) Method and device for multicasting
US20050074010A1 (en) Method and apparatus for exchanging routing information in distributed router system
WO2001050687A1 (en) Method and apparatus for multi-tiered data multicasting
Gumbold Software distribution by reliable multicast
Sadok et al. A reliable subcasting protocol for wireless environments
Caples et al. Multidestination protocols for tactical radio networks
Venkatesulu et al. Efficient fault-tolerant reliable broadcast in an extended LAN
KR20140002040A (en) Technique for managing communications at a router
Sharma Computer Network
Brandt Reliable multicast protocols and their application on the Green Bank Telescope
Schottmüller Multiparty File Transfer over the Internet Stream Protocol, Version 2 (ST-II)
Kasera et al. Scalable fair multicast using active services
Ashrafuzzaman et al. REDUCING CONGESTION COLLAPSE AND PROMOTING FAIRNESS IN THE INTERNET BY OPTIMIZING OF SCTP.
Iyer Broadband & TCP/IP fundamentals

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP