US20110217953A1 - Method, system and apparatus for managing push data transfers - Google Patents
- Publication number
- US20110217953A1 (application Ser. No. 12/819,335)
- Authority
- US
- United States
- Prior art keywords
- push data
- server
- push
- transfer
- data transfer
- Prior art date
- Legal status
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/53—Network services using third party service providers
- H04L67/55—Push-based network services
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/28—Timers or timing mechanisms used in protocols
Definitions
- the present specification relates generally to telecommunications and more particularly relates to a method, system and apparatus for managing push data transfers.
- Computing devices that connect to servers frequently connect to those servers via one or more network intermediaries, such as a mobile telecommunication carrier, an enterprise, or a manufacturer of the computing device. Increasingly data is pushed to those computing devices from those servers.
- FIG. 1 shows a schematic representation of a system for managing push data transfers.
- FIG. 2 shows a flowchart depicting a method for managing push data transfers.
- FIG. 3 shows an exemplary method of pushing data in conjunction with the method of FIG. 2 .
- FIG. 4 shows an exemplary method of determining a time period in relation to block 340 of the method of FIG. 3 .
- An aspect of this specification provides a method for managing push data transfers comprising: receiving content at a push data server from a content server for push delivery to a computing device; beginning a push data transfer of the content to the computing device from the push data server; incrementing a counter for use in determining a maximum number of concurrent push data transfers that can be effected from the push data server; and decrementing the counter after the push data transfer is determined to be actually completed or deemed to be completed.
- the method can comprise deeming the push data transfer to be completed after a predefined time limit has been reached; receiving, at the push data server, transfer parameters, the transfer parameters can include data representing parameters of a historic successfully completed push data transfer; and determining the predefined time limit by comparing the transfer parameters.
- the transfer parameters can further include at least one of a beginning time and ending time, a transfer duration, a content size, server specifications, and link specifications.
- the push data transfer can be determined to be actually completed when an acknowledgement is received, by the push data server, confirming completion of the push data transfer.
- the comparing can include applying a probability model such that the push data server can infer with a high degree of confidence that the push data transfer is actually completed.
- the applying a probability model can include: compiling a table of values of the transfer parameters; and determining a mean and standard deviation for the transfer parameters that best match an expected duration of time to be used as the predefined time limit.
- the applying a probability model can include employing a cumulative distribution function to determine a minimum time duration to be used as the predefined time limit.
- the method can comprise: receiving, at the push data server, a push data transfer request; determining a capacity; comparing the capacity to the counter; and when the capacity equals the counter, queuing the push data transfer request.
- the beginning the push data transfer can start when the capacity is greater than the counter.
- the queuing the push data transfer request can include maintaining, at the push data server, a record of push data transfer requests.
- the queuing the push data transfer request can include rejecting the push data request.
- the rejecting the push data transfer request can include sending an error message to the content server.
- the method can comprise tracking of successful push data transfers where acknowledgements of completions are received at the push data server.
- a push data server comprising a processor configured to: receive content, from a content server, for push delivery to a computing device; begin a push data transfer of the content to the computing device; increment a counter for use in determining a maximum number of concurrent push data transfers that can be effected from the push data server; and decrement the counter after the push data transfer is determined to be actually completed or deemed to be completed.
- the processor can be further configured to deem the push data transfer to be completed after a predefined time limit has been reached.
- the processor can be further configured to: receive transfer parameters, the transfer parameters can include data representing parameters of a historic successfully completed push data transfer; and determine the predefined time limit by comparing the transfer parameters.
- the transfer parameters can further include at least one of a beginning time and ending time, a transfer duration, a content size, server specifications, and link specifications.
- the processor can be further configured to determine that the push data transfer is actually completed when the processor receives an acknowledgement confirming completion of the push data transfer.
- a computer program product for a push data server, comprising a computer readable storage medium having a computer-readable program code adapted to be executable on the push data server to implement a method for managing push data transfers, the method comprising: receiving content at a push data server from a content server for push delivery to a computing device; beginning a push data transfer of the content to the computing device from the push data server; incrementing a counter for use in determining a maximum number of concurrent push data transfers that can be effected from the push data server; and decrementing the counter after the push data transfer is determined to be actually completed or deemed to be completed.
- a method, system, and apparatus for managing push data transfers whereby in one implementation at least one push data server is situated on a network between a plurality of content servers and a plurality of computing devices.
- the push data server is configured to only perform a maximum number of concurrent data transfers of content between the content servers and the plurality of computing devices.
- the push data server is configured to deem that a particular push data transfer has been completed even if no express acknowledgment of such completion is ever received at the push data server, thereby reducing the likelihood of failure of push data transfers due to a misperception that the maximum number of concurrent data transfers has been reached.
- System 50 comprises a plurality of content servers 54 - 1 . . . 54 - n . (Hereafter, generically each is referred to as server 54 , and collectively they are referred to as servers 54 .)
- This nomenclature is used elsewhere herein.
- Each server 54 is configured to host its own content 56 and to deliver that content 56 to a wide area network (WAN) infrastructure 58 via a respective link 62 - 1 . . . 62 - n (Hereafter, generically each is referred to as link 62 , and collectively they are referred to as links 62 ).
- System 50 also comprises a plurality of computing devices 66 - 1 . . . 66 - p . (Hereafter, generically each is referred to as computing device 66 , and collectively they are referred to as computing devices 66 .)
- Computing devices 66 are configured to connect to WAN infrastructure 58 via their own respective links 70 - 1 . . . 70 - p (Hereafter, generically each is referred to as link 70 , and collectively they are referred to as links 70 ), and, as will be discussed in further detail below, are configured to receive content 56 via a pushed data transfer.
- WAN infrastructure 58 also comprises, amongst other things, a main intermediation infrastructure 74 and a plurality of secondary intermediation servers 78 - 1 . . . 78 - l . (Hereafter, generically each is referred to as server 78 , and collectively they are referred to as servers 78 ). Links 82 - 1 . . . 82 - l (hereafter, generically each is referred to as link 82 , and collectively they are referred to as links 82 ) connect intermediation infrastructure 74 and secondary intermediation servers 78 .
- WAN infrastructure 58 can comprise, or connect to, other computer equipment which is not shown and which may be configured to provide services, applications, other content, or otherwise communicate with, for whatever purpose, to each computing device 66 .
- WAN infrastructure 58 can also comprise, or be part of, the Internet.
- Main intermediation infrastructure 74 comprises at least one push server 86 , and may also comprise a plurality of additional servers to fulfill other intermediation functions between computing devices 66 and other computer equipment, not shown.
- At least one push server 86 is configured to retrieve content 56 and to simultaneously push retrieved content 56 to a plurality of computing devices 66 , subject to a maximum predefined number of transfers.
- main intermediation infrastructure 74 can be based on the mobile data services (MDS) component from Research In Motion Inc. of Waterloo, Canada, while the secondary intermediation servers 78 can be based on a BlackBerryTM Enterprise Server (BES) or a BlackBerryTM Internet Server (BIS) also from Research In Motion Inc. of Waterloo, Canada.
- main intermediation infrastructure 74 and secondary intermediation servers 78 can be omitted in lieu of including at least one push server 86 within WAN infrastructure 58 .
- Links 62 , links 70 , links 82 can be based on any wired structure or wireless structures or combinations thereof.
- computing devices 66 are wireless, and therefore at least a portion of each link 70 comprises a wireless base station, so that the final portion of link 70 that connects to each computing device 66 uses one or more wireless protocols, including but not limited to Global System for Mobile communication (“GSM”), General Packet Relay Service (“GPRS”), Enhanced Data Rates for GSM Evolution (“EDGE”), 3G, High Speed Packet Access (“HSPA”), Code Division Multiple Access (“CDMA”), Evolution-Data Optimized (“EVDO”), Institute of Electrical and Electronic Engineers (IEEE) standard 802.11, BluetoothTM or any of their variants or successors. It is also contemplated that each computing device 66 can include multiple radios to accommodate the different protocols that may be used to implement that final portion of link 70 .
- each link 70 that connects to its respective server 78 is typically wired and comprises a wired backhaul via a T-carrier link (e.g. T1, T3) or E-carrier link or the like.
- links 82 and 62 are typically wired and can also comprise a backhaul, or can otherwise be effected via the Internet.
- the nature of each link 62 , 70 , and 82 is not particularly limited.
- each server 54 , each server 78 and the at least one server 86 can each be implemented using an appropriately configured hardware computing environment, comprising at least one processor, volatile storage (e.g. random access memory), non-volatile storage (e.g. hard disk drive), and at least one network interface all interconnected by a bus. Other components can also be connected to the processor via the bus, such as input devices and output devices.
- the hardware computing environment of any particular server is configured to execute an operating system and other software such that the servers are ultimately configured to perform according to the teachings herein.
- each server can itself be implemented as several different servers to provide redundancy or load balancing, or alternatively one or more servers in FIG. 1 can be consolidated into a single server.
- each computing device 66 can vary. Typically, however, each computing device 66 includes a processor which is configured to receive input from input devices (e.g. a trackball, a joystick, a touch-pad, a touch-screen, a keyboard, a microphone) and to control various output devices (e.g. a speaker, a display, a light emitting diode (LED) indicator, a vibration unit).
- each device 66 also includes volatile storage, which can be implemented as random access memory, and non-volatile storage, which can be implemented using flash memory or the like, or can include other programmable read only memory (“PROM”) technology or can include read only memory (“ROM”) technology or can include a removable “smart card” or can comprise combinations of the foregoing.
- volatile and non-volatile storage are non-limiting examples of computer readable media capable of storing programming instructions that are executable on the processor.
- Each device 66 also includes a network interface, such as a wireless radio, for connecting device 66 to its respective link 70 .
- Each device 66 also includes a battery which is typically rechargeable and provides power to the components of computing device 66 .
- each client 66 is configured to have any content 56 pushed to its non-volatile storage unit via push server 86 and its respective secondary intermediation server 78 .
- Method 200 can be implemented using system 50 or variations thereof. In particular, method 200 can also be implemented by push server 86 .
- Block 205 comprises determining a capacity.
- the capacity represents the maximum number of push transfers that are to be simultaneously managed using method 200 .
- the capacity can be fixed or predefined as part of architecting system 50 , or the capacity may change in real time, during or even throughout each performance of method 200 .
- the capacity may be based on a level of consumption of resources throughout system 50 , reflecting the maximum capacity overall which results from whichever component in system 50 has the least capacity.
- the available resources within system 50 are dynamic, particularly where the components in system 50 have multiple functions beyond the function of effecting push data transfers of content 56 from one or more servers 54 to one or more computing devices 66 .
- different branches of system 50 may have different capacities.
- link 62 - 1 , link 82 - 2 and link 70 - 1 which would be used to push content 56 - 1 to device 66 - 1 in their aggregate may have a first capacity, while link 62 - n , link 82 - l and link 70 - k in their aggregate may have a second capacity different from the first capacity.
- Such link capacities may be based on bandwidth, for example.
- servers 54 may each have different capacities, whereby server 54 - n has a different capacity than server 54 - 1 .
- second server 78 - l may have a different capacity than server 78 - 1 .
- push server 86 will have its own capacity regardless of the capacity of other components in system 50 .
- each device 66 will have its own capacity to accept push data transfers.
- Such server 86 and device 66 capacities may be based on processor speed, amount of memory, network interface limitations, and the number of other processing threads that are executing in addition to method 200 .
- Other metrics for capacity will now occur to those skilled in the art.
- individual and aggregate capacities may change according to time of day or based on resource allocation priorities, or other factors. Accordingly, in more complex implementations of system 50 , block 205 can comprise determining different capacities corresponding to different content 56 push data transfer scenarios, and managing the number of push transfers according to those different capacities.
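The least-capable-component rule described above can be sketched as follows. This is a hypothetical illustration, not language from the specification; the function name and capacity values are assumptions, under the assumption that a push path's aggregate capacity is bounded by its weakest component:

```python
def aggregate_capacity(component_capacities):
    """Sketch of block 205: the aggregate capacity of one push path
    (e.g. link 62-1, link 82-1 and link 70-1 taken together) is bounded
    by whichever component has the least capacity."""
    return min(component_capacities)


# A different capacity can be determined per branch of system 50:
branch_1 = aggregate_capacity([5, 2, 9])   # limited by its middle component
branch_2 = aggregate_capacity([4, 4, 3])   # limited by its last component
```

In a more complex implementation, block 205 would evaluate this per push data transfer scenario, since each branch may yield a different result.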
- an application programming interface can be provided at one or more of servers 54 , server 86 , or server 78 , or other suitable location in system 50 , where an administrator can manually set the capacity to a predefined level, including a level of “zero”, so that at a local level the resources that are consumed by push data transfers can be manually controlled.
- Block 210 comprises determining whether the capacity at block 205 has been reached.
- push server 86 can examine a counter or other software object that represents the number of data transfers currently being pushed from servers 86 to devices 66 . If the capacity defined at block 205 has been reached (e.g. according to the specific example referenced above, a counter or software object indicates that two or more push data transfers are currently being effected) then method 200 advances to block 215 at which point any further push data transfer requests are queued, and then method 200 cycles back to block 205 .
- the means by which such queuing is effected is not particularly limited.
- push server 86 can be configured to locally maintain a record of push data transfer requests, or push server 86 can be configured, at block 215 , to reject, ignore or drop push data requests, with or without the return of an error message, leaving it to the requesting network component to reissue the push data request.
- push data transfer requests reflect any instruction to push server 86 to transfer content 56 from a given server 54 to a given device 66 ; such instructions can originate from any component in system 50 , and typically do not originate from the device 66 itself.
- a request to push content 56 originates from its own server 54 : for example, a request to push content 56 - 1 to a given device 66 originates from server 54 - 1 itself, while a request to push content 56 - n to a given device 66 originates from server 54 - n.
- the source of a push data request can be reflective of the nature or type of content 56 , but in any event the nature or type of content 56 is not particularly limited.
- content 56 - 1 can reflect a weather report that is periodically pushed by server 54 - 1 to that device, thereby providing automatic weather report updates on device 66 - 1 .
- Other types of content 56 can include, but are certainly not limited to, news, sports, traffic, stock, instant messages, social networking status updates, videos, music, chats, software applications, firmware updates, services and the like.
- Block 220 comprises determining if there are any outstanding push requests. Again, in the specific example contemplated in relation to system 50 , block 220 is effected by server 86 waiting for a new push data transfer request, or, where server 86 maintains queues of unprocessed push data transfer requests, then examining such a queue to determine if any push data requests are within that queue. A “No” determination at block 220 results in looping at block 220 , effectively placing server 86 into a “wait” state until a push data transfer request is received. A “yes” determination at block 220 leads to block 225 .
- Block 225 comprises initiating a push data transfer request.
- block 225 is effected by server 86 commencing a push data transfer and without actually waiting for such a transfer to complete, method 200 cycles back to block 205 where method 200 begins anew. It can be noted that when block 225 is reached a number of times that is equal to the capacity determined at block 205 , then “yes” determinations will be made at block 210 resulting in cessation of processing of further push data transfer requests.
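The dispatch loop of blocks 205 through 225 can be sketched as follows. This is a minimal illustration under stated assumptions, not the specification's implementation: the class, its attribute names, and the use of an in-memory queue are all hypothetical.

```python
from collections import deque

class PushDispatcher:
    """Sketch of the method 200 dispatch loop (all names hypothetical)."""

    def __init__(self, capacity):
        self.capacity = capacity   # block 205: maximum concurrent transfers
        self.active = 0            # counter examined at block 210
        self.queue = deque()       # block 215: queued push data transfer requests

    def request_push(self, request):
        """Blocks 210-225: begin the transfer, or queue the request
        when the capacity has been reached."""
        if self.active >= self.capacity:   # block 210: capacity reached
            self.queue.append(request)     # block 215: queue the request
            return False
        self.active += 1                   # counter incremented (block 325)
        return True                        # block 225: transfer initiated
```

A companion routine, such as method 300 described below, would decrement `active` and drain `queue` as transfers are completed or deemed completed.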
- a parallel method is performed which monitors progress of push data transfers and tracks (or possibly deems) completion of push data transfers so as to reduce the likelihood of method 200 being prevented from reaching block 220 , due to a persistent “yes” determination being made at block 210 .
- Method 300 as shown in FIG. 3 provides a non-limiting example of a method that can be performed in parallel to method 200 so as to reduce the likelihood of method 200 being prevented from reaching block 220 from block 210 .
- Block 315 comprises receiving content.
- block 315 is effected by push data server 86 receiving a particular item of content 56 from its respective server 54 .
- a pending data transfer request comprises a request to push content 56 - 1 to device 66 - 1 .
- content 56 - 1 is received locally at push data server 86 .
- Block 320 comprises beginning a push data transfer of the content received at block 315 .
- block 320 is effected by push data server 86 initiating a transfer of content 56 - 1 from push data server 86 to device 66 - 1 , making appropriate use of link 82 - 1 and server 78 - 1 and link 70 - 1 .
- Block 325 comprises incrementing a counter.
- the counter tracks the recorded number of active push data transfers and is accessible at block 210 in order to determine the number of currently active push data transfers. In accordance with the example discussed above, the counter will therefore increase to “one” from “zero” during this invocation of block 325 , such that a parallel performance of block 210 will still result in a “No” determination leading method 200 to block 220 from block 210 .
- Block 330 comprises determining if the push data transfer that was initiated at block 320 is complete.
- a “yes” determination is made at block 330 when an acknowledgment is received from device 66 - 1 , or server 78 - 1 , at server 86 , expressly confirming that content 56 - 1 has been received by device 66 - 1 .
- the duration of time to complete the push data transfer can vary according to the size of content 56 - 1 , and the amount of bandwidth available between push data server 86 and device 66 - 1 , and accordingly it is contemplated that a “no” determination may be made at block 330 .
- method 300 advances to block 340 at which point a determination is made as to whether a predefined time limit has been reached.
- a time limit corresponds to an expected time for the transfer to complete.
- Block 335 which can be reached from a “yes” determination at either block 330 or block 340 , comprises decrementing the counter that was incremented at block 325 .
- the counter will therefore decrease to “zero” from “one” during this invocation of block 335 , such that a parallel performance of block 210 will also result in a “No” determination leading method 200 to block 220 from block 210 .
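The increment/ack-or-deem/decrement sequence of blocks 325 through 335 can be sketched as follows. The callback names and the busy-wait structure are assumptions made for illustration; the specification does not prescribe this shape.

```python
import time

def run_push_transfer(dispatcher, send, ack_received, time_limit):
    """Sketch of method 300: the counter is incremented at block 325 and
    decremented at block 335 once the transfer is actually completed
    (block 330) or deemed completed (block 340).

    dispatcher   -- any object with an integer `active` counter (hypothetical)
    send         -- callable that begins the push data transfer (block 320)
    ack_received -- callable returning True once an acknowledgement arrives
    time_limit   -- predefined time limit for the block 340 determination
    """
    dispatcher.active += 1                    # block 325: increment counter
    send()                                    # block 320: begin the transfer
    started = time.monotonic()
    deemed = False
    while not ack_received():                 # block 330: actually complete?
        if time.monotonic() - started >= time_limit:
            deemed = True                     # block 340: time limit reached
            break
    dispatcher.active -= 1                    # block 335: decrement counter
    return deemed
```

Either exit path decrements the counter, which is what keeps a lost acknowledgement from permanently occupying one of the concurrent-transfer slots checked at block 210.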
- When the third data transfer request is received at server 86 , method 200 will reach block 215 and the third data transfer request will be queued until one of the first two is determined to be actually completed at block 330 , or one of the first two is deemed to be completed at block 340 .
- Those skilled in the art will now also appreciate one of the advantages of the present specification: in the event an express acknowledgement of a push data transfer is never received at block 330 , push data transfers from server 86 can still continue, because such completion is deemed to have occurred through the performance of block 340 .
- method 200 and method 300 also regulate the utilization of network resources in system 50 , so that push data transfers (or other functions of system 50 ) do not fail because those resources are overwhelmed by an overabundance of push data transfers.
- the means by which the time-limit used in block 340 can be determined is not particularly limited.
- a simple predefined time period can be preselected.
- the time-limit may dynamically change according to historical behavior of system 50 , including but not limited to tracking the history of acknowledged data transfers.
- Method 400 in FIG. 4 shows one non-limiting example of how the time limit in block 340 can be determined using such historical tracking.
- Method 400 can be run from time to time, or in real time in conjunction with method 200 and method 300 to constantly update a time limit that will be used for specific performances of method 300 .
- Block 405 comprises receiving transfer parameters.
- Block 405 thus comprises receiving, at server 86 or at another computational resource, data representing parameters of a historic, successfully completed push data transfer for which an express acknowledgement was actually received confirming completion of that specific push data transfer.
- Server 86 can, for example, be configured to input data into block 405 every time a “yes” determination is reached at block 330 .
- Such data at block 405 can comprise a simple identification of the content 56 that was successfully transferred, as well as the time it took to complete that transfer.
- the data received at block 405 can further comprise an identification of the specific links, servers or other network resources involved in the transfer.
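One possible shape for such a per-transfer record is sketched below. The field names are hypothetical; the specification only says what kinds of parameters may be captured.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRecord:
    """Hypothetical record received at block 405 for one acknowledged
    (actually completed) push data transfer."""
    content_id: str                     # which content 56 was transferred
    duration_s: float                   # time taken to complete the transfer
    content_size: int = 0               # size of the content, in bytes
    link_ids: tuple = field(default_factory=tuple)  # links/servers involved
```

A table of such records, stored at block 410, is what the comparisons at blocks 415 and 420 operate on.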
- Block 410 comprises storing the transfer parameters received at block 405 in a table or other database resource.
- Block 415 comprises comparing all of the parameters in the table or other database resource, and block 420 comprises determining a time limit to be used at block 340 based on the comparison from block 415 .
- the operations used to perform the comparison at block 415 and the determination at block 420 are not particularly limited.
- block 415 and block 420 apply a probability model to the performance of system 50 such that server 86 can infer with a high degree of confidence that data transfers are actually completed, even though no completion status (i.e. a “yes” determination) has been determined at block 330 .
- the time limit that is established for block 340 may be determined by historical tracking of successful push data transfers where acknowledgements of such completions are actually received at block 330 and thereby result in a “yes” determination at block 330 .
- a table of values can be compiled of historical data from various performances of block 405 and block 410 that comprises the beginning time and ending time of push data transfers, and can also comprise transfer duration, content size, which server 78 is used, and which of links 62 , 82 and 70 were utilized. Where a plurality of push data servers 86 are employed within a variation of system 50 , the table of values can additionally comprise an indication of which push data server 86 was employed.
- server 86 or other computational device can be employed to determine a mean and standard deviation for the sample data that best matches an expected duration of time, which can then be employed to establish a time limit for use at block 340 in relation to a particular push data transfer during a particular cycle of method 300 .
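The mean-and-standard-deviation approach might be sketched as follows, assuming (as the specification suggests below) that the sample durations roughly fit a normal distribution. The multiplier `k` is a hypothetical tuning choice, not a value from the specification.

```python
import statistics

def time_limit_from_history(durations_s, k=2.0):
    """Sketch of blocks 415-420: derive the block 340 time limit from the
    mean and standard deviation of acknowledged transfer durations.
    With k=2, roughly 97.7% of a normal distribution falls below the limit."""
    mean = statistics.mean(durations_s)
    std = statistics.stdev(durations_s) if len(durations_s) > 1 else 0.0
    return mean + k * std
```

The resulting limit could then be supplied to block 340 for a particular cycle of method 300.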
- a Cumulative Distribution Function (CDF) can be employed to determine a minimum time duration that would account for the minimum cumulative probability needed to assume completion of a push data transfer that is initiated at block 320 . Accordingly, the time limit established for block 340 can vary between each invocation of method 300 , according to the specific circumstances of a particular push data transfer.
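Under a normality assumption, the CDF approach amounts to inverting the distribution: find the smallest time limit t such that P(duration ≤ t) reaches the required cumulative probability. A sketch using `statistics.NormalDist` (the choice of a normal model and the 0.95 default are assumptions for illustration):

```python
from statistics import NormalDist

def min_duration_for_confidence(mean_s, std_s, confidence=0.95):
    """Sketch of the CDF approach: the minimum time limit t for which
    P(transfer duration <= t) >= confidence, assuming transfer
    durations are normally distributed with the given mean and
    standard deviation."""
    return NormalDist(mu=mean_s, sigma=std_s).inv_cdf(confidence)
```

For example, with a mean of 10 seconds and a standard deviation of 2 seconds, a 95% confidence level yields a limit a little above 13 seconds.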
- sample data recorded in a table for a system that is reflective of system 50 substantially matches a normal distribution.
- the inventor further believes that versions of system 50 where statistical calculations generate data that do not fit a normal distribution can still be used to generate a time limit for block 340 . In such a situation, instead of calculating the mean and standard deviation, the time limit can be based on about the 95th percentile value for the applicable sample data.
- the time limit established for block 340 may be based on a value that accounts for only about 95% of all download durations in the above-referenced table of values. The remaining roughly 5% of transfers, which exceed that value, may be deemed an acceptable failure risk, permitting the maximum to be exceeded, but only by a fairly limited amount.
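The percentile fallback for non-normal samples can be sketched with the nearest-rank method. The exact percentile definition is an assumption here; the specification says only "about the 95th percentile".

```python
import math

def percentile_time_limit(durations_s, pct=95.0):
    """Sketch of the non-normal fallback: take roughly the 95th-percentile
    duration from the table of values as the block 340 time limit,
    using the nearest-rank method."""
    ordered = sorted(durations_s)
    rank = math.ceil(pct / 100.0 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank - 1, 0)]
```

About 5% of historical transfers took longer than the returned value, matching the acceptable failure risk described above.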
Description
- The present specification claims priority from U.S. Provisional Patent Application 61/310,016 filed Mar. 3, 2010, the contents of which are incorporated herein by reference.
- The transfer parameters can further include at least one of a beginning time and ending time, a transfer duration, a content size, server specifications, and link specifications.
- The processor can be further configured to determine that the push data transfer is actually completed when the processor receives an acknowledgement confirming completion of the push data transfer.
- Another aspect of this specification provides a computer program product, for a push data server, comprising a computer readable storage medium having a computer-readable program code adapted to be executable on the push data server to implement a method for managing push data transfers, the method comprising: receiving content at a push data server from a content server for push delivery to a computing device; beginning a push data transfer of the content to the computing device from the push data server; incrementing a counter for use in determining a maximum number of concurrent push data transfers that can be effected from the push data server; and decrementing the counter after the push data transfer is determined to be actually completed or deemed to be completed.
- A method, system, and apparatus for managing push data transfers is provided whereby, in one implementation, at least one push data server is situated on a network between a plurality of content servers and a plurality of computing devices. The push data server is configured to perform no more than a maximum number of concurrent data transfers of content between the content servers and the plurality of computing devices. The push data server is configured to deem that a particular push data transfer has been completed even if no express acknowledgment of such completion is ever received at the push data server, thereby reducing the likelihood of failure of push data transfers due to a misperception that the maximum number of concurrent data transfers has been reached.
- Referring now to FIG. 1, a system for managing pushed data transfers is indicated generally at 50. System 50 comprises a plurality of content servers 54-1 . . . 54-n. (Hereafter, generically each is referred to as server 54, and collectively they are referred to as servers 54. This nomenclature is used elsewhere herein.) Each server 54 is configured to host its own content 56 and to deliver that content 56 to a wide area network (WAN) infrastructure 58 via a respective link 62-1 . . . 62-n (hereafter, generically each is referred to as link 62, and collectively they are referred to as links 62). -
System 50 also comprises a plurality of computing devices 66-1 . . . 66-p. (Hereafter, generically each is referred to as computing device 66, and collectively they are referred to as computing devices 66.) Computing devices 66 are configured to connect to WAN infrastructure 58 via their own respective links 70-1 . . . 70-p (hereafter, generically each is referred to as link 70, and collectively they are referred to as links 70), and, as will be discussed in further detail below, are configured to receive content 56 via a pushed data transfer. -
WAN infrastructure 58 also comprises, amongst other things, a main intermediation infrastructure 74 and a plurality of secondary intermediation servers 78-1 . . . 78-l. (Hereafter, generically each is referred to as server 78, and collectively they are referred to as servers 78.) Links 82-1 . . . 82-l (hereafter, generically each is referred to as link 82, and collectively they are referred to as links 82) connect intermediation infrastructure 74 and secondary intermediation servers 78. -
WAN infrastructure 58 can comprise, or connect to, other computer equipment which is not shown and which may be configured to provide services, applications, or other content, or to otherwise communicate, for whatever purpose, with each computing device 66. WAN infrastructure 58 can also comprise, or be part of, the Internet. -
Main intermediation infrastructure 74 comprises at least one push server 86, and may also comprise a plurality of additional servers to fulfill other intermediation functions between computing devices 66 and other computer equipment, not shown. At least one push server 86, as will be discussed below, is configured to retrieve content 56 and to simultaneously push retrieved content 56 to a plurality of computing devices 66, subject to a maximum predefined number of transfers. - In one non-limiting implementation,
main intermediation infrastructure 74 can be based on the mobile data services (MDS) component from Research In Motion Inc. of Waterloo, Canada, while the secondary intermediation servers 78 can be based on a BlackBerry™ Enterprise Server (BES) or a BlackBerry™ Internet Server (BIS), also from Research In Motion Inc. of Waterloo, Canada. Again, these are non-limiting examples, and in other implementations, for example, main intermediation infrastructure 74 and secondary intermediation servers 78 can be omitted in lieu of including at least one push server 86 within WAN infrastructure 58. -
Links 62, links 70, and links 82 can be based on any wired structures or wireless structures or combinations thereof. Typically, though not necessarily, computing devices 66 are wireless, and therefore at least a portion of link 70 comprises a wireless base station, so that the final portion of link 70 that connects to each computing device 66 uses one or more wireless protocols, including but not limited to Global System for Mobile communication (“GSM”), General Packet Relay Service (“GPRS”), Enhanced Data Rates for GSM Evolution (“EDGE”), 3G, High Speed Packet Access (“HSPA”), Code Division Multiple Access (“CDMA”), Evolution-Data Optimized (“EVDO”), Institute of Electrical and Electronic Engineers (IEEE) standard 802.11, Bluetooth™ or any of their variants or successors. It is also contemplated that each computing device 66 can include multiple radios to accommodate the different protocols that may be used to implement that final portion of link 70. - The portion of each
link 70 that connects to its respective server 78 is typically wired and comprises a backhaul wired via a T-carrier link (e.g. T1, T3) or E-carrier link or the like. By the same token, links 82 and 62 are typically wired and can also comprise a backhaul or can otherwise be effected via the Internet. In general, the nature of each link is not particularly limited. - It is to be understood that each
server 54, each server 78 and the at least one server 86 can each be implemented using an appropriately configured hardware computing environment, comprising at least one processor, volatile storage (e.g. random access memory), non-volatile storage (e.g. a hard disk drive), and at least one network interface, all interconnected by a bus. Other components can also be connected to the processor via the bus, such as input devices and output devices. Likewise it is to be understood that the hardware computing environment of any particular server is configured to execute an operating system and other software such that the servers are ultimately configured to perform according to the teachings herein. Furthermore, it will be understood that each server can itself be implemented as several different servers to provide redundancy or load balancing, or alternatively one or more servers in FIG. 1 can be consolidated into a single server. - Like
servers 54, the structure and features of each computing device 66 can vary. Typically, however, each computing device 66 includes a processor which is configured to receive input from input devices (e.g. a trackball, a joystick, a touch-pad, a touch-screen, a keyboard, a microphone) and to control various output devices (e.g. a speaker, a display, a light emitting diode (LED) indicator, a vibration unit). The processor is also connected to volatile storage, which can be implemented as random access memory, and non-volatile storage, which can be implemented using flash memory or the like, or can include other programmable read only memory (“PROM”) technology, or can include read only memory (“ROM”) technology, or can include a removable “smart card”, or can comprise combinations of the foregoing. Those skilled in the art will now recognize that such volatile and non-volatile storage are non-limiting examples of computer readable media capable of storing programming instructions that are executable on the processor. - Each
device 66 also includes a network interface, such as a wireless radio, for connecting device 66 to its respective link 70. Each device 66 also includes a battery, which is typically rechargeable and provides power to the components of computing device 66. Collectively, one can view the processor and storage of each device 66 as a microcomputer. It is now apparent that each device 66 can be based on the structure and functionality of a portable wireless device, such as a BlackBerry handheld device, but it is to be stressed that this is a purely non-limiting exemplary device, as device 66 could also be based on any type of client computing device including portable wireless devices from other manufacturers, desktop computers, laptop computers, cellular telephones and the like. - The microcomputer implemented on
client 66 is thus configured to store and execute the requisite BIOS, operating system and applications to provide the desired functionality of client 66. In a present embodiment, each client 66 is configured to have any content 56 pushed to its non-volatile storage unit via push server 86 and its respective secondary intermediation server 78. - Referring now to
FIG. 2, a method for managing push data transfers is represented in the form of a flowchart and indicated generally at 200. Method 200 can be implemented using system 50 or variations thereof. In particular, method 200 can also be implemented by push server 86. -
Block 205 comprises determining a capacity. The capacity represents the maximum number of push transfers that are to be simultaneously managed using method 200. The capacity can be fixed or predefined as part of architecting system 50, or the capacity may change in real time, during or even throughout each performance of method 200. Generally, the capacity may be based on a level of consumption of resources throughout system 50, reflecting the overall maximum capacity, which results from whichever component in system 50 has the least capacity. Those skilled in the art will recognize that the available resources within system 50 are dynamic, particularly where the components in system 50 have multiple functions beyond the function of effecting push data transfers of content 56 from one or more servers 54 to one or more computing devices 66. Furthermore, different branches of system 50 may have different capacities. For example, link 62-1, link 82-2 and link 70-1, which would be used to push content 56-1 to device 66-1, may in their aggregate have a first capacity, while link 62-n, link 82-l and link 70-k may in their aggregate have a second capacity different from the first capacity. Such link capacities may be based on bandwidth, for example. Furthermore, in addition to links having different capacities, servers 54 may each have different capacities, whereby server 54-n has a different capacity than server 54-1. By the same token, server 78-l may have a different capacity than server 78-1. In any event, push server 86 will have its own capacity regardless of the capacity of other components in system 50. Also, each device 66 will have its own capacity to accept push data transfers. Such server 86 and device 66 capacities may be based on processor speed, amount of memory, network interface limitations, and the number of other processing threads that are executing in addition to method 200. Other metrics for capacity will now occur to those skilled in the art.
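The aggregate-capacity idea above can be sketched in a few lines: the capacity of a delivery path is bounded by its least-capable component. This is an illustrative sketch only, not the patent's implementation; the component names and numbers below are hypothetical.

```python
# Hypothetical sketch: a path's push capacity is the minimum of the
# capacities of the components it traverses (links, servers, device).
def effective_capacity(component_capacities):
    """Return the aggregate capacity of one delivery path."""
    return min(component_capacities)

# e.g. a path through link 62-1, link 82-2, link 70-1 and push server 86,
# where each component tolerates a different number of concurrent transfers
path = {"link_62_1": 8, "link_82_2": 5, "link_70_1": 3, "push_server_86": 10}
capacity = effective_capacity(path.values())  # bounded by the weakest link
```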
Furthermore, individual and aggregate capacities may change according to time of day or based on resource allocation priorities, or other factors. Accordingly, in more complex implementations of system 50, block 205 can comprise determining different capacities corresponding to different content 56 push data transfer scenarios, and managing the number of push transfers according to those different capacities. However, for purposes of providing a simplified (but non-limiting) illustration of method 200, it will be assumed that a maximum capacity of two is determined at block 205. In a further variation, which can be provided in addition to, or in lieu of, the foregoing, an application programming interface can be provided at one or more of servers 54, server 86, or server 78, or another suitable location in system 50, where an administrator can manually set the capacity to a predefined level, including a level of “zero”, so that at a local level the resources that are consumed by push data transfers can be manually controlled. -
Block 210 comprises determining whether the capacity at block 205 has been reached. When method 200 is performed by push server 86, push server 86 can examine a counter or other software object that represents the number of data transfers currently being pushed from server 86 to devices 66. If the capacity defined at block 205 has been reached (e.g. according to the specific example referenced above, a counter or software object indicates that two or more push data transfers are currently being effected) then method 200 advances to block 215, at which point any further push data transfer requests are queued, and then method 200 cycles back to block 205. The means by which such queuing is effected is not particularly limited. For example, push server 86 can be configured to locally maintain a record of push data transfer requests, or push server 86 can be configured, at block 215, to reject, ignore or drop push data requests, with or without the return of an error message, leaving it to the requesting network component to reissue the push data request. At this point it can also be noted that push data transfer requests reflect any instruction to push server 86 to transfer content 56 from a given server 54 to a given device 66, and that such instructions can originate from any component in system 50, and typically do not originate from the device 66 itself. More typically, a request to push content 56 originates from its own server 54: for example, a request to push content 56-1 to a given device 66 originates from server 54-1 itself, while a request to push content 56-n to a given device 66 originates from server 54-n. - It can also be noted that the source of a push data request can be reflective of the nature or type of
content 56, but that in any event the nature or type of content 56 is not particularly limited. As a simple example, where device 66-1 has subscribed to a weather service hosted by, for example, server 54-1, then content 56-1 can reflect a weather report that is periodically pushed by server 54-1 to that device, thereby providing automatic weather report updates on device 66-1. Other types of content 56 can include, but are certainly not limited to, news, sports, traffic, stocks, instant messages, social networking status updates, videos, music, chats, software applications, firmware updates, services and the like. - In this example, however, it will be assumed that during this initial performance of
method 200, a total of zero transfers are currently being effected, and since “zero” is less than the exemplary “two” introduced above, a “no” determination is reached at block 210 and method 200 therefore advances to block 220. -
Block 220 comprises determining if there are any outstanding push requests. Again, in the specific example contemplated in relation to system 50, block 220 is effected by server 86 waiting for a new push data transfer request, or, where server 86 maintains queues of unprocessed push data transfer requests, by examining such a queue to determine if any push data requests are within that queue. A “no” determination at block 220 results in looping at block 220, effectively placing server 86 into a “wait” state until a push data transfer request is received. A “yes” determination at block 220 leads to block 225. -
Block 225 comprises initiating a push data transfer request. In the specific example contemplated in relation to system 50, block 225 is effected by server 86 commencing a push data transfer; without actually waiting for such a transfer to complete, method 200 cycles back to block 205, where method 200 begins anew. It can be noted that when block 225 is reached a number of times that is equal to the capacity determined at block 205, then “yes” determinations will be made at block 210, resulting in cessation of processing of further push data transfer requests. Accordingly, and as will be discussed further below, a parallel method is performed which monitors progress of push data transfers and tracks (or possibly deems) completion of push data transfers so as to reduce the likelihood of method 200 being prevented from reaching block 220, due to a persistent “yes” determination being made at block 210. -
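The interplay of blocks 205 through 225 can be pictured with a small sketch: a request is admitted and counted while the active-transfer counter is below capacity, and queued otherwise. This is a hedged, single-threaded sketch under assumed names (`PushDispatcher`, `request`, `complete` are all hypothetical); a real push server would also need locking and the parallel completion tracking of method 300.

```python
from collections import deque

class PushDispatcher:
    """Illustrative sketch of blocks 205-225 of method 200."""

    def __init__(self, capacity):
        self.capacity = capacity  # block 205: maximum concurrent transfers
        self.active = 0           # counter examined at block 210
        self.queue = deque()      # block 215: queued transfer requests

    def request(self, transfer):
        """Admit a transfer if capacity allows (block 225), else queue it."""
        if self.active >= self.capacity:  # block 210: capacity reached
            self.queue.append(transfer)   # block 215: queue the request
            return False
        self.active += 1                  # begin the transfer and count it
        return True

    def complete(self):
        """Decrement on actual or deemed completion (block 335), then
        admit the next queued request, if any."""
        self.active -= 1
        if self.queue:
            self.queue.popleft()
            self.active += 1
```

With a capacity of two, as in the example in the text, a third request is queued until one of the first two completes or is deemed to have completed.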
Method 300 as shown in FIG. 3 provides a non-limiting example of a method that can be performed in parallel to method 200 so as to reduce the likelihood of method 200 being prevented from reaching block 220 from block 210. -
Block 315 comprises receiving content. In the specific example contemplated in relation to system 50, block 315 is effected by push data server 86 receiving a particular item of content 56 from its respective server 54. For example, assume that a pending data transfer request comprises a request to push content 56-1 to device 66-1. Accordingly, as part of performing block 315, content 56-1 is received locally at push data server 86. -
Block 320 comprises beginning a push data transfer of the content received at block 315. In the specific example being discussed, block 320 is effected by push data server 86 initiating a transfer of content 56-1 from push data server 86 to device 66-1, making appropriate use of link 82-1 and server 78-1 and link 70-1. -
Block 325 comprises incrementing a counter. The counter increments the recorded number of active downloads and is accessible at block 210 in order to determine the number of currently active push data transfers. In accordance with the example discussed above, the counter will therefore increase to “one” from “zero” during this invocation of block 325, such that a parallel performance of block 210 will still result in a “no” determination, leading method 200 to block 220 from block 210. -
Block 330 comprises determining if the push data transfer that was initiated at block 320 is complete. A “yes” determination is made at block 330 when an acknowledgment is received from device 66-1, or server 78-1, at server 86, expressly confirming that content 56-1 has been received by device 66-1. The duration of time to complete the push data transfer can vary according to the size of content 56-1 and the amount of bandwidth available between push data server 86 and device 66-1, and accordingly it is contemplated that a “no” determination may be made at block 330. - On a “no” determination at
block 330, method 300 advances to block 340, at which point a determination is made as to whether a predefined time limit has been reached. The method by which such a time limit is determined is not particularly limited, although a presently contemplated method is discussed further below. In general, the time limit corresponds to an expected time for the transfer to complete. -
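One way to picture blocks 330 and 340 together: each transfer is timestamped when it begins, removed when an acknowledgement arrives, and otherwise deemed complete once the predefined time limit elapses. A minimal sketch, assuming a single fixed time limit and hypothetical names (`TransferTracker`, `deem_expired`):

```python
import time

class TransferTracker:
    """Sketch of completion tracking: acknowledged (block 330) or
    deemed complete after a time limit (block 340)."""

    def __init__(self, time_limit_s):
        self.time_limit_s = time_limit_s
        self.started = {}  # transfer id -> start timestamp (seconds)

    def begin(self, transfer_id, now=None):
        """Block 320: record the start of a push data transfer."""
        self.started[transfer_id] = time.monotonic() if now is None else now

    def acknowledge(self, transfer_id):
        """Block 330: an express acknowledgement was received."""
        self.started.pop(transfer_id, None)

    def deem_expired(self, now=None):
        """Block 340: deem complete any transfer older than the limit;
        return the ids deemed complete so the counter can be decremented."""
        now = time.monotonic() if now is None else now
        expired = [tid for tid, start in self.started.items()
                   if now - start >= self.time_limit_s]
        for tid in expired:
            del self.started[tid]
        return expired
```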
Block 335, which can be reached from a “yes” determination at either block 330 or block 340, comprises decrementing the counter that was incremented at block 325. In accordance with the example discussed above, and assuming no other invocations of method 300 have occurred, the counter will therefore decrease to “zero” from “one” during this invocation of block 335, such that a parallel performance of block 210 will also result in a “no” determination, leading method 200 to block 220 from block 210. - From
block 335, a return is made to block 205 of method 200. - Having discussed a simple invocation of
method 200 and method 300 involving only a single push data transfer, those skilled in the art will now appreciate the behavior of system 50 when multiple, simultaneous push data transfer requests are received by push data server 86. For example, recall that a hypothetical capacity of two push data transfer requests is determined at block 205. Now assume that three push data transfer requests are received at server 86: initially, two are received from server 54-1 and then subsequently, one is received from server 54-n while the first two from server 54-1 are still being processed using method 200 and method 300. When the third data transfer request is received at server 86, method 200 will reach block 215 and the third data transfer request will be queued until one of the first two is determined to be actually completed at block 330, or one of the first two is deemed to be completed at block 340. Those skilled in the art will now also appreciate one of the advantages of the present specification: in the event an express acknowledgement of a push data transfer is never received at block 330, push data transfers from server 86 can still continue, due to such completion being deemed through performance of block 340. At the same time, method 200 and method 300 also regulate the utilization of network resources in system 50, so that push data transfers (or other functions of system 50) do not fail because those resources are overwhelmed by an overabundance of push data transfers. - As indicated above, the means by which the time-limit used in
block 340 can be determined is not particularly limited. A simple predefined time period can be preselected. In a more complex implementation, the time-limit may dynamically change according to historical behavior of system 50, including but not limited to tracking the history of acknowledged data transfers. Method 400 in FIG. 4 shows one non-limiting example of how the time limit in block 340 can be determined using such historical tracking. Method 400 can be run from time to time, or in real time in conjunction with method 200 and method 300, to constantly update a time limit that will be used for specific performances of method 300. Block 405 comprises receiving transfer parameters. Block 405 thus comprises receiving, at server 86 or at another computational resource, data representing parameters of a historic, successfully completed push data transfer whereby an express acknowledgement was actually received confirming completion of the specific push data transfer. Server 86 can, for example, be configured to input data into block 405 every time a “yes” determination is reached at block 330. Such data at block 405 can comprise a simple identification of the content 56 that was successfully transferred, as well as the time it took to complete that transfer. The data received at block 405 can further comprise an identification of the specific links, servers or other network resources involved in the transfer. -
Block 410 comprises storing the transfer parameters received at block 405 in a table or other database resource. Block 415 comprises comparing all of the parameters in the table or other database resource, and block 420 comprises determining a time limit to be used at block 340 based on the comparison from block 415. The operations used to perform the comparison at block 415 and the determination at block 420 are not particularly limited. In one implementation, block 415 and block 420 apply a probability model to the performance of system 50 such that server 86 can infer with a high degree of confidence that data transfers are actually completed, even though no completion status (i.e. a “yes” determination) has been determined at block 330. The time limit that is established for block 340 may be determined by historical tracking of successful push data transfers where acknowledgements of such completions are actually received and thereby result in a “yes” determination at block 330. A table of values can be compiled of historical data from various performances of block 405 and block 410 that comprises the beginning time and ending time of push data transfers, and can also comprise the transfer duration, the content size, which server 78 is used, and which links are used. Where multiple push data servers 86 are employed within a variation of system 50, the table of values can additionally comprise an indication of which push data server 86 was employed. Using the table, server 86 or another computational device can be employed to determine a mean and standard deviation for the sample data that best matches an expected duration of time, which can then be employed to establish a time limit for use at block 340 in relation to a particular push data transfer during a particular cycle of method 300.
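Assuming the acknowledged durations roughly follow a normal distribution, as the passage suggests, the mean/standard-deviation approach can be sketched with the standard library; `time_limit_from_history` and the 95% confidence figure are illustrative choices, not the patent's prescribed values.

```python
from statistics import NormalDist, fmean, stdev

def time_limit_from_history(durations_s, confidence=0.95):
    """Fit a normal distribution to historically acknowledged transfer
    durations and return the duration by which `confidence` of transfers
    would be expected to have completed (an inverse-CDF lookup)."""
    dist = NormalDist(mu=fmean(durations_s), sigma=stdev(durations_s))
    return dist.inv_cdf(confidence)

# e.g. eight acknowledged transfers with durations around 11 seconds
history = [10, 12, 11, 9, 13, 10, 11, 12]
limit = time_limit_from_history(history)  # roughly mean + 1.64 * stdev
```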
A Cumulative Distribution Function (CDF) can be employed to determine a minimum time duration that would account for the minimum cumulative probability needed to assume completion of a push data transfer that is initiated at block 320. Accordingly, the time limit established for block 340 can vary between each invocation of method 300, according to the specific circumstances of a particular push data transfer. - In preliminary calculations performed by the inventor, sample data recorded in a table for a system that is reflective of
system 50 substantially matches a normal distribution. However, the inventor further believes that versions of system 50 where statistical calculations generate data that do not fit a normal distribution can still be used to generate a time limit for block 340: in such a situation lacking a normal distribution, instead of calculating the mean and standard deviation, the time limit can be based on about the 95th percentile value for the applicable sample data. Thus, in a variation, the time limit established for block 340 may be based on a value that accounts for only about 95% of all download durations in the above-referenced table of values. The remaining roughly 5% of transfers, which exceed that value, may be deemed an acceptable failure risk, permitting the maximum to be exceeded, but only by a fairly limited amount. - It should now be understood that variations, subsets or combinations of the foregoing are contemplated.
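When the duration data do not fit a normal distribution, the percentile fallback described above can be sketched non-parametrically; `percentile_time_limit` and the nearest-rank method are illustrative assumptions, not the patent's prescribed calculation.

```python
def percentile_time_limit(durations_s, pct=95):
    """Return roughly the pct-th percentile duration (nearest-rank method):
    the smallest recorded duration covering pct% of the samples. Transfers
    slower than this are treated as an accepted ~5% failure risk."""
    ordered = sorted(durations_s)
    rank = max(1, -(-len(ordered) * pct // 100))  # ceil(n * pct / 100)
    return ordered[rank - 1]
```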
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/819,335 US9178949B2 (en) | 2010-03-03 | 2010-06-21 | Method, system and apparatus for managing push data transfers |
CA2733219A CA2733219C (en) | 2010-03-03 | 2011-03-02 | Method, system and apparatus for managing push data transfers |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US31001610P | 2010-03-03 | 2010-03-03 | |
US12/819,335 US9178949B2 (en) | 2010-03-03 | 2010-06-21 | Method, system and apparatus for managing push data transfers |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110217953A1 true US20110217953A1 (en) | 2011-09-08 |
US9178949B2 US9178949B2 (en) | 2015-11-03 |
Family
ID=43056956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/819,335 Active 2030-09-30 US9178949B2 (en) | 2010-03-03 | 2010-06-21 | Method, system and apparatus for managing push data transfers |
Country Status (2)
Country | Link |
---|---|
US (1) | US9178949B2 (en) |
EP (1) | EP2363998B1 (en) |
US20080140650A1 (en) * | 2006-11-29 | 2008-06-12 | David Stackpole | Dynamic geosocial networking |
US20080256607A1 (en) * | 2007-04-13 | 2008-10-16 | Akezyt Janedittakarn | Extensible and programmable multi-tenant service architecture |
US20080262994A1 (en) * | 2007-04-23 | 2008-10-23 | Berry Charles F | Populating requests to multiple destinations using a mass request |
US20090069001A1 (en) * | 2004-07-07 | 2009-03-12 | Cardina Donald M | System and method for imei detection and alerting |
US20090112981A1 (en) * | 2007-10-25 | 2009-04-30 | Slavik Markovich | Database end-user identifier |
US20090252325A1 (en) * | 2008-04-07 | 2009-10-08 | Microsoft Corporation | Secure content pre-distribution to designated systems |
US20100010899A1 (en) * | 2008-07-11 | 2010-01-14 | Lambert Paul A | Service discovery methods |
US20100180016A1 (en) * | 2006-05-19 | 2010-07-15 | Belden Inc. | Automated network device configuration and network deployment |
US20100317320A1 (en) * | 2009-06-10 | 2010-12-16 | Sakargayan Anupam | Method and apparatus for preventing unauthorized use of computing devices |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6816719B1 (en) | 1999-11-03 | 2004-11-09 | Nokia Corporation | Method and system for making wireless terminal profile information accessible to a network |
US7925717B2 (en) | 2002-12-20 | 2011-04-12 | Avaya Inc. | Secure interaction between a mobile client device and an enterprise application in a communication system |
CA2638878C (en) | 2006-03-27 | 2010-12-21 | Teamon Systems, Inc. | Wireless email communications system providing subscriber account update features and related methods |
EP1865744B1 (en) | 2006-06-08 | 2014-08-13 | Markport Limited | Device detection in mobile networks |
US10102352B2 (en) | 2009-08-10 | 2018-10-16 | Arm Limited | Content usage monitor |
2010
- 2010-06-21 EP EP10166650.1A patent/EP2363998B1/en active Active
- 2010-06-21 US US12/819,335 patent/US9178949B2/en active Active
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9071925B2 (en) * | 2011-01-05 | 2015-06-30 | Alcatel Lucent | System and method for communicating data between an application server and an M2M device |
US20120170451A1 (en) * | 2011-01-05 | 2012-07-05 | Harish Viswanathan | System and method for communicating data between an application server and an m2m device |
US9762690B2 (en) | 2011-06-14 | 2017-09-12 | Urban Airship, Inc. | Push notification delivery system |
US10142430B1 (en) | 2011-06-14 | 2018-11-27 | Urban Airship, Inc. | Push notification delivery system with feedback analysis |
US8572263B1 (en) | 2011-06-14 | 2013-10-29 | Urban Airship, Inc. | Push gateway systems and methods |
US8731523B1 (en) * | 2011-06-14 | 2014-05-20 | Urban Airship, Inc. | Push notification delivery system with feedback analysis |
US8423656B2 (en) * | 2011-06-14 | 2013-04-16 | Urban Airship, Inc. | Push gateway systems and methods |
US9277023B1 (en) | 2011-06-14 | 2016-03-01 | Urban Airship, Inc. | Push notification delivery system with feedback analysis |
US9531827B1 (en) | 2011-06-14 | 2016-12-27 | Urban Airship, Inc. | Push notification delivery system with feedback analysis |
US11863644B2 (en) | 2011-06-14 | 2024-01-02 | Airship Group, Inc. | Push notification delivery system with feedback analysis |
US20120324022A1 (en) * | 2011-06-14 | 2012-12-20 | Lowry Adam C | Push gateway systems and methods |
US8554855B1 (en) | 2011-06-14 | 2013-10-08 | Urban Airship, Inc. | Push notification delivery system |
US10244066B2 (en) | 2011-06-14 | 2019-03-26 | Urban Airship, Inc. | Push notification delivery system |
US10601940B2 (en) | 2011-06-14 | 2020-03-24 | Urban Airship, Inc. | Push notification delivery system with feedback analysis |
US10862989B2 (en) | 2011-06-14 | 2020-12-08 | Urban Airship, Inc. | Push notification delivery system |
US10972565B2 (en) | 2011-06-14 | 2021-04-06 | Airship Group, Inc. | Push notification delivery system with feedback analysis |
US11290555B2 (en) | 2011-06-14 | 2022-03-29 | Airship Group, Inc. | Push notification delivery system |
US11539809B2 (en) | 2011-06-14 | 2022-12-27 | Airship Group, Inc. | Push notification delivery system with feedback analysis |
US11711442B2 (en) | 2011-06-14 | 2023-07-25 | Airship Group, Inc. | Push notification delivery system |
US20170180501A1 (en) * | 2015-12-21 | 2017-06-22 | Industrial Technology Research Institute | Message pushing method and message pushing device |
Also Published As
Publication number | Publication date |
---|---|
EP2363998A1 (en) | 2011-09-07 |
EP2363998B1 (en) | 2015-01-07 |
US9178949B2 (en) | 2015-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9178949B2 (en) | Method, system and apparatus for managing push data transfers |
US9729488B2 (en) | On-demand mailbox synchronization and migration system | |
US9952851B2 (en) | Intelligent mobile application update | |
US8099505B2 (en) | Aggregating connection maintenance to optimize resource consumption | |
US10771533B2 (en) | Adaptive communication control device | |
US9934020B2 (en) | Intelligent mobile application update | |
US20150178137A1 (en) | Dynamic system availability management | |
US11734073B2 (en) | Systems and methods for automatically scaling compute resources based on demand | |
US20120072579A1 (en) | Monitoring cloud-runtime operations | |
CN109144700B (en) | Method and device for determining timeout duration, server and data processing method | |
US20130283097A1 (en) | Dynamic network task distribution | |
CN106663031B (en) | Fair sharing of system resources in workflow execution | |
US8484652B2 (en) | Systems and methods for task execution on a managed node | |
US10642585B1 (en) | Enhancing API service schemes | |
CN115454589A (en) | Task scheduling method and device and Kubernetes scheduler | |
CN106331160A (en) | Data migration method and system | |
CN110008187B (en) | File transmission scheduling method, device, equipment and computer readable storage medium | |
CN103186536A (en) | Method and system for scheduling data shearing devices | |
KR101890046B1 (en) | Concurrent network application scheduling for reduced power consumption | |
CA2733219C (en) | Method, system and apparatus for managing push data transfers | |
CN112052077A (en) | Method, device, equipment and medium for software task management | |
US9448855B2 (en) | System and method for executing a cloud computing task | |
US11032375B2 (en) | Automatic scaling for communications event access through a stateful interface | |
US11609799B2 (en) | Method and system for distributed workload processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: CHALK MEDIA SERVICE CORP., CANADA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:O'REILLY, JACOB SAMUEL;REEL/FRAME:024565/0401 Effective date: 20100618 |
AS | Assignment |
Owner name: RESEARCH IN MOTION LIMITED, ONTARIO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CHALK MEDIA SERVICE CORP.;REEL/FRAME:028096/0155 Effective date: 20120424 |
AS | Assignment |
Owner name: BLACKBERRY LIMITED, ONTARIO Free format text: CHANGE OF NAME;ASSIGNOR:RESEARCH IN MOTION LIMITED;REEL/FRAME:036207/0637 Effective date: 20130709 |
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
AS | Assignment |
Owner name: MALIKIE INNOVATIONS LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BLACKBERRY LIMITED;REEL/FRAME:064104/0103 Effective date: 20230511 |
AS | Assignment |
Owner name: MALIKIE INNOVATIONS LIMITED, IRELAND Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:BLACKBERRY LIMITED;REEL/FRAME:064270/0001 Effective date: 20230511 |