US8782655B2 - Controlling computing resource consumption - Google Patents

Controlling computing resource consumption

Info

Publication number
US8782655B2
US8782655B2 (application US12/216,235)
Authority
US
United States
Prior art keywords
resources
workload
resource
processors
fully
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/216,235
Other versions
US20100005473A1 (en)
Inventor
William H. Blanding
Franklin Greco
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Valtrus Innovations Ltd
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP
Priority to US12/216,235
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. (assignment of assignors interest). Assignors: GRECO, FRANKLIN; BLANDING, WILLIAM H.
Publication of US20100005473A1
Application granted
Publication of US8782655B2
Assigned to HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP (assignment of assignors interest). Assignor: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.
Assigned to OT PATENT ESCROW, LLC (patent assignment, security interest, and lien agreement). Assignors: HEWLETT PACKARD ENTERPRISE COMPANY; HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
Assigned to VALTRUS INNOVATIONS LIMITED (assignment of assignors interest). Assignor: OT PATENT ESCROW, LLC
Legal status: Active
Adjusted expiration

Classifications

    • G06F 9/5005: Allocation of resources (e.g., of the central processing unit) to service a request
    • G06F 9/5044: Allocation of resources to service a request, the resource being a machine (e.g., CPUs, servers, terminals), considering hardware capabilities
    • G06F 9/5083: Techniques for rebalancing the load in a distributed system
    • G06F 11/3409: Recording or statistical evaluation of computer activity for performance assessment
    • G06F 11/3414: Workload generation, e.g., scripts, playback
    • G06F 11/3433: Performance assessment for load management
    • G06F 2209/5019: Workload prediction (indexing scheme relating to G06F 9/50)

Definitions

  • The various disclosed embodiments may be implemented as a method, system, and/or apparatus. Exemplary embodiments are implemented as one or more computer software programs that implement the methods described herein. The software is implemented as one or more modules (also referred to as code subroutines, or "objects" in object-oriented programming), and the location of the software will differ among the various alternative embodiments. The software programming code, for example, is accessed by a processor or processors of the computer or server from long-term storage media of some type, such as a CD-ROM drive or hard drive, or is embodied or stored on any of a variety of known media for use with a data processing system or in any memory device, such as semiconductor, magnetic, and optical devices, including a disk, hard drive, CD-ROM, or ROM. The code may be distributed on such media, or may be distributed to users from the memory or storage of one computer system over a network to other computer systems for use by users of such other systems. Alternatively, the programming code may be embodied in memory (such as the memory of a handheld portable electronic device) and accessed by a processor using a bus. The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known and will not be discussed further herein.

Abstract

A method and a corresponding system, implemented as programming on a computer system, controls resource consumption in the computer system. The method includes the steps of monitoring current consumption of resources by workloads executing on the computer system; predicting future consumption of the resources by the workloads; adjusting assignment of resources to workloads based on the predicted future consumption, comprising: determining consumption policies for each workload, comparing the policies to the predicted future consumption, and increasing or decreasing resources for each workload based on the comparison; and providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources.

Description

TECHNICAL FIELD
The technical field is management of computing resources.
BACKGROUND
Computer system users may have a need for varying amounts of computing resources. One way to accommodate fluctuating computing needs is through instant capacity (sometimes called instant capacity on demand (ICOD)). Instant capacity is a combined hardware and software system through which customers may acquire computing resources for which they pay a reduced price and have concomitant reduced usage rights. Temporary instant capacity (TiCAP) allows one or more hardware or software systems to be activated for a pre-paid period without requiring permanent usage rights. However, because of the complexity of certain instant capacity management schemes, customers may have trouble predicting and controlling TiCAP expenditures.
SUMMARY
What is disclosed is a method, implemented on a computing device, for controlling computing resource consumption in a computing system, the computing system comprising one or more resources, the computing system executing one or more workloads, each of the workloads assigned one or more of the resources, each of the workloads operating in accordance with one or more active policies, the method comprising: monitoring execution of the workloads and consumption of the resources; displaying on a first visual display, for a current computing interval, a number of the resources currently consumed by each executing workload; predicting, for a subsequent computing interval, numbers of resources that will be consumed by each executing workload; and providing the results of the prediction on a second visual display, wherein the first and the second visual displays comprise: workload identity, resource utilization, the active policies, and resource requests.
Also disclosed is a system, implemented as programming on a computer system, for controlling resource consumption in the computer system, comprising a monitoring module that monitors current consumption of resources by workloads executing on the computer system; a policy processing module that predicts future consumption of the resources by the workloads and provides a resource request to adjust assignment of resources to workloads based on the predicted future consumption, wherein the policy processing module determines consumption policies for each workload, and compares the policies to the predicted future consumption, and wherein the resource request calls for increasing or decreasing resources for each workload based on the comparison; and a user interface module that generates a visual display of resource consumption and workload execution.
Further, what is disclosed is a method, implemented as programming on a computer system, for controlling resource consumption in the computer system, comprising the steps of monitoring current consumption of resources by workloads executing on the computer system; predicting future consumption of the resources by the workloads; adjusting assignment of resources to workloads based on the predicted future consumption, comprising: determining consumption policies for each workload, comparing the policies to the predicted future consumption, and increasing or decreasing resources for each workload based on the comparison; and providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources.
DESCRIPTION OF THE DRAWINGS
The detailed description will refer to the following drawings in which like reference numerals refer to like items, and in which:
FIG. 1 is a block diagram of an exemplary computer system on which a workload management system for controlling resource consumption is implemented;
FIG. 2 is a block diagram of an exemplary workload management system;
FIG. 3 illustrates an exemplary user interface for the system of FIG. 2; and
FIG. 4 is a flowchart illustrating an exemplary operation of the system of FIG. 2.
DETAILED DESCRIPTION
The herein disclosed system and method for controlling computing system resources is based on an overall computing system architecture that includes multiple processors and other resources. The processors and resources may be divided into two or more partitions. Each partition may have one or more processors assigned. Each partition runs an instance of an operating system. Resources in one partition may be shared with another partition. FIG. 1 is a block diagram of an exemplary version of such a computer system. In FIG. 1, computer system 100 includes partitions 110 i. Each partition 110 i includes fully-licensed processors 120 for which the computer system user has full usage rights. By dividing the computer system 100 into partitions, and by allowing sharing of resources (i.e., the processors 120) across partition boundaries, the computer system 100 may be implemented with a smaller number of processors than if sharing were not allowed, or if workloads were assigned to separate computing systems. Thus, partitioning allows the computer system user to realize higher effective processor capacity with a reduced number of processors.
Each partition 110 i may additionally include one or more metered processors 130, for which the computer system user pays an incremental cost when using the metered resources. An example of a metered processor is an “instant capacity” processor. When using an instant capacity processor, the computer system user acquires usage rights on an “as needed” basis, and is charged accordingly. Thus, some of the instant capacity processors 130 in each partition 110 i may be temporarily assigned to operate when workload demands exceed the computing capacity of the processors 120. The computer system user pays for operation of the processors 120 regardless of their actual operating status. That is, the user pays for each processor 120 whether or not that processor is actually executing any operations. Indeed, the user pays for the processors 120 even if those processors 120 are not powered on. However, the user pays for the instant capacity processors 130 only when the instant capacity processors 130 are powered on and actually executing an operation. By paying for processing capacity “as needed” the computer system user can pay for a “small” computer system yet have a “large” computer system “in standby.” Thus, the computer system 100 shown in FIG. 1 includes some processors (the processors 120) that are licensed and paid for by the user, and some processors (the instant capacity processors 130) that are unlicensed and are used only “on demand.”
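As a concrete illustration of this split between fully-licensed and metered processors, the following Python sketch models partitions that hold both kinds of processor. It is only an illustrative model, not the patent's implementation; the class and field names (Processor, Partition, metered, active) are assumptions made for the example.

```python
# Hypothetical sketch: partitions mixing fully-licensed processors (120) and
# metered "instant capacity" processors (130). Names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Processor:
    ident: str
    metered: bool = False   # True for an "instant capacity" processor 130
    active: bool = False    # metered processors cost money only while active

@dataclass
class Partition:
    name: str
    processors: List[Processor] = field(default_factory=list)

    def licensed(self) -> List[Processor]:
        # Fully-licensed processors 120: paid for regardless of use.
        return [p for p in self.processors if not p.metered]

    def instant_capacity(self) -> List[Processor]:
        # Instant capacity processors 130: paid for only while activated.
        return [p for p in self.processors if p.metered]

# Example: two partitions, each with two licensed and one metered processor.
partitions = [
    Partition("partition_1", [Processor("cpu0"), Processor("cpu1"),
                              Processor("icap0", metered=True)]),
    Partition("partition_2", [Processor("cpu2"), Processor("cpu3"),
                              Processor("icap1", metered=True)]),
]
print([p.ident for p in partitions[0].instant_capacity()])  # -> ['icap0']
```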
Of course, if the computer system 100 can satisfy its resource demands by reallocating unused processor capacity of the processors 120, then there is no need to access the instant capacity processors 130. The computer system 100 thus implements a borrowing scheme to temporarily transfer computing system resources (the processors 120) as needed to meet current demands, and only supplements this borrowing scheme with the instant capacity processors 130 when the available processor capacity from the processors 120 is not sufficient. The relationship between borrowing capacity from the processors 120 and using the instant capacity processors 130 will be described later in more detail.
To implement this borrowing scheme, the computer system user can assign each workload a minimum number of processors, an "owned" number of processors, and a maximum number of processors, and can specify these parameters as a policy to be implemented by the workload management system. As an example, workload 1 "owns" the processors 120 assigned to partition 110 1 and workload 2 "owns" the processors assigned to partition 110 2. Should workload 1 require four processors, that need can be satisfied by "borrowing" one of the processors 120 from the partition 110 2, should any of those processors 120 be "available." However, if none of the "owned" processors in the computer system 100 are available, and if the workload 1 is designated to use instant capacity processors 130, then the required processor capacity may be met by using one or more of the instant capacity processors 130.
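The minimum/owned/maximum parameters described above lend themselves to a small policy structure. The sketch below is a hypothetical rendering of such a policy, of the remaining headroom a workload has under it, and of how many currently assigned processors count as borrowed; the names OwnershipPolicy, headroom, and borrowed are assumptions.

```python
# Hypothetical sketch of the min/owned/max ownership policy described above.
from dataclasses import dataclass

@dataclass
class OwnershipPolicy:
    minimum: int   # processors the workload must always hold
    owned: int     # processors the workload "owns"
    maximum: int   # absolute cap: owned plus the most it may borrow

def headroom(policy: OwnershipPolicy, currently_assigned: int) -> int:
    """Additional processors the workload may still be given under its policy."""
    return max(0, policy.maximum - currently_assigned)

def borrowed(policy: OwnershipPolicy, currently_assigned: int) -> int:
    """How many of the currently assigned processors count as borrowed."""
    return max(0, currently_assigned - policy.owned)

policy = OwnershipPolicy(minimum=1, owned=2, maximum=4)
print(headroom(policy, currently_assigned=2))  # -> 2 (may still borrow two)
print(borrowed(policy, currently_assigned=3))  # -> 1 (one processor is borrowed)
```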
To implement the enhanced capacity of the "large" computing system with its instant capacity processors 130, a TiCAP (temporary instant capacity) system 150 may be made available to the computer system user. Using the TiCAP system 150, the user acquires rights to use the instant capacity processors 130 by, for example, paying in advance for a certain quantity of processing time. Thus, the TiCAP system 150 allows one or more unlicensed instant capacity processors 130 to be activated for a period of pre-paid processing minutes without requiring permanent usage rights. When the instant capacity processor 130 is activated, the computer system user, under TiCAP, obtains temporary usage rights. Note that the instant capacity processors 130 also may be placed into operation under regimes other than TiCAP. For example, an instant capacity processor 130 may be placed into operation to replace a failed processor 120, if the computer system user has implemented such a service plan.
Although the above description, and that which follows, refers to instant capacity processors as TiCAP resources, the TiCAP system 150 may be applied to other shared computing system resources, including, for example, memory, network bandwidth, and storage facilities used by the computer system 100. Furthermore, the computer system may be configured exclusively with metered resources, for which the computer system user pays an incremental usage cost, or exclusively with fully-licensed resources. Whether configured exclusively with metered resources, exclusively with fully-licensed resources, or with a mix of the two, the computer system user may pay an incremental cost when additional resources are being consumed. For example, in a computer system with fully-licensed and metered resources, when more fully-licensed resources are being consumed, the computer system user will at least experience an incremental cost in terms of increased power consumption and consequent increased cooling demand. When more metered resources are being consumed, the computer system user experiences the increased costs of power consumption and cooling, but also may have an increased cost directly tied to using the metered resources (i.e., a pay-per-use cost). The computer system user, naturally, would like some means for monitoring and controlling these incremental costs. One such means includes a visual display that indicates when incremental costs are occurring. Note also that although costs can increment up, due to added resource consumption, costs also can increment down due to reduced resource consumption. The visual display can provide an indication of current consumption and future, or predicted, consumption, so that the computer system user can anticipate these changes in cost, and can take actions, as needed and as allowable, to change or control the cost of operating the computer system.
As an example of a TiCAP implementation, the computer system user may purchase 30 days of prepaid temporary activation for instant capacity processors 130. This allows the temporary processors 130 installed in the computer system 100 to be turned on and off, typically for short periods, to provide added capacity. A temporary instant capacity processor day is 24 hours, or 1,440 minutes of activation for one of the temporary processor cores 130. A TiCAP day may be used by one instant capacity processor 130 operating for 24 hours, or by four instant capacity processors 130 operating for six hours each. If and when the instant capacity processors 130 are placed into operation may be determined by one or more policies implemented by the computer system user. For example, the computer system user could specify that an instant capacity processor 130 be activated whenever the capacity of the corresponding processors 120 exceeds 90 percent, and that the instant capacity processor remain in operation until processor demand is reduced to 75 percent of the processors 120. Many other policies are possible to control activation of the instant capacity processors 130.
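The prepaid accounting in this example is simple arithmetic: one TiCAP day is 1,440 processor-minutes, which may be spent by one processor running 24 hours or by four processors running six hours each. The following sketch shows that bookkeeping under the assumption that the balance is tracked in processor-minutes; the function and variable names are illustrative, not the TiCAP system's interface.

```python
# Minimal sketch of prepaid TiCAP accounting, assuming a balance tracked in
# processor-minutes. Names are illustrative.
MINUTES_PER_TICAP_DAY = 24 * 60          # 1,440 processor-minutes per TiCAP day

def debit(balance_minutes: float, active_processors: int, elapsed_minutes: float) -> float:
    """Charge the prepaid balance for the metered processors that were active."""
    return max(0.0, balance_minutes - active_processors * elapsed_minutes)

balance = 30 * MINUTES_PER_TICAP_DAY     # 30 prepaid TiCAP days

# One instant capacity processor running 24 hours consumes exactly one TiCAP day...
balance = debit(balance, active_processors=1, elapsed_minutes=24 * 60)
# ...as do four instant capacity processors running six hours each.
balance = debit(balance, active_processors=4, elapsed_minutes=6 * 60)
print(balance / MINUTES_PER_TICAP_DAY)   # -> 28.0 prepaid days remaining
```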
In an embodiment, the TiCAP system 150 resides as software on the user's computer system 100, and operates in a standalone manner (i.e., no connection to the TiCAP provider). As instant capacity processors 130 are activated, the user's prepaid account, as implemented and monitored by the TiCAP system 150, is debited. The TiCAP system 150 may predict, at the current rate of instant capacity processor usage, when the user's prepaid account will be depleted, and may send a message to the user with that information. Once the user's prepaid account balance reaches zero, the TiCAP system 150 will deactivate any operating instant capacity processors 130.
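A minimal sketch of the depletion estimate and zero-balance cut-off described above follows, assuming the balance is tracked in processor-minutes and usage is projected at the current number of active metered processors; the function names are assumptions made for illustration.

```python
# Hypothetical sketch: predict when the prepaid balance runs out at the current
# burn rate, and deactivate metered processors once it reaches zero.
def minutes_until_depleted(balance_minutes: float, active_processors: int) -> float:
    """At the current usage rate, how many wall-clock minutes remain."""
    if active_processors == 0:
        return float("inf")              # nothing is being metered
    return balance_minutes / active_processors

def enforce_balance(balance_minutes: float, active_icap: list) -> list:
    """Return the instant capacity processors allowed to stay active."""
    return active_icap if balance_minutes > 0 else []   # deactivate all at zero

print(minutes_until_depleted(2 * 1440, active_processors=4))  # -> 720.0 minutes
print(enforce_balance(0, ["icap0", "icap1"]))                 # -> [] (balance exhausted)
```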
Computer system users may benefit from having a system more robust than just the TiCAP system 150 for managing workload on the computer system 100. Such a system would provide the computer system users with information needed to monitor the consumption of computing resources in a fashion that allows the users to understand the amount and source of that consumption and to make, if needed and/or desired, changes to policies that implement that consumption. Included in this more robust system is a provision to provide the computer system user with real time or near real time information relating expenditures (costs) for computing system resources based on the reallocation of those resources among workloads. For example, when instant capacity or TiCAP processors are accessed, the computer system user's expenditures for computing resources likely will increase. The computer system user would like to know why those costs increased, and to know specifically what workload demands led to the increases. In an embodiment, this information is provided to the computer system user by way of a visual or graphical interface. The interface may include varying iconic symbols and values to convey the resource expenditure information (see FIG. 3).
FIG. 2 is a block diagram of an exemplary workload management system (WLMS) 200 that can be used to control resource utilization in the computer system 100 of FIG. 1, and which includes provision for visual display of information to the computer system user.
The WLMS 200 is an automated system that allows users to more precisely manage consumption of computing resources by workloads. A workload is a series of functions executed by an instance of an operating system, or by a subset of the operating system. Policy controls define if, and under what conditions, specific workloads may automatically consume TiCAP resources. Not all workloads are allowed access to TiCAP resources. Furthermore, each workload may be assigned a specific quantity of computing system resources that it “owns.” For example, a workload may normally use one processor 120, but own two processors 120. In addition, the workload may be allowed, by a policy established by the computer system user, to “borrow” up to two additional processors. Should a workload require resources in excess of what the workload owns, the workload borrows up to the maximum allowable (for a total of four processors in the given example). When TiCAP resources are being consumed by the workloads, those workloads which are allowed to use TiCAP resources, and which are borrowing resources, are consuming TiCAP resources.
Shared resource domains are comprised of workloads, each of which is assigned a policy, and each of which is associated with a resource compartment. A resource compartment may be an operating system instance or a subdivision of an operating system instance.
In managing the computer system 100, the user identifies configurations of shared resource domains that the user wants to control. As noted above, the user also defines the policies to be implemented in controlling the shared resource domains. Such policies allow for automatic implementation of the TiCAP system 150, and automatic assignment of instant capacity processors 130, by specifying parameters under which the TiCAP resources may be activated. Thus, each shared resource domain includes workloads, each of which is assigned one or more policies and each of which is associated with a resource compartment.
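A shared resource domain as described above can be modeled as a collection of workloads, each carrying one or more policies and tied to a resource compartment. The sketch below is only an illustrative data model; the class names and policy identifiers are assumptions, not terms defined by the patent.

```python
# Illustrative data model of a shared resource domain: workloads, each with
# policies and a resource compartment (an OS instance or a subdivision of one).
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceCompartment:
    name: str                       # e.g., an operating system instance

@dataclass
class Workload:
    name: str
    compartment: ResourceCompartment
    policies: List[str] = field(default_factory=list)   # policy identifiers

@dataclass
class SharedResourceDomain:
    workloads: List[Workload] = field(default_factory=list)

domain = SharedResourceDomain([
    Workload("workload_1", ResourceCompartment("os_instance_1"), ["daytime_ticap"]),
    Workload("workload_2", ResourceCompartment("os_instance_2"), ["no_ticap"]),
])
print([w.name for w in domain.workloads])
```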
In FIG. 2, WLMS 200 is coupled to the computer system 100 and TiCAP system 150. The WLMS 200 includes a resource monitor 210 that tracks which workloads are using which resources, and receives requests from the workloads to add additional resources. Coupled to the resource monitor 210 is policy processor 220, which applies policies set by the user with user input device 230 and stored in database 240, to determine if and when to apply additional resources to a requesting workload. Finally, user interface module 250 produces the user interfaces that the computer system user requires in order to efficiently manage the consumption of computer system resources.
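To make the division of labor among these components concrete, the following sketch wires together stand-ins for the resource monitor 210, the policy processor 220, the policy database 240, and the user interface module 250. The class and method names are assumptions, and the policy check is reduced to a single utilization target per workload.

```python
# Illustrative wiring of the WLMS components named above; not the patent's API.
class ResourceMonitor:
    def sample(self, workloads):
        # Track which workloads use which resources; stubbed with fixed numbers.
        return {w: {"processors_in_use": 2, "utilization": 0.75} for w in workloads}

class PolicyProcessor:
    def __init__(self, policy_db):
        self.policy_db = policy_db                  # policies stored in database 240
    def resource_requests(self, samples):
        # Compare each workload's sample against its stored utilization target.
        return {w: ("grow" if s["utilization"] > self.policy_db.get(w, 1.0) else "hold")
                for w, s in samples.items()}

class UserInterfaceModule:
    def render(self, requests):
        return "\n".join(f"{w}: {action}" for w, action in requests.items())

policies = {"workload_1": 0.50, "workload_2": 0.90}  # targets set via user input 230
samples = ResourceMonitor().sample(policies)
print(UserInterfaceModule().render(PolicyProcessor(policies).resource_requests(samples)))
```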
The policies may be simple policies (allocate additional processing resources when percent utilization exceeds 75 percent) or complex (different rules for different times of the day, for example). More specifically, a conditional policy (rule set) may employ conditional statements based on the occurrence of specific events. For example, a complex (conditional) rule set may specify one rule for the hours 6 am to 6 pm (condition: the workload is important during daytime and is allowed to consume TiCAP resources), and a second rule for the hours 6 pm to 6 am (condition: the workload is not important during nighttime and is not allowed to consume TiCAP resources). Thus, a conditional policy is one way to create complex rule sets from simple policies.
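A conditional rule set of the kind described above can be expressed as a time-of-day test. The sketch below assumes exactly the two rules of the example (TiCAP allowed from 6 am to 6 pm, disallowed otherwise); the function name is illustrative.

```python
# Minimal sketch of the daytime/nighttime conditional rule set described above.
from datetime import time

def ticap_allowed(now: time) -> bool:
    """Daytime rule: TiCAP allowed 6 am-6 pm; nighttime rule: not allowed."""
    return time(6, 0) <= now < time(18, 0)

print(ticap_allowed(time(12, 0)))   # noon  -> True  (daytime rule applies)
print(ticap_allowed(time(23, 0)))   # 11 pm -> False (nighttime rule applies)
```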
An active policy is one that is currently applied to a workload based on which conditions are true for that workload. In the above example, if the time of day is noon, and a policy is being implemented, the true condition is daytime and the policy that will be applied is to assign additional TiCAP resources to the workload. If a condition associated with a workload is not true, the policy for the workload is not implemented.
With the workload policies established, the WLMS 200 can compare policy requirements to operating parameters for each active workload and each consumed or available resource to determine if resources assigned to the active workloads should be changed (increased or decreased). If a change is indicated, the WLMS 200 (the policy processor 220) will initiate a resource request. The resource request may then be sent to the TiCAP system 150. Consider, for example, processor utilization. The resource monitor 210 monitors two parameters: the number of processors in use (and the utilization of each such processor), and the processor demand from each workload. The resource monitor 210 then makes a prediction as to processor demand/utilization for the next operating interval of the computer system 100. For example, if two processors 120 are operating, each at 75 percent utilization, in order to support one workload, then 1.5 out of 2 processors are being used by the workload. The simplest prediction is that processor demand/utilization in the next interval will be the same as in the current interval of computer system operation. The computer system user will have specified the percentage of processor utilization at which the user believes the workload will be best operated (e.g., at 100 percent processor utilization, there is no margin for demand changes, and any further demand beyond 100 percent will impair workload operation). Thus, if the resource monitor 210 "predicts" that the workloads will consume 1.5 processors, and the user wants processor utilization to be no more than 50 percent, then the user would prefer that an additional processor be placed in operation so that 1.5 out of 3 processors are operating, each at 50 percent utilization. To initiate a change in processors assigned to the workload, the WLMS 200 sends a resource request to the computer system 100, and that request may result in the assignment of one or more additional processors 120 to the workload. However, if the resource request cannot be met by assignment of processors 120 (e.g., none are available), the resource request may be passed to the TiCAP system 150, and one or more instant capacity processors 130 may be assigned to the workload. The distribution of processors to workloads, including the assignment of instant capacity processors 130, may be shown on a visual display that is presented to the user in real time or near real time.
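The worked example above reduces to a small calculation: the number of processors needed is the predicted consumption divided by the target utilization, rounded up, and any shortfall is served from fully-licensed processors before TiCAP. The following sketch reproduces that arithmetic; the routing function is an assumption made for illustration, not the patent's algorithm.

```python
# Sizing arithmetic from the example: 1.5 predicted processors at a 50 percent
# utilization target implies three processors. Routing is illustrative only.
import math

def processors_needed(predicted_consumption: float, target_utilization: float) -> int:
    """Smallest processor count keeping utilization at or below the target."""
    return math.ceil(predicted_consumption / target_utilization)

def route_request(shortfall: int, idle_licensed: int) -> tuple:
    """Serve the shortfall from idle fully-licensed processors first, then TiCAP."""
    from_licensed = min(shortfall, idle_licensed)
    return from_licensed, shortfall - from_licensed   # (licensed, instant capacity)

needed = processors_needed(1.5, 0.50)                      # -> 3
print(needed, route_request(needed - 2, idle_licensed=0))  # two in use -> (0, 1)
```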
FIG. 3 illustrates an exemplary user interface 260 for monitoring and controlling computing system resources of the computer system 100. The interface 260 includes identities 261 of workloads executing on the computer system 100, percent of processor utilization 262, active policies 263 for the workloads, and an indication 264 of whether TiCAP resources are authorized and being used by the workloads. The indication of TiCAP authorization and use may include display of various icons, as shown. The interface 260 includes other informational displays that allow the computer system user to monitor operations and to make decisions regarding control of the computing resources. For example, the interface 260 may use various icons to represent TiCAP authorization, TiCAP use, and the amount of TiCAP resources being consumed by a particular workload, a group of workloads (arranged, for example, by partition), and the total TiCAP utilization. The icons may vary to indicate changes in the underlying data. For example, a workload may be authorized by policy to use TiCAP resources during weekdays but not weekends. Thus, a visual display generated for a weekend period would show that specific workload without a TiCAP authorization icon. Similarly, when a specific workload is actually using a TiCAP resource, the visual display would include an icon indicating TiCAP resources in use. Many other variations of the icons, and other data presented on the visual display, are possible. Upon viewing the status indications 264 and the active policies 263, the computer system user may decide to change the policies that pertain to a specific workload. Note that the user interface 260 may display conditions for a current computing interval or a future computing interval, or both. In addition, the computer system user may access similar displays for any prior computing interval.
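The following sketch suggests how rows of such an interface might be rendered as text, with placeholder icons for TiCAP authorization and TiCAP use. The icon characters, field widths, and function name are assumptions; FIG. 3 itself is a graphical display.

```python
# Purely illustrative rendering of status rows: workload identity, utilization,
# active policy, and icons for TiCAP authorization and use.
def render_row(name: str, utilization: float, policy: str,
               ticap_authorized: bool, ticap_in_use: bool) -> str:
    auth_icon = "[A]" if ticap_authorized else "   "   # authorized to use TiCAP
    use_icon = "[$]" if ticap_in_use else "   "        # TiCAP currently consumed
    return f"{name:<12} {utilization:>5.1f}%  {policy:<16} {auth_icon} {use_icon}"

print(render_row("workload_1", 82.5, "daytime_ticap", True, True))
print(render_row("workload_2", 41.0, "no_ticap", False, False))
```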
FIG. 4 is a flowchart illustrating an exemplary operation 300 executed by the WLMS 200 to allow monitoring and control of computing resources in the computer system 100. More particularly, the operation 300 is directed to controlling processor utilization within the computer system 100. The WLMS 200 may execute other operations for controlling processor resources or for controlling resources in the computer system 100 other than processors.
The operation 300 may execute on a continuous or periodic basis. As shown in block 310, the operation 300 begins a monitoring/control “cycle” by sampling information from each workload executing on the computer system 100 and each resource being consumed by the active workloads. Such sampling can use either a “push” or a “pull” methodology, whereby information periodically and automatically is sent from the workloads and the resources to the WLMS 200, or whereby information is sent to the WLMS 200 in response to a query or polling command initiated by the WLMS 200.
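The sketch below shows the "pull" variant of this sampling step, in which the WLMS queries each workload once per cycle; a "push" variant would instead have the workloads report on their own timers. The function names and the metrics dictionary are assumptions made for the example.

```python
# Hypothetical sketch of block 310 as a pull-based monitoring/control cycle.
import time

def poll_workloads(workloads, get_metrics):
    """Query every active workload and return a {name: metrics} snapshot."""
    return {w: get_metrics(w) for w in workloads}

def monitoring_cycle(workloads, get_metrics, interval_seconds=60, cycles=1):
    for _ in range(cycles):                      # periodic monitoring/control cycle
        yield poll_workloads(workloads, get_metrics)
        time.sleep(interval_seconds)

fake_metrics = lambda w: {"cpu_utilization": 0.75, "processors_in_use": 2}
for snap in monitoring_cycle(["workload_1", "workload_2"], fake_metrics,
                             interval_seconds=0, cycles=1):
    print(snap)
```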
Once the requisite information is at the WLMS 200, the WLMS 200 determines, for each workload, the current processor utilization (block 315). In block 320, for each workload, the WLMS 200 predicts processor utilization for the next computing interval. In block 325, the WLMS 200 identifies the assigned policies for each workload, and in block 330, determines from the identified and assigned policies which policy(s) are active for each workload.
In block 335, the WLMS 200 determines, for each workload, if that workload's predicted processor utilization will meet the requirements of the workload's active policy(s). That is, a workload's predicted processor utilization could exceed that which can be supplied by the processors the workload currently is using, or the predicted utilization could be sufficiently reduced such that one or more processors can be reassigned away from the workload. If the predicted processor utilization for all active workloads will conform to the active policies, the operation 300 moves to block 340.
In block 340, the WLMS 200 generates an update to the user interface 260 to provide the computer system user with a real time or near real time indication of the predicted requirements for resource allocation among the workloads. The visual display is also updated to reflect iconic values indicating, for example, which of the workloads are authorized to use TiCAP resources, and an indication of current and future (predicted) TiCAP resource expenditures. Using the information provided with the visual display, the computer system user can: 1) obtain a dynamic record of computer system operations and expenditures, and 2) make policy adjustments to change the predicted resource allocations.
If the predicted processor utilization for one or more workloads will not conform to the workload's active policy(s), the operation 300 moves to block 345 and the WLMS 200 determines if the affected active workload can borrow processor resources from the non-instant capacity processors (i.e., the processors 120) that are not in use, or that are not operating at their specified maximum capacity. The determination of borrowing is based, for example, on a specific policy that identifies a minimum, owned, and maximum processor assignment for the workload. In this example, if the workload owns two processors and has a maximum assignable number of four processors, then the workload may borrow up to two more processors to meet its predicted processor utilization requirements. If, in block 345, the workload cannot borrow any additional processors, the operation 300 moves to block 340, and resource expenditure and workload operation are presented to the user in real time or near real time by way of the user interface 260, or a similar interface. However, if in block 345 the workload can borrow additional processors, the operation 300 moves to block 350.
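A minimal sketch of the borrowing check in block 345, assuming the minimum/owned/maximum policy values of the example above; the function and its arguments are illustrative only.

```python
def borrowable_processors(owned: int, maximum: int, currently_assigned: int) -> int:
    """How many additional processors the workload may borrow under its min/owned/max policy."""
    return max(0, maximum - max(owned, currently_assigned))

# Example from the text: the workload owns two processors and may hold at most four,
# so it may borrow up to two more.
print(borrowable_processors(owned=2, maximum=4, currently_assigned=2))  # 2
```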
In block 350, the WLMS 200 determines if there are any processors available to the workload to borrow to meet its predicted processor utilization requirements. If there are non-instant capacity processors available, the operation 300 moves to block 355 and the non-utilized processors are assigned to the workload. The operation 300 then moves to block 340. If, however, there are no non-instant capacity processors available, the operation moves to block 360. In block 360, the WLMS 200 determines if the workload can be assigned an instant capacity processor, and if such an instant capacity processor is available for assignment. If the workload cannot be assigned an instant capacity processor, or if none are available, the operation 300 moves to block 340. If the workload can be assigned an instant capacity processor, and if one or more are available, the operation 300 moves to block 365 and the TiCAP system 150 assigns the instant capacity processor to the workload. The operation 300 then moves to block 340 and the WLMS 200 produces information for display to the computer system user by way of the user interface 260, or a similar interface.
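The following simplified sketch condenses blocks 345 through 365 into a single decision, under the assumption that free non-instant capacity processors and free instant capacity processors can be represented as counts; it is not intended as a literal rendering of the flowchart.

```python
def fill_processor_shortfall(shortfall: int, free_regular: int,
                             ticap_authorized: bool, free_ticap: int):
    """Assign free non-instant capacity processors first; fall back to instant capacity if allowed."""
    from_regular = min(shortfall, free_regular)                                          # blocks 350/355
    remaining = shortfall - from_regular
    from_ticap = min(remaining, free_ticap) if (remaining and ticap_authorized) else 0   # blocks 360/365
    return from_regular, from_ticap

# A workload needs two more processors; one regular processor is free and TiCAP use is authorized.
print(fill_processor_shortfall(2, free_regular=1, ticap_authorized=True, free_ticap=4))  # (1, 1)
```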
The various disclosed embodiments may be implemented as a method, system, and/or apparatus. As one example, exemplary embodiments are implemented as one or more computer software programs to implement the methods described herein. The software is implemented as one or more modules (also referred to as code subroutines, or “objects” in object-oriented programming). The location of the software will differ for the various alternative embodiments. The software programming code, for example, is accessed by a processor or processors of the computer or server from long-term storage media of some type, such as a CD-ROM drive or hard drive. The software programming code is embodied or stored on any of a variety of known media for use with a data processing system or in any memory device such as semiconductor, magnetic and optical devices, including a disk, hard drive, CD-ROM, ROM, etc. The code is distributed on such media, or is distributed to users from the memory or storage of one computer system over a network of some type to other computer systems for use by users of such other systems. Alternatively, the programming code is embodied in the memory (such as memory of a handheld portable electronic device) and accessed by a processor using a bus. The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known and will not be further discussed herein.
The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. Those skilled in the art will recognize that many variations are possible within the spirit and scope of the invention as defined in the following claims, and their equivalents, in which all terms are to be understood in their broadest possible sense unless otherwise indicated.

Claims (25)

We claim:
1. A method, implemented on a computing device, for controlling computing resource consumption in a computing system, the computing system including one or more resources, the computing system executing one or more workloads, each of the workloads assigned one or more of the resources, each of the workloads operating in accordance with one or more active policies, the method, comprising:
monitoring execution of the workloads and consumption of the resources;
displaying on a first visual display, for a current computing interval, a number of the resources currently consumed by each executing workload;
predicting for a subsequent computing interval, numbers of resources that will be consumed for each executing workload;
providing the results of the prediction on a second visual display, wherein the first and the second visual displays include:
workload identity,
resource utilization,
the active policies, and
resource requests; and
assigning a metered resource to a workload in the event said policies and said prediction indicate the workload requires an additional resource and no fully-licensed resource is available to be borrowed by the workload;
wherein “metered resource” denotes a resource for which a user pays only while the resource is in use, “fully-licensed resource” denotes a resource for which a user pays whether or not it is in use, and “borrowed” denotes use of a resource by one workload while the resource is owned by another workload.
2. The method of claim 1, further comprising:
determining an incremental cost associated with each change in resource utilization; and
providing a visual indication of the incremental cost.
3. The method of claim 2, wherein the resources include plural fully-licensed resources and plural metered resources.
4. The method of claim 3, wherein the fully-licensed resources are fully-licensed processors and the metered resources are instant capacity processors, the method further comprising:
identifying the active policies for each executing workload, the active policies specifying requirements for assignment of the fully-licensed processors to the workloads; and
determining if the predicted number of fully-licensed processors complies with the identified active policies,
wherein if the predicted number is more than the policy requirement, determining if the affected workload can borrow one or more fully-licensed processors and if the predicted number is less than the policy requirement, de-assigning one or more fully-licensed processors from the affected workload, and
wherein, if the affected workload cannot borrow fully-licensed processors, determining if the affected workload can be assigned one or more instant capacity processors.
5. The method of claim 4 further comprising, if the affected workload can borrow one or more fully-licensed processors, assigning the one or more fully-licensed processors to the affected workload.
6. The method of claim 4 further comprising, if the affected workload can be assigned one or more instant capacity processors, assigning the one or more instant capacity processors to the affected workload.
7. The method of claim 1, further comprising:
dividing the computer system into one or more partitions; and
allowing borrowing of resources within a partition only and not across partition boundaries.
8. The method of claim 1, further comprising:
dividing the computer system into one or more partitions; and
allowing borrowing of resources across partition boundaries.
9. The method of claim 1, wherein the first and the second visual displays provide icons to indicate allowable assignment of resources to workloads.
10. The method of claim 9, further comprising modifying the icons to indicate a change in the amount of resources being consumed by each of the workloads.
11. The method of claim 1, wherein the resource utilization is provided in an aggregate amount, the aggregate amount indicated by an aggregate amount icon.
12. The method of claim 11, wherein the computer system is divided into partitions and wherein the aggregate amount is provided on a per partition basis.
13. The method of claim 1, wherein the resources are temporary instant capacity processors.
14. The method of claim 1, further comprising:
receiving a modification to one or more policies; and
determining compliance with the modified policies.
15. The method of claim 1, wherein the active policies vary with time of day.
16. A method, implemented as programming on a computer system, for controlling resource consumption in the computer system, the method comprising:
monitoring current consumption of resources by workloads executing on the computer system;
predicting future consumption of the resources by the workloads;
adjusting assignment of resources to workloads based on the predicted future consumption, the adjusting including,
determining consumption policies for each workload,
comparing the policies to the predicted future consumption, and
increasing or decreasing resources for each workload based on the comparison, said increasing including assigning a metered resource to a workload in the event said policies and said prediction indicate the workload requires an additional resource and no fully-licensed resource is available to be borrowed by the workload; and
providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources;
wherein “metered resource” denotes a resource for which a user pays only while the resource is in use, “fully-licensed resource” denotes a resource for which a user pays whether or not it is in use, and “borrowed” denotes use of a resource by one workload while the resource is owned by another workload.
17. The method of claim 16, wherein the resources are processors, and wherein the processors include plural fully-licensed processors and plural temporary instant capacity processors.
18. The method of claim 17, wherein the plural fully-licensed processors are owned by respective workloads and the step of increasing resources first attempts to borrow one or more of the fully-licensed processors.
19. The method of claim 18, wherein, if no fully-licensed processors are available to borrow, the step of increasing resources includes activating one or more of the temporary instant capacity processors.
20. A system comprising non-transitory computer-readable storage media encoded with code that, when executed using a processor, defines functionality for:
a monitoring module that monitors current consumption of resources by workloads executing on the computer system;
a policy processing module that predicts future consumption of the resources by the workloads and provides a resource request to adjust assignment of resources to workloads based on the predicted future consumption, wherein the policy processing module determines consumption policies for each workload, and compares the policies to the predicted future consumption, and wherein resource requests call for increasing or decreasing resources for each workload based on the comparison;
a user interface module that generates a visual display of resource consumption and workload execution; and
a metered-resource system that assigns metered resources to a workload provided that equivalent fully-licensed resources are not available for the workload to borrow;
wherein “metered resource” denotes a resource for which a user pays only while the resource is in use, “fully-licensed resource” denotes a resource for which a user pays whether or not it is in use, and “borrowed” denotes use of a resource by one workload while the resource is owned by another workload.
21. The system of claim 20, wherein the visual display comprises:
workload identity;
processor utilization;
the consumption policies;
resource requests; and
instant capacity processor utilization, wherein the visual display provides a first icon to indicate allowable assignment of instant capacity processors to workloads and a second icon to indicate an amount of instant capacity processors being consumed by each workload.
22. A method for allocating resources among workloads for successive allocation periods in a system including metered resources for which a user pays only while the metered resources are in use and fully-licensed resources for which a user pays whether or not the fully-licensed resources are in use, said method comprising:
in response to a request for more resources in a subsequent allocation period than a first workload of said workloads is allocated in a present allocation period, determining whether there are fully-licensed resources not currently assigned to said first workload but available to fulfill said request,
if sufficient fully-licensed resources not currently assigned to said first workload are available to partially or completely fulfill said request, allocating fully-licensed resources to said first workload for said subsequent allocation period, and
if sufficient fully-licensed resources are not available to completely fulfill said request, assigning at least one metered resource to said workload.
23. Non-transitory media encoded with code configured to implement the method of claim 22.
24. A method as recited in claim 22 wherein:
each of said workloads is assigned, for said allocation periods collectively:
a minimum resource level that must be allocated to that workload each allocation period,
a maximum resource level that must not be exceeded even if a high amount of said resources is requested, and
an owned resource level up to which resources are preferentially allocated to that workload but, if they are not requested by that workload, may be borrowed by another workload so that the amount of resources allocated to a workload for an allocation period exceeds its owned amount for that allocation period; and
said allocating sufficient fully-licensed resources to said first workload includes allocating resources to said first workload according to the following priority levels,
available fully-licensed resources up to the minimum resource level for said first workload,
available fully-licensed resources above the respective minimum resource level and up to the owned resource level for said first workload, and
available fully-licensed resources owned by other workloads and not causing an allocation to exceed the maximum resource level for said first workload;
said assigning at least one metered resource occurring only if there are insufficient fully-licensed resources owned by workloads other than said first workload available to meet the lesser of the requested amount and the respective maximum amount.
25. Non-transitory media encoded with code configured to implement the method of claim 24.
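By way of illustration of the allocation priorities recited in claim 24, and not as a definitive implementation, the following sketch allocates fully-licensed resources in priority order before falling back to metered resources; all names and the counter-based representation of resource pools are assumptions.

```python
def allocate(requested, minimum, owned, maximum, free_unowned, free_borrowable, free_metered):
    """Priority-ordered allocation for one workload and one allocation period (illustrative only)."""
    target = min(requested, maximum)      # never exceed the maximum resource level
    granted = 0
    # Priority 1: available fully-licensed resources up to the minimum resource level.
    take = min(minimum, target, free_unowned)
    granted += take; free_unowned -= take
    # Priority 2: available fully-licensed resources up to the owned resource level.
    take = min(max(owned - granted, 0), target - granted, free_unowned)
    granted += take; free_unowned -= take
    # Priority 3: fully-licensed resources owned by other workloads, borrowed up to the maximum.
    take = min(target - granted, free_borrowable)
    granted += take; free_borrowable -= take
    # Metered (instant capacity) resources only if fully-licensed resources were insufficient.
    metered = min(target - granted, free_metered)
    return granted, metered

print(allocate(requested=5, minimum=1, owned=2, maximum=4,
               free_unowned=1, free_borrowable=1, free_metered=8))  # (2, 2)
```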
US12/216,235 2008-07-01 2008-07-01 Controlling computing resource consumption Active 2032-10-10 US8782655B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/216,235 US8782655B2 (en) 2008-07-01 2008-07-01 Controlling computing resource consumption


Publications (2)

Publication Number Publication Date
US20100005473A1 US20100005473A1 (en) 2010-01-07
US8782655B2 US8782655B2 (en) 2014-07-15

Family

ID=41465349

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/216,235 Active 2032-10-10 US8782655B2 (en) 2008-07-01 2008-07-01 Controlling computing resource consumption

Country Status (1)

Country Link
US (1) US8782655B2 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8447993B2 (en) * 2008-01-23 2013-05-21 Palo Alto Research Center Incorporated Integrated energy savings and business operations in data centers
US8271818B2 (en) * 2009-04-30 2012-09-18 Hewlett-Packard Development Company, L.P. Managing under-utilized resources in a computer
US8443373B2 (en) * 2010-01-26 2013-05-14 Microsoft Corporation Efficient utilization of idle resources in a resource manager
US8621477B2 (en) 2010-10-29 2013-12-31 International Business Machines Corporation Real-time monitoring of job resource consumption and prediction of resource deficiency based on future availability
US9367373B2 (en) * 2011-11-09 2016-06-14 Unisys Corporation Automatic configuration consistency check
KR20140121705A (en) * 2013-04-08 2014-10-16 삼성전자주식회사 Method for operating task external device and an electronic device thereof
US20150046676A1 (en) * 2013-08-12 2015-02-12 Qualcomm Incorporated Method and Devices for Data Path and Compute Hardware Optimization
WO2015082253A1 (en) * 2013-12-04 2015-06-11 Koninklijke Philips N.V. Prediction of critical work load in radiation therapy workflow
CN104750545A (en) * 2013-12-27 2015-07-01 乐视网信息技术(北京)股份有限公司 Process scheduling method and device
US9621439B2 (en) * 2014-02-28 2017-04-11 International Business Machines Corporation Dynamic and adaptive quota shares
WO2016105362A1 (en) * 2014-12-23 2016-06-30 Hewlett Packard Enterprise Development Lp Resource predictors indicative of predicted resource usage
US10749813B1 (en) * 2016-03-24 2020-08-18 EMC IP Holding Company LLC Spatial-temporal cloud resource scheduling
US10754697B2 (en) 2018-01-29 2020-08-25 Bank Of America Corporation System for allocating resources for use in data processing operations
EP3588295B1 (en) * 2018-06-27 2022-10-19 Accenture Global Solutions Limited Self-managed intelligent elastic cloud stack
EP3599713A1 (en) * 2018-07-25 2020-01-29 Siemens Aktiengesellschaft Frequency converter with temporarily released resources
US20220413931A1 (en) * 2021-06-23 2022-12-29 Quanta Cloud Technology Inc. Intelligent resource management

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6094650A (en) 1997-12-15 2000-07-25 Manning & Napier Information Services Database analysis using a probabilistic ontology
US5930773A (en) 1997-12-17 1999-07-27 Avista Advantage, Inc. Computerized resource accounting methods and systems, computerized utility management methods and systems, multi-user utility management methods and systems, and energy-consumption-based tracking methods and systems
US20020116441A1 (en) * 2000-12-08 2002-08-22 Yiping Ding System and method for automatic workload characterization
US20030083994A1 (en) 2001-11-01 2003-05-01 Arun Ramachandran Process to build and use usage based licensing server data structure for usage based licensing
US20030083999A1 (en) 2001-11-01 2003-05-01 Arun Ramachandran Temporal processing of usage data in a usage based licensing
US20030135339A1 (en) 2002-01-17 2003-07-17 Dario Gristina System for managing resource infrastructure and resource consumption in real time
US20040236852A1 (en) 2003-04-03 2004-11-25 International Business Machines Corporation Method to provide on-demand resource access
US20050154860A1 (en) 2004-01-13 2005-07-14 International Business Machines Corporation Method and data processing system optimizing performance through reporting of thread-level hardware resource utilization
US20050198641A1 (en) 2004-01-30 2005-09-08 Tamar Eilam Arbitration in a computing utility system
US20060026279A1 (en) 2004-07-28 2006-02-02 Microsoft Corporation Strategies for monitoring the consumption of resources
US20060106741A1 (en) 2004-11-17 2006-05-18 San Vision Energy Technology Inc. Utility monitoring system and method for relaying personalized real-time utility consumption information to a consumer
US20060136928A1 (en) 2004-12-21 2006-06-22 Crawford Isom L Jr System and method for associating workload management definitions with computing containers
US20060224925A1 (en) 2005-04-05 2006-10-05 International Business Machines Corporation Method and system for analyzing an application
US20060253587A1 (en) 2005-04-29 2006-11-09 Microsoft Corporation Method and system for shared resource providers
US7693995B2 (en) * 2005-11-09 2010-04-06 Hitachi, Ltd. Arbitration apparatus for allocating computer resource and arbitration method therefor

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10555145B1 (en) * 2012-06-05 2020-02-04 Amazon Technologies, Inc. Learned configuration of modification policies for program execution capacity
US10860547B2 (en) 2014-04-23 2020-12-08 Qumulo, Inc. Data mobility, accessibility, and consistency in a data storage system
US11461286B2 (en) 2014-04-23 2022-10-04 Qumulo, Inc. Fair sampling in a hierarchical filesystem
US11132336B2 (en) 2015-01-12 2021-09-28 Qumulo, Inc. Filesystem hierarchical capacity quantity and aggregate metrics
US10877942B2 (en) 2015-06-17 2020-12-29 Qumulo, Inc. Filesystem capacity and performance metrics and visualizations
US10637793B1 (en) * 2016-06-30 2020-04-28 EMC IP Holding Company LLC Capacity based licensing
US11256682B2 (en) 2016-12-09 2022-02-22 Qumulo, Inc. Managing storage quotas in a shared storage system
US10346355B2 (en) * 2016-12-23 2019-07-09 Qumulo, Inc. Filesystem block sampling to identify user consumption of storage resources
US10459884B1 (en) * 2016-12-23 2019-10-29 Qumulo, Inc. Filesystem block sampling to identify user consumption of storage resources
US11360936B2 (en) 2018-06-08 2022-06-14 Qumulo, Inc. Managing per object snapshot coverage in filesystems
US11347699B2 (en) 2018-12-20 2022-05-31 Qumulo, Inc. File system cache tiers
US11323541B2 (en) * 2018-12-21 2022-05-03 Huawei Technologies Co., Ltd. Data deterministic deliverable communication technology based on QoS as a service
US11151092B2 (en) 2019-01-30 2021-10-19 Qumulo, Inc. Data replication in distributed file systems
US10614033B1 (en) 2019-01-30 2020-04-07 Qumulo, Inc. Client aware pre-fetch policy scoring system
US10725977B1 (en) 2019-10-21 2020-07-28 Qumulo, Inc. Managing file system state during replication jobs
US10860372B1 (en) 2020-01-24 2020-12-08 Qumulo, Inc. Managing throughput fairness and quality of service in file systems
US11734147B2 (en) 2020-01-24 2023-08-22 Qumulo Inc. Predictive performance analysis for file systems
US10795796B1 (en) 2020-01-24 2020-10-06 Qumulo, Inc. Predictive performance analysis for file systems
US11294718B2 (en) 2020-01-24 2022-04-05 Qumulo, Inc. Managing throughput fairness and quality of service in file systems
US11372735B2 (en) 2020-01-28 2022-06-28 Qumulo, Inc. Recovery checkpoints for distributed file systems
US11151001B2 (en) 2020-01-28 2021-10-19 Qumulo, Inc. Recovery checkpoints for distributed file systems
US10860414B1 (en) 2020-01-31 2020-12-08 Qumulo, Inc. Change notification in distributed file systems
US10936538B1 (en) 2020-03-30 2021-03-02 Qumulo, Inc. Fair sampling of alternate data stream metrics for file systems
US10936551B1 (en) 2020-03-30 2021-03-02 Qumulo, Inc. Aggregating alternate data stream metrics for file systems
US11775481B2 (en) 2020-09-30 2023-10-03 Qumulo, Inc. User interfaces for managing distributed file systems
US11372819B1 (en) 2021-01-28 2022-06-28 Qumulo, Inc. Replicating files in distributed file systems using object-based data storage
US11157458B1 (en) 2021-01-28 2021-10-26 Qumulo, Inc. Replicating files in distributed file systems using object-based data storage
US11461241B2 (en) 2021-03-03 2022-10-04 Qumulo, Inc. Storage tier management for file systems
US11435901B1 (en) 2021-03-16 2022-09-06 Qumulo, Inc. Backup services for distributed file systems in cloud computing environments
US11132126B1 (en) 2021-03-16 2021-09-28 Qumulo, Inc. Backup services for distributed file systems in cloud computing environments
US11567660B2 (en) 2021-03-16 2023-01-31 Qumulo, Inc. Managing cloud storage for distributed file systems
US11669255B2 (en) 2021-06-30 2023-06-06 Qumulo, Inc. Distributed resource caching by reallocation of storage caching using tokens and agents with non-depleted cache allocations
US11294604B1 (en) 2021-10-22 2022-04-05 Qumulo, Inc. Serverless disk drives based on cloud storage
US11354273B1 (en) 2021-11-18 2022-06-07 Qumulo, Inc. Managing usable storage space in distributed file systems
US11599508B1 (en) 2022-01-31 2023-03-07 Qumulo, Inc. Integrating distributed file systems with object stores
US11722150B1 (en) 2022-09-28 2023-08-08 Qumulo, Inc. Error resistant write-ahead log
US11729269B1 (en) 2022-10-26 2023-08-15 Qumulo, Inc. Bandwidth management in distributed file systems
US11921677B1 (en) 2023-11-07 2024-03-05 Qumulo, Inc. Sharing namespaces across file system clusters
US11934660B1 (en) 2023-11-07 2024-03-19 Qumulo, Inc. Tiered data storage with ephemeral and persistent tiers

Also Published As

Publication number Publication date
US20100005473A1 (en) 2010-01-07

Similar Documents

Publication Publication Date Title
US8782655B2 (en) Controlling computing resource consumption
US11681562B2 (en) Resource manager for managing the sharing of resources among multiple workloads in a distributed computing environment
US8209695B1 (en) Reserving resources in a resource-on-demand system for user desktop utility demand
US9886322B2 (en) System and method for providing advanced reservations in a compute environment
US8650296B1 (en) Workload reallocation involving inter-server transfer of software license rights and intra-server transfer of hardware resources
US8689227B2 (en) System and method for integrating capacity planning and workload management
US8495648B1 (en) Managing allocation of computing capacity
US8621476B2 (en) Method and apparatus for resource management in grid computing systems
CN102844724B (en) Power supply in managing distributed computing system
US8024736B1 (en) System for controlling a distribution of unutilized computer resources
US7890629B2 (en) System and method of providing reservation masks within a compute environment
US20050188075A1 (en) System and method for supporting transaction and parallel services in a clustered system based on a service level agreement
US20080271039A1 (en) Systems and methods for providing capacity management of resource pools for servicing workloads
US20100043009A1 (en) Resource Allocation in Multi-Core Environment
US20070266388A1 (en) System and method for providing advanced reservations in a compute environment
US7711822B1 (en) Resource management in application servers
CN116450358A (en) Resource management for virtual machines in cloud computing systems
WO2006107531A2 (en) Simple integration of an on-demand compute environment
WO2011076486A1 (en) A method and system for dynamic workload allocation in a computing center which optimizes the overall energy consumption
US8826287B1 (en) System for adjusting computer resources allocated for executing an application using a control plug-in
CN113010309B (en) Cluster resource scheduling method, device, storage medium, equipment and program product
KR101743028B1 (en) Apparatus and method of dynamic resource allocation algorithm for service differentiation in virtualized environments
US8332861B1 (en) Virtualized temporary instant capacity
KR100858205B1 (en) System for offering application service provider service in grid-base and method thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLANDING, WILLIAM H.;GRECO, FRANKLIN;REEL/FRAME:021247/0361;SIGNING DATES FROM 20080630 TO 20080701

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLANDING, WILLIAM H.;GRECO, FRANKLIN;SIGNING DATES FROM 20080630 TO 20080701;REEL/FRAME:021247/0361

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
AS Assignment

Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001

Effective date: 20151027

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

AS Assignment

Owner name: OT PATENT ESCROW, LLC, ILLINOIS

Free format text: PATENT ASSIGNMENT, SECURITY INTEREST, AND LIEN AGREEMENT;ASSIGNORS:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;HEWLETT PACKARD ENTERPRISE COMPANY;REEL/FRAME:055269/0001

Effective date: 20210115

AS Assignment

Owner name: VALTRUS INNOVATIONS LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OT PATENT ESCROW, LLC;REEL/FRAME:057650/0537

Effective date: 20210803

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8