US8782655B2 - Controlling computing resource consumption - Google Patents
- Publication number: US8782655B2 (application US12/216,235)
- Authority
- US
- United States
- Prior art keywords
- resources
- workload
- resource
- processors
- fully
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires 2032-10-10
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU], to service a request
- G06F9/5044—Allocation of resources to service a request, the resource being a machine (e.g. CPUs, servers, terminals), considering hardware capabilities
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F11/3409—Recording or statistical evaluation of computer activity for performance assessment
- G06F11/3414—Workload generation, e.g. scripts, playback
- G06F11/3433—Recording or statistical evaluation of computer activity for performance assessment, for load management
- G06F2209/5019—Workload prediction (indexing scheme relating to G06F9/50)
Definitions
- The technical field is management of computing resources.
- Instant capacity is a combined hardware and software system through which customers may acquire computing resources for which they pay a reduced price and have concomitant reduced usage rights.
- Temporary instant capacity allows one or more hardware or software systems to be activated for a pre-paid period without requiring permanent usage rights.
- Customers may have trouble predicting and controlling TiCAP (temporary instant capacity) expenditures.
- Disclosed is a system for controlling resource consumption in the computer system, comprising: a monitoring module that monitors current consumption of resources by workloads executing on the computer system; a policy processing module that predicts future consumption of the resources by the workloads and provides a resource request to adjust assignment of resources to workloads based on the predicted future consumption, wherein the policy processing module determines consumption policies for each workload and compares the policies to the predicted future consumption, and wherein the resource request calls for increasing or decreasing resources for each workload based on the comparison; and a user interface module that generates a visual display of resource consumption and workload execution.
- Also disclosed is a method for controlling resource consumption in the computer system, comprising the steps of: monitoring current consumption of resources by workloads executing on the computer system; predicting future consumption of the resources by the workloads; and adjusting assignment of resources to workloads based on the predicted future consumption, comprising determining consumption policies for each workload, comparing the policies to the predicted future consumption, and increasing or decreasing resources for each workload based on the comparison; and providing a visual display of resource consumption and workload execution information, the visual display including iconic values indicating predicted consumption of instant capacity resources and authorization to consume instant capacity resources.
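The claimed control loop (monitor current consumption, predict future consumption, compare against each workload's policy, then request an increase or decrease) can be sketched as follows. This is an illustrative sketch only; the function and workload names are hypothetical and not taken from the patent.

```python
# Illustrative sketch of the claimed control loop: monitor, predict,
# compare to per-workload policies, and request adjustments.
# All names here are hypothetical, not from the patent.

def control_cycle(workloads, monitor, predict, policy_for):
    """One monitoring/control cycle; returns a resource request per workload."""
    requests = {}
    for wl in workloads:
        current = monitor(wl)            # current consumption
        future = predict(wl, current)    # predicted future consumption
        policy = policy_for(wl)          # consumption policy (a target level)
        if future > policy:
            requests[wl] = "increase"
        elif future < policy:
            requests[wl] = "decrease"
        else:
            requests[wl] = "hold"
    return requests

reqs = control_cycle(
    ["wl1", "wl2"],
    monitor=lambda wl: {"wl1": 0.9, "wl2": 0.3}[wl],
    predict=lambda wl, cur: cur,   # simplest prediction: next == current
    policy_for=lambda wl: 0.5,
)
```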
- FIG. 1 is a block diagram of an exemplary computer system on which a workload management system for controlling resource consumption is implemented;
- FIG. 2 is a block diagram of an exemplary workload management system
- FIG. 3 illustrates an exemplary user interface for the system of FIG. 2 ;
- FIG. 4 is a flowchart illustrating an exemplary operation of the system of FIG. 2 .
- FIG. 1 is a block diagram of an exemplary version of such a computer system.
- The computer system 100 includes partitions 110 i.
- Each partition 110 i includes fully-licensed processors 120 for which the computer system user has full usage rights.
- By dividing the computer system 100 into partitions, and by allowing resources (i.e., the processors 120) to be shared across partition boundaries, the computer system 100 may be implemented with fewer processors than if sharing were not allowed, or if workloads were assigned to separate computing systems. Thus, partitioning allows the computer system user to realize higher effective processor capacity with fewer processors.
- Each partition 110 i may additionally include one or more metered processors 130 , for which the computer system user pays an incremental cost when using the metered resources.
- An example of a metered processor is an “instant capacity” processor.
- For these processors, the computer system user acquires usage rights on an "as needed" basis, and is charged accordingly.
- Some of the instant capacity processors 130 in each partition 110 i may be temporarily assigned to operate when workload demands exceed the computing capacity of the processors 120.
- The computer system user pays for operation of the processors 120 regardless of their actual operating status. That is, the user pays for each processor 120 whether or not that processor is actually executing any operations. Indeed, the user pays for the processors 120 even if those processors 120 are not powered on.
- By contrast, the user pays for the instant capacity processors 130 only when the instant capacity processors 130 are powered on and actually executing an operation.
- Thus, the computer system user can pay for a "small" computer system yet have a "large" computer system "in standby."
- The computer system 100 shown in FIG. 1 includes some processors (the processors 120) that are licensed and paid for by the user, and some processors (the instant capacity processors 130) that are unlicensed and are used only "on demand."
- If the computer system 100 can satisfy its resource demands by reallocating unused processor capacity of the processors 120, then there is no need to access the instant capacity processors 130.
- The computer system 100 thus implements a borrowing scheme to temporarily transfer computing system resources (the processors 120) as needed to meet current demands, and supplements this borrowing scheme with the instant capacity processors 130 only when the available processor capacity from the processors 120 is not sufficient.
- The relationship between borrowing capacity from the processors 120 and using the instant capacity processors 130 will be described later in more detail.
- The computer system user can assign each workload a minimum number of processors, an "owned" number of processors, and a maximum number of processors, and can specify these parameters as a policy to be implemented by the workload management system.
- For example, workload 1 "owns" the processors 120 assigned to partition 110 1,
- and workload 2 "owns" the processors assigned to partition 110 2.
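The minimum/owned/maximum parameters described above can be sketched as a small policy object that answers how many processors a workload may still borrow. This is an illustrative sketch; the class and method names are hypothetical, not from the patent.

```python
# Hypothetical sketch of the minimum/owned/maximum processor policy:
# a workload owns some processors and may borrow up to its maximum.

from dataclasses import dataclass

@dataclass
class ProcessorPolicy:
    minimum: int   # assignment never falls below this
    owned: int     # processors the workload "owns"
    maximum: int   # owned plus the maximum borrowable

    def borrowable(self, in_use: int) -> int:
        """How many more processors the workload may still borrow."""
        return max(0, self.maximum - max(in_use, self.minimum))

# A workload that owns two processors and may borrow up to two more:
policy = ProcessorPolicy(minimum=1, owned=2, maximum=4)
headroom = policy.borrowable(in_use=2)   # using its owned pair: 2 more allowed
```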
- A TiCAP (temporary instant capacity) system 150 may be made available to the computer system user.
- Under TiCAP, the user acquires rights to use the instant capacity processors 130 by, for example, paying in advance for a certain quantity of processing time.
- The TiCAP system 150 allows one or more unlicensed instant capacity processors 130 to be activated for a period of pre-paid processing minutes without requiring permanent usage rights.
- That is, under TiCAP, the computer system user obtains temporary usage rights.
- The instant capacity processors 130 also may be placed into operation under regimes other than TiCAP. For example, an instant capacity processor 130 may be placed into operation to replace a failed processor 120, if the computer system user has implemented such a service plan.
- The TiCAP system 150 may be applied to other shared computing system resources, including, for example, memory, network bandwidth, and storage facilities used by the computer system 100.
- The computer system may be configured exclusively with metered resources, for whose usage the computer system user pays an incremental cost, or exclusively with fully-licensed resources. Whether configured exclusively with metered resources or with fully-licensed resources, or with a mix of the two, the computer system user may pay an incremental cost when additional resources are being consumed. For example, in a computer system with fully-licensed and metered resources, when more fully-licensed resources are being consumed, the computer system user will at least experience an incremental cost in terms of increased power consumption and consequent increased cooling demand.
- When more metered resources are being consumed, the computer system user experiences the increased costs of power consumption and cooling, but also may incur a cost directly tied to using the metered resources (i.e., a pay-per-use cost).
- Naturally, the computer system user would like some means for monitoring and controlling these incremental costs.
- One such means includes a visual display that indicates when incremental costs are occurring. Note also that although costs can increment up due to added resource consumption, costs also can increment down due to reduced resource consumption.
- The visual display can provide an indication of current consumption and of future, or predicted, consumption, so that the computer system user can anticipate these changes in cost and can take actions, as needed and as allowable, to change or control the cost of operating the computer system.
- As an example, the computer system user may purchase 30 days of prepaid temporary activation for instant capacity processors 130.
- This allows the temporary processors 130 installed in the computer system 100 to be turned on and off, typically for short periods, to provide added capacity.
- A temporary instant capacity processor day is 24 hours, or 1,440 minutes, of activation for one of the temporary processor cores 130.
- Thus, a TiCAP day may be used by one instant capacity processor 130 operating for 24 hours, or by four instant capacity processors 130 operating for six hours each. Whether and when the instant capacity processors 130 are placed into operation may be determined by one or more policies implemented by the computer system user.
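The TiCAP-day accounting above is simple arithmetic: a day is 1,440 processor-minutes, so one processor running for 24 hours consumes the same prepaid amount as four processors running for six hours each. A minimal sketch (names illustrative):

```python
# A TiCAP day is 1,440 processor-minutes of activation.
TICAP_DAY_MINUTES = 24 * 60  # 1,440

def minutes_consumed(processors: int, hours: float) -> float:
    """Prepaid processor-minutes consumed by `processors` running `hours` each."""
    return processors * hours * 60

one_cpu = minutes_consumed(1, 24)    # one processor, 24 hours
four_cpus = minutes_consumed(4, 6)   # four processors, six hours each
```

Both usages consume exactly one TiCAP day.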
- For example, the computer system user could specify that an instant capacity processor 130 be activated whenever utilization of the corresponding processors 120 exceeds 90 percent, and that the instant capacity processor remain in operation until demand falls to 75 percent of the capacity of the processors 120.
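The two thresholds in the example above (activate above 90 percent, deactivate at or below 75 percent) form a hysteresis band, which avoids rapid on/off toggling around a single threshold. A sketch of such a policy, with illustrative names:

```python
# Sketch of the example activation policy: turn an instant capacity
# processor on above 90% utilization, and off again only once demand
# falls to 75%. The gap between thresholds prevents rapid toggling.

def update_activation(active: bool, utilization: float,
                      on_at: float = 0.90, off_at: float = 0.75) -> bool:
    if not active and utilization > on_at:
        return True        # demand too high: activate
    if active and utilization <= off_at:
        return False       # demand low again: deactivate
    return active          # otherwise keep the current state

states = []
active = False
for u in [0.80, 0.95, 0.85, 0.70]:
    active = update_activation(active, u)
    states.append(active)
```

Note that 0.85 keeps the processor active: it is below the activation threshold but still above the deactivation threshold.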
- Many other policies are possible to control activation of the instant capacity processors 130 .
- The TiCAP system 150 resides as software on the user's computer system 100 and operates in a standalone manner (i.e., with no connection to the TiCAP provider). As instant capacity processors 130 are activated, the user's prepaid account, as implemented and monitored by the TiCAP system 150, is debited. The TiCAP system 150 may predict, at the current rate of instant capacity processor usage, when the user's prepaid account will be depleted, and may send a message to the user with that information. Once the user's prepaid account balance reaches zero, the TiCAP system 150 will deactivate any operating instant capacity processors 130.
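The prepaid-account behavior described above can be sketched in two small functions: one that debits the balance as activated processors consume minutes, and one that projects when the balance will be depleted at the current usage rate. All names are hypothetical, not from the patent.

```python
# Hypothetical sketch of the prepaid TiCAP account: active processors
# debit the balance, and depletion is predicted from the current rate.

def debit(balance_minutes: float, active_processors: int,
          elapsed_minutes: float) -> float:
    """Debit processor-minutes from the prepaid balance; never below zero."""
    return max(0.0, balance_minutes - active_processors * elapsed_minutes)

def minutes_until_depleted(balance_minutes: float,
                           active_processors: int) -> float:
    """Predict wall-clock minutes until the balance reaches zero."""
    if active_processors == 0:
        return float("inf")
    return balance_minutes / active_processors

# Two processors active for two hours against a one-day balance:
balance = debit(1440.0, active_processors=2, elapsed_minutes=120)
remaining = minutes_until_depleted(balance, active_processors=2)
```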
- Computer system users may benefit from having a system more robust than just the TiCAP system 150 for managing workload on the computer system 100 .
- Such a system would provide computer system users with the information needed to monitor the consumption of computing resources in a fashion that allows the users to understand the amount and source of that consumption, and to make, if needed and/or desired, changes to the policies that govern that consumption.
- Included in this more robust system is a provision to provide the computer system user with real time or near real time information relating expenditures (costs) for computing system resources to the reallocation of those resources among workloads. For example, when instant capacity or TiCAP processors are accessed, the computer system user's expenditures for computing resources likely will increase. The computer system user would like to know why those costs increased, and to know specifically what workload demands led to the increases.
- This information is provided to the computer system user by way of a visual or graphical interface.
- the interface may include varying iconic symbols and values to convey the resource expenditure information (see FIG. 3 ).
- FIG. 2 is a block diagram of an exemplary workload management system (WLMS) 200 that can be used to control resource utilization in the computer system 100 of FIG. 1 , and which includes provision for visual display of information to the computer system user.
- The WLMS 200 is an automated system that allows users to more precisely manage consumption of computing resources by workloads.
- A workload is a series of functions executed by an instance of an operating system, or by a subset of the operating system. Policy controls define if, and under what conditions, specific workloads may automatically consume TiCAP resources. Not all workloads are allowed access to TiCAP resources.
- Each workload may be assigned a specific quantity of computing system resources that it "owns." For example, a workload may normally use one processor 120, but own two processors 120. In addition, the workload may be allowed, by a policy established by the computer system user, to "borrow" up to two additional processors.
- When demand requires, the workload borrows up to the maximum allowable (for a total of four processors in the given example).
- When TiCAP resources are being consumed, the workloads that are allowed to use TiCAP resources and that are borrowing resources are the workloads consuming the TiCAP resources.
- Shared resource domains consist of workloads, each of which is assigned one or more policies and each of which is associated with a resource compartment.
- A resource compartment may be an operating system instance or a subdivision of an operating system instance.
- The WLMS 200 is coupled to the computer system 100 and the TiCAP system 150.
- The WLMS 200 includes a resource monitor 210 that tracks which workloads are using which resources, and that receives requests from the workloads for additional resources.
- Coupled to the resource monitor 210 is the policy processor 220, which applies policies, set by the user with the user input device 230 and stored in the database 240, to determine if and when to apply additional resources to a requesting workload.
- The user interface module 250 produces the user interfaces that the computer system user requires in order to efficiently manage the consumption of computer system resources.
- The policies may be simple (allocate additional processing resources when percent utilization exceeds 75 percent) or complex (different rules for different times of day, for example). More specifically, a conditional policy (rule set) may employ conditional statements based on the occurrence of specific events. For example, a complex (conditional) rule set may specify one rule for the hours 6 am to 6 pm (condition: the workload is important during daytime and is allowed to consume TiCAP resources), and a second rule for the hours 6 pm to 6 am (condition: the workload is not important during nighttime and is not allowed to consume TiCAP resources). Thus, a conditional policy is one way to create complex rule sets from simple policies.
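The day/night rule set described above amounts to selecting a rule by the hour of day. A minimal sketch, with illustrative names and the 6 am to 6 pm window from the example:

```python
# Sketch of the conditional rule set: one rule for 6 am to 6 pm
# (TiCAP allowed) and another for 6 pm to 6 am (TiCAP not allowed).
# Names and structure are illustrative, not from the patent.

def ticap_allowed(hour: int) -> bool:
    """Return the active rule's TiCAP authorization for the given hour."""
    daytime = 6 <= hour < 18     # 6 am to 6 pm
    return daytime               # daytime rule allows TiCAP; nighttime does not

noon = ticap_allowed(12)     # daytime rule is active
midnight = ticap_allowed(0)  # nighttime rule is active
```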
- An active policy is one that is currently applied to a workload based on which conditions are true for that workload. In the above example, if the time of day is noon and a policy is being implemented, the true condition is daytime, and the policy that will be applied is to assign additional TiCAP resources to the workload. If a condition associated with a workload is not true, the policy for the workload is not implemented.
- The WLMS 200 can compare policy requirements to operating parameters for each active workload and each consumed or available resource to determine if the resources assigned to the active workloads should be changed (increased or decreased). If a change is indicated, the WLMS 200 (the policy processor 220) will initiate a resource request. The resource request may then be sent to the TiCAP system 150.
- Consider, for example, processor utilization.
- For each workload, the resource monitor 210 monitors two parameters: the number of processors in use, together with the utilization of each such processor, and the processor demand from the workload. The resource monitor 210 then makes a prediction as to processor demand/utilization for the next operating interval of the computer system 100.
- In the simplest case, the prediction is that processor demand/utilization in the next interval will be the same as in the current interval of computer system operation.
- In addition, the computer system user will have specified the percentage of processor utilization at which the user believes the workload will be best operated (e.g., at 100 percent processor utilization there is no margin for demand changes, and any further demand beyond 100 percent will impair workload operation).
- If the resource monitor 210 "predicts" that the workloads will consume 1.5 processors, and the user wants processor utilization to be no more than 50 percent, then the user would prefer that an additional processor be placed in operation, so that the load of 1.5 processors is spread across 3 processors, each operating at 50 percent utilization.
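The arithmetic in the example above is the ceiling of predicted demand divided by the target utilization: 1.5 processors of demand at a 50 percent cap calls for 3 processors. A one-function sketch (names illustrative):

```python
# Processors needed so that per-processor utilization stays at or below
# the user's target: ceil(predicted_demand / target_utilization).

import math

def processors_needed(predicted_demand: float, target_utilization: float) -> int:
    """Smallest processor count keeping utilization at or below the target."""
    return math.ceil(predicted_demand / target_utilization)

needed = processors_needed(1.5, 0.50)   # the example: 1.5 / 0.5 -> 3
```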
- To add resources, the WLMS 200 sends a resource request to the computer system 100, and that request may result in the assignment of one or more additional processors 120 to the workload.
- Alternatively, the resource request may be passed to the TiCAP system 150, and one or more instant capacity processors 130 may be assigned to the workload.
- The distribution of processors to workloads, including the assignment of instant capacity processors 130, may be shown on a visual display that is presented to the user in real time or near real time.
- FIG. 3 illustrates an exemplary user interface 260 for monitoring and controlling computing system resources of the computer system 100 .
- The interface 260 includes identities 261 of workloads executing on the computer system 100, the percent of processor utilization 262, the active policies 263 for the workloads, and an indication 264 of whether TiCAP resources are authorized and being used by the workloads.
- The indication of TiCAP authorization and use may include the display of various icons, as shown.
- The interface 260 also includes other informational displays that allow the computer system user to monitor operations and to make decisions regarding control of the computing resources.
- The interface 260 may use various icons to represent TiCAP authorization, TiCAP use, and the amount of TiCAP resources being consumed by a particular workload, by a group of workloads (arranged, for example, by partition), and in total.
- The icons may be varied to indicate changes in the underlying data. For example, a workload may be authorized by policy to use TiCAP resources during weekdays but not weekends. Thus, a visual display generated for a weekend period would show that specific workload without a TiCAP authorization icon. Similarly, when a specific workload is actually using a TiCAP resource, the visual display would include an icon indicating TiCAP resources in use. Many other variations of the icons, and of the other data presented on the visual display, are possible.
- Based on this information, the computer system user may decide to change the policies that pertain to a specific workload. Note that the user interface 260 may display conditions for a current computing interval or a future computing interval, or both. In addition, the computer system user may access similar displays for any prior computing interval.
- FIG. 4 is a flowchart illustrating an exemplary operation 300 executed by the WLMS 200 to allow monitoring and control of computing resources in the computer system 100 . More particularly, the operation 300 is directed to controlling processor utilization within the computer system 100 .
- The WLMS 200 may execute other operations for controlling processor resources, or for controlling resources in the computer system 100 other than processors.
- The operation 300 may execute on a continuous or periodic basis. As shown in block 310, the operation 300 begins a monitoring/control "cycle" by sampling information from each workload executing on the computer system 100 and from each resource being consumed by the active workloads. Such sampling can use either a "push" or a "pull" methodology, whereby information is sent periodically and automatically from the workloads and the resources to the WLMS 200, or whereby information is sent to the WLMS 200 in response to a query or polling command initiated by the WLMS 200.
- The WLMS 200 determines current processor utilization (block 315). In block 320, for each workload, the WLMS 200 predicts processor utilization for the next computing interval. In block 325, the WLMS 200 identifies the assigned policies for each workload, and in block 330 determines, from the identified and assigned policies, which policy(s) are active for each workload.
- The WLMS 200 then determines, for each workload, whether that workload's predicted processor utilization will meet the requirements of the workload's active policy(s). That is, a workload's predicted processor utilization could exceed that which can be supplied by the processors the workload currently is using, or the predicted workload could be sufficiently reduced such that one or more processors can be reassigned away from the workload. If the predicted processor utilization for all active workloads will conform to the active policies, the operation 300 moves to block 340.
- In block 340, the WLMS 200 generates an update to the user interface 260 to provide the computer system user with a real time or near real time indication of the predicted requirements for resource allocation among the workloads.
- The visual display will also be updated to reflect iconic values indicating, for example, which of the workloads is authorized to use TiCAP resources, and an indication of current and future (predicted) TiCAP resource expenditures.
- With this display, the computer system user can: 1) obtain a dynamic record of computer system operations and expenditures, and 2) make policy adjustments to change the predicted resource allocations.
- If any workload's predicted utilization does not conform, the operation 300 moves to block 345, and the WLMS 200 determines whether the affected active workload can borrow processor resources from the non-instant capacity processors (i.e., the processors 120) that are not in use, or that are not operating at their specified maximum capacity.
- The determination of borrowing is based, for example, on a specific policy that identifies a minimum, owned, and maximum processor assignment for the workload. In this example, if the workload owns two processors and has a maximum assignable number of four processors, then the workload may borrow up to two more processors to meet its predicted processor utilization requirements.
- If the workload cannot borrow, the operation 300 moves to block 340, and resource expenditure and workload operation are presented to the user in real time or near real time by way of the user interface 260, or a similar interface. However, if in block 345 the workload can borrow additional processors, the operation 300 moves to block 350.
- In block 350, the WLMS 200 determines whether there are any processors available for the workload to borrow to meet its predicted processor utilization requirements. If there are non-instant capacity processors available, the operation 300 moves to block 355, and the non-utilized processors are assigned to the workload. The operation 300 then moves to block 340. If, however, there are no non-instant capacity processors available, the operation moves to block 360. In block 360, the WLMS 200 determines whether the workload can be assigned an instant capacity processor, and whether such an instant capacity processor is available for assignment. If the workload cannot be assigned an instant capacity processor, or if none are available, the operation 300 moves to block 340.
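The decision chain of blocks 345 through 365 can be sketched as a single function: try to borrow idle fully-licensed processors first, and fall back to an instant capacity processor only when the workload is authorized and one is available. The function and outcome names below are illustrative, not from the patent.

```python
# Sketch of the flowchart's decision chain: borrow idle fully-licensed
# processors first (block 355); fall back to instant capacity (block 365)
# only if authorized and available; otherwise just update the display
# (block 340). All names are illustrative.

def assign_resource(can_borrow: bool, idle_licensed: int,
                    ticap_authorized: bool, ticap_available: int) -> str:
    if not can_borrow:
        return "report-only"                # block 340: update display only
    if idle_licensed > 0:
        return "borrow-licensed"            # block 355
    if ticap_authorized and ticap_available > 0:
        return "activate-instant-capacity"  # block 365
    return "report-only"                    # block 340

first = assign_resource(True, idle_licensed=1, ticap_authorized=True, ticap_available=2)
second = assign_resource(True, idle_licensed=0, ticap_authorized=True, ticap_available=2)
third = assign_resource(True, idle_licensed=0, ticap_authorized=False, ticap_available=2)
```

The ordering matters: instant capacity carries a pay-per-use cost, so free capacity from the processors 120 is always preferred.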
- Otherwise, the operation 300 moves to block 365, and the TiCAP system 150 assigns the instant capacity processor to the workload.
- The operation 300 then moves to block 340, and the WLMS 200 produces information for display to the computer system user by way of the user interface 260, or a similar interface.
- The various disclosed embodiments may be implemented as a method, system, and/or apparatus.
- Exemplary embodiments are implemented as one or more computer software programs that implement the methods described herein.
- The software is implemented as one or more modules (also referred to as code subroutines, or "objects" in object-oriented programming).
- The location of the software will differ for the various alternative embodiments.
- The software programming code, for example, is accessed by a processor or processors of the computer or server from long-term storage media of some type, such as a CD-ROM drive or hard drive.
- The software programming code may be embodied or stored on any of a variety of known media for use with a data processing system, or in any memory device, such as semiconductor, magnetic, and optical devices, including a disk, hard drive, CD-ROM, ROM, etc.
- The code may be distributed on such media, or may be distributed to users from the memory or storage of one computer system over a network of some type to other computer systems for use by the users of such other systems.
- Alternatively, the programming code is embodied in memory (such as the memory of a handheld portable electronic device) and accessed by a processor using a bus.
- The techniques and methods for embodying software programming code in memory, on physical media, and/or distributing software code via networks are well known and will not be further discussed herein.
Abstract
Description
Claims (25)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US12/216,235 | 2008-07-01 | 2008-07-01 | Controlling computing resource consumption |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US12/216,235 | 2008-07-01 | 2008-07-01 | Controlling computing resource consumption |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100005473A1 US20100005473A1 (en) | 2010-01-07 |
US8782655B2 true US8782655B2 (en) | 2014-07-15 |
Family
ID=41465349
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/216,235 Active 2032-10-10 US8782655B2 (en) | 2008-07-01 | 2008-07-01 | Controlling computing resource consumption |
Country Status (1)
Country | Link |
---|---|
US (1) | US8782655B2 (en) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8447993B2 (en) * | 2008-01-23 | 2013-05-21 | Palo Alto Research Center Incorporated | Integrated energy savings and business operations in data centers |
US8271818B2 (en) * | 2009-04-30 | 2012-09-18 | Hewlett-Packard Development Company, L.P. | Managing under-utilized resources in a computer |
US8443373B2 (en) * | 2010-01-26 | 2013-05-14 | Microsoft Corporation | Efficient utilization of idle resources in a resource manager |
US8621477B2 (en) | 2010-10-29 | 2013-12-31 | International Business Machines Corporation | Real-time monitoring of job resource consumption and prediction of resource deficiency based on future availability |
US9367373B2 (en) * | 2011-11-09 | 2016-06-14 | Unisys Corporation | Automatic configuration consistency check |
KR20140121705A (en) * | 2013-04-08 | 2014-10-16 | 삼성전자주식회사 | Method for operating task external device and an electronic device thereof |
US20150046676A1 (en) * | 2013-08-12 | 2015-02-12 | Qualcomm Incorporated | Method and Devices for Data Path and Compute Hardware Optimization |
WO2015082253A1 (en) * | 2013-12-04 | 2015-06-11 | Koninklijke Philips N.V. | Prediction of critical work load in radiation therapy workflow |
CN104750545A (en) * | 2013-12-27 | 2015-07-01 | 乐视网信息技术(北京)股份有限公司 | Process scheduling method and device |
US9621439B2 (en) * | 2014-02-28 | 2017-04-11 | International Business Machines Corporation | Dynamic and adaptive quota shares |
WO2016105362A1 (en) * | 2014-12-23 | 2016-06-30 | Hewlett Packard Enterprise Development Lp | Resource predictors indicative of predicted resource usage |
US10749813B1 (en) * | 2016-03-24 | 2020-08-18 | EMC IP Holding Company LLC | Spatial-temporal cloud resource scheduling |
US10754697B2 (en) | 2018-01-29 | 2020-08-25 | Bank Of America Corporation | System for allocating resources for use in data processing operations |
EP3588295B1 (en) * | 2018-06-27 | 2022-10-19 | Accenture Global Solutions Limited | Self-managed intelligent elastic cloud stack |
EP3599713A1 (en) * | 2018-07-25 | 2020-01-29 | Siemens Aktiengesellschaft | Frequency converter with temporarily released resources |
US20220413931A1 (en) * | 2021-06-23 | 2022-12-29 | Quanta Cloud Technology Inc. | Intelligent resource management |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5930773A (en) | 1997-12-17 | 1999-07-27 | Avista Advantage, Inc. | Computerized resource accounting methods and systems, computerized utility management methods and systems, multi-user utility management methods and systems, and energy-consumption-based tracking methods and systems |
US6094650A (en) | 1997-12-15 | 2000-07-25 | Manning & Napier Information Services | Database analysis using a probabilistic ontology |
US20020116441A1 (en) * | 2000-12-08 | 2002-08-22 | Yiping Ding | System and method for automatic workload characterization |
US20030083994A1 (en) | 2001-11-01 | 2003-05-01 | Arun Ramachandran | Process to build and use usage based licensing server data structure for usage based licensing |
US20030083999A1 (en) | 2001-11-01 | 2003-05-01 | Arun Ramachandran | Temporal processing of usage data in a usage based licensing |
US20030135339A1 (en) | 2002-01-17 | 2003-07-17 | Dario Gristina | System for managing resource infrastructure and resource consumption in real time |
US20040236852A1 (en) | 2003-04-03 | 2004-11-25 | International Business Machines Corporation | Method to provide on-demand resource access |
US20050154860A1 (en) | 2004-01-13 | 2005-07-14 | International Business Machines Corporation | Method and data processing system optimizing performance through reporting of thread-level hardware resource utilization |
US20050198641A1 (en) | 2004-01-30 | 2005-09-08 | Tamar Eilam | Arbitration in a computing utility system |
US20060026279A1 (en) | 2004-07-28 | 2006-02-02 | Microsoft Corporation | Strategies for monitoring the consumption of resources |
US20060106741A1 (en) | 2004-11-17 | 2006-05-18 | San Vision Energy Technology Inc. | Utility monitoring system and method for relaying personalized real-time utility consumption information to a consumer |
US20060136928A1 (en) | 2004-12-21 | 2006-06-22 | Crawford Isom L Jr | System and method for associating workload management definitions with computing containers |
US20060224925A1 (en) | 2005-04-05 | 2006-10-05 | International Business Machines Corporation | Method and system for analyzing an application |
US20060253587A1 (en) | 2005-04-29 | 2006-11-09 | Microsoft Corporation | Method and system for shared resource providers |
US7693995B2 (en) * | 2005-11-09 | 2010-04-06 | Hitachi, Ltd. | Arbitration apparatus for allocating computer resource and arbitration method therefor |
2008
- 2008-07-01: US application US12/216,235 filed; granted as US8782655B2 (status: Active)
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10555145B1 (en) * | 2012-06-05 | 2020-02-04 | Amazon Technologies, Inc. | Learned configuration of modification policies for program execution capacity |
US10860547B2 (en) | 2014-04-23 | 2020-12-08 | Qumulo, Inc. | Data mobility, accessibility, and consistency in a data storage system |
US11461286B2 (en) | 2014-04-23 | 2022-10-04 | Qumulo, Inc. | Fair sampling in a hierarchical filesystem |
US11132336B2 (en) | 2015-01-12 | 2021-09-28 | Qumulo, Inc. | Filesystem hierarchical capacity quantity and aggregate metrics |
US10877942B2 (en) | 2015-06-17 | 2020-12-29 | Qumulo, Inc. | Filesystem capacity and performance metrics and visualizations |
US10637793B1 (en) * | 2016-06-30 | 2020-04-28 | EMC IP Holding Company LLC | Capacity based licensing |
US11256682B2 (en) | 2016-12-09 | 2022-02-22 | Qumulo, Inc. | Managing storage quotas in a shared storage system |
US10346355B2 (en) * | 2016-12-23 | 2019-07-09 | Qumulo, Inc. | Filesystem block sampling to identify user consumption of storage resources |
US10459884B1 (en) * | 2016-12-23 | 2019-10-29 | Qumulo, Inc. | Filesystem block sampling to identify user consumption of storage resources |
US11360936B2 (en) | 2018-06-08 | 2022-06-14 | Qumulo, Inc. | Managing per object snapshot coverage in filesystems |
US11347699B2 (en) | 2018-12-20 | 2022-05-31 | Qumulo, Inc. | File system cache tiers |
US11323541B2 (en) * | 2018-12-21 | 2022-05-03 | Huawei Technologies Co., Ltd. | Data deterministic deliverable communication technology based on QoS as a service |
US11151092B2 (en) | 2019-01-30 | 2021-10-19 | Qumulo, Inc. | Data replication in distributed file systems |
US10614033B1 (en) | 2019-01-30 | 2020-04-07 | Qumulo, Inc. | Client aware pre-fetch policy scoring system |
US10725977B1 (en) | 2019-10-21 | 2020-07-28 | Qumulo, Inc. | Managing file system state during replication jobs |
US10860372B1 (en) | 2020-01-24 | 2020-12-08 | Qumulo, Inc. | Managing throughput fairness and quality of service in file systems |
US11734147B2 (en) | 2020-01-24 | 2023-08-22 | Qumulo Inc. | Predictive performance analysis for file systems |
US10795796B1 (en) | 2020-01-24 | 2020-10-06 | Qumulo, Inc. | Predictive performance analysis for file systems |
US11294718B2 (en) | 2020-01-24 | 2022-04-05 | Qumulo, Inc. | Managing throughput fairness and quality of service in file systems |
US11372735B2 (en) | 2020-01-28 | 2022-06-28 | Qumulo, Inc. | Recovery checkpoints for distributed file systems |
US11151001B2 (en) | 2020-01-28 | 2021-10-19 | Qumulo, Inc. | Recovery checkpoints for distributed file systems |
US10860414B1 (en) | 2020-01-31 | 2020-12-08 | Qumulo, Inc. | Change notification in distributed file systems |
US10936538B1 (en) | 2020-03-30 | 2021-03-02 | Qumulo, Inc. | Fair sampling of alternate data stream metrics for file systems |
US10936551B1 (en) | 2020-03-30 | 2021-03-02 | Qumulo, Inc. | Aggregating alternate data stream metrics for file systems |
US11775481B2 (en) | 2020-09-30 | 2023-10-03 | Qumulo, Inc. | User interfaces for managing distributed file systems |
US11372819B1 (en) | 2021-01-28 | 2022-06-28 | Qumulo, Inc. | Replicating files in distributed file systems using object-based data storage |
US11157458B1 (en) | 2021-01-28 | 2021-10-26 | Qumulo, Inc. | Replicating files in distributed file systems using object-based data storage |
US11461241B2 (en) | 2021-03-03 | 2022-10-04 | Qumulo, Inc. | Storage tier management for file systems |
US11435901B1 (en) | 2021-03-16 | 2022-09-06 | Qumulo, Inc. | Backup services for distributed file systems in cloud computing environments |
US11132126B1 (en) | 2021-03-16 | 2021-09-28 | Qumulo, Inc. | Backup services for distributed file systems in cloud computing environments |
US11567660B2 (en) | 2021-03-16 | 2023-01-31 | Qumulo, Inc. | Managing cloud storage for distributed file systems |
US11669255B2 (en) | 2021-06-30 | 2023-06-06 | Qumulo, Inc. | Distributed resource caching by reallocation of storage caching using tokens and agents with non-depleted cache allocations |
US11294604B1 (en) | 2021-10-22 | 2022-04-05 | Qumulo, Inc. | Serverless disk drives based on cloud storage |
US11354273B1 (en) | 2021-11-18 | 2022-06-07 | Qumulo, Inc. | Managing usable storage space in distributed file systems |
US11599508B1 (en) | 2022-01-31 | 2023-03-07 | Qumulo, Inc. | Integrating distributed file systems with object stores |
US11722150B1 (en) | 2022-09-28 | 2023-08-08 | Qumulo, Inc. | Error resistant write-ahead log |
US11729269B1 (en) | 2022-10-26 | 2023-08-15 | Qumulo, Inc. | Bandwidth management in distributed file systems |
US11921677B1 (en) | 2023-11-07 | 2024-03-05 | Qumulo, Inc. | Sharing namespaces across file system clusters |
US11934660B1 (en) | 2023-11-07 | 2024-03-19 | Qumulo, Inc. | Tiered data storage with ephemeral and persistent tiers |
Also Published As
Publication number | Publication date |
---|---|
US20100005473A1 (en) | 2010-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8782655B2 (en) | Controlling computing resource consumption | |
US11681562B2 (en) | Resource manager for managing the sharing of resources among multiple workloads in a distributed computing environment | |
US8209695B1 (en) | Reserving resources in a resource-on-demand system for user desktop utility demand | |
US9886322B2 (en) | System and method for providing advanced reservations in a compute environment | |
US8650296B1 (en) | Workload reallocation involving inter-server transfer of software license rights and intra-server transfer of hardware resources | |
US8689227B2 (en) | System and method for integrating capacity planning and workload management | |
US8495648B1 (en) | Managing allocation of computing capacity | |
US8621476B2 (en) | Method and apparatus for resource management in grid computing systems | |
CN102844724B (en) | Power supply in managing distributed computing system | |
US8024736B1 (en) | System for controlling a distribution of unutilized computer resources | |
US7890629B2 (en) | System and method of providing reservation masks within a compute environment | |
US20050188075A1 (en) | System and method for supporting transaction and parallel services in a clustered system based on a service level agreement | |
US20080271039A1 (en) | Systems and methods for providing capacity management of resource pools for servicing workloads | |
US20100043009A1 (en) | Resource Allocation in Multi-Core Environment | |
US20070266388A1 (en) | System and method for providing advanced reservations in a compute environment | |
US7711822B1 (en) | Resource management in application servers | |
CN116450358A (en) | Resource management for virtual machines in cloud computing systems | |
WO2006107531A2 (en) | Simple integration of an on-demand compute environment | |
WO2011076486A1 (en) | A method and system for dynamic workload allocation in a computing center which optimizes the overall energy consumption | |
US8826287B1 (en) | System for adjusting computer resources allocated for executing an application using a control plug-in | |
CN113010309B (en) | Cluster resource scheduling method, device, storage medium, equipment and program product | |
KR101743028B1 (en) | Apparatus and method of dynamic resource allocation algorithm for service differentiation in virtualized environments | |
US8332861B1 (en) | Virtualized temporary instant capacity | |
KR100858205B1 | System for offering application service provider service in grid-base and method thereof | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLANDING, WILLIAM H.;GRECO, FRANKLIN;REEL/FRAME:021247/0361;SIGNING DATES FROM 20080630 TO 20080701
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLANDING, WILLIAM H.;GRECO, FRANKLIN;SIGNING DATES FROM 20080630 TO 20080701;REEL/FRAME:021247/0361 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
AS | Assignment |
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001 Effective date: 20151027 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
AS | Assignment |
Owner name: OT PATENT ESCROW, LLC, ILLINOIS Free format text: PATENT ASSIGNMENT, SECURITY INTEREST, AND LIEN AGREEMENT;ASSIGNORS:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;HEWLETT PACKARD ENTERPRISE COMPANY;REEL/FRAME:055269/0001 Effective date: 20210115 |
|
AS | Assignment |
Owner name: VALTRUS INNOVATIONS LIMITED, IRELAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OT PATENT ESCROW, LLC;REEL/FRAME:057650/0537 Effective date: 20210803 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |