US20090172434A1 - Latency based platform coordination - Google Patents

Latency based platform coordination

Info

Publication number
US20090172434A1
US20090172434A1 (application US12/006,251)
Authority
US
United States
Prior art keywords
latency, bridge, components, value, data
Legal status
Abandoned
Application number
US12/006,251
Inventor
Seh W. Kwa
Robert Gough
Neil Songer
Jaya L. Jeyaseelan
Barnes Cooper
Nilesh V. Shah
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US12/006,251
Publication of US20090172434A1
Priority to US12/960,277 (published as US20110078473A1)
Priority to US13/213,353 (granted as US8332675B2)
Assigned to INTEL CORPORATION reassignment INTEL CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JEYASEELAN, JAYA L., SONGER, NEIL, KWA, SEH W., SHAH, NILESH V., COOPER, BARNES, GOUGH, ROBERT

Classifications

    • G06F 1/3203: Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3246: Power saving characterised by software-initiated power-off
    • G06F 1/329: Power saving characterised by task scheduling
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the PCI bus 430 may be coupled to a network interface card (NIC) 432 and one or more disk drive(s) 434 .
  • Other devices may be coupled to the PCI bus 430 .
  • the CPU 408 and the MCH 414 may be combined to form a single chip.
  • the graphics accelerator 422 may be included within the MCH 414 in other embodiments.
  • peripherals coupled to the ICH 426 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), universal serial bus (USB) port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), and the like.
  • BIOS 450 may be embodied as logic instructions encoded on a memory module such as, e.g., a flash memory module.
  • The term “logic instructions” as referred to herein relates to expressions which may be understood by one or more machines for performing one or more logical operations.
  • For example, logic instructions may comprise instructions which are interpretable by a compiler for executing one or more operations on one or more data objects.
  • this is merely an example of machine-readable instructions and embodiments are not limited in this respect.
  • a computer readable medium may comprise one or more storage devices for storing computer readable instructions or data.
  • Such storage devices may comprise storage media such as, for example, optical, magnetic or semiconductor storage media.
  • this is merely an example of a computer readable medium and embodiments are not limited in this respect.
  • The term “logic” as referred to herein relates to structure for performing one or more logical operations.
  • logic may comprise circuitry which provides one or more output signals based upon one or more input signals.
  • Such circuitry may comprise a finite state machine which receives a digital input and provides a digital output, or circuitry which provides one or more analog output signals in response to one or more analog input signals.
  • Such circuitry may be provided in an application specific integrated circuit (ASIC) or field programmable gate array (FPGA).
  • logic may comprise machine-readable instructions stored in a memory in combination with processing circuitry to execute such machine-readable instructions.
  • Some of the methods described herein may be embodied as logic instructions on a computer-readable medium. When executed on a processor, the logic instructions cause a processor to be programmed as a special-purpose machine that implements the described methods.
  • the processor when configured by the logic instructions to execute the methods described herein, constitutes structure for performing the described methods.
  • the methods described herein may be reduced to logic on, e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC) or the like.
  • FPGA field programmable gate array
  • ASIC application specific integrated circuit
  • The term “coupled” may mean that two or more elements are in direct physical or electrical contact.
  • However, “coupled” may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.

Abstract

In some embodiments, an electronic apparatus comprises at least one processor, a plurality of components, and a policy engine comprising logic to receive latency data from one or more components in the electronic device, compute a minimum latency tolerance value from the latency data, and determine a power management policy from the minimum latency tolerance value.

Description

    RELATED APPLICATIONS
  • None.
  • BACKGROUND
  • Power management of interconnected devices is becoming more of a concern as computers implement mobile system platforms where the computers and devices are battery powered. One of the biggest challenges in implementing aggressive platform power management for mobile PC client and handheld devices is the lack of awareness of device latency tolerance for direct memory access (DMA) to main memory, and of application latency dependencies, to facilitate power policy decisions. Deeper sleep states gain greater power savings, but at the cost of longer resume times. For example, deeper sleep states help microprocessors achieve very low power, but may require up to 200 microseconds to resume versus keeping the processor in a “lighter” (shallower) sleep state. Platform phase-locked loop (PLL) shutdown requires 20-50 microseconds to resume, versus tens of nanoseconds with clock gating.
  • Due to the lack of awareness in device latency tolerance, some computing platforms maintain system resources in an available state (especially data paths and system memory) even during idle states. Maintaining these resources in an available state consumes power.
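The resume-latency figures above imply a simple selection rule: a platform may enter the deepest sleep state whose resume latency fits within the current latency tolerance. A minimal Python sketch of this tradeoff follows; the state names and relative-power numbers are illustrative assumptions, loosely based on the figures in the text (clock gating: tens of nanoseconds; PLL shutdown: 20-50 μs; deep sleep: up to 200 μs), not values defined by the patent.

```python
# Hypothetical power states: deeper states save more power but take
# longer to resume. Tuples are (name, resume latency in us, relative power).
SLEEP_STATES = [
    ("clock_gating", 0.05, 0.60),
    ("pll_shutdown", 50.0, 0.20),
    ("deep_sleep", 200.0, 0.05),
]

def deepest_allowed_state(latency_tolerance_us):
    """Pick the lowest-power state whose resume latency fits the tolerance."""
    best, best_power = "active", 1.0
    for name, resume_us, power in SLEEP_STATES:
        if resume_us <= latency_tolerance_us and power < best_power:
            best, best_power = name, power
    return best
```

For example, a 100 μs tolerance permits PLL shutdown but not deep sleep, while a tolerance tighter than any resume latency keeps the platform active.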
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The detailed description is described with reference to the accompanying figures, in which:
  • FIGS. 1A-1C are schematic block diagrams of portions of an apparatus that supports latency based platform coordination, according to some embodiments.
  • FIG. 2 is a flowchart illustrating operations in a method to implement latency based platform coordination, according to some embodiments.
  • FIG. 3 is a schematic timing diagram of an example of latency reporting and policy engine coordination, according to some embodiments.
  • FIG. 4 is a schematic illustration of a computer system, in accordance with some embodiments.
  • DETAILED DESCRIPTION
  • Described herein are exemplary systems and methods for implementing latency based platform coordination which, in some embodiments, may be implemented in an electronic device such as, e.g., a computer system. In the following description, numerous specific details are set forth to provide a thorough understanding of various embodiments. However, it will be understood by those skilled in the art that the various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been illustrated or described in detail so as not to obscure the particular embodiments.
  • Embodiments of systems which implement latency based platform coordination will be explained with reference to FIGS. 1A-1C and FIG. 2. FIGS. 1A-1C are schematic block diagrams of portions of an apparatus that supports latency based platform coordination, according to some embodiments. FIG. 2 is a flowchart illustrating operations in a method to implement latency based platform coordination, according to some embodiments.
  • Referring first to FIG. 1A, a system to implement latency based platform coordination comprises one or more processors 110 and a platform control hub (PCH) 115, which in combination are sometimes referred to as the root complex. A policy engine 130 is implemented in the system as an abstract device which comprises logic to implement latency based platform coordination. In some embodiments, the policy engine 130 may be implemented as logic instructions stored on a computer readable medium which, when executed by a processor, configure the processor to implement latency based platform coordination operations. In other embodiments, the policy engine 130 may be reduced to logic, for example in a programmable logic device such as a field programmable gate array (FPGA), or may be reduced to hardwired circuit logic. The policy engine 130 may be implemented as a single, discrete entity, or may be distributed among multiple processing components in the root complex.
  • The system further comprises a plurality of components 125 coupled to the policy engine 130 by a bridge/switching device 120. In some embodiments, each of the plurality of components reports (operation 210) its snoop latency, alone or in combination with its non-snoop latency, to the policy engine 130. In the embodiment depicted in FIG. 1, the latency parameters may be reported as a tuple, in which the snoop latency parameter is represented by the symbol Sn, where n identifies the component, and in which the non-snoop latency is represented by the symbol NSn, where n identifies the component. Thus, Lat(S1, NS1) represents the snoop and non-snoop latency parameters for the first component. Similarly, Lat(S2, NS2) represents the snoop and non-snoop latency parameters for the second component and Lat(S3, NS3) represents the snoop and non-snoop latency parameters for the third component. In practice, the system may comprise dozens or even hundreds of components.
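The Lat(Sn, NSn) tuples above can be represented with a small data structure; the component names and latency values below are hypothetical, chosen only to make the shape of the reports concrete.

```python
from collections import namedtuple

# Each component reports its snoop and non-snoop DMA latency tolerance.
Lat = namedtuple("Lat", ["snoop_us", "nonsnoop_us"])

reported = {
    "component1": Lat(snoop_us=60.0, nonsnoop_us=100.0),
    "component2": Lat(snoop_us=30.0, nonsnoop_us=80.0),
    "component3": Lat(snoop_us=45.0, nonsnoop_us=90.0),
}
```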
  • In the embodiment depicted in FIG. 1A, each of the components 125 reports its latency parameters through the bridge/switching device 120, which receives the parameters at operation 215. In other embodiments, one or more of the components 125 may be coupled directly to one of the processors 110 or to the policy engine 130, such that the device could report its latency parameters directly to the policy engine 130. In some embodiments, the bridge/switching device 120 has a characteristic delay, indicated in the drawings by the symbol Δ. The delay, Δ, associated with the bridge/switching device 120 may vary as a function of the switching/transmission capacity of the bridge/switching device 120, the traffic flowing through it, and its power state. For example, a bridge/switching device that is in an inactive/idle or sleep state would have a higher characteristic delay than a bridge/switching device 120 that is in an active state. Similarly, a bridge/switching device 120 with a high traffic load would have a higher characteristic delay than a bridge/switching device 120 with a low traffic load.
  • In some embodiments, the bridge/switching device 120 comprises logic to selectively report latency parameters from the components 125 coupled to the bridge/switching device 120. In addition, in some embodiments the bridge/switching device 120 comprises logic to modify the reported latency parameters in order to compensate for the delay, Δ, associated with the bridge/switching device 120. In one embodiment, the bridge/switching device implements logic to deduct the characteristic delay, Δ, associated with the bridge/switching device 120 from each of the latency parameters for each of the components coupled to the bridge/switching device 120, at operation 220. The bridge/switching device 120 may further implement logic to report the latency parameters to the policy engine 130, at operation 225. For example, the bridge/switching device 120 may report to the policy engine the MIN(Lat(S1−Δ, NS1−Δ), Lat(S2−Δ, NS2−Δ), Lat(S3−Δ, NS3−Δ)).
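The Δ-deduction and minimum-reporting steps (operations 220 and 225) can be sketched as follows. The component latency values are hypothetical, and reporting the element-wise minimum of the adjusted tuples is one possible reading of the MIN() expression above, not the patent's stated definition.

```python
def bridge_report(latencies, delta_us):
    """latencies: list of (snoop_us, nonsnoop_us) tuples from components.

    Deduct the bridge's characteristic delay from each tuple, then report
    the minimum adjusted snoop and non-snoop latencies to the policy engine.
    """
    adjusted = [(s - delta_us, ns - delta_us) for s, ns in latencies]
    min_snoop = min(s for s, _ in adjusted)
    min_nonsnoop = min(ns for _, ns in adjusted)
    return (min_snoop, min_nonsnoop)
```

With hypothetical component tuples and the 2 μs active-state delay of FIG. 1B, `bridge_report([(60.0, 100.0), (30.0, 80.0), (45.0, 90.0)], 2.0)` yields `(28.0, 78.0)`.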
  • The policy engine 130 receives the reported latency parameters from the bridge/switching device 120 at operation 230. In some embodiments, the policy engine 130 implements logic to compute a minimum latency tolerance value (operation 235) from the latency parameters reported into the policy engine 130. The policy engine 130 then uses the minimum latency tolerance value to determine a power management policy for the system.
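The policy engine's receive-and-compute steps (through operation 235) might be sketched as below. This is an illustrative sketch, not the patent's implementation; the 50 μs threshold and the policy names are assumed example values.

```python
def min_latency_tolerance(reports):
    """reports: iterable of (snoop_us, nonsnoop_us) tuples from bridges."""
    return min(min(snoop, nonsnoop) for snoop, nonsnoop in reports)

def choose_policy(reports, deep_sleep_resume_us=50.0):
    """Permit a deep sleep state only when its resume latency fits the
    tightest tolerance reported anywhere in the platform."""
    tolerance = min_latency_tolerance(reports)
    return "deep_sleep" if tolerance >= deep_sleep_resume_us else "shallow_sleep"
```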
  • FIG. 1B is a schematic illustration of an example in which the system is in an active mode. Each of the components 125 reports its respective latency values into the bridge/switching device 120. In an active state, the bridge/switching device 120 has a characteristic delay of 2 μs. As described above, the bridge/switching device receives the latency parameters from the respective components 125 and deducts the characteristic delay of 2 μs from the reported parameters. The bridge/switching device 120 then reports the minimum latency parameter tuple to the policy engine 130.
  • FIG. 1C is a schematic illustration of an example in which the system is in a low-power (idle) mode. Each of the components 125 reports its respective latency values into the bridge/switching device 120. In this state, the bridge/switching device 120 has a characteristic delay of 20 μs. As described above, the bridge/switching device receives the latency parameters from the respective components 125 and deducts the characteristic delay of 20 μs from the reported parameters. The bridge/switching device 120 then reports the minimum latency parameter tuple to the policy engine 130.
  • FIG. 3 is a schematic timing diagram of an example of latency reporting and policy engine coordination, according to some embodiments. FIG. 3 illustrates the utilization of latency reporting while two policy engines (PE1 and PE2) share latency information and coordinate to steer the appropriate C-states for microprocessors, memory controller power management and any other platform PLL power management. Consider a device that exhibits a bursty traffic pattern with intermediate low power states when active, and thus reports a low latency tolerance. The microprocessor may resist entering deeper sleep states that would impact performance, such as states requiring it to flush its caches. However, when the device is idle and reports an extended latency tolerance, that information helps enhance utilization of deeper sleep states for the platform and microprocessor, with the knowledge that any visible performance degradation is unlikely.
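The coordination between PE1 and PE2 can be sketched as taking the tighter of the two shared tolerances before steering a C-state. This is an assumption about the coordination rule, not the patent's mechanism; the 200 μs deep-state resume time echoes the background figure.

```python
def coordinate(pe1_tolerance_us, pe2_tolerance_us, deep_resume_us=200.0):
    """Two policy engines share tolerances; the coordinated tolerance is
    the smaller of the two, so a deep C-state is steered only when both
    engines can tolerate its resume latency."""
    shared = min(pe1_tolerance_us, pe2_tolerance_us)
    return "deep_c_state" if shared >= deep_resume_us else "shallow_c_state"
```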
  • FIG. 4 is a schematic illustration of an architecture of a computer system which may implement latency based platform coordination in accordance with some embodiments. Computer system 400 includes a computing device 402 and a power adapter 404 (e.g., to supply electrical power to the computing device 402). The computing device 402 may be any suitable computing device such as a laptop (or notebook) computer, a personal digital assistant, a desktop computing device (e.g., a workstation or a desktop computer), a rack-mounted computing device, and the like.
  • Electrical power may be provided to various components of the computing device 402 (e.g., through a computing device power supply 406) from one or more of the following sources: one or more battery packs, an alternating current (AC) outlet (e.g., through a transformer and/or adaptor such as a power adapter 404), automotive power supplies, airplane power supplies, and the like. In one embodiment, the power adapter 404 may transform the power supply source output (e.g., the AC outlet voltage of about 110 VAC to 240 VAC) to a direct current (DC) voltage between about 7 VDC and 12.6 VDC. Accordingly, the power adapter 404 may be an AC/DC adapter.
  • The computing device 402 may also include one or more central processing unit(s) (CPUs) 408 coupled to a bus 410. In one embodiment, the CPU 408 may be one or more processors in the Pentium® family of processors including the Pentium® II processor family, Pentium® III processors, Pentium® IV processors, Core and Core2 processors available from Intel® Corporation of Santa Clara, Calif. Alternatively, other CPUs may be used, such as Intel's Itanium®, XEON™, and Celeron® processors. Also, one or more processors from other manufacturers may be utilized. Moreover, the processors may have a single or multi core design.
  • A chipset 412 may be coupled to the bus 410. The chipset 412 may include a memory control hub (MCH) 414. The MCH 414 may include a memory controller 416 that is coupled to a main system memory 418. The main system memory 418 stores data and sequences of instructions that are executed by the CPU 408, or any other device included in the system 400. In some embodiments, the main system memory 418 includes random access memory (RAM); however, the main system memory 418 may be implemented using other memory types such as dynamic RAM (DRAM), synchronous DRAM (SDRAM), and the like. Additional devices may also be coupled to the bus 410, such as multiple CPUs and/or multiple system memories.
  • In some embodiments, main memory 418 may include one or more flash memory devices. For example, main memory 418 may include either NAND or NOR flash memory devices, which may provide hundreds of megabytes, or even many gigabytes, of storage capacity.
  • The MCH 414 may also include a graphics interface 420 coupled to a graphics accelerator 422. In one embodiment, the graphics interface 420 is coupled to the graphics accelerator 422 via an accelerated graphics port (AGP). In an embodiment, a display (such as a flat panel display) 440 may be coupled to the graphics interface 420 through, for example, a signal converter that translates a digital representation of an image stored in a storage device such as video memory or system memory into display signals that are interpreted and displayed by the display. The display signals produced for the display 440 may pass through various control devices before being interpreted by, and subsequently displayed on, the display.
  • A hub interface 424 couples the MCH 414 to an input/output control hub (ICH) 426. The ICH 426 provides an interface to input/output (I/O) devices coupled to the computer system 400. The ICH 426 may be coupled to a peripheral component interconnect (PCI) bus. Hence, the ICH 426 includes a PCI bridge 428 that provides an interface to a PCI bus 430. The PCI bridge 428 provides a data path between the CPU 408 and peripheral devices. Additionally, other types of I/O interconnect topologies may be utilized such as the PCI Express™ architecture, available through Intel® Corporation of Santa Clara, Calif.
  • The PCI bus 430 may be coupled to a network interface card (NIC) 432 and one or more disk drive(s) 434. Other devices may be coupled to the PCI bus 430. In addition, the CPU 408 and the MCH 414 may be combined to form a single chip. Furthermore, the graphics accelerator 422 may be included within the MCH 414 in other embodiments.
  • Additionally, other peripherals coupled to the ICH 426 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drive(s), universal serial bus (USB) port(s), a keyboard, a mouse, parallel port(s), serial port(s), floppy disk drive(s), digital output support (e.g., digital video interface (DVI)), and the like.
  • System 400 may further include a basic input/output system (BIOS) 450 to manage, among other things, the boot-up operations of computing system 400. BIOS 450 may be embodied as logic instructions encoded on a memory module such as, e.g., a flash memory module.
  • The terms “logic instructions” as referred to herein relates to expressions which may be understood by one or more machines for performing one or more logical operations. For example, logic instructions may comprise instructions which are interpretable by a processor compiler for executing one or more operations on one or more data objects. However, this is merely an example of machine-readable instructions and embodiments are not limited in this respect.
  • The terms “computer readable medium” as referred to herein relates to media capable of maintaining expressions which are perceivable by one or more machines. For example, a computer readable medium may comprise one or more storage devices for storing computer readable instructions or data. Such storage devices may comprise storage media such as, for example, optical, magnetic or semiconductor storage media. However, this is merely an example of a computer readable medium and embodiments are not limited in this respect.
  • The term “logic” as referred to herein relates to structure for performing one or more logical operations. For example, logic may comprise circuitry which provides one or more output signals based upon one or more input signals. Such circuitry may comprise a finite state machine which receives a digital input and provides a digital output, or circuitry which provides one or more analog output signals in response to one or more analog input signals. Such circuitry may be provided in an application specific integrated circuit (ASIC) or field programmable gate array (FPGA). Also, logic may comprise machine-readable instructions stored in a memory in combination with processing circuitry to execute such machine-readable instructions. However, these are merely examples of structures which may provide logic and embodiments are not limited in this respect.
  • Some of the methods described herein may be embodied as logic instructions on a computer-readable medium. When executed on a processor, the logic instructions cause a processor to be programmed as a special-purpose machine that implements the described methods. The processor, when configured by the logic instructions to execute the methods described herein, constitutes structure for performing the described methods. Alternatively, the methods described herein may be reduced to logic on, e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC) or the like.
  • In the description and claims, the terms coupled and connected, along with their derivatives, may be used. In particular embodiments, connected may be used to indicate that two or more elements are in direct physical or electrical contact with each other. Coupled may mean that two or more elements are in direct physical or electrical contact. However, coupled may also mean that two or more elements may not be in direct contact with each other, but yet may still cooperate or interact with each other.
  • Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least an implementation. The appearances of the phrase “in one embodiment” in various places in the specification may or may not be all referring to the same embodiment.
  • Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.

Claims (14)

1. A method to implement latency based platform coordination in an electronic device, comprising:
receiving, in a policy engine, latency data from one or more components in the electronic device;
computing a minimum latency tolerance value from the latency data; and
determining a power management policy from the minimum latency tolerance value.
2. The method of claim 1, wherein receiving, in a policy engine, latency data from one or more components in the electronic device comprises receiving a snoop latency tolerance and a non-snoop latency tolerance from the one or more components.
3. The method of claim 2, wherein:
the latency data from the one or more components is transmitted via an intermediate bridge/switch device;
the bridge has at least one delay value for data transmitted via the bridge/switch device; and
the bridge deducts the delay value from the latency data.
4. The method of claim 3, wherein:
the bridge comprises a first delay value when the bridge is in a low power state and a second delay value when the bridge is in an active power state; and
the bridge deducts one of the first delay value or the second delay value from the latency data.
5. The method of claim 1, wherein computing a minimum latency tolerance value from the latency data comprises:
comparing a plurality of latency values received from a plurality of components; and
selecting the lowest latency value from the plurality of latency values.
6. The method of claim 1, wherein the policy engine monitors latency values over time during operation of the electronic device and updates power management policies as a function of changes in the latency tolerance values.
7. The method of claim 1, wherein determining a power management policy from the minimum latency tolerance value comprises selecting a sleep state that permits the system to meet the minimum latency tolerance value.
8. An electronic apparatus, comprising:
at least one processor;
a plurality of components; and
a policy engine comprising logic to:
receive latency data from one or more components in the electronic device;
compute a minimum latency tolerance value from the latency data; and
determine a power management policy from the minimum latency tolerance value.
9. The electronic apparatus of claim 8, wherein the policy engine further comprises logic to receive a snoop latency tolerance and a non-snoop latency tolerance from the one or more components.
10. The electronic apparatus of claim 9, wherein:
the latency data from the one or more components is transmitted via an intermediate bridge/switch device;
the bridge has at least one delay value for data transmitted via the bridge/switch device; and
the bridge deducts the delay value from the latency data.
11. The electronic apparatus of claim 10, wherein:
the bridge comprises a first delay value when the bridge is in a low power state and a second delay value when the bridge is in an active power state; and
the bridge deducts one of the first delay value or the second delay value from the latency data.
12. The electronic apparatus of claim 8, wherein the policy engine further comprises logic to:
compare a plurality of latency values received from a plurality of components; and
select the lowest latency value from the plurality of latency values.
13. The electronic apparatus of claim 8, wherein the policy engine further comprises logic to monitor latency values over time during operation of the electronic device and updates power management policies as a function of changes in the latency tolerance values.
14. The electronic apparatus of claim 8, wherein the policy engine further comprises logic to select a sleep state that permits the system to meet the minimum latency tolerance value.
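The bridge behavior recited in claims 3-4 and 10-11 can be sketched as follows. This is a minimal illustrative sketch, not the claimed implementation: the function name, delay constants, and tolerance values are assumptions chosen for illustration.

```python
# Hypothetical sketch of a bridge/switch deducting its own forwarding
# delay from a device's reported latency tolerance before forwarding the
# report upstream, using a different delay value depending on whether the
# bridge is in a low power state or an active state.

ACTIVE_DELAY_US = 2      # assumed forwarding delay when the bridge is active
LOW_POWER_DELAY_US = 20  # assumed delay when the bridge must first wake up

def forward_latency_report(reported_tolerance_us, bridge_in_low_power):
    """Deduct the bridge's applicable delay so the upstream policy engine
    sees only the tolerance remaining after traversal of the bridge."""
    delay = LOW_POWER_DELAY_US if bridge_in_low_power else ACTIVE_DELAY_US
    # Clamp at zero so a negative tolerance is never reported upstream.
    return max(reported_tolerance_us - delay, 0)

print(forward_latency_report(100, bridge_in_low_power=False))  # -> 98
print(forward_latency_report(100, bridge_in_low_power=True))   # -> 80
```

The deduction ensures the minimum computed by the policy engine already accounts for the path delay through intermediate devices, so the selected sleep state still meets each endpoint's original tolerance.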
US12/006,251 2007-12-31 2007-12-31 Latency based platform coordination Abandoned US20090172434A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US12/006,251 US20090172434A1 (en) 2007-12-31 2007-12-31 Latency based platform coordination
US12/960,277 US20110078473A1 (en) 2007-12-31 2010-12-03 Latency based platform coordination
US13/213,353 US8332675B2 (en) 2007-12-31 2011-08-19 Latency based platform coordination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/006,251 US20090172434A1 (en) 2007-12-31 2007-12-31 Latency based platform coordination

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US12/960,277 Continuation US20110078473A1 (en) 2007-12-31 2010-12-03 Latency based platform coordination
US13/213,353 Continuation US8332675B2 (en) 2007-12-31 2011-08-19 Latency based platform coordination

Publications (1)

Publication Number Publication Date
US20090172434A1 true US20090172434A1 (en) 2009-07-02

Family

ID=40800117

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/006,251 Abandoned US20090172434A1 (en) 2007-12-31 2007-12-31 Latency based platform coordination
US12/960,277 Abandoned US20110078473A1 (en) 2007-12-31 2010-12-03 Latency based platform coordination
US13/213,353 Expired - Fee Related US8332675B2 (en) 2007-12-31 2011-08-19 Latency based platform coordination

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/960,277 Abandoned US20110078473A1 (en) 2007-12-31 2010-12-03 Latency based platform coordination
US13/213,353 Expired - Fee Related US8332675B2 (en) 2007-12-31 2011-08-19 Latency based platform coordination

Country Status (1)

Country Link
US (3) US20090172434A1 (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090249103A1 (en) * 2008-03-31 2009-10-01 Jeyaseelan Jaya L Platform power management based on latency guidance
US20090327774A1 (en) * 2008-06-26 2009-12-31 Jeyaseelan Jaya L Coordinated link power management
US20100169684A1 (en) * 2008-12-31 2010-07-01 Jeyaseelan Jaya L Downstream device service latency reporting for power management
US20100169685A1 (en) * 2008-12-31 2010-07-01 Gough Robert E Idle duration reporting for power management
FR2947924A1 (en) * 2009-07-07 2011-01-14 Thales Sa METHOD AND DEVICE FOR THE DYNAMIC MANAGEMENT OF CONSUMPTION IN A PROCESSOR
US20110173474A1 (en) * 2010-01-11 2011-07-14 Salsbery Brian J Dynamic low power mode implementation for computing devices
US20110173475A1 (en) * 2010-01-11 2011-07-14 Frantz Andrew J Domain specific language, compiler and jit for dynamic power management
US20110302626A1 (en) * 2007-12-31 2011-12-08 Kwa Seh W Latency based platform coordination
WO2012109564A2 (en) * 2011-02-11 2012-08-16 Intel Corporation Techniques for managing power consumption state of a processor
US20130007492A1 (en) * 2011-06-30 2013-01-03 Sokol Jr Joseph Timer interrupt latency
US20140006824A1 (en) * 2012-06-29 2014-01-02 Christian Maciocco Using device idle duration information to optimize energy efficiency
US20140082242A1 (en) * 2012-09-18 2014-03-20 Apple Inc. Reducing latency in a peripheral component interconnect express link
WO2014004506A3 (en) * 2012-06-25 2014-03-27 Qualcomm Incorporated System and method for reducing power consumption in a wireless communication system
US20140189403A1 (en) * 2012-12-28 2014-07-03 Eugene Gorbatov Periodic activity alignment
US20140344599A1 (en) * 2013-05-15 2014-11-20 Advanced Micro Devices, Inc. Method and System for Power Management
WO2015164011A1 (en) * 2014-04-22 2015-10-29 Qualcomm Incorporated Latency-based power mode units for controlling power modes of processor cores, and related methods and systems
US20160041595A1 (en) * 2013-06-26 2016-02-11 Intel Corporation Controlling Reduced Power States Using Platform Latency Tolerance

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130275791A1 (en) * 2012-04-12 2013-10-17 Qualcomm Incorporated Method and System for Tracking and Selecting Optimal Power Conserving Modes of a PCD
US9696785B2 (en) 2013-12-28 2017-07-04 Intel Corporation Electronic device having a controller to enter a low power mode

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6155160A (en) * 1998-06-04 2000-12-05 Hochbrueckner; Kenneth Propane detector system
US20040128576A1 (en) * 2002-12-31 2004-07-01 Michael Gutman Active state link power management
US20050273633A1 (en) * 2004-06-02 2005-12-08 Intel Corporation Hardware coordination of power management activities
US7716506B1 (en) * 2006-12-14 2010-05-11 Nvidia Corporation Apparatus, method, and system for dynamically selecting power down level

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5838603A (en) * 1994-10-11 1998-11-17 Matsushita Electric Industrial Co., Ltd. Semiconductor device and method for fabricating the same, memory core chip and memory peripheral circuit chip
US20090172434A1 (en) * 2007-12-31 2009-07-02 Kwa Seh W Latency based platform coordination

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6155160A (en) * 1998-06-04 2000-12-05 Hochbrueckner; Kenneth Propane detector system
US20040128576A1 (en) * 2002-12-31 2004-07-01 Michael Gutman Active state link power management
US20050273633A1 (en) * 2004-06-02 2005-12-08 Intel Corporation Hardware coordination of power management activities
US7716506B1 (en) * 2006-12-14 2010-05-11 Nvidia Corporation Apparatus, method, and system for dynamically selecting power down level

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110302626A1 (en) * 2007-12-31 2011-12-08 Kwa Seh W Latency based platform coordination
US8332675B2 (en) * 2007-12-31 2012-12-11 Intel Corporation Latency based platform coordination
US8631257B2 (en) 2008-03-31 2014-01-14 Intel Corporation Platform power management based on latency guidance
US20090249103A1 (en) * 2008-03-31 2009-10-01 Jeyaseelan Jaya L Platform power management based on latency guidance
US8176341B2 (en) 2008-03-31 2012-05-08 Intel Corporation Platform power management based on latency guidance
JP2014206996A (en) * 2008-03-31 2014-10-30 インテル・コーポレーション Platform power management based on latency guidance
US20090327774A1 (en) * 2008-06-26 2009-12-31 Jeyaseelan Jaya L Coordinated link power management
US8255713B2 (en) 2008-06-26 2012-08-28 Intel Corporation Management of link states using plateform and device latencies
US8868948B2 (en) 2008-06-26 2014-10-21 Intel Corporation Method and system for coordinating link power management with platform power management
US9838967B2 (en) 2008-12-31 2017-12-05 Intel Corporation Downstream device service latency reporting for power management
US9459684B2 (en) 2008-12-31 2016-10-04 Intel Corporation Idle duration reporting for power management
US20100169685A1 (en) * 2008-12-31 2010-07-01 Gough Robert E Idle duration reporting for power management
US8607075B2 (en) 2008-12-31 2013-12-10 Intel Corporation Idle duration reporting for power management
US20100169684A1 (en) * 2008-12-31 2010-07-01 Jeyaseelan Jaya L Downstream device service latency reporting for power management
US10182398B2 (en) 2008-12-31 2019-01-15 Intel Corporation Downstream device service latency reporting for power management
US8601296B2 (en) 2008-12-31 2013-12-03 Intel Corporation Downstream device service latency reporting for power management
US20110113271A1 (en) * 2009-07-07 2011-05-12 Thales Method and device for the dynamic management of consumption in a processor
EP2275903A1 (en) * 2009-07-07 2011-01-19 Thales Method and apparatus for dynamic power consumption management of a processor
FR2947924A1 (en) * 2009-07-07 2011-01-14 Thales Sa METHOD AND DEVICE FOR THE DYNAMIC MANAGEMENT OF CONSUMPTION IN A PROCESSOR
US8375233B2 (en) 2009-07-07 2013-02-12 Thales Method and device for the dynamic management of consumption in a processor
US20110173474A1 (en) * 2010-01-11 2011-07-14 Salsbery Brian J Dynamic low power mode implementation for computing devices
US8504855B2 (en) 2010-01-11 2013-08-06 Qualcomm Incorporated Domain specific language, compiler and JIT for dynamic power management
US20110173475A1 (en) * 2010-01-11 2011-07-14 Frantz Andrew J Domain specific language, compiler and jit for dynamic power management
US9182810B2 (en) 2010-01-11 2015-11-10 Qualcomm Incorporated Domain specific language, compiler and JIT for dynamic power management
US9235251B2 (en) 2010-01-11 2016-01-12 Qualcomm Incorporated Dynamic low power mode implementation for computing devices
US8560749B2 (en) 2011-02-11 2013-10-15 Intel Corporation Techniques for managing power consumption state of a processor involving use of latency tolerance report value
WO2012109564A3 (en) * 2011-02-11 2012-12-27 Intel Corporation Techniques for managing power consumption state of a processor
WO2012109564A2 (en) * 2011-02-11 2012-08-16 Intel Corporation Techniques for managing power consumption state of a processor
US20130007492A1 (en) * 2011-06-30 2013-01-03 Sokol Jr Joseph Timer interrupt latency
WO2014004506A3 (en) * 2012-06-25 2014-03-27 Qualcomm Incorporated System and method for reducing power consumption in a wireless communication system
US9264986B2 (en) 2012-06-25 2016-02-16 Qualcomm Incorporated System and method for reducing power consumption in a wireless communication system
US20140006824A1 (en) * 2012-06-29 2014-01-02 Christian Maciocco Using device idle duration information to optimize energy efficiency
US9015510B2 (en) * 2012-06-29 2015-04-21 Intel Corporation Optimizing energy efficiency using device idle duration information and latency tolerance based on a pre-wake configuration of a platform associated to the device
US9740645B2 (en) 2012-09-18 2017-08-22 Apple Inc. Reducing latency in a peripheral component interconnect express link
US9015396B2 (en) * 2012-09-18 2015-04-21 Apple Inc. Reducing latency in a peripheral component interconnect express link
US20140082242A1 (en) * 2012-09-18 2014-03-20 Apple Inc. Reducing latency in a peripheral component interconnect express link
US9213390B2 (en) * 2012-12-28 2015-12-15 Intel Corporation Periodic activity alignment
US20140189403A1 (en) * 2012-12-28 2014-07-03 Eugene Gorbatov Periodic activity alignment
US20140344599A1 (en) * 2013-05-15 2014-11-20 Advanced Micro Devices, Inc. Method and System for Power Management
US20160041595A1 (en) * 2013-06-26 2016-02-11 Intel Corporation Controlling Reduced Power States Using Platform Latency Tolerance
US9541983B2 (en) * 2013-06-26 2017-01-10 Intel Corporation Controlling reduced power states using platform latency tolerance
CN106233225A (en) * 2014-04-22 2016-12-14 高通股份有限公司 For power mode unit based on time delay controlling the power mode of processor core and associated method and system
JP2017519274A (en) * 2014-04-22 2017-07-13 クアルコム,インコーポレイテッド Latency-based power mode unit for controlling the power mode of a processor core, and related methods and systems
TWI595353B (en) * 2014-04-22 2017-08-11 高通公司 Latency-based power mode units for controlling power modes of processor cores, and related methods and systems
JP6151465B1 (en) * 2014-04-22 2017-06-21 クアルコム,インコーポレイテッド Latency-based power mode unit for controlling the power mode of a processor core, and related methods and systems
US9552033B2 (en) 2014-04-22 2017-01-24 Qualcomm Incorporated Latency-based power mode units for controlling power modes of processor cores, and related methods and systems
KR101826088B1 (en) 2014-04-22 2018-02-06 퀄컴 인코포레이티드 Latency-based power mode units for controlling power modes of processor cores, and related methods and systems
WO2015164011A1 (en) * 2014-04-22 2015-10-29 Qualcomm Incorporated Latency-based power mode units for controlling power modes of processor cores, and related methods and systems

Also Published As

Publication number Publication date
US8332675B2 (en) 2012-12-11
US20110302626A1 (en) 2011-12-08
US20110078473A1 (en) 2011-03-31

Similar Documents

Publication Publication Date Title
US8332675B2 (en) Latency based platform coordination
EP2818972B1 (en) Mapping a performance request to an operating frequency in a processor
CN102566739B (en) Multicore processor system and dynamic power management method and control device thereof
US8560869B2 (en) Dynamic power reduction
EP2894542B1 (en) Estimating scalability of a workload
EP2796961B1 (en) Controlling power and performance in a system agent of a processor
KR101748747B1 (en) Controlling configurable peak performance limits of a processor
US9377841B2 (en) Adaptively limiting a maximum operating frequency in a multicore processor
US20140181545A1 (en) Dynamic Balancing Of Power Across A Plurality Of Processor Domains According To Power Policy Control Bias
US10761579B2 (en) Supercapacitor-based power supply protection for multi-node systems
US9335813B2 (en) Method and system for run-time reallocation of leakage current and dynamic power supply current
CN104011626A (en) System, method and apparatus for energy efficiency and energy conservation by configuring power management parameters during run time
US9639143B2 (en) Interfacing dynamic hardware power managed blocks and software power managed blocks
US20140237276A1 (en) Method and Apparatus for Determining Tunable Parameters to Use in Power and Performance Management
US9405351B2 (en) Performing frequency coordination in a multiprocessor system
US9360909B2 (en) System, method and apparatus for energy efficiency and energy conservation by configuring power management parameters during run time
US20130007475A1 (en) Efficient frequency boost operation
US9377833B2 (en) Electronic device and power management method
Zagacki et al. Power Improvements on 2008 Desktop Platforms.
Kardach Advances in ultrabook™ platform power management

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KWA, SEH W.;GOUGH, ROBERT;SONGER, NEIL;AND OTHERS;SIGNING DATES FROM 20080208 TO 20080305;REEL/FRAME:026892/0840

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION