US20040024971A1 - Method and apparatus for write cache flush and fill mechanisms - Google Patents

Method and apparatus for write cache flush and fill mechanisms

Info

Publication number
US20040024971A1
Authority
US
United States
Prior art keywords
cache
write
memory
lines
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/631,353
Inventor
Zohar Bogin
Steven Clohset
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US10/631,353
Publication of US20040024971A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0804 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
    • G06F 12/0893 Caches characterised by their organisation or structure
    • G06F 12/0897 Caches characterised by their organisation or structure with two or more cache hierarchy levels
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/30 Providing cache or TLB in specific location of a processing system
    • G06F 2212/304 In main memory subsystem


Abstract

A write cache that reduces the number of memory accesses required to write data to main memory. When a memory write request is executed, the request not only updates the relevant location in cache memory but is also directed to updating the corresponding location in main memory. A separate write cache is dedicated to temporarily holding multiple write requests so that they can be organized for more efficient transmission to memory in burst transfers. In one embodiment, all writes within a predefined range of addresses can be written to memory as a group. In another embodiment, entries are held in the write cache until a minimum number of entries are available for writing to memory, and a least-recently-used mechanism can be used to decide which entries to transmit first. In yet another embodiment, partial writes are merged into a single cache line, to be written to memory in a single burst transmission.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The invention pertains generally to computer systems. In particular, it pertains to a write cache for writing data to memory. [0002]
  • 2. Description of the Related Art [0003]
  • Because processors can typically operate at much faster speeds than their main memory, most computer systems now use high-speed cache memory as local memory that the processor can access for most of its needs. However, although cache memory is fast, it is also much more expensive than the dynamic random access memory (DRAM) typically used for main memory, and the amount of available cache memory is typically only a fraction of the amount of DRAM memory in the system. Since much software involves repetitive execution of the same code, it is feasible to copy the code about to be executed from main memory into cache memory, where it can then be repetitively executed at high speed. Because copying from a slower memory also takes time, many computer systems have a hierarchy with multiple levels of cache, with each subsequent level being faster and smaller than the one below it, and main memory at the bottom of the hierarchy. [0004]
  • Whenever a processor (CPU) or other device executes a write function, it is changing the contents of one or more memory locations. Due to the cached memory structure, this change happens first in the cache memory from which the processor is executing. This data must then be updated in main memory (and any lower levels of cache memory) to maintain consistency and preserve the change for future use. Since burst transfers are generally more efficient overall than individual word or byte transfers, the data is written back to main memory in blocks of predetermined size, with each block containing whatever changes were made to the data in that block. [0005]
  • Many conventional systems employ write-through cache. In a write-through cache memory system, each time data is written (i.e., changed) into cache, the changed cache line is written back to memory so that cache and main memory will be in agreement and other devices reading the changed memory location will not be reading “stale” data that is no longer correct. This is typically done by writing each changed block of data to a buffer, or queue, from where it can be written back to memory as the competing demands on the memory system allow. [0006]
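  • For illustration only, the following C++ sketch shows the write-through behavior described above: every write updates the local cache line and is immediately queued for the main-memory update. The structures and names are assumptions made for the sketch, not taken from the patent.

    // Write-through sketch: each cache write is also queued for main memory.
    #include <cstdint>
    #include <queue>
    #include <unordered_map>

    struct WriteThroughCache {
        std::unordered_map<std::uint64_t, std::uint64_t> lines;  // line address -> data word
        std::queue<std::uint64_t> writeQueue;  // changed lines awaiting write-back

        void write(std::uint64_t lineAddr, std::uint64_t data) {
            lines[lineAddr] = data;     // update the cache copy immediately
            writeQueue.push(lineAddr);  // schedule the matching main-memory update
        }
    };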
  • A conventional system 10 is shown in FIG. 1. A CPU 11 is closely coupled to a cache memory 12, which contains the code and data currently being executed and also the code and data that was recently executed. Data that has been written to cache is also written to main memory 13 by transmitting it to I/O control logic 14, from where it is placed into write queue 16 to await its turn to be written into memory 13. [0007]
  • As it exits write queue 16, the appropriate address and data signals are presented to memory controller 18, which opens the page in memory 13 and writes the data to the selected locations within that page. Graphics controller 17 can also read and write data to memory, as can multiple devices on the input-output (I/O) buses interfaced to bus controller 15, so I/O control logic 14 arbitrates the write requests from these various sources and places them into write queue 16. [0008]
  • Since multiple devices can try to write data to main memory 13 at the same time, write queue 16 allows the memory system to collect these competing memory requests, but it does nothing to change the order or grouping of the data being written to memory. There are several deficiencies in this conventional process: [0009]
  • 1) Since the various sections of memory that are being changed may belong in scattered pages of memory, multiple pages of memory must be sequentially opened and closed. Opening a page of memory is time-consuming; sequentially opening several can significantly affect the efficiency of memory operations. [0010]
  • 2) Writing several blocks of data into a page separately is inefficient. But the order in which those blocks are accepted and written is somewhat random, and conventional systems have no mechanism to save up a group of related blocks and organize them for a smaller number of burst transmissions. [0011]
  • 3) Various parts of the data in a single cache line may be written at different times. Initiating a separate block of writes for each one is inefficient, but conventional systems have no mechanism to collect separate partial writes to the same cache line for a single burst transmission. [0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a computer system of the prior art. [0013]
  • FIG. 2 shows a computer system of the invention. [0014]
  • FIG. 3 shows a block diagram of the write cache logic. [0015]
  • FIG. 4 shows a flow chart of a flush sequence involving spatial location. [0016]
  • FIG. 5 shows a flow chart of triggering a flush operation. [0017]
  • FIG. 6 shows a flow chart of a partial write operation.[0018]
  • DETAILED DESCRIPTION OF THE INVENTION
  • The invention incorporates a write cache to collect the various portions of data to be written back to main memory, and organizes them in a more efficient manner before writing the data to memory. [0019]
  • FIG. 2 shows a simplified block diagram of a system 20 of the invention. CPU 21, cache memory 22, memory 23, bus controller 25 and graphics controller 27 can operate much as before. In one embodiment, those devices can be unchanged from their prior art counterparts processor 11, cache memory 12, memory 13, bus controller 15 and graphics controller 17. However, system 20 includes write cache 29, which can be used to collect write data that is destined for writing into memory 23, and organize that data in specific ways so that fewer overall writes may be necessary than in conventional systems. [0020]
  • Whenever CPU 21 performs a data write function, that data is written not only to cache memory 22 for immediate use by the currently executing software, but is also written to main memory 23 so that main memory 23 will have an updated version of the data. It is important that main memory be updated within a reasonable time, because it is unpredictable how soon that data will be read from main memory 23 to be used again. If CPU 21 reads the previously written data for some other use, it is likely that the read operation will retrieve the data from cache memory 22, where it was initially written. In that case, it is relatively unimportant whether the data has yet been updated in main memory 23. However, if CPU 21 reads the data after a substantial delay, it is possible that the cache line containing that data will have been purged from cache memory 22, and the data will have to be retrieved from main memory 23. In that case, it is imperative that CPU 21 retrieves the latest version of the data, so it is important that the write data has been written to main memory 23 by that time. [0021]
  • Other devices, such as graphics controller 27 and bus controller 25, may also perform read and write operations to memory, but they generally do not have access to cache memory 22, so they must deal solely with main memory 23. When performing a read, it is important that they read the latest version of the data, so this increases the need to update main memory 23 as soon as is feasible after CPU 21 has changed it. One embodiment of the invention therefore implements a write-through cache system, so that every write by the CPU is immediately sent to I/O controller 24 for updating main memory 23 through write queue 26. [0022]
  • Graphics controller 27 and bus controller 25 can also write data to main memory, so they can transmit write data destined for queue 26, from where it will be written to main memory. Writes from CPU 21, bus controller 25, and graphics controller 27 may come at any time relative to each other. In a conventional system, these writes may therefore be randomly intermingled in queue 26 on a first-come, first-served basis, potentially resulting in multiple separate writes to the same block of memory. [0023]
  • Write cache 29 can be strategically placed between I/O controller 24 and write queue 26 so that the write data can be temporarily stored and reorganized in ways that reduce the number of write operations to memory 23, thereby improving the efficiency of the overall memory system. [0024]
  • A data write operation writes data to a particular location. Since CPU 21 operates primarily out of cache memory, in one embodiment this write operation can write data first to cache memory 22. At the same time, or shortly thereafter, the same data can be sent to I/O controller 24 for writing to the location in main memory that corresponds to the location in cache that was just updated. I/O controller 24 can then transmit this data to write cache 29, where it can be organized with other write data for efficient transmission to write queue 26. Write queue 26 can buffer the data and present it to memory controller 28 in the same order in which it was received from write cache 29. Memory controller 28 can then write the data into main memory 23. Since a memory takes a predetermined amount of time to read or write data, and the data requests may come at unpredictable times, write queue 26 can smooth out the process by holding any data that comes in faster than it can be written to memory. [0025]
  • FIG. 3 shows a more detailed view of write cache 29. Write cache 29 can receive memory write requests in the form of data and address information from I/O controller 24. The details of I/O controller 24 are not shown. However, those familiar with computer architecture will appreciate that it can contain interfaces to a processor bus, a graphics controller, at least one bus controller, and write cache 29, as well as arbitration and control logic to control the flow of data between those interfaces. When the data and address information is received over bus 201, that information can be stored in write cache storage 291, which can be a memory circuit. The term “bus” in this context refers to a connection containing multiple lines to move data between two or more points. Overall control of operations within write cache 29 can be provided by control logic 294. The addresses stored in write cache storage 291 can be provided over bus 202 to cache lookup logic 295, which can compare a portion of the address of the incoming request (either read or write) with the current contents of the write cache. If the address matches, the block of data containing the address of the request already exists in the write cache. This is useful in partial cache writes, which are described later. [0026]
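  • As a rough model of the storage and lookup just described, the C++ sketch below keeps each entry keyed by its line address and scans all entries for a match on the upper address bits, much as a CAM would in hardware. The 64-byte line size comes from the text; the field and function names are hypothetical.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    constexpr std::uint64_t kLineBytes = 64;  // entry size given in the text

    struct Entry {
        std::uint64_t lineAddr;                     // address with the low 6 bits cleared
        std::array<std::uint8_t, kLineBytes> data;  // the cache line itself
        std::uint64_t validMask;                    // one bit per byte actually written
    };

    struct WriteCacheStorage {
        std::vector<Entry> entries;

        // Compare the upper address bits of an incoming request with every
        // stored entry; a hit means the request's line is already cached.
        std::optional<std::size_t> lookup(std::uint64_t addr) const {
            std::uint64_t lineAddr = addr & ~(kLineBytes - 1);
            for (std::size_t i = 0; i < entries.size(); ++i)
                if (entries[i].lineAddr == lineAddr) return i;
            return std::nullopt;
        }
    };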
  • Once it has been determined that a specific entry in write cache storage 291 will be flushed, or dispatched, meaning that it will be written from write cache 29 to memory and purged from write cache storage 291, the address and associated valid bits can be passed to flush dispatcher 292 over bus 206. An “entry” can be a cache line, which can be 64 bytes in size. Page lookup logic 296 can receive, over bus 203, the most significant bits of the address of the entry being flushed to memory, and compare them with the equivalent bits of all other valid address entries in the cache. This can be used to determine whether other entries are within the same block of memory and should be flushed together as a related group. The size of the block being thus considered can be programmable. This feature is described later in more detail. Data to be flushed can be presented to address translation logic 293 over bus 207. [0027]
  • An address conversion step can be performed before the data is sent to the memory controller. In most modern computer systems, the software operates with virtual addresses rather than physical ones. This allows the computer to physically interface with much more memory than the software can comprehend. These virtual addresses must be converted to the assigned physical addresses before the address and data information is actually presented to memory. Since the addresses presented to write cache 29 by I/O controller 24 are virtual addresses, these can be converted to the actual physical addresses by address translation logic 293. Such virtual-to-physical address translation processes are well known in the computer field, and are not described in further detail herein. [0028]
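  • Since the patent leaves the translation mechanism to well-known practice, the following is only a toy illustration of the address arithmetic involved: split the virtual address into a page number and an offset, look up the physical page, and recombine. The 4 KB page size and the table layout are assumptions.

    #include <cstdint>
    #include <unordered_map>

    constexpr std::uint64_t kPageBytes = 4096;  // assumed page size

    // pageTable maps virtual page numbers to physical page numbers.
    std::uint64_t translate(std::uint64_t virtAddr,
                            const std::unordered_map<std::uint64_t, std::uint64_t>& pageTable) {
        std::uint64_t virtPage = virtAddr / kPageBytes;
        std::uint64_t offset   = virtAddr % kPageBytes;
        return pageTable.at(virtPage) * kPageBytes + offset;
    }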
  • Once the address translation has taken place, the physical address can be presented over bus 208 to write queue 26 in preparation for the write operation to memory controller 28. When memory controller 28 is ready to accept a write request, it can receive the physical memory address information over bus 209, and the associated data information over bus 210. Since flush dispatcher 292, address translator 293, and write queue 26 are concerned primarily with addresses rather than the associated data, it may not be necessary to funnel the associated data through these devices. In one embodiment the data is simply held in write cache storage logic 291, and presented to memory controller 28 at the same time as the associated address information. [0029]
  • The logic for organizing the data in write cache 29 can perform multiple functions, either separately or together. Some of the processes that can be performed by this logic are described in more detail below: [0030]
  • Spatial Locality [0031]
  • Write cache 29 can flush entries to memory based on the proximity of the various entries to each other. For example, if several sequentially flushed entries are within the same memory page, that page of physical memory will only have to be opened once, and all the related entries can be written into it before another page of memory is opened. Since opening a page of memory can be time-consuming, this can result in a significant savings in time. Proximity does not have to be based on pages, but may also be based on other block sizes as well, such as eight cache lines. A cache line can be 64 bytes, so a block size of eight cache lines in that case would be 512 bytes beginning on a cache line boundary. The block size to be used in these comparisons can be programmable, which can be used to tune the memory system for efficient operation with the particular system parameters. In one embodiment, the block size can be reprogrammed on the fly, so that block size can be dynamically changed to tune the memory system for the application currently running. [0032]
  • Various criteria can be used to determine which entry in the write cache will be flushed first. Once that decision has been made, other entries that are within the same block can be identified and flushed. For example, when a flushing operation begins, the oldest entry may be flushed. Then page lookup logic 296 can be used to identify all other entries in write cache 29 that are within the same block, by performing a comparison of the first chosen address with all other entries in the write cache. This comparison can be performed with a content addressable memory (CAM) function. In one embodiment, block size is determined by ignoring a specific number of the least significant bits, and performing the compare only on the bit positions above that range. This allows a quick and simple comparison with a block size that is a power of two, for example, 128 bytes or 512 bytes. Flushing can continue as long as entries remain in write cache that are within the defined block. [0033]
  • FIG. 4 shows a flow chart 40 of this process. At step 41, an entry is chosen to be flushed from write cache by writing it to memory. Various criteria can be used to make the choice. At step 42 the address of the chosen entry, or at least an upper portion of the address, can be compared with the addresses of all the other entries in the write cache. At step 43, it is determined whether the comparison found any matches in the write cache. If no matches were found, the chosen entry can be flushed at step 44. However, if one or more matches were found, the chosen entry and all the matching entries can be flushed at step 45. In either case, after the flushing operation is complete, processing can continue at step 46. [0034]
  • The organization of flow chart 40 implies that nothing is flushed until the comparisons have been made and related entries identified. However, in one embodiment, the chosen entry can begin the flushing process before or during the time the comparisons are being conducted. This can save time by performing the comparison operation in parallel with the first flushing operation. [0035]
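  • A minimal sketch of this grouping, assuming a power-of-two block size so that block membership reduces to comparing the address bits above the block offset. The names are illustrative, and a hardware CAM would perform all of these comparisons at once rather than in a loop.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Return the indices of every entry in the same block as the chosen one,
    // by masking off the low-order block bits and comparing what remains.
    std::vector<std::size_t> relatedEntries(const std::vector<std::uint64_t>& lineAddrs,
                                            std::size_t chosen, std::uint64_t blockBytes) {
        std::uint64_t mask = ~(blockBytes - 1);  // blockBytes must be a power of two
        std::vector<std::size_t> group;
        for (std::size_t i = 0; i < lineAddrs.size(); ++i)
            if ((lineAddrs[i] & mask) == (lineAddrs[chosen] & mask))
                group.push_back(i);  // includes the chosen entry itself
        return group;
    }

    With a 512-byte block, for example, lines at addresses 0x1000 and 0x10C0 compare equal above the low nine bits and would be flushed as one group.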
  • Triggers to Initiate a Flush Operation [0036]
  • Entries can be flushed based on how long they have been in the write cache, with the oldest entries being flushed first. The relative “age” of the entries can be determined by assigning a counter value to each entry as it is placed in write cache 29, and incrementing the counter after each assignment. The entries with the smallest values were entered first and are therefore the oldest entries. A pseudo least-recently-used (LRU) mechanism can be used to initiate the flushing operation. The write cache can stall (not flush any entries) as long as the total number of entries in write cache storage 291 is less than a predetermined low threshold value. Once the number of entries exceeds the low threshold, the oldest entry can be flushed first, followed by other entries determined to be associated with the oldest entry. The entries thus associated with the oldest entry can be determined by various criteria, such as the spatial location criteria previously described. It should be noted that the first entry to be flushed can also be based on other criteria, without affecting the use of a low threshold to initiate flushing. [0037]
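  • The counter scheme described above might look like the following sketch, where each insertion is stamped with an incrementing value and the smallest stamp identifies the oldest entry. It assumes the counter is wide enough never to wrap; all names are hypothetical.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct AgedEntry {
        std::uint64_t lineAddr;
        std::uint64_t age;  // insertion stamp; smaller means older
    };

    struct AgeTracker {
        std::uint64_t nextStamp = 0;
        std::vector<AgedEntry> entries;

        void insert(std::uint64_t lineAddr) {
            entries.push_back({lineAddr, nextStamp++});  // stamp, then advance the counter
        }

        // Index of the oldest entry, i.e., the first flush candidate.
        std::size_t oldest() const {
            std::size_t best = 0;
            for (std::size_t i = 1; i < entries.size(); ++i)
                if (entries[i].age < entries[best].age) best = i;
            return best;
        }
    };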
  • Once a flush operation has been triggered as described above, the end of the flushing operation may be based on other criteria. In one embodiment, after all related entries have been flushed, write cache 29 will again examine the number of entries remaining in write cache 29, and will stall until that value exceeds the low threshold. [0038]
  • A high threshold can also be used to affect the flushing operation. Memory operations can have high or low priority, with all high priority operations being interleaved so that no high priority memory access will have to wait too long for its turn. Low priority operations, on the other hand, will typically have to wait until all high priority operations have been completed. Write flushing operations can normally be assigned to low priority. However, if the number of entries in write cache storage 291 exceeds a high threshold value (which is higher than the low threshold value), the resulting flushing operations can be assigned to the high priority category. This can prevent a situation in which write cache storage 291 fills to capacity and can no longer accept new entries while its flushing operations wait for other, higher-priority memory operations to complete. In one embodiment, write cache 29 can hold 16 cache lines, with a low threshold value of 4 cache lines and a high threshold value of 8 cache lines. [0039]
  • FIG. 5 shows a flow chart 50 of this process. At step 51, the number of entries in the write cache is monitored. If the number of entries does not exceed the low threshold value, monitoring continues by looping through steps 51 and 52. Once the number of entries exceeds the low threshold value, processing moves to step 53, where the priority for flushing operations can be set or retained at the low priority level. As previously described, this can cause the memory operations triggered by flushing to wait on all high priority memory operations to complete. At step 54, the number of entries in write cache is checked to see if it also exceeds the high threshold value. If it does, the priority for flushing operations can be set at the high priority level. With this priority, memory operations triggered by flushing will not have to wait for all high priority memory operations to complete. If the number of entries does not exceed the high threshold value at step 54, step 55 is skipped and the low priority status is retained. In either case, once priority is set, the first entry for flushing is chosen at step 56. As previously described, various methods may be used to choose which entry will be flushed first. At step 57, the chosen entry and all related entries can then be flushed. Once this happens, processing can return to step 51 to examine the number of entries in the write cache again. [0040]
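  • Condensing the threshold decisions of FIG. 5 into one function, using the example values from the text (low threshold 4 and high threshold 8 for a 16-line cache); the Priority naming and the function shape are assumptions made for the sketch.

    #include <cstddef>

    enum class Priority { None, Low, High };  // None: stall, flush nothing yet

    Priority flushPriority(std::size_t entryCount,
                           std::size_t lowThreshold = 4,
                           std::size_t highThreshold = 8) {
        if (entryCount <= lowThreshold) return Priority::None;  // steps 51-52: keep monitoring
        if (entryCount > highThreshold) return Priority::High;  // step 55: drain ahead of other traffic
        return Priority::Low;  // step 53: flush, but yield to high priority memory operations
    }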
  • Partial Writes [0041]
  • Although a write operation by a CPU or other device can involve writing an entire cache line, a write can also involve only a part of a cache line, and may involve writing as little as one byte. When these partial writes are sent to write cache 29, they can be merged into the associated cache line if that cache line is already stored in write cache 29. However, if no data in that cache line has previously been stored (at least not since it was last flushed), there will be no cache line to merge the partial write into. In this instance, the pertinent cache line can be retrieved from main memory and stored in write cache 29, where it can then be updated with the partial write information. Once placed in write cache 29, that cache line can remain available for updating by any further partial writes that occur before the cache line is finally flushed and written back to main memory. Whether the cache line had to be retrieved or not, this process can have the effect of merging multiple partial writes into a single cache line before flushing the cache line to main memory, thereby converting what could be multiple memory transfers into a single burst transfer. [0042]
  • Whether the cache line associated with a partial write is already in write cache 29 can be determined by cache lookup logic 295, which can compare the upper bits of the address of the partial write with the corresponding address bits of the entries residing in write cache 29. If a match is found, the relevant cache line is already in write cache 29 and can be updated with the partial write. If a match is not found, the relevant cache line is not in write cache 29, and must be retrieved from main memory as previously described and placed in write cache 29 before it can be updated with the partial write. [0043]
  • FIG. 6 shows a flow chart 60 of this process. At step 61, a partial write is executed to the write cache. At step 62, the contents of the write cache are examined to determine if the cache line that includes the address of the partial write is already in the write cache. If not, that cache line is retrieved from main memory at step 63 and placed into the write cache. Whether the cache line was already in cache, or placed in cache at step 63, the cache line is then updated at step 64 by writing the data of the partial write into the correct location(s) in the cache line. Processing can then continue normally at step 65 with full writes, partial writes, or whatever else occurs while waiting for a triggering event to initiate a flush of that cache line. For example, if additional partial writes occur to the same cache line, the cache line can be updated with those partial writes, except that the cache line will already be in write cache, so retrieving it from main memory would not be necessary. [0044]
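  • A sketch of the fill-and-merge path of FIG. 6: on a miss the full line is first retrieved from memory (step 63), then the partial data is written into it in place (step 64). The memory-read callback and all names are hypothetical, and the write is assumed not to cross a cache line boundary.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <functional>
    #include <unordered_map>

    constexpr std::uint64_t kLineBytes = 64;
    using Line = std::array<std::uint8_t, kLineBytes>;

    struct PartialWriteCache {
        std::unordered_map<std::uint64_t, Line> lines;          // keyed by line address
        std::function<Line(std::uint64_t)> readLineFromMemory;  // fill path for misses

        void partialWrite(std::uint64_t addr, const std::uint8_t* data, std::size_t len) {
            std::uint64_t lineAddr = addr & ~(kLineBytes - 1);
            std::uint64_t offset   = addr & (kLineBytes - 1);
            auto it = lines.find(lineAddr);
            if (it == lines.end()) {
                // Miss: retrieve the whole line from main memory first (step 63).
                it = lines.emplace(lineAddr, readLineFromMemory(lineAddr)).first;
            }
            // Merge the partial data into the cached copy (step 64).
            std::memcpy(it->second.data() + offset, data, len);
        }
    };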
  • The invention can be implemented in hardware or as a method. The invention can also be implemented as instructions stored on a machine-readable medium, which can be read and executed by at least one processor to perform the functions described herein. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. [0045]
  • The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the invention, which is limited only by the spirit and scope of the appended claims. [0046]

Claims (30)

We claim:
1. An apparatus, comprising:
write cache storage to store cache lines of write data;
a flush dispatcher coupled to the write cache storage to dispatch the cache lines to memory;
control logic to:
select a first cache line with a first address in the write cache storage for dispatching;
determine if a second cache line in the write cache storage has a second address within a predetermined range of the first address; and
dispatch the first and second cache lines if the second address is within the predetermined range of the first address.
2. The apparatus of claim 1, wherein the predetermined range is programmable.
3. The apparatus of claim 1, wherein control logic to select a first cache line includes logic to select an oldest cache line in the write cache storage.
4. A method, comprising:
receiving a plurality of write requests and storing the plurality of write requests in a plurality of cache lines in a write cache storage;
selecting a first one of the plurality of cache lines for dispatching to a memory, wherein the first one of the plurality of cache lines has a first address;
determining if a second one of the plurality of cache lines has a second address within a predetermined range of the first address;
dispatching the first and second ones of the plurality of cache lines to the memory if the second address is within the predetermined range of the first address; and
dispatching the first one but not the second one of the plurality of cache lines to the memory if the second address is not within the predetermined range of the first address.
5. The method of claim 4, wherein selecting a first one includes selecting an oldest of the plurality of cache lines.
6. The method of claim 4, further comprising programming the predetermined range before selecting.
7. A machine-readable medium having stored thereon instructions, which when executed by a processor cause said processor to perform:
receiving a plurality of write requests and storing the plurality of write requests in a plurality of cache lines in a write cache storage;
selecting a first one of the plurality of cache lines for dispatching to a memory, wherein the first one of the plurality of cache lines has a first address;
determining if a second one of the plurality of cache lines has a second address within a predetermined range of the first address;
dispatching the first and second ones of the plurality of cache lines to the memory if the second address is within the predetermined range of the first address; and
dispatching the first one but not the second one of the plurality of cache lines to the memory if the second address is not within the predetermined range of the first address.
8. The medium of claim 7, wherein selecting a first one includes selecting an oldest of the plurality of cache lines.
9. The medium of claim 7, further comprising programming the predetermined range before selecting.
10. An apparatus, comprising:
write cache storage to store cache lines of write data;
a flush dispatcher coupled to the write cache storage to dispatch the cache lines to memory;
control logic to:
dispatch at least one of the cache lines to memory if the number of cache lines in the write cache storage exceeds a first predetermined value; and
not dispatch any of the cache lines to memory if the number of cache lines in the write cache storage does not exceed the first predetermined value.
11. The apparatus of claim 10, wherein the control logic is further to:
dispatch the at least one of the cache lines with a high priority if the number of cache lines in the write cache storage exceeds a second predetermined value higher than the first predetermined value; and
dispatch the at least one of the cache lines with a low priority if the number of cache lines in the write cache storage does not exceed the second predetermined value.
12. The apparatus of claim 10, wherein the control logic is further to dispatch an oldest one of the cache lines first if the number of cache lines in the write cache storage exceeds the first predetermined value.
13. A method, comprising:
storing write requests in cache lines of write data in write cache storage;
dispatching at least one of the cache lines to memory if the number of cache lines in the write cache storage exceeds a first predetermined value; and
not dispatching any of the cache lines to memory if the number of cache lines in the write cache storage does not exceed the first predetermined value.
14. The method of claim 13, wherein dispatching further includes:
dispatching the at least one of the cache lines with a high priority if the number of cache lines in the write cache storage exceeds a second predetermined value higher than the first predetermined value; and
dispatching the at least one of the cache lines with a low priority if the number of cache lines in the write cache storage does not exceed the second predetermined value.
15. The method of claim 13, wherein dispatching further includes dispatching an oldest one of the cache lines first if the number of cache lines in the write cache storage exceeds the first predetermined value.
16. A machine-readable medium having stored thereon instructions, which when executed by a processor cause said processor to perform:
storing write requests in cache lines of write data in write cache storage;
dispatching at least one of the cache lines to memory if the number of cache lines in the write cache storage exceeds a first predetermined value; and
not dispatching any of the cache lines to memory if the number of cache lines in the write cache storage does not exceed the first predetermined value.
17. The medium of claim 16, wherein dispatching further includes:
dispatching the at least one of the cache lines with a high priority if the number of cache lines in the write cache storage exceeds a second predetermined value higher than the first predetermined value; and
dispatching the at least one of the cache lines with a low priority if the number of cache lines in the write cache storage does not exceed the second predetermined value.
18. The medium of claim 16, wherein dispatching further includes dispatching an oldest one of the cache lines first if the number of cache lines in the write cache storage exceeds the first predetermined value.
19. An apparatus, comprising:
write cache storage to receive a plurality of partial write requests for merging into associated cache lines of write data;
a flush dispatcher coupled to the write cache storage to dispatch the cache lines to memory;
control logic to:
determine if a first cache line associated with a first of the plurality of partial write requests is stored in the write cache storage;
if the first cache line is not stored in the write cache storage:
retrieve the first cache line from memory;
store the retrieved first cache line in the write cache storage; and
merge the first of the plurality of partial write requests into the retrieved first cache line.
20. The apparatus of claim 19, wherein the control logic is further to merge a second of the plurality of partial write requests into the retrieved first cache line if the retrieved first cache line is associated with the second of the plurality of partial write requests.
21. The apparatus of claim 19, wherein the control logic is further to merge the first of the plurality of partial write requests into the first cache line if the first cache line is determined to be stored in the write cache storage.
22. A method, comprising:
receiving a plurality of partial write requests for merging into associated cache lines of write data in write cache storage;
determining if a first cache line associated with a first of the plurality of partial write requests is stored in the write cache storage; and
if the first cache line is not stored in the write cache storage:
retrieving the first cache line from memory;
storing the retrieved first cache line in the write cache storage; and
merging the first of the plurality of partial write requests into the retrieved first cache line.
23. The method of claim 22, further comprising merging a second of the plurality of partial write requests into the retrieved first cache line if the retrieved first cache line is associated with the second of the plurality of partial write requests.
24. The method of claim 22, further comprising merging the first of the plurality of partial write requests into the first cache line if the first cache line is stored in the write cache storage.
25. A machine-readable medium having stored thereon instructions, which when executed by a processor cause said processor to perform:
receiving a plurality of partial write requests for merging into associated cache lines of write data in write cache storage;
determining if a first cache line associated with a first of the plurality of partial write requests is stored in the write cache storage; and
if the first cache line is not stored in the write cache storage:
retrieving the first cache line from memory;
storing the retrieved first cache line in the write cache storage; and
merging the first of the plurality of partial write requests into the retrieved first cache line.
26. The medium of claim 25, further comprising merging a second of the plurality of partial write requests into the retrieved first cache line if the retrieved first cache line is associated with the second of the plurality of partial write requests.
27. The medium of claim 25, further comprising merging the first of the plurality of partial write requests into the first cache line if the first cache line is stored in the write cache storage.
28. An apparatus, comprising:
write cache storage to receive write requests and to store cache lines of write data;
a flush dispatcher coupled to the write cache storage to dispatch the cache lines to memory;
control logic to:
select a first cache line with a first address in the write cache storage for dispatching;
dispatch the first cache line to memory if the number of cache lines in the write cache storage exceeds a first predetermined value;
not dispatch any of the cache lines to memory if the number of cache lines in the write cache storage does not exceed the first predetermined value;
determine if a second cache line in the write cache storage has a second address within a predetermined range of the first address;
dispatch the second cache line if the second address is within the predetermined range of the first address;
determine if a third cache line associated with a particular one of a plurality of partial write requests is stored in the write cache storage;
if the third cache line is not stored in the write cache storage:
retrieve the third cache line from memory;
store the retrieved third cache line in the write cache storage; and
merge the particular one of the plurality of partial write requests into the retrieved third cache line.
29. The apparatus of claim 28, wherein the predetermined range is programmable.
30. The apparatus of claim 28, wherein the control logic is further to:
dispatch the first cache line with a high priority if the number of cache lines in the write cache storage exceeds a second predetermined value higher than the first predetermined value; and
dispatch the first cache line with a low priority if the number of cache lines in the write cache storage exceeds the first predetermined value and does not exceed the second predetermined value.
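
Claims 28 and 29 add a proximity rule to the threshold policy: once a first cache line is selected for flushing, any other buffered line whose address falls within a predetermined, programmable range of the first is dispatched along with it. A minimal sketch under the assumption that the range is expressed in bytes (the occupancy check from the earlier sketch would gate whether this routine runs at all):

    /* Hypothetical sketch of the companion flush of claims 28 and 29. */
    #include <stddef.h>
    #include <stdint.h>

    static uint64_t flush_range = 4096;   /* programmable (claim 29) */

    struct entry { uint64_t tag; int valid; };

    extern void dispatch_to_memory(struct entry *e);   /* assumed */

    void flush_with_companions(struct entry *lines, size_t n, size_t first)
    {
        uint64_t base = lines[first].tag;

        dispatch_to_memory(&lines[first]);   /* the first cache line */
        lines[first].valid = 0;

        for (size_t i = 0; i < n; i++) {
            if (!lines[i].valid)
                continue;
            uint64_t delta = lines[i].tag > base ? lines[i].tag - base
                                                 : base - lines[i].tag;
            if (delta <= flush_range) {      /* second address in range */
                dispatch_to_memory(&lines[i]);
                lines[i].valid = 0;
            }
        }
    }

Dispatching address-adjacent lines together plausibly lets the memory controller service them from the same open DRAM page, although the claims themselves do not state a motivation.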
US10/631,353 2000-09-21 2003-07-30 Method and apparatus for write cache flush and fill mechanisms Abandoned US20040024971A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/631,353 US20040024971A1 (en) 2000-09-21 2003-07-30 Method and apparatus for write cache flush and fill mechanisms

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/667,405 US6658533B1 (en) 2000-09-21 2000-09-21 Method and apparatus for write cache flush and fill mechanisms
US10/631,353 US20040024971A1 (en) 2000-09-21 2003-07-30 Method and apparatus for write cache flush and fill mechanisms

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/667,405 Division US6658533B1 (en) 2000-09-21 2000-09-21 Method and apparatus for write cache flush and fill mechanisms

Publications (1)

Publication Number Publication Date
US20040024971A1 (en) 2004-02-05

Family

ID=29550420

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/667,405 Expired - Lifetime US6658533B1 (en) 2000-09-21 2000-09-21 Method and apparatus for write cache flush and fill mechanisms
US10/631,353 Abandoned US20040024971A1 (en) 2000-09-21 2003-07-30 Method and apparatus for write cache flush and fill mechanisms

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/667,405 Expired - Lifetime US6658533B1 (en) 2000-09-21 2000-09-21 Method and apparatus for write cache flush and fill mechanisms

Country Status (1)

Country Link
US (2) US6658533B1 (en)


Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7610556B2 (en) * 2001-12-28 2009-10-27 Microsoft Corporation Dialog manager for interactive dialog with computer user
US7039764B1 (en) * 2002-01-17 2006-05-02 Nokia Corporation Near-perfect, fixed-time searching algorithm using hashing, LRU and cam-based caching
US7174346B1 (en) 2003-07-31 2007-02-06 Google, Inc. System and method for searching an extended database
US7467131B1 (en) 2003-09-30 2008-12-16 Google Inc. Method and system for query data caching and optimization in a search engine system
US7840557B1 (en) 2004-05-12 2010-11-23 Google Inc. Search engine cache control
US8533402B1 (en) * 2004-09-13 2013-09-10 The Mathworks, Inc. Caching and decaching distributed arrays across caches in a parallel processing environment
US7360112B2 (en) * 2005-02-07 2008-04-15 International Business Machines Corporation Detection and recovery of dropped writes in storage devices
GB0507912D0 (en) * 2005-04-20 2005-05-25 Ibm Disk drive and method for protecting data writes in a disk drive
US8627002B2 (en) * 2006-10-12 2014-01-07 International Business Machines Corporation Method to increase performance of non-contiguously written sectors
KR20100021868A (en) * 2008-08-18 2010-02-26 삼성전자주식회사 Buffer cache management method for flash memory device
JP4632180B2 (en) * 2008-10-15 2011-02-16 Tdk株式会社 MEMORY CONTROLLER, FLASH MEMORY SYSTEM HAVING MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD
JP4582232B2 (en) * 2008-09-30 2010-11-17 Tdk株式会社 MEMORY CONTROLLER, FLASH MEMORY SYSTEM HAVING MEMORY CONTROLLER, AND FLASH MEMORY CONTROL METHOD
US8214579B2 (en) * 2008-09-30 2012-07-03 Tdk Corporation Memory controller, flash memory system with memory controller, and method of controlling flash memory
US8352685B2 (en) * 2010-08-20 2013-01-08 Apple Inc. Combining write buffer with dynamically adjustable flush metrics
CN102736987A (en) * 2011-04-15 2012-10-17 鸿富锦精密工业(深圳)有限公司 Monitoring data caching method and monitoring data caching system
CN105144120B (en) * 2013-03-28 2018-10-23 慧与发展有限责任合伙企业 The data from cache line are stored to main memory based on storage address
CN107577614B (en) * 2013-06-29 2020-10-16 华为技术有限公司 Data writing method and memory system
KR102149222B1 (en) * 2013-09-26 2020-10-14 삼성전자주식회사 Method and Apparatus for Copying Data using Cache
US10951705B1 (en) 2014-12-05 2021-03-16 EMC IP Holding Company LLC Write leases for distributed file systems
US10021212B1 (en) 2014-12-05 2018-07-10 EMC IP Holding Company LLC Distributed file systems on content delivery networks
US10445296B1 (en) 2014-12-05 2019-10-15 EMC IP Holding Company LLC Reading from a site cache in a distributed file system
US10423507B1 (en) 2014-12-05 2019-09-24 EMC IP Holding Company LLC Repairing a site cache in a distributed file system
US10430385B1 (en) 2014-12-05 2019-10-01 EMC IP Holding Company LLC Limited deduplication scope for distributed file systems
US10452619B1 (en) 2014-12-05 2019-10-22 EMC IP Holding Company LLC Decreasing a site cache capacity in a distributed file system
US10936494B1 (en) * 2014-12-05 2021-03-02 EMC IP Holding Company LLC Site cache manager for a distributed file system
US10740016B2 (en) * 2016-11-11 2020-08-11 Scale Computing, Inc. Management of block storage devices based on access frequency wherein migration of block is based on maximum and minimum heat values of data structure that maps heat values to block identifiers, said block identifiers are also mapped to said heat values in first data structure
US20180173800A1 (en) * 2016-12-20 2018-06-21 Allen Chang Data promotion
US10565109B2 (en) * 2017-09-05 2020-02-18 International Business Machines Corporation Asynchronous update of metadata tracks in response to a cache hit generated via an I/O operation over a bus interface
US11347703B1 (en) 2017-12-08 2022-05-31 Palantir Technologies Inc. System and methods for object version tracking and read-time/write-time data federation
US10514865B2 (en) * 2018-04-24 2019-12-24 EMC IP Holding Company LLC Managing concurrent I/O operations
KR20200029085A (en) * 2018-09-07 2020-03-18 에스케이하이닉스 주식회사 Data Storage Device and Operation Method Thereof, Storage System Having the Same
KR102435253B1 (en) 2020-06-30 2022-08-24 에스케이하이닉스 주식회사 Memory controller and operating method thereof
KR102495910B1 (en) 2020-04-13 2023-02-06 에스케이하이닉스 주식회사 Storage device and operating method thereof
KR102406449B1 (en) * 2020-06-25 2022-06-08 에스케이하이닉스 주식회사 Storage device and operating method thereof
US11755476B2 (en) 2020-04-13 2023-09-12 SK Hynix Inc. Memory controller, storage device including the memory controller, and method of operating the memory controller and the storage device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5845325A (en) * 1987-10-02 1998-12-01 Sun Microsystems, Inc. Virtual address write back cache with address reassignment and cache block flush
US5555391A (en) * 1993-12-23 1996-09-10 Unisys Corporation System and method for storing partial blocks of file data in a file cache system by merging partial updated blocks with file block to be written
US5909699A (en) * 1994-02-28 1999-06-01 Intel Corporation Method and apparatus for supporting read, write, and invalidation operations to memory which maintain cache consistency
US6412045B1 (en) * 1995-05-23 2002-06-25 Lsi Logic Corporation Method for transferring data from a host computer to a storage media using selectable caching strategies
US5895488A (en) * 1997-02-24 1999-04-20 Eccs, Inc. Cache flushing methods and apparatus
US6356485B1 (en) * 1999-02-13 2002-03-12 Integrated Device Technology, Inc. Merging write cycles by comparing at least a portion of the respective write cycle addresses
US6401175B1 (en) * 1999-10-01 2002-06-04 Sun Microsystems, Inc. Shared write buffer for use by multiple processor units
US6728838B2 (en) * 2000-08-21 2004-04-27 Texas Instruments Incorporated Cache operation based on range of addresses

Cited By (155)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE44128E1 (en) * 2003-05-15 2013-04-02 Seagate Technology Llc Adaptive resource controlled write-back aging for a data storage device
US20040230746A1 (en) * 2003-05-15 2004-11-18 Olds Edwin S. Adaptive resource controlled write-back aging for a data storage device
US7310707B2 (en) * 2003-05-15 2007-12-18 Seagate Technology Llc Adaptive resource controlled write-back aging for a data storage device
US20050055512A1 (en) * 2003-09-05 2005-03-10 Kishi Gregory Tad Apparatus, system, and method flushing data from a cache to secondary storage
US7085895B2 (en) * 2003-09-05 2006-08-01 International Business Machines Corporation Apparatus, system, and method flushing data from a cache to secondary storage
US20050210318A1 (en) * 2004-03-22 2005-09-22 Dell Products L.P. System and method for drive recovery following a drive failure
US20050270973A1 (en) * 2004-06-07 2005-12-08 Raev Kaloyan V Cluster architecture communications
US20080070678A1 (en) * 2004-08-19 2008-03-20 Igt Gaming system having multiple gaming machines which provide bonus awards
US7418560B2 (en) 2004-09-23 2008-08-26 Sap Ag Centralized cache storage for runtime systems
US7590803B2 (en) * 2004-09-23 2009-09-15 Sap Ag Cache eviction
US20060064549A1 (en) * 2004-09-23 2006-03-23 Michael Wintergerst Cache eviction
US20060129512A1 (en) * 2004-12-14 2006-06-15 Bernhard Braun Socket-like communication API for C
US20060129981A1 (en) * 2004-12-14 2006-06-15 Jan Dostert Socket-like communication API for Java
US7600217B2 (en) * 2004-12-14 2009-10-06 Sap Ag Socket-like communication API for Java
US20060129546A1 (en) * 2004-12-14 2006-06-15 Bernhard Braun Fast channel architecture
US20060143398A1 (en) * 2004-12-23 2006-06-29 Stefan Rau Method and apparatus for least recently used (LRU) software cache
US20060155742A1 (en) * 2004-12-28 2006-07-13 Georgi Stanev System and method for serializing Java objects over shared closures
US7543302B2 (en) 2004-12-28 2009-06-02 Sap Ag System and method for serializing java objects over shared closures
US20060143387A1 (en) * 2004-12-28 2006-06-29 Petev Petio G Programming models for storage plug-ins
US20060143359A1 (en) * 2004-12-28 2006-06-29 Jan Dostert Virtual machine monitoring
US20060143389A1 (en) * 2004-12-28 2006-06-29 Frank Kilian Main concept for common cache management
US20060143399A1 (en) * 2004-12-28 2006-06-29 Petev Petio G Least recently used eviction implementation
US20060143360A1 (en) * 2004-12-28 2006-06-29 Petev Petio G Distributed cache architecture
US20060143595A1 (en) * 2004-12-28 2006-06-29 Jan Dostert Virtual machine monitoring using shared memory
US20060143525A1 (en) * 2004-12-28 2006-06-29 Frank Kilian Shared memory based monitoring for application servers
US20100268881A1 (en) * 2004-12-28 2010-10-21 Galin Galchev Cache region concept
US7886294B2 (en) 2004-12-28 2011-02-08 Sap Ag Virtual machine monitoring
US20060143608A1 (en) * 2004-12-28 2006-06-29 Jan Dostert Thread monitoring using shared memory
US8799359B2 (en) 2004-12-28 2014-08-05 Sap Ag Session management within a multi-tiered enterprise network
US20060143217A1 (en) * 2004-12-28 2006-06-29 Georgi Stanev Session management within a multi-tiered enterprise network
US9009409B2 (en) 2004-12-28 2015-04-14 Sap Se Cache region concept
US7694065B2 (en) 2004-12-28 2010-04-06 Sap Ag Distributed cache architecture
US7689989B2 (en) 2004-12-28 2010-03-30 Sap Ag Thread monitoring using shared memory
US7971001B2 (en) 2004-12-28 2011-06-28 Sap Ag Least recently used eviction implementation
US20090282196A1 (en) * 2004-12-28 2009-11-12 Sap Ag. First in first out eviction implementation
US20060143393A1 (en) * 2004-12-28 2006-06-29 Petev Petio G Least frequently used eviction implementation
US20060143290A1 (en) * 2004-12-28 2006-06-29 Jan Dostert Session monitoring using shared memory
US7512737B2 (en) 2004-12-28 2009-03-31 Sap Ag Size based eviction implementation
US20060143256A1 (en) * 2004-12-28 2006-06-29 Galin Galchev Cache region concept
US7996615B2 (en) 2004-12-28 2011-08-09 Sap Ag Cache region concept
US8015561B2 (en) 2004-12-28 2011-09-06 Sap Ag System and method for managing memory of Java session objects
US7840760B2 (en) 2004-12-28 2010-11-23 Sap Ag Shared closure eviction implementation
US20060143388A1 (en) * 2004-12-28 2006-06-29 Michael Wintergerst Programming models for eviction policies
US8204931B2 (en) 2004-12-28 2012-06-19 Sap Ag Session management within a multi-tiered enterprise network
US8281014B2 (en) 2004-12-28 2012-10-02 Sap Ag Session lifecycle management within a multi-tiered enterprise network
US7562138B2 (en) 2004-12-28 2009-07-14 Sap Shared memory based monitoring for application servers
US7552284B2 (en) 2004-12-28 2009-06-23 Sap Ag Least frequently used eviction implementation
US20060143385A1 (en) * 2004-12-28 2006-06-29 Michael Wintergerst Storage plug-in based on shared closures
US20060143392A1 (en) * 2004-12-28 2006-06-29 Petev Petio G First in first out eviction implementation
US7523263B2 (en) 2004-12-28 2009-04-21 Michael Wintergerst Storage plug-in based on shared closures
US7523196B2 (en) 2004-12-28 2009-04-21 Sap Ag Session monitoring using shared memory
US20060143609A1 (en) * 2004-12-28 2006-06-29 Georgi Stanev System and method for managing memory of Java session objects
US7437516B2 (en) 2004-12-28 2008-10-14 Sap Ag Programming models for eviction policies
US7451275B2 (en) 2004-12-28 2008-11-11 Sap Ag Programming models for storage plug-ins
US10007608B2 (en) 2004-12-28 2018-06-26 Sap Se Cache region concept
US7591006B2 (en) 2004-12-29 2009-09-15 Sap Ag Security for external system management
US7917629B2 (en) 2004-12-29 2011-03-29 Sap Ag Interface for external system management
US20060167980A1 (en) * 2004-12-29 2006-07-27 Randolf Werner Interface for external system management
US8024743B2 (en) 2004-12-30 2011-09-20 Sap Ag Connection of clients for management of systems
US7593917B2 (en) 2004-12-30 2009-09-22 Sap Ag Implementation of application management operations
US20060149827A1 (en) * 2004-12-30 2006-07-06 Randolf Werner Implementation of application management operations
US20060150197A1 (en) * 2004-12-30 2006-07-06 Randolf Werner Connection of clients for management of systems
US20060176893A1 (en) * 2005-02-07 2006-08-10 Yoon-Jin Ku Method of dynamic queue management for stable packet forwarding and network processor element therefor
US20060248036A1 (en) * 2005-04-29 2006-11-02 Georgi Stanev Internal persistence of session state information
US7761435B2 (en) 2005-04-29 2010-07-20 Sap Ag External persistence of session state information
US20060248131A1 (en) * 2005-04-29 2006-11-02 Dirk Marwinski Cache isolation model
US20060248119A1 (en) * 2005-04-29 2006-11-02 Georgi Stanev External persistence of session state information
US7831634B2 (en) 2005-04-29 2010-11-09 Sap Ag Initializing a cache region using a generated cache region configuration structure
US8589562B2 (en) 2005-04-29 2013-11-19 Sap Ag Flexible failover configuration
US7853698B2 (en) 2005-04-29 2010-12-14 Sap Ag Internal persistence of session state information
US8762547B2 (en) 2005-04-29 2014-06-24 Sap Ag Shared memory implementations for session data within a multi-tiered enterprise network
US9432240B2 (en) 2005-04-29 2016-08-30 Sap Se Flexible failover configuration
US20060248198A1 (en) * 2005-04-29 2006-11-02 Galin Galchev Flexible failover configuration
US8024566B2 (en) 2005-04-29 2011-09-20 Sap Ag Persistent storage implementations for session data within a multi-tiered enterprise network
US20060248200A1 (en) * 2005-04-29 2006-11-02 Georgi Stanev Shared memory implementations for session data within a multi-tiered enterprise network
US20060248283A1 (en) * 2005-04-29 2006-11-02 Galin Galchev System and method for monitoring threads in a clustered server architecture
US7581066B2 (en) 2005-04-29 2009-08-25 Sap Ag Cache isolation model
US20060248350A1 (en) * 2005-04-29 2006-11-02 Georgi Stanev Persistent storage implementations for session data within a multi-tiered enterprise network
US7689660B2 (en) 2005-06-09 2010-03-30 Sap Ag Application server architecture
US20060282509A1 (en) * 2005-06-09 2006-12-14 Frank Kilian Application server architecture
US7966412B2 (en) 2005-07-19 2011-06-21 Sap Ag System and method for a pluggable protocol handler
US20070067469A1 (en) * 2005-07-19 2007-03-22 Oliver Luik System and method for a pluggable protocol handler
US20070162912A1 (en) * 2005-12-28 2007-07-12 Frank Kilian Cluster communication manager
US7831600B2 (en) 2005-12-28 2010-11-09 Sap Ag Cluster communication manager
US20070156869A1 (en) * 2005-12-30 2007-07-05 Galin Galchev Load balancing algorithm for servicing client requests
US8707323B2 (en) 2005-12-30 2014-04-22 Sap Ag Load balancing algorithm for servicing client requests
US7761659B2 (en) 2006-06-30 2010-07-20 Seagate Technology Llc Wave flushing of cached writeback data to a storage array
US7590800B2 (en) 2006-06-30 2009-09-15 Seagate Technology Llc 2D dynamic adaptive data caching
US20080005464A1 (en) * 2006-06-30 2008-01-03 Seagate Technology Llc Wave flushing of cached writeback data to a storage array
US7743216B2 (en) 2006-06-30 2010-06-22 Seagate Technology Llc Predicting accesses to non-requested data
US20080005480A1 (en) * 2006-06-30 2008-01-03 Seagate Technology Llc Predicting accesses to non-requested data
US8234457B2 (en) 2006-06-30 2012-07-31 Seagate Technology Llc Dynamic adaptive flushing of cached data
US20080005475A1 (en) * 2006-06-30 2008-01-03 Seagate Technology Llc Hot data zones
US20080005478A1 (en) * 2006-06-30 2008-01-03 Seagate Technology Llc Dynamic adaptive flushing of cached data
US20080005466A1 (en) * 2006-06-30 2008-01-03 Seagate Technology Llc 2D dynamic adaptive data caching
US8363519B2 (en) 2006-06-30 2013-01-29 Seagate Technology Llc Hot data zones
US7949819B2 (en) * 2006-12-18 2011-05-24 Samsung Electronics Co., Ltd. Flash memory device and method of changing block size in the same using address shifting
US20080144381A1 (en) * 2006-12-18 2008-06-19 Samsung Electronics Co., Ltd Flash memory device and method of changing block size in the same using address shifting
US20080163063A1 (en) * 2006-12-29 2008-07-03 Sap Ag Graphical user interface system and method for presenting information related to session and cache objects
US7584308B2 (en) 2007-08-31 2009-09-01 International Business Machines Corporation System for supporting partial cache line write operations to a memory module to reduce write data traffic on a memory channel
US20090063730A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C System for Supporting Partial Cache Line Write Operations to a Memory Module to Reduce Write Data Traffic on a Memory Channel
US7818497B2 (en) 2007-08-31 2010-10-19 International Business Machines Corporation Buffered memory module supporting two independent memory channels
US8082482B2 (en) 2007-08-31 2011-12-20 International Business Machines Corporation System for performing error correction operations in a memory hub device of a memory module
US7861014B2 (en) 2007-08-31 2010-12-28 International Business Machines Corporation System for supporting partial cache line read operations to a memory module to reduce read data traffic on a memory channel
US7865674B2 (en) 2007-08-31 2011-01-04 International Business Machines Corporation System for enhancing the memory bandwidth available through a memory module
US8086936B2 (en) 2007-08-31 2011-12-27 International Business Machines Corporation Performing error correction at a memory device level that is transparent to a memory channel
US20090063923A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C System and Method for Performing Error Correction at a Memory Device Level that is Transparent to a Memory Channel
US7899983B2 (en) 2007-08-31 2011-03-01 International Business Machines Corporation Buffered memory module supporting double the memory device data width in the same physical space as a conventional memory module
US20090063784A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C System for Enhancing the Memory Bandwidth Available Through a Memory Module
US20090063729A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C System for Supporting Partial Cache Line Read Operations to a Memory Module to Reduce Read Data Traffic on a Memory Channel
US20090063787A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C Buffered Memory Module with Multiple Memory Device Data Interface Ports Supporting Double the Memory Capacity
US20090063761A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C Buffered Memory Module Supporting Two Independent Memory Channels
US20090063922A1 (en) * 2007-08-31 2009-03-05 Gower Kevin C System for Performing Error Correction Operations in a Memory Hub Device of a Memory Module
US7840748B2 (en) 2007-08-31 2010-11-23 International Business Machines Corporation Buffered memory module with multiple memory device data interface ports supporting double the memory capacity
US8019919B2 (en) 2007-09-05 2011-09-13 International Business Machines Corporation Method for enhancing the memory bandwidth available through a memory module
US7558887B2 (en) 2007-09-05 2009-07-07 International Business Machines Corporation Method for supporting partial cache line read and write operations to a memory module to reduce read and write data traffic on a memory channel
US20110004709A1 (en) * 2007-09-05 2011-01-06 Gower Kevin C Method for Enhancing the Memory Bandwidth Available Through a Memory Module
US20090063731A1 (en) * 2007-09-05 2009-03-05 Gower Kevin C Method for Supporting Partial Cache Line Read and Write Operations to a Memory Module to Reduce Read and Write Data Traffic on a Memory Channel
US9275003B1 (en) * 2007-10-02 2016-03-01 Sandia Corporation NIC atomic operation unit with caching and bandwidth mitigation
US7930469B2 (en) 2008-01-24 2011-04-19 International Business Machines Corporation System to provide memory system power reduction without reducing overall memory system performance
US7770077B2 (en) 2008-01-24 2010-08-03 International Business Machines Corporation Using cache that is embedded in a memory hub to replace failed memory cells in a memory subsystem
US20090193200A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Support a Full Asynchronous Interface within a Memory Hub Device
US20090193315A1 (en) * 2008-01-24 2009-07-30 Gower Kevin C System for a Combined Error Correction Code and Cyclic Redundancy Check Code for a Memory Channel
US7930470B2 (en) 2008-01-24 2011-04-19 International Business Machines Corporation System to enable a memory hub device to manage thermal conditions at a memory device level transparent to a memory controller
US7925825B2 (en) 2008-01-24 2011-04-12 International Business Machines Corporation System to support a full asynchronous interface within a memory hub device
US8140936B2 (en) 2008-01-24 2012-03-20 International Business Machines Corporation System for a combined error correction code and cyclic redundancy check code for a memory channel
US7925826B2 (en) 2008-01-24 2011-04-12 International Business Machines Corporation System to increase the overall bandwidth of a memory channel by allowing the memory channel to operate at a frequency independent from a memory device frequency
US20090193203A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Reduce Latency by Running a Memory Channel Frequency Fully Asynchronous from a Memory Device Frequency
US20090193290A1 (en) * 2008-01-24 2009-07-30 Arimilli Ravi K System and Method to Use Cache that is Embedded in a Memory Hub to Replace Failed Memory Cells in a Memory Subsystem
US20090193201A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Increase the Overall Bandwidth of a Memory Channel By Allowing the Memory Channel to Operate at a Frequency Independent from a Memory Device Frequency
US20090190427A1 (en) * 2008-01-24 2009-07-30 Brittain Mark A System to Enable a Memory Hub Device to Manage Thermal Conditions at a Memory Device Level Transparent to a Memory Controller
US7925824B2 (en) 2008-01-24 2011-04-12 International Business Machines Corporation System to reduce latency by running a memory channel frequency fully asynchronous from a memory device frequency
US8166246B2 (en) * 2008-01-31 2012-04-24 International Business Machines Corporation Chaining multiple smaller store queue entries for more efficient store queue usage
US20090198867A1 (en) * 2008-01-31 2009-08-06 Guy Lynn Guthrie Method for chaining multiple smaller store queue entries for more efficient store queue usage
US9298393B2 (en) 2008-06-12 2016-03-29 Seagate Technology Llc Buffer management for increased write speed in large sector data storage device
US20090313426A1 (en) * 2008-06-12 2009-12-17 Seagate Technology, Llc Buffer Management for Increased Write Speed in Large Sector Data Storage Device
US20090310412A1 (en) * 2008-06-17 2009-12-17 Jun-Ho Jang Methods of data management in non-volatile memory devices and related non-volatile memory systems
CN101639808A (en) * 2008-06-17 2010-02-03 三星电子株式会社 Methods of data management in non-volatile memory devices and related non-volatile memory systems
US8392662B2 (en) * 2008-06-17 2013-03-05 Samsung Electronics Co., Ltd. Methods of data management in non-volatile memory devices and related non-volatile memory systems
KR101497074B1 (en) * 2008-06-17 2015-03-05 삼성전자주식회사 Non-volatile memory system and data manage method thereof
US8312217B2 (en) * 2008-12-30 2012-11-13 Rasilient Systems, Inc. Methods and systems for storing data blocks of multi-streams and multi-user applications
US20100205369A1 (en) * 2008-12-30 2010-08-12 Rasilient Systems, Inc. Methods and Systems for Storing Data Blocks of Multi-Streams and Multi-User Applications
US20100318879A1 (en) * 2009-06-11 2010-12-16 Samsung Electronics Co., Ltd. Storage device with flash memory and data storage method
US9037776B2 (en) * 2009-06-11 2015-05-19 Samsung Electronics Co., Ltd. Storage device with flash memory and data storage method
US9602636B1 (en) 2009-09-09 2017-03-21 Amazon Technologies, Inc. Stateless packet segmentation and processing
US8266383B1 (en) * 2009-09-28 2012-09-11 Nvidia Corporation Cache miss processing using a defer/replay mechanism
US8417901B2 (en) 2009-09-29 2013-04-09 Silicon Motion, Inc. Combining write commands to overlapping addresses or to a specific page
US20110078393A1 (en) * 2009-09-29 2011-03-31 Silicon Motion, Inc. Memory device and data access method
US8650427B2 (en) 2011-03-31 2014-02-11 Intel Corporation Activity alignment algorithm by masking traffic flows
WO2012134683A3 (en) * 2011-03-31 2013-03-28 Intel Corporation Activity alignment algorithm by masking traffic flows
US9032165B1 (en) * 2013-04-30 2015-05-12 Amazon Technologies, Inc. Systems and methods for scheduling write requests for a solid state storage device
US9483189B2 (en) 2013-04-30 2016-11-01 Amazon Technologies Inc. Systems and methods for scheduling write requests for a solid state storage device
WO2016160159A1 (en) * 2015-03-27 2016-10-06 Intel Corporation Memory controller for multi-level system memory with coherency unit
US10303373B2 (en) * 2017-06-12 2019-05-28 Seagate Technology Llc Prioritizing commands in a data storage device
WO2021159608A1 (en) * 2020-02-13 2021-08-19 苏州浪潮智能科技有限公司 Protocol buffers-based mirror cache method

Also Published As

Publication number Publication date
US6658533B1 (en) 2003-12-02

Similar Documents

Publication Publication Date Title
US6658533B1 (en) Method and apparatus for write cache flush and fill mechanisms
US5958040A (en) Adaptive stream buffers
KR100278328B1 (en) Cache miss buffer
US7284102B2 (en) System and method of re-ordering store operations within a processor
US5577227A (en) Method for decreasing penalty resulting from a cache miss in multi-level cache system
JP3875738B2 (en) Method for accessing tags and data arrays separated in a load / store unit with a load buffer and apparatus having the arrays
US6105111A (en) Method and apparatus for providing a cache management technique
US7676632B2 (en) Partial cache way locking
JP3323212B2 (en) Data prefetching method and apparatus
JP3816586B2 (en) Method and system for generating prefetch instructions
JP3888508B2 (en) Cache data management method
US5555392A (en) Method and apparatus for a line based non-blocking data cache
US20030033461A1 (en) Data processing system having an adaptive priority controller
US6078992A (en) Dirty line cache
US20070094450A1 (en) Multi-level cache architecture having a selective victim cache
EP0019358B1 (en) Hierarchical data storage system
JPH06243039A (en) Method for operating order in cache memory system and microprocessor unit
KR20060017881A (en) Method and apparatus for dynamic prefetch buffer configuration and replacement
JP4218820B2 (en) Cache system including direct mapped cache and full associative buffer, its control method and recording medium
US7237067B2 (en) Managing a multi-way associative cache
US6715035B1 (en) Cache for processing data in a memory controller and a method of use thereof to reduce first transfer latency
JP2001297037A (en) Smart cache
US6959363B2 (en) Cache memory operation
JP3763579B2 (en) Apparatus and method for reducing read miss latency by predicting instruction read first
EP0470738A1 (en) Cache memory system and operating method

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION