WO2004059491A1 - Cache victim sector tag buffer - Google Patents


Info

Publication number
WO2004059491A1
Authority
WO
WIPO (PCT)
Prior art keywords
cache
sector
sub
line
replaced
Prior art date
Application number
PCT/CN2002/000935
Other languages
French (fr)
Inventor
Chunrong Lai
Original Assignee
Intel Corporation
Priority date
Filing date
Publication date
Application filed by Intel Corporation
Priority to AU2002357420A1
Priority to PCT/CN2002/000935
Priority to US10/365,636 (US7000082B2)
Publication of WO2004059491A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating



Abstract

A method of operating a sub-sector cache includes receiving a request for a first sub-sector of a first cache line. The method further includes identifying a first replaced line in a cache data RAM, the first replaced line including a plurality of replaced sub-sectors. The method further includes storing the first sub-sector in the cache data RAM in place of a first replaced sub-sector, and storing an identifier of at least a second replaced sub-sector in a victim sector tag buffer.

Description

CACHE VICTIM SECTOR TAG BUFFER
FIELD OF THE INVENTION
[0001] The present invention is directed to computer cache memory. More particularly, the present invention is directed to a cache memory having sectors and a victim sector tag buffer.
BACKGROUND INFORMATION
[0002] Advances in computer processor speeds increasingly highlight a growing gap between the relatively high speed of the computer processors and the relatively low speed of computer memory systems. If a computer processor is constantly waiting for data from the memory system, the speed of the processor cannot always be utilized.
[0003] One way to increase the speed of a computer memory system is to improve the memory hierarchy design of the computer memory system. Computer memory systems typically include different levels of memory, including fast cache memory, slower main memory, and even slower disk memory. Improved designs of cache memory increase the likelihood of a cache memory "hit" which avoids the time penalty of having to retrieve data from main memory.
[0004] One improved type of cache memory is sector cache. With sector cache, a cache "line" is divided into sub-sectors. One example of sector cache is found on the
Pentium 4 processor from Intel Corp. The Pentium 4 processor includes an L2 cache which has a 128-byte long cache line that is divided into two 64-byte sub-sectors.
[0005] With sector cache, a cache line miss results in all sub-sectors of the cache line being marked as "invalid" using an invalid bit. However, only a single sub-sector is read on a miss. Therefore, the remaining sub-sectors of the line continue to have invalid or unusable data that takes up space in the cache memory.
[0006] Based on the foregoing, there is a need for an improved cache memory system having sub-sectors.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Fig. 1 is a block diagram of a computer system that includes a cache in accordance with one embodiment of the present invention.
[0008] Fig. 2 provides an example of the storage of sub-sector tags in a victim sector tag buffer in accordance with one embodiment of the present invention. [0009] Fig. 3 illustrates a sequence of streaming accesses that are handled by a victim sector tag buffer in accordance with one embodiment of the present invention.
DETAILED DESCRIPTION
[0010] One embodiment of the present invention is a cache that includes a victim sector tag ("VST") buffer. The VST buffer identifies sub-sectors of replaced lines that include valid data, despite the presence of an "invalid" flag for that sub-sector.
[0011] Fig. 1 is a block diagram of a computer system 40 that includes a cache 10 in accordance with one embodiment of the present invention. Computer system 40 includes a processor 20, cache 10 and a memory bus 24. Processor 20 can be any type of general purpose processor. Cache 10 may be integrated within processor 20, or external to processor 20 as shown in Fig. 1. Memory bus 24 connects processor 20 and cache 10 to the remaining memory sub-system of computer system 40. Memory that may be coupled to memory bus 24 may include additional cache memory, random access memory ("RAM"), read-only memory ("ROM"), disk-drive memory, or any type of memory that may be present in a computer system.
[0012] Cache 10 includes a cache data RAM 16. Cache data RAM 16 stores cache data that is received either from processor 20, or from memory coupled to memory bus 24. In one embodiment, the data stored in cache data RAM 16 is stored in the form of cache "lines", which are blocks of data. Each cache line is divided into multiple sub- sectors (i.e., sub-sector 22 and sub-sector 24).
[0013] Cache 10 further includes a cache tag RAM 12. Cache tag RAM 12 stores "tags" or identifiers of each line stored in cache data RAM 16, and the corresponding location in cache data RAM 16 where the line is stored. For example, the first line in cache data RAM 16 may have a tag of "A" and may be stored in location 0200. Further the second line in cache data RAM 16 may have a tag of "B" and may be stored in location 0400.
[0014] Cache 10 further includes a valid bits module 14. Valid bits module 14 stores a "valid" bit for each sub-sector of each line stored in cache data RAM 16. The valid bit indicates whether the corresponding sub-sector includes valid or invalid data.
[0015] Cache 10 further includes a VST buffer 18. VST buffer 18 stores entries which indicate when a sub-sector of a line stored in cache data RAM 16, which is marked as an invalid sector by valid bits module 14, actually stores valid data which can be used by processor 20.
[0016] Cache data RAM 16, cache tag RAM 12 and valid bits module 14 generally operate as the prior art equivalent modules that implement a sub-sector cache system. In general, this operation begins when processor 20 requests a sub-sector of a line of data stored in memory. The memory request is processed by cache 10 by first identifying the tag of the requested line. That tag is then searched for in cache tag RAM 12. If the desired tag exists, the valid bit for the requested sub-sector of the line is queried in valid bits module 14. If the requested sub-sector is valid, then that sub-sector is retrieved from cache data RAM 16 and sent to processor 20.
[0017] A cache miss may occur if either the desired tag is not found in cache tag RAM 12 (i.e., the desired line is not in cache data RAM 16), or the requested sub-sector is invalid. When a cache miss occurs, one of the lines in cache data RAM 16 is designated as a "replaced line", and each sub-sector of the replaced line is marked as "invalid" in valid bits module 14 (these sub-sectors can be referred to as "replaced sub-sectors"). The requested sub-sector is then retrieved from memory bus 24 and stored in place of the corresponding sub-sector of the replaced line. The corresponding cache tag and valid bit are also updated. The remaining sub-sectors of the replaced line are not changed, but in prior art systems they remain unusable because these sub-sectors remain marked as invalid in valid bits module 14.
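The hit/miss decision described above can be sketched in a few lines of Python. This is an illustrative model only, not the patent's implementation; the names (`CacheLine`, `lookup`) are hypothetical.

```python
class CacheLine:
    """One line of a sector cache: a tag plus per-sub-sector data and valid bits."""
    def __init__(self, tag, sub_sectors):
        self.tag = tag                           # identifier held in the cache tag RAM
        self.sub_sectors = list(sub_sectors)     # data held in the cache data RAM
        self.valid = [True] * len(sub_sectors)   # valid bits module: one bit per sub-sector

def lookup(lines, tag, sub_index):
    """A request hits only if the tag matches AND the sub-sector's valid bit is set."""
    for line in lines:
        if line.tag == tag:
            if line.valid[sub_index]:
                return True, line.sub_sectors[sub_index]  # cache hit
            return False, None   # line present but sub-sector invalid: sub-sector miss
    return False, None           # tag not found: whole-line miss
```

For example, after `lines = [CacheLine("A", ["a0", "a1"])]`, a request for tag "A", sub-sector 1 hits; clearing `lines[0].valid[1]` turns the same request into a sub-sector miss.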
[0018] In one embodiment of the present invention, VST buffer 18 stores the sub-sector tags of recently replaced lines that include usable data. Fig. 2 provides an example of the storage of sub-sector tags in VST buffer 18 in accordance with one embodiment of the present invention.
[0019] At box 100, tag A cache line, identified at 101, includes two valid sub-sectors (identified by the two "V"s).
[0020] At box 110, processor 20 requests the first sub-sector of tag B cache line. Tag B is not stored in cache data RAM 16. Therefore, tag A cache line is designated as the replaced line and both sub-sectors are marked as invalid. The first sub-sector of tag B cache line is then retrieved and stored in cache data RAM 16 in place of tag A cache line. As identified at 111, tag B cache line has valid data in its first sub-sector, and invalid data in its second sub-sector. However, the data in the second sub-sector is in fact the valid data of the second sub-sector of tag A cache line. Consequently, an entry 112 is stored in VST buffer 18 that indicates that the second sub-sector of tag B cache line includes valid data for tag A.
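The state after box 110 can be captured in a toy model (all names are hypothetical; a real VST buffer would be a small hardware structure, not a dictionary): entry 112 records that tag A's second sub-sector still physically lives in tag B's line.

```python
# State after box 110: tag B's line occupies the slot, but its second
# sub-sector still physically holds tag A's old (valid) data.
data_ram = {("B", 0): "B-data-0", ("B", 1): "A-data-1"}  # keyed by (line tag, sub-sector)
vst_buffer = {("A", 1): ("B", 1)}                        # entry 112

def vst_lookup(tag, sub_index):
    """On a tag miss, consult the VST buffer before going to memory."""
    location = vst_buffer.get((tag, sub_index))
    return data_ram[location] if location is not None else None

# Box 120: the request for tag A's second sub-sector is served from the
# cache (out of tag B's line) instead of causing a full miss.
print(vst_lookup("A", 1))  # A-data-1
```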
[0021] At box 120, processor 20 requests the second sub-sector of tag A cache line. The first check of cache tag RAM 12 initially results in a cache miss because tag A cache line was replaced by tag B cache line at box 110. However, VST buffer 18 is then queried, and entry 112 indicates that the data is available in the second sub-sector of tag B cache line. Consequently, the requested data is retrieved from tag B cache line (indicated by the shaded portion of 111) and a cache miss is avoided.
[0022] In other embodiments, VST buffer 18 can be queried before the requested cache line tag is searched in cache tag RAM 12.
[0023] The existence of VST buffer 18 in accordance with embodiments of the present invention prevents some cache misses, thus increasing the efficiency of cache 10. Unlike traditional data buffers, VST buffer 18 buffers the sector tags that have recently been replaced out of the cache, so that valid data still stored in the cache can be used.
[0024] In order to provide an example of the advantages of embodiments of the present invention, simulation studies were done using a cache hierarchy of an 8KB direct-mapped level 1 ("DL1") cache with a 32-byte cache line size, and a 512KB 8-way associative level 2 ("L2") cache using a least recently used ("LRU") replacement policy with a 128-byte cache line and a 64-byte sub cache line. All extra actions related to the VST buffer, including insert, update and remove, are performed when there is a cache miss (whole cache line miss or sub-sector miss), so the VST buffer will not influence the cache hit penalty. The efficiency of the VST buffer can be computed from the following formula:
cache misses save rate = (cache misses of sector cache - cache misses of sector cache with VST buffer) / (cache misses of sector cache - cache misses of non-sector cache)
[0025] Where the "non-sector cache" is a 512KB, 64-byte cache line size, 8-way associative L2 cache with an LRU replacement policy. Several benchmarks are used for the evaluation: "mesa", "art" and "ammp" from Spec2K, and a commercial-like workload, "LVCSR", which is a speech recognition system.
[0026] The cache misses save rate of a VST buffer in accordance with one embodiment of the present invention, shown in Table 1, was obtained with an LRU-replaced VST buffer:
[Table 1 appears only as an image in the original publication; its per-benchmark save rates are not reproduced here.]
Table 1: cache misses save rate of the victim sector tag buffer
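The save-rate formula above can be restated as code. The miss counts below are made-up numbers for illustration only, not the simulation results of Table 1.

```python
def cache_misses_save_rate(sector, sector_with_vst, non_sector):
    """Fraction of the sector cache's extra misses (relative to a
    non-sector cache) that the VST buffer eliminates."""
    return (sector - sector_with_vst) / (sector - non_sector)

# Hypothetical counts: 1000 misses for the sector cache, 900 with a VST
# buffer, 800 for the equivalent non-sector cache.
print(cache_misses_save_rate(1000, 900, 800))  # 0.5
```

A save rate of 1.0 would mean the VST buffer recovers all of the misses that sectoring introduced; 0.0 would mean it recovers none.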
[0027] One embodiment of a VST buffer can be implemented using the following software or hardware code of Table 2:
[Table 2 appears only as an image in the original publication and is not reproduced here.]
Table 2: Victim sector tag buffer handling code on a cache miss
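Since Table 2 survives only as an image, its code cannot be transcribed. The sketch below is a hypothetical reconstruction of the miss-time bookkeeping the description implies: insert an entry for each still-valid sub-sector of the replaced line, and disable entries whose backing data is overwritten. All names are assumptions, and for simplicity this version drops every entry backed by the replaced slot before inserting new ones rather than attempting to chain survivors.

```python
def vst_handle_miss(vst, slot, old_tag, old_valid, requested_sub):
    """Update the VST buffer on a cache miss only; it is never touched on a
    hit, so the cache hit penalty is unaffected."""
    # Remove: the slot's contents are changing, so entries whose backing
    # data lives in this slot are disabled (cf. Fig. 3, step 220).
    for victim in [k for k, v in vst.items() if v[0] == slot]:
        del vst[victim]
    # Insert: each still-valid sub-sector of the replaced line survives
    # physically in the slot, except the one about to be overwritten.
    for i, valid in enumerate(old_valid):
        if valid and i != requested_sub:
            vst[(old_tag, i)] = (slot, i)

vst = {}
# Tag A's line (both sub-sectors valid) in slot 7 is replaced to bring in
# another line's first sub-sector:
vst_handle_miss(vst, 7, "A", [True, True], 0)
print(vst)  # {('A', 1): (7, 1)}
```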
[0028] Embodiments of the present invention also provide advantages over prior art victim buffer systems when a number of streaming (or sequential) accesses are going to the cache. With prior art victim buffer systems, many cache lines will be evicted, which will thrash the victim buffer.
[0029] Fig. 3 illustrates a sequence of streaming accesses that are handled by a VST buffer in accordance with one embodiment of the present invention. At 200, the VST buffer (assuming a 1/2 sector cache) is empty. At 210, after a "read add1" instruction, a VST buffer entry is created. At 220, after a "read add1+subsectorsize" instruction, the VST buffer entry is disabled. Finally, at 230, after a "read add1+subsectorsize*2" instruction, an additional buffer entry in the VST buffer is created in the same space as the previous VST buffer entry, without influencing other VST buffer entries. Therefore, as shown, the VST buffer is not thrashed during the streaming of instructions.
[0030] Several embodiments of the present invention are specifically illustrated and/or described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.

Claims

WHAT IS CLAIMED IS:
1. A method of operating a sub-sector cache comprising: receiving a request for a first sub-sector of a first cache line; identifying a first replaced line in a cache data random access memory (RAM), the first replaced line comprising a plurality of replaced sub-sectors; storing the first sub-sector in the cache data RAM in place of a first replaced sub-sector; and storing an identifier of at least a second replaced sub-sector in a victim sector tag buffer.
2. The method of claim 1, wherein the first sub-sector is retrieved from a memory subsystem.
3. The method of claim 1, wherein the second replaced sub-sector comprises valid data of a second cache line.
4. The method of claim 3, further comprising: receiving a request for a second sub-sector of the second cache line; retrieving the identifier from the victim sector tag buffer; and retrieving the second sub-sector from the first cache line in the cache data RAM based on the identifier.
5. The method of claim 1, further comprising: marking the replaced sub-sectors as invalid.
6. The method of claim 1, wherein the first sub-sector of the first cache line is not stored in the cache data RAM when the request is received.
7. The method of claim 1, wherein the identifier comprises a tag of the first cache line.
8. The method of claim 7, further comprising disabling the identifier.
9.A cache system comprising: a sector cache data random access memory (RAM); a cache tag RAM coupled to the cache data RAM; and a victim sector tag buffer coupled to the cache data RAM.
10. The cache system of claim 9, further comprising: a valid bits module coupled to the cache data RAM.
11. The cache system of claim 9, wherein said victim sector tag buffer is configured to store an identity of a sub-sector of a replaced line.
12. The cache system of claim 10, wherein the sub-sector of the replaced line is marked as invalid in said valid bits module.
13. The cache system of claim 10, wherein the identity comprises a tag of a cache line stored in said cache data RAM.
14. The cache system of claim 11, wherein said victim sector tag buffer is further configured to disable the identity.
15. A computer system comprising: a processor; a cache system coupled to said processor; and a memory bus coupled to said cache system; wherein the cache system comprises: a sector cache data random access memory (RAM); a cache tag RAM coupled to the cache data RAM; and a victim sector tag buffer coupled to the cache data RAM.
16. The computer system of claim 15, said cache system further comprising: a valid bits module coupled to the cache data RAM.
17. The computer system of claim 15, wherein said victim sector tag buffer is configured to store an identity of a sub-sector of a replaced line.
18. The computer system of claim 17, wherein the sub-sector of the replaced line is marked as invalid in said valid bits module.
19. The computer system of claim 17, wherein the identity comprises a tag of a cache line stored in said cache data RAM.
20. The computer system of claim 17, wherein said victim sector tag buffer is further configured to disable the identity to prevent thrashing.
21. A method of storing data in a cache comprising: designating a first cache line stored in a cache data random access memory (RAM) as a replaced line, the first cache line having a first replaced sub-sector and a second replaced sub-sector; retrieving a new sub-sector from a memory subsystem; storing the new sub-sector in place of the first replaced sub-sector; and storing an identity of the second replaced sub-sector in a victim sector tag buffer.
22. The method of claim 21, wherein the new sub-sector forms a second cache line.
23. The method of claim 21, wherein the identifier comprises a tag of the first cache line.
24. The method of claim 21, further comprising disabling the identifier.
PCT/CN2002/000935 2002-12-30 2002-12-30 Cache victim sector tag buffer WO2004059491A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU2002357420A AU2002357420A1 (en) 2002-12-30 2002-12-30 Cache victim sector tag buffer
PCT/CN2002/000935 WO2004059491A1 (en) 2002-12-30 2002-12-30 Cache victim sector tag buffer
US10/365,636 US7000082B2 (en) 2002-12-30 2003-02-13 Cache victim sector tag buffer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2002/000935 WO2004059491A1 (en) 2002-12-30 2002-12-30 Cache victim sector tag buffer

Publications (1)

Publication Number Publication Date
WO2004059491A1 true WO2004059491A1 (en) 2004-07-15

Family

ID=32602088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2002/000935 WO2004059491A1 (en) 2002-12-30 2002-12-30 Cache victim sector tag buffer

Country Status (3)

Country Link
US (1) US7000082B2 (en)
AU (1) AU2002357420A1 (en)
WO (1) WO2004059491A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7197605B2 (en) * 2002-12-30 2007-03-27 Intel Corporation Allocating cache lines
US10223026B2 (en) * 2013-09-30 2019-03-05 Vmware, Inc. Consistent and efficient mirroring of nonvolatile memory state in virtualized environments where dirty bit of page table entries in non-volatile memory are not cleared until pages in non-volatile memory are remotely mirrored
US10140212B2 (en) 2013-09-30 2018-11-27 Vmware, Inc. Consistent and efficient mirroring of nonvolatile memory state in virtualized environments by remote mirroring memory addresses of nonvolatile memory to which cached lines of the nonvolatile memory have been flushed
KR20150106132A (en) * 2014-03-11 2015-09-21 삼성전자주식회사 Method and apparatus for controlling a cache memory of electronic device
US20180189179A1 (en) * 2016-12-30 2018-07-05 Qualcomm Incorporated Dynamic memory banks

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0817067A2 (en) * 1996-07-01 1998-01-07 Sun Microsystems, Inc. Integrated processor/memory device with victim data cache
US20020042860A1 (en) * 2000-10-05 2002-04-11 Yoshiki Murakami Cache system
US20020188809A1 (en) * 1999-01-19 2002-12-12 Arm Limited Memory control within data processing systems

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4797814A (en) * 1986-05-01 1989-01-10 International Business Machines Corporation Variable address mode cache
US5692152A (en) * 1994-06-29 1997-11-25 Exponential Technology, Inc. Master-slave cache system with de-coupled data and tag pipelines and loop-back
US5893147A (en) * 1994-12-22 1999-04-06 Intel Corporation Method and apparatus for distinguishing system memory data from alternative memory data in a shared cache memory
US5845324A (en) * 1995-04-28 1998-12-01 Unisys Corporation Dual bus network cache controller system having rapid invalidation cycles and reduced latency for cache access
US6199142B1 (en) * 1996-07-01 2001-03-06 Sun Microsystems, Inc. Processor/memory device with integrated CPU, main memory, and full width cache and associated method
JP3985889B2 (en) * 2001-08-08 2007-10-03 株式会社ルネサステクノロジ Semiconductor device
US7073026B2 (en) * 2002-11-26 2006-07-04 Advanced Micro Devices, Inc. Microprocessor including cache memory supporting multiple accesses per cycle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0817067A2 (en) * 1996-07-01 1998-01-07 Sun Microsystems, Inc. Integrated processor/memory device with victim data cache
US20020188809A1 (en) * 1999-01-19 2002-12-12 Arm Limited Memory control within data processing systems
US20020042860A1 (en) * 2000-10-05 2002-04-11 Yoshiki Murakami Cache system

Also Published As

Publication number Publication date
AU2002357420A1 (en) 2004-07-22
US20040128447A1 (en) 2004-07-01
US7000082B2 (en) 2006-02-14

Similar Documents

Publication Publication Date Title
EP1654660B1 (en) A method of data caching
US7380047B2 (en) Apparatus and method for filtering unused sub-blocks in cache memories
US10133678B2 (en) Method and apparatus for memory management
EP1573555B1 (en) Page descriptors for prefetching and memory management
US7380065B2 (en) Performance of a cache by detecting cache lines that have been reused
US20090106494A1 (en) Allocating space in dedicated cache ways
JP3096414B2 (en) Computer for storing address tags in directories
US9582282B2 (en) Prefetching using a prefetch lookup table identifying previously accessed cache lines
US20030135696A1 (en) Pseudo least-recently-used (PLRU) replacement method for a multi-node snoop filter
EP1532531A1 (en) Method and apparatus for multithreaded cache with simplified implementation of cache replacement policy
JPS638848A (en) Cache tag look-aside
US5809526A (en) Data processing system and method for selective invalidation of outdated lines in a second level memory in response to a memory request initiated by a store operation
US6959363B2 (en) Cache memory operation
US7254681B2 (en) Cache victim sector tag buffer
US20020174304A1 (en) Performance improvement of a write instruction of a non-inclusive hierarchical cache memory unit
US7472226B1 (en) Methods involving memory caches
US6792498B2 (en) Memory system with mechanism for assisting a cache memory
US7237084B2 (en) Method and program product for avoiding cache congestion by offsetting addresses while allocating memory
US7000082B2 (en) Cache victim sector tag buffer
US6397298B1 (en) Cache memory having a programmable cache replacement scheme
US10922230B2 (en) System and method for identifying pendency of a memory access request at a cache entry
US6601155B2 (en) Hot way caches: an energy saving technique for high performance caches
KR101976320B1 (en) Last level cache memory and data management method thereof
JPS63284649A (en) Cache memory control system
JP2000066954A (en) Replacing method for cache memory and cache memory using the method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP