US20060044934A1 - Cluster based non-volatile memory translation layer - Google Patents

Cluster based non-volatile memory translation layer

Info

Publication number
US20060044934A1
US20060044934A1 (application US10/933,017)
Authority
US
United States
Prior art keywords
cluster
address
logical
frequently updated
physical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/933,017
Inventor
Wanmo Wong
Mark Jahn
Frank Sepulveda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology, Inc.
Priority to US10/933,017
Assigned to MICRON TECHNOLOGY, INC.; assignors: JAHN, MARK; SEPULVEDA, FRANK; WONG, WANMO
Publication of US20060044934A1
Priority to US12/372,405 (published as US8375157B2)
Priority to US13/764,213 (published as US8595424B2)
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/0223: User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F 12/023: Free address space management
    • G06F 12/0238: Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F 12/0246: Memory management in non-volatile memory in block erasable memory, e.g. flash memory
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2212/00: Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/72: Details relating to flash memory management
    • G06F 2212/7201: Logical to physical mapping or translation of blocks or pages

Definitions

  • the present invention relates generally to integrated circuits and in particular the present invention relates to sector address translation of non-volatile memory devices.
  • RAM: random-access memory
  • ROM: read-only memory
  • EEPROM: electrically erasable programmable read-only memory
  • EEPROMs comprise a large number of memory cells having electrically isolated gates (floating gates). Data is stored in the memory cells in the form of charge on the floating gates. Charge is transported to or removed from the floating gates by specialized programming and erase operations, respectively.
  • Flash memory is a type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time.
  • a typical Flash memory comprises a memory array, which includes a large number of memory cells. Each of the memory cells includes a floating gate field-effect transistor capable of holding a charge. The data in a cell is determined by the presence or absence of the charge in the floating gate.
  • the cells are usually grouped into sections called “erase blocks.”
  • the memory cells of a Flash memory array are typically arranged into a “NOR” architecture (the cells arranged in an array of rows and columns, each cell directly coupled to a bitline) or a “NAND” architecture (cells coupled into “strings” of cells, such that each cell is coupled indirectly to a bitline and requires activating the other cells of the string for access).
  • Each of the cells within an erase block can be electrically programmed in a random basis by charging the floating gate. The charge can be removed from the floating gate by a block erase operation, wherein all floating gate memory cells in the erase block are erased in a single operation.
  • Erase block management provides an abstraction layer for this to the host, allowing the Flash device to appear as a freely rewrite-able device.
  • Erase block management also allows for load leveling of the internal floating gate memory cells to help prevent write fatigue failure. Write fatigue is where the floating gate memory cell, after repetitive writes and erasures, no longer properly erases and removes charge from the floating gate. Load leveling procedures increase the mean time between failure of the erase block and Flash memory device as a whole.
  • the host interface and/or erase block management routines additionally allow the Flash memory device to appear as a read/write mass storage device (i.e., a magnetic disk) to the host, storing data in the Flash memory in 512-byte sectors.
  • the erase block management routines along with the address translation layer provide the necessary linkage between the host and the internal Flash memory device erase block array; logically mapping logical sectors to physical sectors on the Flash device and managing block erasure.
  • the various embodiments relate to non-volatile memory devices and memory subsystems that utilize cluster based logical block/sector to physical block/sector address translation.
  • the translation of logical blocks/sectors to the physical blocks/sectors by a controller and/or software/firmware is necessary for a non-volatile memory to appear as a freely rewriteable device to the system or processor that it is coupled to.
  • the controller or firmware responsible for this translation is called the translation layer (TL).
  • Embodiments of the present invention utilize cluster based address translation to translate logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. Cluster address translation closely represents the actual data storage use of the file system and its logical block use/grouping.
  • variable cluster granularity allows the non-volatile memory storage to closely match its use and the data that will be stored in it.
  • a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the cluster stores a single sector/logical block and new sequential physical sectors/blocks of the cluster are written in turn with each new update of the logical block, the previous physical block holding the old data being invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move and invalidate/erase the cluster containing the old data.
  • the invention provides a system comprising a host coupled to a non-volatile memory device, wherein the system is adapted to store logical blocks of data in the non-volatile memory device, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
  • the invention provides a system comprising a host coupled to a non-volatile memory subsystem, wherein the non-volatile memory subsystem comprises a plurality of non-volatile memory devices, and wherein the system is adapted to store logical blocks of data in the non-volatile memory subsystem, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
  • the invention provides a Flash memory device comprising a memory array having a plurality of floating gate memory cells arranged in a plurality of clusters, wherein each cluster contains a plurality of sequentially addressed sectors.
  • the invention provides a method of operating a non-volatile memory comprising storing logical blocks in clusters of sequentially addressed logical blocks in a non-volatile memory.
  • the invention provides a method of translating a logical block address to a physical address in a non-volatile memory comprising looking up a logical block address in a cluster address translation table to translate a logical cluster address to a cluster physical address, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks, and determining the physical block address offset for the logical block address within the physical cluster.
  • the invention provides a method of translating a logical block address to a physical address in a non-volatile memory comprising scanning a non-volatile memory on physical cluster address basis to locate a logical cluster address associated with a physical cluster, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks, and determining the physical block address offset for the logical block address within the physical cluster.
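  The scan-based translation above can be sketched in Python. This is a minimal illustration, not the patent's implementation: the header layout and the names (`BLOCKS_PER_CLUSTER`, `cluster_headers`, `scan_translate`) are assumptions made for the example.

```python
# Hypothetical sketch of scan-based cluster address translation: each
# physical cluster's header is assumed to store the logical cluster
# address it currently holds (None if erased).

BLOCKS_PER_CLUSTER = 4  # assumed cluster granularity

def scan_translate(logical_block_addr, cluster_headers):
    """Scan physical cluster headers for the matching logical cluster.

    Returns (physical_cluster, block_offset_within_cluster), or None
    if no cluster holds the requested logical cluster address.
    """
    logical_cluster = logical_block_addr // BLOCKS_PER_CLUSTER
    offset = logical_block_addr % BLOCKS_PER_CLUSTER
    for physical_cluster, header in enumerate(cluster_headers):
        if header == logical_cluster:
            return physical_cluster, offset
    return None

# Only one header per cluster is examined, so a 4-block cluster cuts
# the scan length to a quarter of a per-block scan.
print(scan_translate(6, [None, 3, 1, 0]))  # logical cluster 1 -> (2, 2)
```

  Because the scan touches one header per cluster rather than one per block, larger cluster granularity directly shortens the physical scan, which is the advantage the text attributes to cluster-based scanning.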
  • FIGS. 1A and 1B detail memory systems with memory and memory controllers in accordance with embodiments of the present invention.
  • FIGS. 2A and 2B detail encoding of logical address blocks/sectors in Flash memory arrays in accordance with embodiments of the present invention.
  • FIG. 3A details a block diagram of a logical block address translation in a memory system of the prior art.
  • FIGS. 3B and 3C detail block diagrams of cluster based logical block address translation in accordance with embodiments of the present invention.
  • FIG. 4 details a flowchart of cluster based logical block address translation in accordance with embodiments of the present invention.
  • a non-volatile memory of the present invention manages logical block address translation in a cluster based approach.
  • the cluster based address translation approach allows a non-volatile or Flash memory embodiment of the present invention to translate cluster based logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. This allows for a smaller logical cluster to physical cluster address translation RAM look up table or faster physical scan of the physical cluster addresses of the non-volatile memory device or subsystem resulting in an improved performance.
  • variable cluster granularity provides an adjustable number of blocks/sectors per cluster.
  • a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the special cluster stores a single sector/logical block and new sequential physical sectors/blocks of the cluster are written in turn with each new update of the logical block, the previous physical block holding the old data being invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move then invalidate and/or erase the cluster containing the old data.
  • EBM: Erase Block Management
  • erase block management, typically under the control of an internal state machine, an external memory controller, or a software driver, provides an abstraction layer for this to the host (a system, a processor, or an external memory controller), allowing the non-volatile device to appear as a freely rewriteable device; this includes, but is not limited to, managing the logical address to physical address translation mapping with the translation layer, the assignment of erased and available erase blocks for utilization, and the scheduling of erase blocks that have been used and closed out for block erasure.
  • Erase block management also allows for load leveling of the internal floating gate memory cells to help prevent write fatigue failure.
  • Write fatigue is where the floating gate memory cell, after repetitive writes and erasures, no longer properly erases and removes charge from the floating gate.
  • Load leveling procedures increase the mean time between failure of the erase block and non-volatile/Flash memory device as a whole.
  • Flash memory array architectures are the “NAND” and “NOR” architectures, so called for the resemblance which the basic memory cell configuration of each architecture has to a basic NAND or NOR gate circuit, respectively.
  • Other types of non-volatile memory include, but are not limited to, Polymer Memory, Ferroelectric Random Access Memory (FeRAM), Ovonics Unified Memory (OUM), Nitride Read Only Memory (NROM), and Magnetoresistive Random Access Memory (MRAM).
  • the floating gate memory cells of the memory array are arranged in a matrix.
  • the gates of each floating gate memory cell of the array matrix are connected by rows to word select lines (word lines) and their drains are connected to column bit lines.
  • the source of each floating gate memory cell is typically connected to a common source line.
  • the NOR architecture floating gate memory array is accessed by a row decoder activating a row of floating gate memory cells by selecting the word line connected to their gates.
  • the row of selected memory cells then place their stored data values on the column bit lines by flowing a differing current if in a programmed state or not programmed state from the connected source line to the connected column bit lines.
  • a NAND Flash memory array architecture also arranges its array of floating gate memory cells in a matrix such that the gates of each floating gate memory cell of the array are connected by rows to word lines. However each memory cell is not directly connected to a source line and a column bit line. Instead, the memory cells of the array are arranged together in strings, typically of 8, 16, or more each, where the memory cells in the string are connected together in series, source to drain, between a common source line and a column bit line.
  • the NAND architecture floating gate memory array is then accessed by a row decoder activating a row of floating gate memory cells by selecting the word select line connected to their gates. In addition, the word lines connected to the gates of the unselected memory cells of each string are also driven.
  • the unselected memory cells of each string are typically driven by a higher gate voltage so as to operate them as pass transistors and allowing them to pass current in a manner that is unrestricted by their stored data values.
  • Current then flows from the source line to the column bit line through each floating gate memory cell of the series connected string, restricted only by the memory cells of each string that are selected to be read, thereby placing the current-encoded stored data values of the row of selected memory cells on the column bit lines.
  • DOS: Disk Operating System
  • a sector (of a magnetic disk drive) is the smallest unit of storage that the DOS operating system supports.
  • a logical block or sector (referred to herein as a logical block) has come to mean 512 bytes of information for DOS and most other operating systems in existence.
  • Flash and other non-volatile memory systems that emulate the storage characteristics of hard disk drives are preferably structured to support storage in 512 byte blocks along with additional storage for overhead associated with mass storage, such as ECC bits, status flags for the sector or erase block, and/or redundant bits.
  • the controller and/or software routines additionally allow the Flash memory device or a memory subsystem of Flash memory devices to appear as a read/write mass storage device (i.e., a magnetic disk) to the host by conforming the interface to the Flash memory to be identical to a standard interface for a conventional magnetic hard disk drive.
  • a read/write mass storage device, i.e., a magnetic disk
  • PCMCIA: Personal Computer Memory Card International Association
  • CF: Compact Flash
  • MMC: Multimedia Card
  • Flash memory device or Flash memory card, including one or more Flash memory array chips
  • PCMCIA-ATA: Personal Computer Memory Card International Association—Advanced Technology Attachment
  • USB: Universal Serial Bus
  • firmware or ROM routines are stored on a variety of machine usable storage mediums that include, but are not limited to, a non-volatile Flash memory, a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a one time programmable (OTP) device, a complex programmable logic device (CPLD), a memory controller, an application specific integrated circuit (ASIC), a CD-ROM, a magnetic media disk, etc.
  • FIG. 1A is a simplified diagram of a system 100 that incorporates a Flash memory device 104 embodiment of the present invention.
  • the Flash memory 104 is coupled to a processor 102 with an address/data bus 106 .
  • a control state machine 110 directs internal operation of the Flash memory device; managing the Flash memory array 108 and updating control registers and tables 114 .
  • the Flash memory array 108 contains floating gate memory cells arranged in a sequence of erase blocks 116 , 118 .
  • Each erase block 116 , 118 contains a series of physical pages or rows 120 , each page containing physical storage for one or more logical sectors or blocks 124 (shown here for illustration purposes as a single logical sector/block 124 per physical page/row 120 ) that contain a user data space and a control/overhead data space.
  • the overhead data space contains overhead information for operation of the logical block 124 , such as an error correction code (not shown), status flags, or an erase block management data field area (not shown).
  • the user data space in each logical block 124 is typically 512 bytes long.
  • It is noted that other interfaces to the Flash memory 104 and formats for the erase blocks 116 , 118 , physical pages 120 , and logical sectors/blocks 124 are possible and should be apparent to those skilled in the art with benefit of the present disclosure. It is also noted that additional Flash memory devices 104 may be incorporated into the system 100 as required.
  • the logical blocks are arranged in clusters and address translation of the logical block address to physical block address in the Flash memory 104 utilizes cluster based address translation incorporating embodiments of the present invention.
  • FIG. 1B is a simplified diagram of another system 150 that incorporates a Flash memory system (also known as a memory subsystem) 160 embodiment of the present invention.
  • the Flash memory system 160 such as a memory system or Flash memory card, is coupled to a processor 152 with an address 154 , control 156 , and data bus 158 .
  • Internal to the Flash memory system 160 , a memory controller 166 directs internal operation of the Flash memory system 160 ; managing the Flash memory devices 162 , directing data accesses, updating internal control registers and tables (not shown), and/or directing operation of other possible hardware systems (not shown) of the Flash memory system 160 .
  • the memory controller 166 is coupled to and controls one or more Flash memory devices 162 via an internal control bus 164 .
  • the logical blocks 124 of the one or more Flash memory devices 162 are arranged in clusters and the memory controller 166 has an internal cluster based address translation layer (not shown) that incorporates embodiments of the present invention.
  • the memory controller 166 may optionally incorporate a small local embedded processor to help manage the Flash memory system 160 . It is noted that other architectures of Flash memory systems 160 , external interfaces 154 , 156 , 158 , and manners of coupling the memory controller 166 to the Flash memory devices 162 , such as directly coupled individual control busses and signal lines, are possible and should be apparent to those skilled in the art with benefit of the present disclosure.
  • the Flash memory devices 162 each contain a sequence of erase blocks 116 , 118 in their internal memory arrays.
  • Each erase block 116 , 118 contains a series of physical pages 120 , each physical page 120 having one or more logical sectors or blocks 124 that contain a user data space and a control/overhead data space (shown here for illustration purposes as a single logical sector/block 124 per physical page/row 120 ).
  • the overhead data space can contain an ECC code (not shown) and other overhead information for operation of the logical block 124 , such as status flags, or an erase block management data field area (not shown).
  • FIGS. 2A and 2B detail encoding 200 , 220 of user data into sector/logical blocks of a Flash memory array.
  • user data 212 and header/overhead data 214 are shown in a memory array 202 (or in an erase block N 202 of a memory array), where a single 512-byte logical block is encoded in each physical page/row 210 of the memory array 202 .
  • the memory array 202 contains a series of rows 210 , each row containing a logical block having a user data area 204 and an overhead data area 206 .
  • user data 226 and header/overhead data 228 are shown in a memory array 222 (or in an erase block N 222 of a memory array), where multiple logical blocks 232 are encoded in each physical page/row 230 of the memory array 222 .
  • many memories support multiple logical sectors or logical blocks 232 within a single physical row page 230 .
  • NAND architecture Flash memories typically utilize this approach due to their generally higher memory cell density and larger row page sizes.
  • the memory row 230 contains multiple logical blocks/sectors 232 , each logical block 232 having a user data area 226 and an overhead data/block header section 228 .
  • each row page 230 of FIG. 2B contains 2112 bytes of data (4 × 512 bytes user data + 4 × 8 bytes ECC + 32 bytes for overhead) and is formatted to contain four logical blocks 232 having a user data area 226 of 512 bytes each.
  • the four logical sectors 232 are typically sequentially addressed N, N+1, N+2, and N+3, where N is a base logical sector address for the row page 230 . It is noted that the row pages 210 and 230 of FIGS. 2A and 2B are for illustration purposes and that other row page sector formats of differing data sizes, numbers of logical blocks/sectors, and relative positioning of sectors are possible.
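  The row-page arithmetic above can be checked directly. The byte ordering below (four data+ECC sectors followed by a single 32-byte page overhead area) is an assumed layout for illustration; the text fixes only the totals, not the exact byte positions.

```python
# Check of the FIG. 2B row-page arithmetic: 4 x (512 data + 8 ECC)
# sectors plus 32 bytes of page overhead = 2112 bytes per row page.

SECTOR_DATA = 512
SECTOR_ECC = 8
PAGE_OVERHEAD = 32
SECTORS_PER_PAGE = 4

page_size = SECTORS_PER_PAGE * (SECTOR_DATA + SECTOR_ECC) + PAGE_OVERHEAD
assert page_size == 2112  # matches the 2112-byte row page in the text

def sector_offset(n):
    """Assumed byte offset of sector n's user data within the page."""
    return n * (SECTOR_DATA + SECTOR_ECC)

print([sector_offset(n) for n in range(SECTORS_PER_PAGE)])  # [0, 520, 1040, 1560]
```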
  • the array is divided into a plurality of individually erasable groups of memory cells called erase blocks, which are each typically further divided into a plurality of 512-byte physical blocks.
  • the non-volatile memory is formatted to conform to the data structures and management data fields/tables of the file system or memory structure being represented.
  • Each physical block of the memory array also may contain a header or overhead data area that typically includes various data used in the management of the physical block. This management data can include such items as the status of the physical block (valid, erased/available, or to be erased/invalid) and an error correction code (ECC) for the data of the logical block.
  • the header typically also includes an identifier that identifies the logical block address for the physical block.
  • the translation layer in conjunction with the erase block management manages the storage of logical blocks in non-volatile memory devices or a non-volatile memory subsystem.
  • the client of a translation layer is typically the file system or operating system of an associated system or processor.
  • the goal of the translation layer/EBM layer is to make the non-volatile memory appear as a freely rewriteable device or magnetic disk/hard drive, allowing the client to read and write logical blocks to the coupled non-volatile memory. It is noted that other translation layers can allow the direct reading and writing of data to a non-volatile memory without presenting the non-volatile memory as a formatted file system.
  • FIG. 3A details a simplified block diagram of a prior art lookup table address translation system 300 .
  • a logical block address 302 of a logical block read/write access request is submitted to the address translation layer (not shown, but can be either a firmware routine executing on a processor of a system, address translation hardware of a memory controller or in a control circuit internal to the memory itself) which translates it to a physical block address by reference to a lookup table 304 , which is typically held in RAM.
  • the address translation system 300 then uses the translated logical address to access the indicated physical block from a row 308 of a non-volatile memory array 306 .
  • the physical blocks 308 of the memory array 306 would be scanned by the address translation system 300 for a header that contained the matching logical block address 302 .
  • Embodiments of the present invention utilize cluster based logical block/sector to physical block/sector address translation in non-volatile memory devices and memory subsystems.
  • in cluster based addressing and address translation, the non-volatile memory device or non-volatile memory subsystem is divided into a plurality of sequentially addressed clusters, wherein each cluster contains a plurality of sequentially addressed logical blocks or sectors.
  • a cluster contains 4 sequential logical blocks.
  • Address translation to translate logical block addresses to physical block addresses is then performed by a table lookup of the logical cluster address of the cluster containing the logical block and returns the base physical address of the cluster in the non-volatile memory. An address offset from the cluster base address or a short physical scan can then be used to access the requested logical block, which is sequentially addressed within the cluster.
  • Cluster address translation allows close matching of data storage use. In addition, the reduced number of base cluster addresses allows the use of a smaller lookup table that contains only the cluster addresses, allowing a smaller RAM footprint.
  • Physical scanning address translation of the non-volatile memory is also improved by cluster based addressing because of a reduced number of base addresses required to be scanned (logical blocks not on the dividing boundary between clusters/containing the cluster header can be skipped over, permitting the physical scanning to be reduced by a function of cluster granularity).
  • an individual logical block address is translated to an exact physical block location by taking the logical block address and integer dividing it by the number of logical blocks per cluster. The result of the integer division is then used to index into the cluster address lookup table to retrieve the cluster's base physical address. The remainder value is the index to the sector/block (the sector number of the sequential sectors of the cluster) within the selected cluster. The remainder value is multiplied by 512 (512 bytes per sector/block) and added to the cluster base address to get the physical address of the sector/block within the non-volatile memory.
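  The divide/modulo translation above can be sketched as follows. This is a minimal illustration under assumptions: 4-block clusters, 512-byte blocks, and an in-memory dict standing in for the RAM lookup table; the names and table contents are not from the patent.

```python
# Sketch of divide/modulo cluster address translation.

BLOCKS_PER_CLUSTER = 4  # assumed cluster granularity
BLOCK_SIZE = 512        # bytes per sector/logical block

# cluster_table[logical_cluster] -> base physical byte address of cluster
cluster_table = {0: 0x4000, 1: 0x0000, 2: 0x6000}  # illustrative contents

def translate(logical_block_addr):
    logical_cluster = logical_block_addr // BLOCKS_PER_CLUSTER  # table index
    block_index = logical_block_addr % BLOCKS_PER_CLUSTER       # sector within cluster
    base = cluster_table[logical_cluster]                       # RAM table lookup
    return base + block_index * BLOCK_SIZE                      # physical byte address

print(hex(translate(5)))  # logical cluster 1, block 1 -> 0x0000 + 512 = 0x200
```

  Note that the table holds one entry per cluster rather than one per block, which is the smaller-RAM-footprint benefit described above.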
  • the division can be done by simply masking off one or more of the least significant bits of the logical block address (the part of the binary address that relates to the address of the logical block within the cluster) to get the index into the cluster address translation lookup table to retrieve the associated physical cluster base address. The most significant bits can then be masked off to get an index to the logical block in the cluster.
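  The mask-based shortcut can be sketched like this, assuming a power-of-two cluster size (here 4 blocks, so the two least significant bits of the logical block address index the block within the cluster); the names are illustrative.

```python
# Bit-mask form of the cluster/block split, valid when the number of
# blocks per cluster is a power of two.

CLUSTER_BITS = 2                       # log2(blocks per cluster) = log2(4)
OFFSET_MASK = (1 << CLUSTER_BITS) - 1  # 0b11: low bits address block in cluster

def split(logical_block_addr):
    cluster_index = logical_block_addr >> CLUSTER_BITS  # drop the low bits
    block_index = logical_block_addr & OFFSET_MASK      # drop the high bits
    return cluster_index, block_index

print(split(13))  # (3, 1): cluster 3, second block of the cluster
```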
  • FIG. 3B details a simplified block diagram of a cluster based lookup table address translation system 320 of an embodiment of the present invention.
  • a logical block address 322 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown) which translates it to a physical cluster address by reference to a cluster address lookup table 324 .
  • a logical block address index to the selected logical block within the cluster is also generated.
  • the address translation system 320 uses the translated cluster address and the logical block index to access the indicated physical block from a row 328 of a non-volatile memory array 326 .
  • the physical clusters of the memory array 326 would be scanned by the address translation system 320 to locate the logical cluster address that contained the matching logical block address 322 .
  • the cluster granularity is adjustable and is selected upon memory device formatting or during system design and implementation, allowing for an adjustable number of blocks/sectors per cluster. This allows the non-volatile memory storage to be adjusted to closely match the data type and access usage it will be used for, the physical row size of the non-volatile memory for convenient accessing, the size of the cluster lookup table, and/or the scan time of the physical cluster scan.
  • a type of specially formatted cluster is utilized to store frequently updated sectors/logical blocks. This allows the cluster based translation layer to avoid the drawback of having to frequently copy, update and invalidate/erase a cluster containing an often updated sector/logical block, potentially causing excessive wear on the non-volatile memory and premature write fatigue failure of the part.
  • the special frequently updated sector cluster is also known as a page of logical blocks or a single sector cluster.
  • the cluster stores a time-wise sequence of a single sector/logical block. New sequential physical sectors/blocks of the cluster are written in turn with each new update of the stored logical block, and the previous physical block holding the old data is invalidated.
  • the address translation layer simply selects the most recently written/not invalid block of the single sector cluster.
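  A minimal model of the single-sector cluster behavior described above; the class, method names, and cluster size are illustrative, not from the patent.

```python
# Model of a frequently-updated single-sector cluster: each update is
# appended to the next free physical block, implicitly invalidating
# the previous copy; a read returns the most recently written block.

class SingleSectorCluster:
    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks  # physical blocks, in write order
        self.next_free = 0

    def update(self, data):
        if self.next_free == len(self.blocks):
            return False  # cluster exhausted: must now be relocated/erased
        self.blocks[self.next_free] = data
        self.next_free += 1
        return True

    def read(self):
        # the translation layer selects the most recently written block
        return self.blocks[self.next_free - 1] if self.next_free else None

c = SingleSectorCluster(4)
for v in ("v1", "v2", "v3"):
    c.update(v)
print(c.read())  # "v3": three updates absorbed with no move/erase yet
```

  Only when `update` returns `False` does the cluster need the copy-and-erase cycle that this format exists to defer.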
  • FIG. 3C details a simplified block diagram of a cluster based lookup table address translation system 340 of an embodiment of the present invention that incorporates frequently updated sector cluster addressing.
  • a logical block address 342 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown) which, if it is not a frequently updated sector/logical block, translates it to a physical cluster address by reference to a cluster address lookup table 344 .
  • a logical block address index to the selected logical block within the cluster is also generated.
  • the address translation system 340 uses the translated cluster address and the logical block index to access the indicated physical block from a row 348 of a non-volatile memory array 346 .
  • the address lookup is done on a separate logical block address lookup table 350 that only handles address translation for frequently updated logical blocks/sectors.
  • the address translation system 340 uses the physical address from the frequently updated logical block/sector address lookup table 350 to access the indicated cluster/page of logical blocks 352 and select the most recently updated logical block from it, allowing the frequently updated logical blocks to be managed on a separate basis.
  • the data written to the non-volatile memory is typically all placed in standard clusters and then is promoted to be stored in a frequently updated sector/page of logical blocks cluster 352 upon reaching a threshold level of updates.
  • the threshold level of updates can also be limited in time by ageing the last update, so that promotion only happens to logical blocks that have been recently updated on a frequent basis. It is noted that in one embodiment logical blocks could also be designated to be frequently updated when initially written to the non-volatile memory by the client system.
  • frequently updated logical blocks can also be demoted to standard cluster storage if they have not been updated recently or their number of recent updates falls below a moving average threshold level. This allows the specialized frequently updated single sector clusters to be kept to a minimum and utilized only for those sectors/blocks that require them.
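The promotion policy described above can be sketched as follows, assuming an illustrative update-count threshold and ageing window; the patent specifies neither concrete values nor data structures, so all names here are hypothetical:

```python
import time

UPDATE_THRESHOLD = 8   # promote after this many recent updates (illustrative value)
AGE_LIMIT_S = 60.0     # updates older than this ageing window no longer count (illustrative)

class UpdateTracker:
    """Track per-logical-block update times to drive promotion to (and demotion
    from) frequently updated single sector cluster storage."""
    def __init__(self):
        self.updates = {}  # logical block address -> list of update timestamps

    def record_update(self, lba, now=None):
        now = time.monotonic() if now is None else now
        self.updates.setdefault(lba, []).append(now)

    def is_frequently_updated(self, lba, now=None):
        """Promote only blocks updated frequently *and* recently: stale updates
        are aged out before the count is compared to the threshold."""
        now = time.monotonic() if now is None else now
        recent = [t for t in self.updates.get(lba, []) if now - t <= AGE_LIMIT_S]
        self.updates[lba] = recent  # discard aged-out updates
        return len(recent) >= UPDATE_THRESHOLD
```

A block whose recent-update count falls back below the threshold would be demoted to standard cluster storage by the same test.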
  • the frequently updated sectors/blocks are stored individually on a non-cluster basis and not in specialized frequently updated single sector clusters.
  • FIG. 4 details a state transition diagram 400 for a cluster based address translation system incorporating frequently updated single sector clusters for non-volatile memory devices of the present invention.
  • a logical block address 402 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown), which looks it up in a cluster based address translation table 404 . If it is not a frequently updated sector/logical block 406 , the address translation system then uses the translated cluster address to access the indicated physical block 408 . If the logical block address is for a frequently updated logical block/sector, the address lookup 410 is done on a separate logical block address lookup table that only handles address translation for frequently updated logical blocks/sectors. The address translation system then uses the physical address from the frequently updated logical block/sector address lookup table to access 412 the indicated frequently updated single sector cluster/page of logical blocks.
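The two-table dispatch of FIG. 4 can be sketched as follows; the table layouts and names are hypothetical:

```python
def lookup_physical(lba, frequent_table, cluster_table, blocks_per_cluster=4):
    """Dispatch an access: frequently updated blocks resolve through their own
    lookup table (step 410), all other blocks through the cluster based
    address translation table (steps 404/408)."""
    if lba in frequent_table:
        # frequently updated sector/block: physical address of its single sector cluster
        return frequent_table[lba]
    # standard path: cluster base address plus block offset within the cluster
    cluster_base = cluster_table[lba // blocks_per_cluster]
    return cluster_base + (lba % blocks_per_cluster)
```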
  • Embodiments of the present invention utilize cluster based address translation to translate logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. This allows the use of a smaller RAM table for the address translation lookup and/or faster scanning of the memory device or memory subsystem for the matching cluster address.
  • variable cluster granularity (an adjustable number of blocks/sectors per cluster) allows the non-volatile memory storage to closely match its use and the data that will be stored in it.
  • a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the cluster stores a single sector/logical block and a new sequential physical sector/block of the cluster is written in turn with each new update of the logical block, the previous physical block holding the old data being invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move and invalidate/erase the cluster containing the old data, reducing memory cell write fatigue.
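A minimal sketch of such a frequently updated single sector cluster, with invalidation of earlier sectors modeled implicitly by write order (the on-media invalid-flag layout is not specified here):

```python
class SingleSectorCluster:
    """One logical block stored in a cluster of sequential physical sectors:
    each update is written to the next free sector, making all earlier
    sectors invalid, until the cluster is exhausted."""
    def __init__(self, num_sectors=8):
        self.sectors = [None] * num_sectors  # physical sectors of the cluster
        self.next_free = 0

    def update(self, data):
        if self.next_free >= len(self.sectors):
            return False  # cluster used up: must be relocated and erased
        self.sectors[self.next_free] = data  # write the new sequential physical sector
        self.next_free += 1                  # previous sector now holds invalid (old) data
        return True

    def read(self):
        """The most recently written sector holds the current data."""
        return self.sectors[self.next_free - 1] if self.next_free else None
```

Only when `update` returns `False` does the cluster need to be moved and erased, so repeated updates of the block avoid repeated erase cycles.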

Abstract

An improved non-volatile memory and logical block to physical block address translation method utilizing a cluster based addressing scheme is detailed. The translation of logical blocks/sectors to the physical blocks/sectors is necessary for a non-volatile memory to appear as a freely rewriteable device to a system or processor. Embodiments of the present invention utilize cluster based address translation to translate logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. This allows the use of a smaller RAM table for the address translation lookup and/or faster scanning of the memory device or memory subsystem for the matching cluster address. In one embodiment, a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the cluster stores a single logical block and a new sequential physical block of the cluster is written in turn with each update.

Description

    TECHNICAL FIELD OF THE INVENTION
  • The present invention relates generally to integrated circuits and in particular the present invention relates to sector address translation of non-volatile memory devices.
  • BACKGROUND OF THE INVENTION
  • Memory devices are typically provided as internal storage areas in the computer. The term memory identifies data storage that comes in the form of integrated circuit chips. There are several different types of memory used in modern electronics; one common type is RAM (random-access memory). RAM is characteristically found in use as main memory in a computer environment. RAM refers to read and write memory; that is, you can both write data into RAM and read data from RAM. This is in contrast to ROM, which permits you only to read data. Most RAM is volatile, which means that it requires a steady flow of electricity to maintain its contents. As soon as the power is turned off, whatever data was in RAM is lost.
  • Computers almost always contain a small amount of read-only memory (ROM) that holds instructions for starting up the computer. Unlike RAM, ROM cannot be written to. An EEPROM (electrically erasable programmable read-only memory) is a special type of non-volatile ROM that can be erased by exposing it to an electrical charge. EEPROMs comprise a large number of memory cells having electrically isolated gates (floating gates). Data is stored in the memory cells in the form of charge on the floating gates. Charge is transported to or removed from the floating gates by specialized programming and erase operations, respectively.
  • Yet another type of non-volatile memory is a Flash memory. A Flash memory is a type of EEPROM that can be erased and reprogrammed in blocks instead of one byte at a time. A typical Flash memory comprises a memory array, which includes a large number of memory cells. Each of the memory cells includes a floating gate field-effect transistor capable of holding a charge. The data in a cell is determined by the presence or absence of the charge in the floating gate. The cells are usually grouped into sections called “erase blocks.” The memory cells of a Flash memory array are typically arranged into a “NOR” architecture (the cells arranged in an array of rows and columns, each cell directly coupled to a bitline) or a “NAND” architecture (cells coupled into “strings” of cells, such that each cell is coupled indirectly to a bitline and requires activating the other cells of the string for access). Each of the cells within an erase block can be electrically programmed on a random basis by charging the floating gate. The charge can be removed from the floating gate by a block erase operation, wherein all floating gate memory cells in the erase block are erased in a single operation.
  • Because all the cells in an erase block of a Flash memory device must be erased all at once, one cannot directly rewrite a Flash memory cell without first engaging in a block erase operation. Erase block management (EBM) provides an abstraction layer for this to the host, allowing the Flash device to appear as a freely rewrite-able device. Erase block management also allows for load leveling of the internal floating gate memory cells to help prevent write fatigue failure. Write fatigue is where the floating gate memory cell, after repetitive writes and erasures, no longer properly erases and removes charge from the floating gate. Load leveling procedures increase the mean time between failure of the erase block and Flash memory device as a whole.
  • In many modern Flash memory device implementations, the host interface and/or erase block management routines additionally allow the Flash memory device to appear as a read/write mass storage device (i.e., a magnetic disk) to the host, storing data in the Flash memory in 512-byte sectors. As stated above, the erase block management routines along with the address translation layer provide the necessary linkage between the host and the internal Flash memory device erase block array; logically mapping logical sectors to physical sectors on the Flash device and managing block erasure.
  • To accomplish this mapping of a logical sector to a physical sector in the Flash memory of the prior art, either a table is kept in RAM or the physical sectors are scanned for the physical sector that contains the requested logical sector address. As the data storage capacity of modern Flash memories increases, the size of the required RAM table and/or the time required to scan the Flash memory for the requested sector is becoming an issue. This is a particularly important issue in resource-limited handheld or embedded devices.
  • For the reasons stated above, and for other reasons stated below which will become apparent to those skilled in the art upon reading and understanding the present specification, there is a need in the art for a Flash memory device and/or erase block management with improved logical to physical sector mapping.
  • SUMMARY OF THE INVENTION
  • The above-mentioned problems with logical to physical sector mapping and other problems are addressed by the present invention and will be understood by reading and studying the following specification.
  • The various embodiments relate to non-volatile memory devices and memory subsystems that utilize cluster based logical block/sector to physical block/sector address translation. As stated above, the translation of logical blocks/sectors to the physical blocks/sectors by a controller and/or software/firmware is necessary for a non-volatile memory to appear as a freely rewriteable device to the system or processor that it is coupled to. The controller or firmware responsible for this translation is called the translation layer (TL). Embodiments of the present invention utilize cluster based address translation to translate logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. Cluster address translation closely represents the actual data storage use of the file system and its logical block use/grouping. This allows the use of a smaller RAM table for the address translation lookup and/or faster scanning of the memory device or memory subsystem for the matching cluster address. In one embodiment, variable cluster granularity (an adjustable number of blocks/sectors per cluster) allows the non-volatile memory storage to closely match its use and the data that will be stored in it. In another embodiment of the present invention, a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the cluster stores a single sector/logical block and a new sequential physical sector/block of the cluster is written in turn with each new update of the logical block, the previous physical block holding the old data being invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move and invalidate/erase the cluster containing the old data.
  • For one embodiment, the invention provides a system comprising a host coupled to a non-volatile memory device, wherein the system is adapted to store logical blocks of data in the non-volatile memory device, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
  • In another embodiment, the invention provides a system comprising a host coupled to a non-volatile memory subsystem, wherein the non-volatile memory subsystem comprises a plurality of non-volatile memory devices, and wherein the system is adapted to store logical blocks of data in the non-volatile memory subsystem, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
  • In yet another embodiment, the invention provides a Flash memory device comprising a memory array having a plurality of floating gate memory cells arranged in a plurality of clusters, wherein each cluster contains a plurality of sequentially addressed sectors.
  • In a further embodiment, the invention provides a method of operating a non-volatile memory comprising storing logical blocks in clusters of sequentially addressed logical blocks in a non-volatile memory.
  • In yet a further embodiment, the invention provides a method of translating a logical block address to a physical address in a non-volatile memory comprising looking up a logical block address in a cluster address translation table to translate a logical cluster address to a cluster physical address, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks, and determining the physical block address offset for the logical block address within the physical cluster.
  • In another embodiment, the invention provides a method of translating a logical block address to a physical address in a non-volatile memory comprising scanning a non-volatile memory on physical cluster address basis to locate a logical cluster address associated with a physical cluster, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks, and determining the physical block address offset for the logical block address within the physical cluster.
  • Further embodiments of the invention include methods and apparatus of varying scope.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1A and 1B detail memory systems with memory and memory controllers in accordance with embodiments of the present invention.
  • FIGS. 2A and 2B detail encoding of logical address blocks/sectors in Flash memory arrays in accordance with embodiments of the present invention.
  • FIG. 3A details a block diagram of a logical block address translation in a memory system of the prior art.
  • FIGS. 3B and 3C detail block diagrams of cluster based logical block address translation in accordance with embodiments of the present invention.
  • FIG. 4 details a flowchart of cluster based logical block address translation in accordance with embodiments of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • In the following detailed description of the preferred embodiments, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific preferred embodiments in which the inventions may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical and electrical changes may be made without departing from the spirit and scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the claims.
  • To overcome the reliance on conventional logical block to physical block RAM address translation tables or physical block scans, with the above detailed issues of large RAM footprints and/or time consuming physical scans, a non-volatile memory of the present invention manages logical block address translation in a cluster based approach. The cluster based address translation approach allows a non-volatile or Flash memory embodiment of the present invention to translate cluster based logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. This allows for a smaller logical cluster to physical cluster address translation RAM look up table or a faster physical scan of the physical cluster addresses of the non-volatile memory device or subsystem, resulting in improved performance. As stated above, the translation of logical blocks/sectors to the physical blocks/sectors by a controller and/or software/firmware is necessary for a non-volatile memory to appear as a freely rewriteable device to the system or processor that it is coupled to. The controller or firmware responsible for this translation is called the translation layer (TL). In one embodiment, variable cluster granularity (an adjustable number of blocks/sectors per cluster) allows the non-volatile memory storage to closely match the access types and data that will be stored in it. In another embodiment of the present invention, a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the special cluster stores a single sector/logical block and a new sequential physical sector/block of the cluster is written in turn with each new update of the logical block, the previous physical block holding the old data being invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move and then invalidate and/or erase the cluster containing the old data.
  • As stated above, because all the cells in an erase block of a non-volatile memory device, and in particular, a Flash memory device are generally erased all at once, one cannot directly rewrite a memory cell without first engaging in a block erase operation. Erase Block Management (EBM), typically under the control of an internal state machine, an external memory controller, or software driver, provides an abstraction layer for this to the host (a system, a processor or an external memory controller), allowing the non-volatile device to appear as a freely rewriteable device, including, but not limited to, managing the logical address to physical address translation mapping with the translation layer, the assignment of erased and available erase blocks for utilization, and the scheduling of erase blocks that have been used and closed out for block erasure. Erase block management also allows for load leveling of the internal floating gate memory cells to help prevent write fatigue failure. Write fatigue is where the floating gate memory cell, after repetitive writes and erasures, no longer properly erases and removes charge from the floating gate. Load leveling procedures increase the mean time between failure of the erase block and non-volatile/Flash memory device as a whole.
  • As stated above, two common types of Flash memory array architectures are the “NAND” and “NOR” architectures, so called for the resemblance which the basic memory cell configuration of each architecture has to a basic NAND or NOR gate circuit, respectively. Other types of non-volatile memory include, but are not limited to, Polymer Memory, Ferroelectric Random Access Memory (FeRAM), Ovionics Unified Memory (OUM), Nitride Read Only Memory (NROM), and Magnetoresistive Random Access Memory (MRAM).
  • In the NOR Flash memory array architecture, the floating gate memory cells of the memory array are arranged in a matrix. The gates of each floating gate memory cell of the array matrix are connected by rows to word select lines (word lines) and their drains are connected to column bit lines. The source of each floating gate memory cell is typically connected to a common source line. The NOR architecture floating gate memory array is accessed by a row decoder activating a row of floating gate memory cells by selecting the word line connected to their gates. The row of selected memory cells then place their stored data values on the column bit lines by flowing a differing current, depending on whether they are in a programmed or unprogrammed state, from the connected source line to the connected column bit lines.
  • A NAND Flash memory array architecture also arranges its array of floating gate memory cells in a matrix such that the gates of each floating gate memory cell of the array are connected by rows to word lines. However, each memory cell is not directly connected to a source line and a column bit line. Instead, the memory cells of the array are arranged together in strings, typically of 8, 16, or more each, where the memory cells in the string are connected together in series, source to drain, between a common source line and a column bit line. The NAND architecture floating gate memory array is then accessed by a row decoder activating a row of floating gate memory cells by selecting the word select line connected to their gates. In addition, the word lines connected to the gates of the unselected memory cells of each string are also driven. However, the unselected memory cells of each string are typically driven by a higher gate voltage so as to operate them as pass transistors, allowing them to pass current in a manner that is unrestricted by their stored data values. Current then flows from the source line to the column bit line through each floating gate memory cell of the series connected string, restricted only by the memory cells of each string that are selected to be read, thereby placing the current encoded stored data values of the row of selected memory cells on the column bit lines.
  • Many of the modern computer operating systems, such as “DOS” (Disk Operating System), were developed to support the physical characteristics of hard drive structures; supporting file structures based on heads, cylinders and sectors. The DOS software stores and retrieves data based on these physical attributes. Magnetic hard disk drives operate by storing polarities on magnetic material. This material is able to be rewritten quickly and as often as desired. These characteristics have allowed DOS to develop a file structure that stores files at a given location, which is updated by a rewrite of that location as information is changed. Essentially all locations in DOS are viewed as fixed and do not change over the life of the disk drive being used therewith, and are easily updated by rewrites of the smallest supported block of this structure. A sector (of a magnetic disk drive) is the smallest unit of storage that the DOS operating system supports. In particular, a logical block or sector (referred to herein as a logical block) has come to mean 512 bytes of information for DOS and most other operating systems in existence. Flash and other non-volatile memory systems that emulate the storage characteristics of hard disk drives are preferably structured to support storage in 512 byte blocks along with additional storage for overhead associated with mass storage, such as ECC bits, status flags for the sector or erase block, and/or redundant bits.
  • In many modern Flash memory device implementations, the controller and/or software routines additionally allow the Flash memory device or a memory subsystem of Flash memory devices to appear as a read/write mass storage device (i.e., a magnetic disk) to the host by conforming the interface to the Flash memory to be identical to a standard interface for a conventional magnetic hard disk drive. This allows the Flash memory device to appear as a block read/write mass storage device or disk. This approach has been codified by the Personal Computer Memory Card International Association (PCMCIA), Compact Flash (CF), and Multimedia Card (MMC) standardization committees, which have each promulgated a standard for supporting Flash memory systems or Flash memory “cards” with a hard disk drive protocol. A Flash memory device or Flash memory card (including one or more Flash memory array chips) whose interface meets these standards can be plugged into a host system having a standard DOS or compatible operating system with a Personal Computer Memory Card International Association—Advanced Technology Attachment (PCMCIA-ATA) or standard ATA interface. Other additional Flash memory based mass storage devices of differing low level formats and interfaces also exist, such as Universal Serial Bus (USB) Flash drives.
  • The software routines that initialize and operate a device, such as a memory controller or a non-volatile memory device or subsystem are collectively referred to as firmware or ROM after the non-volatile read only memory (ROM) machine usable storage device on which such routines have historically been stored. It is noted that such firmware or ROM routines are stored on a variety of machine usable storage mediums that include, but are not limited to, a non-volatile Flash memory, a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a one time programmable (OTP) device, a complex programmable logic device (CPLD), a memory controller, an application specific integrated circuit (ASIC), a CD-ROM, a magnetic media disk, etc.
  • FIG. 1A is a simplified diagram of a system 100 that incorporates a Flash memory device 104 embodiment of the present invention. In the system 100 of FIG. 1A, the Flash memory 104 is coupled to a processor 102 with an address/data bus 106. Internally to the Flash memory device, a control state machine 110 directs internal operation of the Flash memory device; managing the Flash memory array 108 and updating control registers and tables 114. The Flash memory array 108 contains floating gate memory cells arranged in a sequence of erase blocks 116, 118. Each erase block 116, 118 contains a series of physical pages or rows 120, each page containing physical storage for one or more logical sectors or blocks 124 (shown here for illustration purposes as a single logical sector/block 124 per physical page/row 120) that contain a user data space and a control/overhead data space. The overhead data space contains overhead information for operation of the logical block 124, such as an error correction code (not shown), status flags, or an erase block management data field area (not shown). The user data space in each logical block 124 is typically 512 bytes long. It is noted that other interfaces to the Flash memory 104 and formats for the erase blocks 116, 118, physical pages 120, and logical sectors/blocks 124 are possible and should be apparent to those skilled in the art with benefit of the present disclosure. It is also noted that additional Flash memory devices 104 may be incorporated into the system 100 as required. In FIG. 1A, the logical blocks are arranged in clusters and address translation of the logical block address to physical block address in the Flash memory 104 utilizes cluster based address translation incorporating embodiments of the present invention.
  • FIG. 1B is a simplified diagram of another system 150 that incorporates a Flash memory system (also known as a memory subsystem) 160 embodiment of the present invention. In the system 150 of FIG. 1B, the Flash memory system 160, such as a memory system or Flash memory card, is coupled to a processor 152 with an address 154, control 156, and data bus 158. Internal to the Flash memory system 160, a memory controller 166 directs internal operation of the Flash memory system 160; managing the Flash memory devices 162, directing data accesses, updating internal control registers and tables (not shown), and/or directing operation of other possible hardware systems (not shown) of the Flash memory system 160. The memory controller 166 is coupled to and controls one or more Flash memory devices 162 via an internal control bus 164. The logical blocks 124 of the one or more Flash memory devices 162 are arranged in clusters and the memory controller 166 has an internal cluster based address translation layer (not shown) that incorporates embodiments of the present invention. The memory controller 166 may optionally incorporate a small local embedded processor to help manage the Flash memory system 160. It is noted that other architectures of Flash memory systems 160, external interfaces 154, 156, 158, and manners of coupling the memory controller 166 to the Flash memory devices 162, such as directly coupled individual control busses and signal lines, are possible and should be apparent to those skilled in the art with benefit of the present disclosure.
  • The Flash memory devices 162 each contain a sequence of erase blocks 116, 118 in their internal memory arrays. Each erase block 116, 118 contains a series of physical pages 120, each physical page 120 having one or more logical sectors or blocks 124 that contain a user data space and a control/overhead data space (shown here for illustration purposes as a single logical sector/block 124 per physical page/row 120). The overhead data space can contain an ECC code (not shown) and other overhead information for operation of the logical block 124, such as status flags, or an erase block management data field area (not shown).
  • FIGS. 2A and 2B detail encoding 200, 220 of user data into sector/logical blocks of a Flash memory array. In FIG. 2A, user data 212 and header/overhead data 214 are shown in a memory array 202 (or in an erase block N 202 of a memory array), where a single 512-byte logical block is encoded in each physical page/row 210 of the memory array 202. The memory array 202 contains a series of rows 210, each row containing a logical block having a user data area 204 and an overhead data area 206.
  • In FIG. 2B, user data 226 and header/overhead data 228 are shown in a memory array 222 (or in an erase block N 222 of a memory array), where multiple logical blocks 232 are encoded in each physical page/row 230 of the memory array 222. As stated above, many memories support multiple logical sectors or logical blocks 232 within a single physical row page 230. In particular, NAND architecture Flash memories typically utilize this approach due to their generally higher memory cell density and larger row page sizes. The memory row 230 contains multiple logical blocks/sectors 232, each logical block 232 having a user data area 226 and an overhead data/block header section 228. In an example implementation, the row page 230 of FIG. 2B contains 2112 bytes of data (4×512 bytes user data+4×8 bytes ECC+32 bytes for overhead) and is formatted to contain four logical blocks 232 having a user data area 226 of 512-bytes each. The four logical sectors 232 are typically sequentially addressed N, N+1, N+2, and N+3, where N is a base logical sector address for the row page 230. It is noted that the row pages 210 and 230 of FIGS. 2A and 2B are for illustration purposes and that other row page sector formats of differing data sizes, numbers of logical blocks/sectors, and relative positioning of sectors are possible.
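The example row-page arithmetic above can be checked directly:

```python
# Worked check of the example NAND row-page format of FIG. 2B: four 512-byte
# logical blocks, an 8-byte ECC per block, plus 32 bytes of page overhead.
USER_DATA = 512
ECC = 8
BLOCKS_PER_PAGE = 4
PAGE_OVERHEAD = 32

# 4*(512+8) + 32 = 2112 bytes, matching the example row page size
page_bytes = BLOCKS_PER_PAGE * (USER_DATA + ECC) + PAGE_OVERHEAD

def block_addresses(base_n):
    """Sequential logical block addresses N..N+3 within one row page."""
    return [base_n + i for i in range(BLOCKS_PER_PAGE)]
```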
  • As stated above, in an erase block based non-volatile memory, the array is divided into a plurality of individually erasable groups of memory cells called erase blocks, which are each typically further divided into a plurality of 512-byte physical blocks. Before use, the non-volatile memory is formatted to conform to the data structures and management data fields/tables of the file system or memory structure being represented. Each physical block of the memory array also may contain a header or overhead data area that typically includes various data used in the management of the physical block. This management data can include such items as the status of the physical block (valid, erased/available, or to be erased/invalid) and an error correction code (ECC) for the data of the logical block. In addition, the header typically also includes an identifier that identifies the logical block address for the physical block.
  • As previously stated, the translation layer in conjunction with the erase block management manages the storage of logical blocks in non-volatile memory devices or a non-volatile memory subsystem. The client of a translation layer is typically the file system or operating system of an associated system or processor. The goal of the translation layer/EBM layer is to make the non-volatile memory appear as a freely rewriteable device or magnetic disk/hard drive, allowing the client to read and write logical blocks to the coupled non-volatile memory. It is noted that other translation layers can allow the direct reading and writing of data to a non-volatile memory without presenting the non-volatile memory as a formatted file system.
  • As stated above, in prior art memory systems, the address translation layer translates the accessed logical blocks to a physical block address through the use of a lookup table or, alternatively, through a scan of the physical blocks of the non-volatile memory system or device. FIG. 3A details a simplified block diagram of a prior art lookup table address translation system 300. In FIG. 3A, a logical block address 302 of a logical block read/write access request is submitted to the address translation layer (not shown, but can be either a firmware routine executing on a processor of a system, address translation hardware of a memory controller or in a control circuit internal to the memory itself) which translates it to a physical block address by reference to a lookup table 304, which is typically held in RAM. The address translation system 300 then uses the translated logical address to access the indicated physical block from a row 308 of a non-volatile memory array 306. In a prior art physical scan address translation system, the physical blocks 308 of the memory array 306 would be scanned by the address translation system 300 for a header that contained the matching logical block address 302.
  • Embodiments of the present invention utilize cluster based logical block/sector to physical block/sector address translation in non-volatile memory devices and memory subsystems. In cluster based addressing and address translation, the non-volatile memory device or non-volatile memory subsystem is divided into a plurality of sequentially addressed clusters, wherein each cluster contains a plurality of sequentially addressed logical blocks or sectors. In one example embodiment, a cluster contains 4 sequential logical blocks. To translate a logical block address to a physical block address, a table lookup of the logical cluster address of the cluster containing the logical block returns the base physical address of the cluster in the non-volatile memory. An address offset from the cluster base address or a short physical scan can then be used to access the requested logical block, which is sequentially addressed within the cluster.
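The table lookup plus offset scheme described above can be sketched in C. The geometry (4 sequential 512-byte blocks per cluster) follows the example embodiment; the `cluster_table` contents, the `translate` name, and the use of flat byte addresses are illustrative assumptions, not details taken from the specification:

```c
#include <stdint.h>

/* Assumed geometry: 4 sequential 512-byte logical blocks per cluster. */
#define BLOCKS_PER_CLUSTER 4
#define BLOCK_SIZE         512

/* Hypothetical cluster lookup table: logical cluster index -> physical
 * base byte address of that cluster in the non-volatile array. */
static const uint32_t cluster_table[] = { 0x20000, 0x08000, 0x14000 };

/* Translate a logical block address to a physical byte address. */
uint32_t translate(uint32_t lba)
{
    uint32_t cluster = lba / BLOCKS_PER_CLUSTER;  /* logical cluster index */
    uint32_t offset  = lba % BLOCKS_PER_CLUSTER;  /* block within cluster  */
    return cluster_table[cluster] + offset * BLOCK_SIZE;
}
```

Note that only one table entry per cluster is kept in RAM; the per-block position falls out of the sequential addressing within the cluster.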
  • Cluster address translation allows data storage to be closely matched to its use. In addition, the reduced number of base cluster addresses allows the use of a smaller lookup table that contains only the cluster addresses, giving a smaller RAM footprint. Physical scan address translation of the non-volatile memory is also improved by cluster based addressing because fewer base addresses need to be scanned (logical blocks not on the dividing boundary between clusters/not containing the cluster header can be skipped over, reducing the physical scan by a factor of the cluster granularity).
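As a rough illustration of the reduced scan, the following C sketch steps through the block headers one cluster at a time, so intra-cluster blocks are never examined. The function name, header layout, and erased-header value are assumptions for illustration:

```c
#include <stdint.h>

#define BLOCKS_PER_CLUSTER 4   /* cluster granularity (assumed) */

/* block_headers[i] holds the logical cluster address recorded in physical
 * block i's header; in this sketch only the first block of each cluster
 * carries it, and the rest read as erased (0xFFFFFFFF). */
int scan_for_cluster(const uint32_t *block_headers, int n_blocks, uint32_t want)
{
    /* Step by the cluster size: blocks inside a cluster are skipped. */
    for (int i = 0; i < n_blocks; i += BLOCKS_PER_CLUSTER)
        if (block_headers[i] == want)
            return i;          /* physical block index of the cluster base */
    return -1;                 /* no cluster claims this logical address   */
}
```

With 4 blocks per cluster the scan touches one quarter of the headers a per-block scan would; larger cluster granularity reduces it further.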
  • In one embodiment of the present invention, an individual logical block address is translated to an exact physical block location by integer dividing the logical block address by the number of logical blocks per cluster. The quotient is used to index into the cluster address lookup table. The remainder is the index of the sector/block within the selected cluster (the sector number among the sequential sectors of the cluster). The remainder is multiplied by 512 (the sector/block size in bytes) and added to the retrieved cluster base address to obtain the physical address of the sector/block within the non-volatile memory. In another embodiment, where the number of blocks per cluster is a power of 2, the division can be done by simply masking off one or more of the least significant bits of the logical block address (the part of the binary address that addresses the logical block within the cluster) to obtain the index into the cluster address translation lookup table and retrieve the associated physical cluster base address. The most significant bits can then be masked off to obtain the index of the logical block within the cluster.
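For the power-of-2 case, the division and remainder reduce to a shift and a bit mask: shifting right by log2(blocks per cluster) "masks off" the low bits to give the table index, and keeping only the low bits gives the block's position in the cluster. A minimal sketch, again assuming 4 blocks per cluster (the names are illustrative):

```c
#include <stdint.h>

/* Assumed: blocks per cluster is a power of two (4 here, so shift = 2). */
#define CLUSTER_SHIFT 2
#define CLUSTER_MASK  ((1u << CLUSTER_SHIFT) - 1)

/* Index into the cluster address lookup table: drop the low bits. */
uint32_t cluster_index(uint32_t lba)
{
    return lba >> CLUSTER_SHIFT;
}

/* Index of the logical block inside its cluster: keep only the low bits. */
uint32_t block_in_cluster(uint32_t lba)
{
    return lba & CLUSTER_MASK;
}
```

Both operations are single instructions on most controllers, which is why power-of-2 cluster granularities are attractive in practice.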
  • FIG. 3B details a simplified block diagram of a cluster based lookup table address translation system 320 of an embodiment of the present invention. In FIG. 3B, a logical block address 322 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown) which translates it to a physical cluster address by reference to a cluster address lookup table 324. A logical block address index to the selected logical block within the cluster is also generated. The address translation system 320 then uses the translated cluster address and the logical block index to access the indicated physical block from a row 328 of a non-volatile memory array 326. In a cluster based physical scan address translation system of an embodiment of the present invention, the physical clusters of the memory array 326 would be scanned by the address translation system 320 to locate the logical cluster address that contained the matching logical block address 322.
  • In one embodiment, the cluster granularity is adjustable and is selected upon memory device formatting or during system design and implementation, allowing for an adjustable number of blocks/sectors per cluster. This allows the non-volatile memory storage to be adjusted to closely match the data type and access usage it will be used for, the physical row size of the non-volatile memory for convenient accessing, the size of the cluster lookup table, and/or the scan time of the physical cluster scan.
  • In another embodiment of the present invention, a type of specially formatted cluster is utilized to store frequently updated sectors/logical blocks. This allows the cluster based translation layer to avoid having to frequently copy, update, and invalidate/erase a cluster containing an often updated sector/logical block, which would cause excessive wear on the non-volatile memory and premature write fatigue failure of the part. In the special frequently updated sector cluster (also known as a page of logical blocks or single sector cluster), the cluster stores a time-wise sequence of a single sector/logical block. With each new update of the stored logical block, the next physical sector/block of the cluster is written in turn and the previous physical block holding the old data is invalidated. This can continue until the entire cluster has been used up, allowing multiple updates of a logical sector without having to move the cluster and invalidate/erase the cluster containing the old data. In accessing the stored logical block, the address translation layer simply selects the most recently written/not invalid block of the single sector cluster.
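The behavior of such a single sector cluster can be sketched as follows. The structure, slot count, and function names are assumptions for illustration; a real implementation would record validity in each block's header in flash rather than in a RAM counter:

```c
#include <stdint.h>

#define PAGE_SLOTS 8            /* physical blocks per single sector cluster */
#define ERASED     0xFFFFFFFFu  /* erased flash reads as all ones */

struct single_sector_cluster {
    uint32_t slot[PAGE_SLOTS];  /* stand-in for 512-byte physical blocks */
    int next;                   /* next unused slot; earlier slots are invalid */
};

/* Write an update into the next unused block; returns -1 when the cluster
 * is exhausted and must be moved/erased like a standard cluster. */
int ssc_update(struct single_sector_cluster *c, uint32_t data)
{
    if (c->next == PAGE_SLOTS)
        return -1;
    c->slot[c->next++] = data;  /* old copy is implicitly invalidated */
    return 0;
}

/* Read the most recently written (still valid) copy of the logical block. */
uint32_t ssc_read(const struct single_sector_cluster *c)
{
    return c->next ? c->slot[c->next - 1] : ERASED;
}
```

Eight updates here cost one eventual erase instead of eight cluster copy/erase cycles, which is the wear saving the embodiment targets.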
  • FIG. 3C details a simplified block diagram of a cluster based lookup table address translation system 340 of an embodiment of the present invention that incorporates frequently updated sector cluster addressing. In FIG. 3C, a logical block address 342 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown) which, if it is not a frequently updated sector/logical block, translates it to a physical cluster address by reference to a cluster address lookup table 344. A logical block address index to the selected logical block within the cluster is also generated. The address translation system 340 then uses the translated cluster address and the logical block index to access the indicated physical block from a row 348 of a non-volatile memory array 346. If the logical block address is for a frequently updated logical block/sector, the address lookup is done on a separate logical block address lookup table 350 that only handles address translation for frequently updated logical blocks/sectors. The address translation system 340 then uses the physical address from the frequently updated logical block/sector address lookup table 350 to access the indicated cluster/page of logical blocks 352 and select the most recently updated logical block from it, allowing the frequently updated logical blocks to be managed on a separate basis.
  • In a cluster based lookup table address translation system that incorporates frequently updated sector cluster addressing, data written to the non-volatile memory is typically first placed in standard clusters and is promoted to storage in a frequently updated sector/page of logical blocks cluster 352 upon reaching a threshold number of updates. The update threshold can also be limited in time by ageing the last update, so that promotion happens only for logical blocks that have recently been updated on a frequent basis. It is noted that in one embodiment logical blocks could also be designated as frequently updated when initially written to the non-volatile memory by the client system. In another embodiment of the present invention, frequently updated logical blocks can also be demoted to standard cluster storage if they have not been updated recently or their number of recent updates falls below a moving average threshold level. This allows the specialized frequently updated single sector clusters to be minimized and utilized only for those sectors/blocks that require them. In an alternative embodiment, the frequently updated sectors/blocks are stored individually on a non-cluster basis and not in specialized frequently updated single sector clusters.
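The promotion and demotion policy above can be sketched as simple per-block bookkeeping. The thresholds, the notion of "time" as an operation count, and all names here are illustrative assumptions rather than values from the specification:

```c
/* Illustrative policy constants (assumptions, not from the patent). */
#define PROMOTE_THRESHOLD 4    /* updates within the window to promote  */
#define WINDOW            100  /* "recent" window, in write operations  */

struct block_stats {
    unsigned updates;     /* updates counted in the current window */
    unsigned last_update; /* operation count at the last update    */
    int frequent;         /* stored in a single sector cluster?    */
};

/* Record an update at operation count 'now'; promote when the block has
 * been updated frequently and recently. */
void note_update(struct block_stats *s, unsigned now)
{
    if (now - s->last_update > WINDOW)
        s->updates = 0;                 /* age out stale counts */
    s->last_update = now;
    if (++s->updates >= PROMOTE_THRESHOLD)
        s->frequent = 1;                /* promote */
}

/* Demote a block that has gone quiet back to a standard cluster. */
void maybe_demote(struct block_stats *s, unsigned now)
{
    if (s->frequent && now - s->last_update > WINDOW) {
        s->frequent = 0;
        s->updates = 0;
    }
}
```

The ageing term keeps one historical burst of writes from pinning a block in a single sector cluster forever, matching the demotion behavior described above.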
  • FIG. 4 details a state transition diagram 400 for a cluster based address translation system incorporating frequently updated single sector clusters for non-volatile memory devices of the present invention. As shown in FIG. 4, a logical block address 402 of a logical block read/write access request is submitted to the cluster based address translation layer (not shown), which looks it up in a cluster based address translation table 404. If it is not for a frequently updated sector/logical block 406, the address translation system then uses the translated cluster address to access the indicated physical block 408. If the logical block address is for a frequently updated logical block/sector, the address lookup 410 is done on a separate logical block address lookup table that only handles address translation for frequently updated logical blocks/sectors. The address translation system then uses the physical address from the frequently updated logical block/sector address lookup table to access 412 the indicated frequently updated single sector cluster/page of logical blocks.
  • It is noted that other cluster based address translation apparatuses and methods incorporating embodiments of the present invention are possible and will be apparent to those skilled in the art with the benefit of this disclosure.
  • CONCLUSION
  • An improved non-volatile memory and logical block to physical block address translation utilizing a cluster based addressing scheme has been detailed that enhances operation and helps minimize write fatigue of the memory cells of the non-volatile memory device. Embodiments of the present invention utilize cluster based address translation to translate logical block addresses to physical block addresses, wherein each cluster contains a plurality of sequentially addressed logical blocks. This allows the use of a smaller RAM table for the address translation lookup and/or faster scanning of the memory device or memory subsystem for the matching cluster address. In one embodiment, variable cluster granularity (an adjustable number of blocks/sectors per cluster) allows the non-volatile memory storage to closely match its application and the data that will be stored in it. In another embodiment of the present invention, a specially formatted cluster is utilized for frequently updated sectors/logical blocks, where the cluster stores a single sector/logical block: with each new update of the logical block, the next sequential physical sector/block of the cluster is written in turn and the previous physical block holding the old data is invalidated, until the entire cluster has been used. This allows multiple updates of a logical sector without having to move and invalidate/erase the cluster containing the old data, reducing memory cell write fatigue.
  • Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that any arrangement, which is calculated to achieve the same purpose, may be substituted for the specific embodiment shown. This application is intended to cover any adaptations or variations of the present invention. Therefore, it is manifestly intended that this invention be limited only by the claims and the equivalents thereof.

Claims (64)

1. A Flash memory device comprising:
a memory array having a plurality of floating gate memory cells arranged in a plurality of clusters, wherein each cluster contains a plurality of sequentially addressed sectors.
2. The Flash memory device of claim 1, further comprising:
a control circuit, wherein the control circuit is adapted to access a sector from the memory array by translating a logical address of the sector to a physical sector address of the memory array in reference to the logical cluster address of the physical cluster the sector is stored within.
3. The Flash memory device of claim 1, wherein the Flash memory device is one of a NAND Flash memory device and a NOR Flash memory device.
4. The Flash memory device of claim 1, wherein the Flash memory device is adapted to access logical blocks of data in the memory array utilizing a cluster address translation lookup table to retrieve the physical address in the memory array of the cluster containing the accessed logical block.
5. The Flash memory device of claim 1, wherein the Flash memory device is adapted to store one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
6. The Flash memory device of claim 5, wherein the Flash memory device is adapted to store the one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
7. The Flash memory device of claim 5, wherein the Flash memory device is adapted to access the one or more frequently updated logical blocks from the memory array utilizing a separate frequently updated logical block address translation lookup table to translate the logical address to a physical address.
8. A Flash memory subsystem comprising:
a plurality of Flash memory devices, wherein each Flash memory device contains a memory array having a plurality of floating gate memory cells arranged in a plurality of clusters, wherein each cluster contains a plurality of sequentially addressed sectors.
9. The Flash memory subsystem of claim 8, further comprising:
a memory controller coupled to the plurality of Flash memory devices, wherein the memory controller is adapted to access a sector from the plurality of Flash memory devices by translating a logical address of the sector to a physical sector address in the plurality of Flash memory devices in reference to the logical cluster address of the physical cluster the sector is stored within.
10. The Flash memory subsystem of claim 8, wherein each of the Flash memory devices is one of a NAND Flash memory device and a NOR Flash memory device.
11. The Flash memory subsystem of claim 8, wherein the Flash memory subsystem is adapted to access logical blocks of data in the plurality of Flash memory devices utilizing a cluster address translation lookup table to retrieve the physical address in the plurality of Flash memory devices of the cluster containing the accessed logical block.
12. The Flash memory subsystem of claim 8, wherein the Flash memory subsystem is adapted to store one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
13. The Flash memory subsystem of claim 12, wherein the Flash memory subsystem is adapted to store the one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
14. The Flash memory subsystem of claim 12, wherein the Flash memory subsystem is adapted to access the one or more frequently updated logical blocks from the plurality of Flash memory devices utilizing a separate frequently updated logical block address translation lookup table to translate the logical address to a physical address.
15. A system comprising:
a host coupled to a non-volatile memory device, wherein the system is adapted to store logical blocks of data in the non-volatile memory device, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
16. The system of claim 15, wherein the non-volatile memory device is adapted to appear as a rewriteable storage device.
17. The system of claim 15, wherein the host is one of a processor or a memory controller.
18. The system of claim 15, wherein the non-volatile memory is one of a NAND Flash memory device, a NOR Flash memory device, a Polymer memory device, a Ferroelectric Random Access Memory (FeRAM) device, an Ovonic Unified Memory (OUM) device, a Nitride Read Only Memory (NROM) device, and a Magnetoresistive Random Access Memory (MRAM) device.
19. The system of claim 15, wherein the system is adapted to access logical blocks of data in the non-volatile memory device utilizing a cluster address translation lookup table to retrieve the physical address in the non-volatile memory of the cluster containing the accessed logical block.
20. The system of claim 15, wherein the system is adapted to access logical blocks of data in the non-volatile memory device utilizing a scan of the clusters of the non-volatile memory to locate the cluster that has the required logical base address and contains the accessed logical block.
21. The system of claim 15, wherein the system is adapted to store one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
22. The system of claim 21, wherein the system is adapted to store the one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
23. The system of claim 21, wherein the system is adapted to access the one or more frequently updated logical blocks from the non-volatile memory device utilizing a separate frequently updated logical block address translation lookup table to translate the logical address to a physical address.
24. The system of claim 21, wherein the system is adapted to promote frequently updated logical blocks to be stored in a frequently updated single sector cluster.
25. The system of claim 21, wherein the system is adapted to demote frequently updated logical blocks from being stored in a frequently updated single sector cluster to being stored in a conventional cluster containing sequentially addressed logical blocks.
26. The system of claim 15, wherein the non-volatile memory device is a non-volatile memory subsystem, the non-volatile memory subsystem comprising a plurality of non-volatile memory devices.
27. The system of claim 26, wherein the non-volatile memory subsystem further comprises a memory controller.
28. A system comprising:
a host coupled to a non-volatile memory subsystem, wherein the non-volatile memory subsystem comprises a plurality of non-volatile memory devices; and
wherein the system is adapted to store logical blocks of data in the non-volatile memory subsystem, where the logical blocks are grouped in a plurality of clusters, each cluster containing a plurality of sequentially addressed logical blocks.
29. The system of claim 28, wherein the host is one of a processor or a memory controller.
30. The system of claim 28, wherein each of the non-volatile memory devices are one of a NAND Flash memory device, a NOR Flash memory device, a Polymer memory device, a Ferroelectric Random Access Memory (FeRAM) device, an Ovonic Unified Memory (OUM) device, a Nitride Read Only Memory (NROM) device, and a Magnetoresistive Random Access Memory (MRAM) device.
31. The system of claim 28, wherein the system is adapted to access logical blocks of data in the non-volatile memory subsystem utilizing a cluster address translation lookup table to retrieve the physical address in the non-volatile memory subsystem of the cluster containing the accessed logical block.
32. The system of claim 31, wherein an index into the cluster address translation lookup table to retrieve the physical cluster address is generated by integer dividing the logical block address by the total number of clusters.
33. The system of claim 32, wherein the logical block within the physical cluster is selected using the remainder of the integer division of the logical block address by the total number of clusters.
34. The system of claim 31, wherein the total number of clusters is a power of two and an index into the cluster address translation lookup table to retrieve the physical cluster address is generated by a binary mask of one or more of the least significant bits of the logical block address.
35. The system of claim 34, wherein the logical block within the physical cluster is selected by masking off one or more of the most significant bits of the logical block address.
36. The system of claim 28, wherein the system is adapted to store one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
37. The system of claim 36, wherein the system is adapted to store the one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
38. The system of claim 36, wherein the system is adapted to access the one or more frequently updated logical blocks from the non-volatile memory subsystem utilizing a separate frequently updated logical block address translation lookup table to translate the logical address to a physical address.
39. A method of operating a non-volatile memory comprising:
storing logical blocks in clusters of sequentially addressed logical blocks in a non-volatile memory.
40. The method of claim 39, wherein the non-volatile memory is one of a non-volatile memory device, a non-volatile memory array, and a non-volatile memory subsystem.
41. The method of claim 39, wherein storing logical blocks in clusters of sequentially addressed logical blocks in a non-volatile memory further comprises translating a logical address of the logical block to a physical block address by using a cluster address translation lookup table to retrieve the physical address of the physical cluster the logical block is to be stored within.
42. The method of claim 41, further comprising:
generating an index into the cluster address translation lookup table to retrieve the physical cluster address by integer dividing the logical block address by the total number of clusters.
43. The method of claim 42, further comprising:
selecting the logical block within the physical cluster using the remainder of the integer division of the logical block address by the total number of clusters.
44. The method of claim 41, further comprising:
generating an index into the cluster address translation lookup table to retrieve the physical cluster address by a binary mask of one or more of the least significant bits of the logical block address, wherein the total number of clusters is a power of two.
45. The method of claim 44, further comprising:
selecting the logical block within the physical cluster by masking off one or more of the most significant bits of the logical block address.
46. The method of claim 39, further comprising:
storing one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
47. The method of claim 46, wherein storing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises storing one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
48. The method of claim 46, wherein storing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises translating a logical block address to a physical block address for the one or more frequently updated logical blocks utilizing a frequently updated logical block address translation lookup table.
49. The method of claim 46, wherein storing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises promoting logical blocks that are frequently updated to be stored in a frequently updated single sector cluster.
50. The method of claim 46, wherein storing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises demoting logical blocks that are not frequently updated from being stored in a frequently updated single sector cluster to being stored in a conventional cluster containing sequentially addressed logical blocks.
51. A method of operating a non-volatile memory comprising:
accessing logical blocks in a non-volatile memory by reference to a logical cluster address, wherein each cluster contains a plurality of sequentially addressed logical blocks.
52. The method of claim 51, wherein accessing logical blocks in a non-volatile memory by reference to a logical cluster address further comprises translating a logical address of the logical block to a physical block address by using a cluster address translation lookup table to retrieve the physical address of the physical cluster the logical block is stored within.
53. The method of claim 52, further comprising:
generating an index into the cluster address translation lookup table to retrieve the physical cluster address by integer dividing the logical block address by the total number of clusters.
54. The method of claim 52, further comprising:
generating an index into the cluster address translation lookup table to retrieve the physical cluster address by a binary mask of one or more of the least significant bits of the logical block address, wherein the total number of clusters is a power of two.
55. The method of claim 51, further comprising:
accessing one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
56. The method of claim 55, wherein accessing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises accessing one or more frequently updated logical blocks in one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
57. The method of claim 55, wherein accessing one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises translating a logical block address to a physical block address for the one or more frequently updated logical blocks utilizing a frequently updated logical block address translation lookup table.
58. A method of translating a logical block address to a physical address in a non-volatile memory comprising:
looking up a logical block address in a cluster address translation table to translate a logical cluster address to a cluster physical address, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks; and
determining the physical block address offset for the logical block address within the physical cluster.
59. The method of claim 58, further comprising:
generating an index into the cluster address translation lookup table to translate the physical cluster address by integer dividing the logical block address by the total number of clusters.
60. The method of claim 58, further comprising:
generating an index into the cluster address translation lookup table to translate the physical cluster address by applying a binary mask to one or more of the least significant bits of the logical block address, wherein the total number of clusters is a power of two.
61. The method of claim 58, further comprising:
looking up the addresses of one or more frequently updated logical blocks separately from non-frequently updated logical blocks.
62. The method of claim 61, wherein looking up the addresses of one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises looking up the addresses of one or more frequently updated logical blocks within one or more frequently updated single sector clusters, where each one or more frequently updated single sector clusters contains a plurality of physical blocks for storage of a single logical block, such that each update of the stored logical block is written to a new unused physical block of the cluster.
63. The method of claim 61, wherein looking up the addresses of one or more frequently updated logical blocks separately from non-frequently updated logical blocks further comprises looking up the addresses of one or more frequently updated logical blocks by translating a logical block address to a physical block address for the one or more frequently updated logical blocks utilizing a frequently updated logical block address translation lookup table.
64. A method of translating a logical block address to a physical address in a non-volatile memory comprising:
scanning a non-volatile memory on a physical cluster address basis to locate a logical cluster address associated with a physical cluster, wherein each cluster of the non-volatile memory contains a plurality of sequentially addressed logical blocks; and
determining the physical block address offset for the logical block address within the physical cluster.
US10/933,017 2004-09-02 2004-09-02 Cluster based non-volatile memory translation layer Abandoned US20060044934A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/933,017 US20060044934A1 (en) 2004-09-02 2004-09-02 Cluster based non-volatile memory translation layer
US12/372,405 US8375157B2 (en) 2004-09-02 2009-02-17 Cluster based non-volatile memory translation layer
US13/764,213 US8595424B2 (en) 2004-09-02 2013-02-11 Cluster based non-volatile memory translation layer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/933,017 US20060044934A1 (en) 2004-09-02 2004-09-02 Cluster based non-volatile memory translation layer

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/372,405 Continuation US8375157B2 (en) 2004-09-02 2009-02-17 Cluster based non-volatile memory translation layer

Publications (1)

Publication Number Publication Date
US20060044934A1 true US20060044934A1 (en) 2006-03-02

Family

ID=35942865

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/933,017 Abandoned US20060044934A1 (en) 2004-09-02 2004-09-02 Cluster based non-volatile memory translation layer
US12/372,405 Active 2024-09-11 US8375157B2 (en) 2004-09-02 2009-02-17 Cluster based non-volatile memory translation layer
US13/764,213 Active US8595424B2 (en) 2004-09-02 2013-02-11 Cluster based non-volatile memory translation layer

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/372,405 Active 2024-09-11 US8375157B2 (en) 2004-09-02 2009-02-17 Cluster based non-volatile memory translation layer
US13/764,213 Active US8595424B2 (en) 2004-09-02 2013-02-11 Cluster based non-volatile memory translation layer

Country Status (1)

Country Link
US (3) US20060044934A1 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050219906A1 (en) * 2004-01-27 2005-10-06 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US20050237801A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with charge balancing for charge trapping non-volatile memory
US20050237809A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with high work function gate and charge balancing for charge trapping non-volatile memory
US20050237816A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme for spectrum shift in charge trapping non-volatile memory
US20050237815A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with charge balancing erase for charge trapping non-volatile memory
US20050281085A1 (en) * 2004-06-17 2005-12-22 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US20060224817A1 (en) * 2005-03-31 2006-10-05 Atri Sunil R NOR flash file allocation
US20060281331A1 (en) * 2004-11-29 2006-12-14 Macronix International Co., Ltd. Charge trapping dielectric structure for non-volatile memory
US20070133307A1 (en) * 2005-12-06 2007-06-14 Macronix International Co., Ltd. Methods to resolve hard-to-erase condition in charge trapping non-volatile memory
US20080103656A1 (en) * 2006-10-26 2008-05-01 Spx Corporation Universal serial bus memory device for use in a vehicle diagnostic device
US20080235486A1 (en) * 2007-03-20 2008-09-25 Micron Technology, Inc. Non-volatile memory devices, systems including same and associated methods
US20090240903A1 (en) * 2008-03-20 2009-09-24 Dell Products L.P. Methods and Apparatus for Translating a System Address
US20100085821A1 (en) * 2008-10-06 2010-04-08 Samsung Electronics Co., Ltd. Operation method of non-volatile memory
US20100325467A1 (en) * 2006-03-31 2010-12-23 Takashi Oshima Memory system and controller
US20110267899A1 (en) * 2010-04-29 2011-11-03 Hyung-Gon Kim Non-volatile memory device and non-volatile memory system having the same
US8209471B2 (en) 2008-03-01 2012-06-26 Kabushiki Kaisha Toshiba Memory system
WO2012166535A2 (en) * 2011-05-31 2012-12-06 Micron Technology, Inc. Apparatus including memory system controllers and related methods
US20130080731A1 (en) * 2011-09-28 2013-03-28 Ping-Yi Hsu Method and apparatus for performing memory management
US20150067297A1 (en) * 2013-08-29 2015-03-05 International Business Machines Corporation Direct memory access (dma) address translation with a consecutive count field
US20150149712A1 (en) * 2008-10-13 2015-05-28 Micron Technology, Inc. Translation layer in a solid state storage device
US20160011782A1 (en) * 2013-02-27 2016-01-14 Hitachi, Ltd. Semiconductor storage
CN105786406A (en) * 2016-02-26 2016-07-20 湖南国科微电子股份有限公司 Method for establishing a CE NAND flash page model supporting multi-channel master-controller concurrency, and page model
US20170090782A1 (en) * 2015-09-30 2017-03-30 Apacer Technology Inc. Writing management method and writing management system for solid state drive
CN106557273A (en) * 2015-09-30 2017-04-05 宇瞻科技股份有限公司 Data management method, writing management system and method for solid state drive
US20170293553A1 (en) * 2016-04-06 2017-10-12 Sandisk Technologies Inc. Memory erase management
CN108563590A (en) * 2018-06-28 2018-09-21 北京智芯微电子科技有限公司 OTP controller based on piece FLASH memory and control method
US20190146831A1 (en) * 2017-11-10 2019-05-16 Advanced Micro Devices, Inc. Thread switch for accesses to slow memory
CN114442911A (en) * 2020-11-06 2022-05-06 戴尔产品有限公司 System and method for asynchronous input/output scanning and aggregation for solid state drives
US11461225B2 (en) * 2019-04-05 2022-10-04 Buffalo Inc. Storage device, control method of storage device, and storage medium

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080155175A1 (en) * 2006-12-26 2008-06-26 Sinclair Alan W Host System That Manages a LBA Interface With Flash Memory
TWI395100B (en) * 2009-01-13 2013-05-01 Innostor Technology Corp Method for processing data of flash memory by separating levels and flash memory device thereof
US8443167B1 (en) 2009-12-16 2013-05-14 Western Digital Technologies, Inc. Data storage device employing a run-length mapping table and a single address mapping table
US8194340B1 (en) 2010-03-18 2012-06-05 Western Digital Technologies, Inc. Disk drive framing write data with in-line mapping data during write operations
US8687306B1 (en) 2010-03-22 2014-04-01 Western Digital Technologies, Inc. Systems and methods for improving sequential data rate performance using sorted data zones
US9330715B1 (en) 2010-03-22 2016-05-03 Western Digital Technologies, Inc. Mapping of shingled magnetic recording media
US8856438B1 (en) 2011-12-09 2014-10-07 Western Digital Technologies, Inc. Disk drive with reduced-size translation table
US8693133B1 (en) 2010-03-22 2014-04-08 Western Digital Technologies, Inc. Systems and methods for improving sequential data rate performance using sorted data zones for butterfly format
US8699185B1 (en) 2012-12-10 2014-04-15 Western Digital Technologies, Inc. Disk drive defining guard bands to support zone sequentiality when butterfly writing shingled data tracks
US8667248B1 (en) 2010-08-31 2014-03-04 Western Digital Technologies, Inc. Data storage device using metadata and mapping table to identify valid user data on non-volatile media
US8954664B1 (en) 2010-10-01 2015-02-10 Western Digital Technologies, Inc. Writing metadata files on a disk
US8756361B1 (en) 2010-10-01 2014-06-17 Western Digital Technologies, Inc. Disk drive modifying metadata cached in a circular buffer when a write operation is aborted
US8793429B1 (en) 2011-06-03 2014-07-29 Western Digital Technologies, Inc. Solid-state drive with reduced power up time
US8756382B1 (en) 2011-06-30 2014-06-17 Western Digital Technologies, Inc. Method for file based shingled data storage utilizing multiple media types
US9213493B1 (en) 2011-12-16 2015-12-15 Western Digital Technologies, Inc. Sorted serpentine mapping for storage drives
US8819367B1 (en) 2011-12-19 2014-08-26 Western Digital Technologies, Inc. Accelerated translation power recovery
US8612706B1 (en) 2011-12-21 2013-12-17 Western Digital Technologies, Inc. Metadata recovery in a disk drive
US9304906B2 (en) * 2013-09-10 2016-04-05 Kabushiki Kaisha Toshiba Memory system, controller and control method of memory
US8953269B1 (en) 2014-07-18 2015-02-10 Western Digital Technologies, Inc. Management of data objects in a data object zone
US9875055B1 (en) 2014-08-04 2018-01-23 Western Digital Technologies, Inc. Check-pointing of metadata
US9846552B2 (en) * 2014-11-13 2017-12-19 Toshiba Memory Corporation Memory device and storage system having the same
US10503657B2 (en) 2015-10-07 2019-12-10 Samsung Electronics Co., Ltd. DIMM SSD Addressing performance techniques
US10031674B2 (en) * 2015-10-07 2018-07-24 Samsung Electronics Co., Ltd. DIMM SSD addressing performance techniques
GB2551756B (en) 2016-06-29 2019-12-11 Advanced Risc Mach Ltd Apparatus and method for performing segment-based address translation

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5590300A (en) * 1991-03-05 1996-12-31 Zitel Corporation Cache memory utilizing address translation table
US5930832A (en) * 1996-06-07 1999-07-27 International Business Machines Corporation Apparatus to guarantee TLB inclusion for store operations
US6029463A (en) * 1995-12-22 2000-02-29 Thermoprodukter Ab Method and apparatus for cooling or condensing mediums
US6069638A (en) * 1997-06-25 2000-05-30 Micron Electronics, Inc. System for accelerated graphics port address remapping interface to main memory
US6249853B1 (en) * 1997-06-25 2001-06-19 Micron Electronics, Inc. GART and PTES defined by configuration registers
US6330654B1 (en) * 1999-08-26 2001-12-11 Micron Technology, Inc. Memory cache with sequential page indicators
US6346946B1 (en) * 1998-10-23 2002-02-12 Micron Technology, Inc. Graphics controller embedded in a core logic unit
US6449679B2 (en) * 1999-02-26 2002-09-10 Micron Technology, Inc. RAM controller interface device for RAM compatibility (memory translator hub)
US20030046482A1 (en) * 2001-08-28 2003-03-06 International Business Machines Corporation Data management in flash memory
US6625715B1 (en) * 1999-12-30 2003-09-23 Intel Corporation System and method for translation buffer accommodating multiple page sizes
US20040030847A1 (en) * 2002-08-06 2004-02-12 Tremaine Robert B. System and method for using a compressed main memory based on degree of compressibility
US20050286336A1 (en) * 1989-04-13 2005-12-29 Eliyahou Harari Flash EEprom system
US7334108B1 (en) * 2004-01-30 2008-02-19 Nvidia Corporation Multi-client virtual address translation system with translation units of variable-range size

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5706472A (en) * 1995-02-23 1998-01-06 Powerquest Corporation Method for manipulating disk partitions
US6026463A (en) 1997-09-10 2000-02-15 Micron Electronics, Inc. Method for improving data transfer rates for user data stored on a disk storage device

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050286336A1 (en) * 1989-04-13 2005-12-29 Eliyahou Harari Flash EEprom system
US5590300A (en) * 1991-03-05 1996-12-31 Zitel Corporation Cache memory utilizing address translation table
US6029463A (en) * 1995-12-22 2000-02-29 Thermoprodukter Ab Method and apparatus for cooling or condensing mediums
US5930832A (en) * 1996-06-07 1999-07-27 International Business Machines Corporation Apparatus to guarantee TLB inclusion for store operations
US6418523B2 (en) * 1997-06-25 2002-07-09 Micron Electronics, Inc. Apparatus comprising a translation lookaside buffer for graphics address remapping of virtual addresses
US6069638A (en) * 1997-06-25 2000-05-30 Micron Electronics, Inc. System for accelerated graphics port address remapping interface to main memory
US6249853B1 (en) * 1997-06-25 2001-06-19 Micron Electronics, Inc. GART and PTES defined by configuration registers
US6346946B1 (en) * 1998-10-23 2002-02-12 Micron Technology, Inc. Graphics controller embedded in a core logic unit
US6449679B2 (en) * 1999-02-26 2002-09-10 Micron Technology, Inc. RAM controller interface device for RAM compatibility (memory translator hub)
US6330654B1 (en) * 1999-08-26 2001-12-11 Micron Technology, Inc. Memory cache with sequential page indicators
US6625715B1 (en) * 1999-12-30 2003-09-23 Intel Corporation System and method for translation buffer accommodating multiple page sizes
US20030046482A1 (en) * 2001-08-28 2003-03-06 International Business Machines Corporation Data management in flash memory
US20040030847A1 (en) * 2002-08-06 2004-02-12 Tremaine Robert B. System and method for using a compressed main memory based on degree of compressibility
US7334108B1 (en) * 2004-01-30 2008-02-19 Nvidia Corporation Multi-client virtual address translation system with translation units of variable-range size

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7151692B2 (en) * 2004-01-27 2006-12-19 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US20050219906A1 (en) * 2004-01-27 2005-10-06 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US20050237801A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with charge balancing for charge trapping non-volatile memory
US20050237809A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with high work function gate and charge balancing for charge trapping non-volatile memory
US20050237816A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme for spectrum shift in charge trapping non-volatile memory
US20050237815A1 (en) * 2004-04-26 2005-10-27 Macronix International Co., Ltd. Operation scheme with charge balancing erase for charge trapping non-volatile memory
US7133313B2 (en) 2004-04-26 2006-11-07 Macronix International Co., Ltd. Operation scheme with charge balancing for charge trapping non-volatile memory
US7164603B2 (en) 2004-04-26 2007-01-16 Yen-Hao Shih Operation scheme with high work function gate and charge balancing for charge trapping non-volatile memory
US7209390B2 (en) 2004-04-26 2007-04-24 Macronix International Co., Ltd. Operation scheme for spectrum shift in charge trapping non-volatile memory
US20050281085A1 (en) * 2004-06-17 2005-12-22 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US7190614B2 (en) * 2004-06-17 2007-03-13 Macronix International Co., Ltd. Operation scheme for programming charge trapping non-volatile memory
US20060281331A1 (en) * 2004-11-29 2006-12-14 Macronix International Co., Ltd. Charge trapping dielectric structure for non-volatile memory
US7879738B2 (en) 2004-11-29 2011-02-01 Macronix International Co., Ltd. Charge trapping dielectric structure for non-volatile memory
US20060224817A1 (en) * 2005-03-31 2006-10-05 Atri Sunil R NOR flash file allocation
US7242622B2 (en) 2005-12-06 2007-07-10 Macronix International Co., Ltd. Methods to resolve hard-to-erase condition in charge trapping non-volatile memory
US20070253258A1 (en) * 2005-12-06 2007-11-01 Macronix International Co., Ltd. Methods to resolve hard-to-erase condition in charge trapping non-volatile memory
US7355897B2 (en) 2005-12-06 2008-04-08 Macronix International Co., Ltd. Methods to resolve hard-to-erase condition in charge trapping non-volatile memory
US20070133307A1 (en) * 2005-12-06 2007-06-14 Macronix International Co., Ltd. Methods to resolve hard-to-erase condition in charge trapping non-volatile memory
US20100325467A1 (en) * 2006-03-31 2010-12-23 Takashi Oshima Memory system and controller
US8145831B2 (en) * 2006-03-31 2012-03-27 Kabushiki Kaisha Toshiba Memory system and controller with mode for direct access memory
US20080103656A1 (en) * 2006-10-26 2008-05-01 Spx Corporation Universal serial bus memory device for use in a vehicle diagnostic device
US20130166139A1 (en) * 2006-10-26 2013-06-27 Service Solutions U.S. Llc Universal Serial Bus Memory Device for Use in a Vehicle Diagnostic Device
US8386116B2 (en) * 2006-10-26 2013-02-26 Service Solutions U.S., Llc Universal serial bus memory device for use in a vehicle diagnostic device
US9014907B2 (en) * 2006-10-26 2015-04-21 Bosch Automotive Service Solutions Inc. Universal serial bus memory device for use in a vehicle diagnostic device
US10037153B2 (en) 2007-03-20 2018-07-31 Micron Technology, Inc. Memory device, electronic system, and methods associated with modifying data and a file of a memory device
US20080235486A1 (en) * 2007-03-20 2008-09-25 Micron Technology, Inc. Non-volatile memory devices, systems including same and associated methods
US20110161613A1 (en) * 2007-03-20 2011-06-30 Micron Technology, Inc. Memory device, electronic system, and methods associated with modifying data and a file of a memory device
US9075814B2 (en) 2007-03-20 2015-07-07 Micron Technology, Inc. Memory device, electronic system, and methods associated with modifying data and a file of a memory device
US7917479B2 (en) * 2007-03-20 2011-03-29 Micron Technology, Inc. Non-volatile memory devices, systems including same and associated methods
US8655927B2 (en) 2007-03-20 2014-02-18 Micron Technology, Inc. Memory device, electronic system, and methods associated with modifying data and a file of a memory device
US8209471B2 (en) 2008-03-01 2012-06-26 Kabushiki Kaisha Toshiba Memory system
US8661191B2 (en) 2008-03-01 2014-02-25 Kabushiki Kaisha Toshiba Memory system
US20090240903A1 (en) * 2008-03-20 2009-09-24 Dell Products L.P. Methods and Apparatus for Translating a System Address
US20100085821A1 (en) * 2008-10-06 2010-04-08 Samsung Electronics Co., Ltd. Operation method of non-volatile memory
US8234438B2 (en) 2008-10-06 2012-07-31 Samsung Electronics Co., Ltd. Operation method of non-volatile memory
US9176868B2 (en) * 2008-10-13 2015-11-03 Micron Technology, Inc. Translation layer in a solid state storage device
US9405679B2 (en) 2008-10-13 2016-08-02 Micron Technology, Inc. Determining a location of a memory device in a solid state device
US20150149712A1 (en) * 2008-10-13 2015-05-28 Micron Technology, Inc. Translation layer in a solid state storage device
US8576638B2 (en) * 2010-04-29 2013-11-05 Samsung Electronics Co., Ltd. Non-volatile memory device and non-volatile memory system having the same
US20110267899A1 (en) * 2010-04-29 2011-11-03 Hyung-Gon Kim Non-volatile memory device and non-volatile memory system having the same
WO2012166535A2 (en) * 2011-05-31 2012-12-06 Micron Technology, Inc. Apparatus including memory system controllers and related methods
US9076528B2 (en) 2011-05-31 2015-07-07 Micron Technology, Inc. Apparatus including memory management control circuitry and related methods for allocation of a write block cluster
WO2012166535A3 (en) * 2011-05-31 2013-01-24 Micron Technology, Inc. Apparatus including memory system controllers and related methods
US9747029B2 (en) 2011-05-31 2017-08-29 Micron Technology, Inc. Apparatus including memory management control circuitry and related methods for allocation of a write block cluster
CN103034589A (en) * 2011-09-28 2013-04-10 联发科技股份有限公司 Method and apparatus for performing memory management
US20130080731A1 (en) * 2011-09-28 2013-03-28 Ping-Yi Hsu Method and apparatus for performing memory management
US20160011782A1 (en) * 2013-02-27 2016-01-14 Hitachi, Ltd. Semiconductor storage
US20150067224A1 (en) * 2013-08-29 2015-03-05 International Business Machines Corporation Direct memory access (dma) address translation with a consecutive count field
US20150067297A1 (en) * 2013-08-29 2015-03-05 International Business Machines Corporation Direct memory access (dma) address translation with a consecutive count field
US9317442B2 (en) * 2013-08-29 2016-04-19 International Business Machines Corporation Direct memory access (DMA) address translation with a consecutive count field
US9348759B2 (en) * 2013-08-29 2016-05-24 International Business Machines Corporation Direct memory access (DMA) address translation with a consecutive count field
CN106557273A (en) * 2015-09-30 2017-04-05 宇瞻科技股份有限公司 Data management method, writing management system and method for solid state drive
US20170090782A1 (en) * 2015-09-30 2017-03-30 Apacer Technology Inc. Writing management method and writing management system for solid state drive
CN105786406A (en) * 2016-02-26 2016-07-20 湖南国科微电子股份有限公司 Method for establishing a CE NAND flash page model supporting multi-channel master-controller concurrency, and page model
US20170293553A1 (en) * 2016-04-06 2017-10-12 Sandisk Technologies Inc. Memory erase management
US10114743B2 (en) * 2016-04-06 2018-10-30 Sandisk Technologies Inc. Memory erase management
US20190146831A1 (en) * 2017-11-10 2019-05-16 Advanced Micro Devices, Inc. Thread switch for accesses to slow memory
US11294710B2 (en) * 2017-11-10 2022-04-05 Advanced Micro Devices, Inc. Thread switch for accesses to slow memory
CN108563590A (en) * 2018-06-28 2018-09-21 北京智芯微电子科技有限公司 OTP controller based on piece FLASH memory and control method
US11461225B2 (en) * 2019-04-05 2022-10-04 Buffalo Inc. Storage device, control method of storage device, and storage medium
CN114442911A (en) * 2020-11-06 2022-05-06 戴尔产品有限公司 System and method for asynchronous input/output scanning and aggregation for solid state drives

Also Published As

Publication number Publication date
US20090154254A1 (en) 2009-06-18
US8595424B2 (en) 2013-11-26
US20130151765A1 (en) 2013-06-13
US8375157B2 (en) 2013-02-12

Similar Documents

Publication Publication Date Title
US8595424B2 (en) Cluster based non-volatile memory translation layer
US7752381B2 (en) Version based non-volatile memory translation layer
US7509474B2 (en) Robust index storage for non-volatile memory
US7944748B2 (en) Erase block data splitting
US8296498B2 (en) Method and system for virtual fast access non-volatile RAM
US10037153B2 (en) Memory device, electronic system, and methods associated with modifying data and a file of a memory device
JP4834676B2 (en) System and method using on-chip non-volatile memory write cache
JP4787266B2 (en) Scratch pad block
US20180107592A1 (en) Reconstruction of address mapping in a host of a storage system
US20150154109A1 (en) Memory System Controller Including a Multi-Resolution Internal Cache

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WONG, WANMO;JAHN, MARK;SEPULVEDA, FRANK;REEL/FRAME:015765/0847

Effective date: 20040901

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION