US20120221767A1 - Efficient buffering for a system having non-volatile memory - Google Patents

Efficient buffering for a system having non-volatile memory

Info

Publication number
US20120221767A1
Authority
US
United States
Prior art keywords
write
command
nvm
control circuitry
buffer
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/036,460
Inventor
Daniel J. Post
Nir Wakrat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority to US13/036,460 priority Critical patent/US20120221767A1/en
Application filed by Apple Inc filed Critical Apple Inc
Assigned to APPLE INC. reassignment APPLE INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: POST, DANIEL J., WAKRAT, NIR
Priority to TW101106297A priority patent/TWI503822B/en
Priority to TW101134054A priority patent/TWI537952B/en
Priority to PCT/US2012/026752 priority patent/WO2012118743A1/en
Priority to KR1020120019692A priority patent/KR20120098505A/en
Priority to EP12157312A priority patent/EP2492817A3/en
Priority to CN201210114701.9A priority patent/CN102750226B/en
Priority to KR1020120020909A priority patent/KR101624007B1/en
Publication of US20120221767A1 publication Critical patent/US20120221767A1/en
Priority to US14/693,260 priority patent/US9703700B2/en
Priority to US15/630,376 priority patent/US9996457B2/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • G06F12/0238Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/14Protection against unauthorised use of memory or access to memory
    • G06F12/1408Protection against unauthorised use of memory or access to memory by using cryptography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14Handling requests for interconnection or transfer
    • G06F13/16Handling requests for interconnection or transfer for access to memory bus
    • G06F13/1668Details of memory controller
    • G06F13/1673Details of memory controller using buffers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/78Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data
    • G06F21/79Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • G06F3/0613Improving I/O performance in relation to throughput
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659Command handling arrangements, e.g. command buffers, queues, command scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/068Hybrid storage device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1052Security improvement
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • NAND flash memory, as well as other types of non-volatile memories (“NVMs”), is commonly used for mass storage.
  • consumer electronics such as portable media players often include flash memory to store music, videos, and other media.
  • because a file system of a device may perform multiple small synchronous input/outputs (“I/Os”), die parallelism cannot be achieved in some cases. This consequently may reduce system bandwidth and increase the latency for programming the pages of a NVM.
  • the latency for programming a NVM can be improved by buffering one or more I/Os prior to programming the I/Os to the NVM.
  • although I/Os can instead be directly programmed to the NVM, indiscriminate buffering of all I/Os received from a file system may actually slow down the programming process.
  • a system can select to buffer writes where the buffering will increase system performance, and select to program writes directly to a NVM where the buffering would be a net performance burden.
  • a system can use heuristics to determine whether to perform buffering of one or more write commands received from a file system.
  • the system can minimize read energy and buffering overhead by efficiently re-ordering write commands in a queue along page-aligned boundaries of a buffer.
  • the system can optimally combine write commands from a buffer with write commands from a queue. After combining the commands, the system can dispatch the commands in a single transaction.
  • FIGS. 1 and 2 are block diagrams of electronic devices configured in accordance with various embodiments of the invention.
  • FIG. 3 is a functional view of a non-volatile memory in accordance with various embodiments of the invention.
  • FIG. 4 is a graphical view of illustrative timing diagrams for various systems.
  • FIG. 5 is a graphical view of an illustrative programming flow for a system configured in accordance with various embodiments of the invention.
  • FIG. 6 is a graphical view of an illustrative buffer configured in accordance with various embodiments of the invention.
  • FIGS. 7A and 7B are flowcharts of an illustrative process for efficient buffering in a non-volatile memory in accordance with various embodiments of the invention.
  • FIG. 8 is a flowchart of an illustrative process for determining whether to buffer in a non-volatile memory in accordance with various embodiments of the invention.
  • a control circuitry of a system can use heuristics to determine whether or not to buffer write commands received from a file system. Any suitable heuristics can be used, such as, for example, the size of a write command, the number of dispatched write commands immediately preceding the write command, the number of remaining write commands saved in a queue, the amount of available space in memory, the size of a buffer, and input/output (“I/O”) patterns.
  • the control circuitry can determine whether or not to buffer a write command based on the size of the command. For example, if the control circuitry determines that the write command is a relatively small command (e.g., the size of the write command is below a pre-determined threshold), the control circuitry can select a buffered path for the write command (e.g., transfer the write command to a buffer in memory). If the control circuitry instead determines that the write command is a relatively large command (e.g., the size of the write command exceeds the pre-determined threshold), the control circuitry can select a write path for the write command (e.g., direct a bus controller to dispatch the write command to a NVM).
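  • As an illustration only (this sketch is not from the patent; the threshold value and function names are hypothetical), the size-based routing heuristic can be expressed in C as follows:

        #include <stddef.h>

        /* Hypothetical threshold: commands smaller than one page go to the buffer. */
        #define SMALL_WRITE_THRESHOLD 4096u

        struct write_cmd {
            unsigned lba;   /* starting logical block address */
            size_t   size;  /* payload size in bytes */
        };

        /* Stubs standing in for the two dispatch paths described above. */
        static void dispatch_to_nvm(const struct write_cmd *cmd)    { (void)cmd; }
        static void transfer_to_buffer(const struct write_cmd *cmd) { (void)cmd; }

        static void route_write(const struct write_cmd *cmd)
        {
            if (cmd->size < SMALL_WRITE_THRESHOLD)
                transfer_to_buffer(cmd);   /* buffered path: combine with later writes */
            else
                dispatch_to_nvm(cmd);      /* write path: program directly to the NVM */
        }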
  • the control circuitry can use additional heuristics to determine if write commands saved in a queue can be combined into a write-multi command directly in the queue. Once the commands have been combined into the write-multi command, the control circuitry can dispatch the write-multi command directly from the queue to a NVM via a write path.
  • in order to maximize concurrency, the control circuitry can attempt to re-order the write commands in a queue along page-aligned boundaries of a buffer. Consequently, read energy can be minimized on a subsequent read, because user data is often read with the same alignments and granularity with which the user data was programmed.
  • a control circuitry can combine buffered write commands and queue write commands into the same batch or transaction to improve system efficiency.
  • FIG. 1 illustrates a block diagram of electronic device 100 .
  • electronic device 100 can be or can include a portable media player, a cellular telephone, a pocket-sized personal computer, a personal digital assistant (“PDA”), a desktop computer, a laptop computer, or any other suitable type of electronic device.
  • Non-volatile memory 120 can include a NAND flash memory based on floating gate or charge trapping technology, NOR flash memory, erasable programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”), Ferroelectric RAM (“FRAM”), magnetoresistive RAM (“MRAM”), or any combination thereof.
  • NVM 120 can be organized into “blocks”, which can be the smallest erasable unit, and further organized into “pages”, which can be the smallest unit that can be programmed or read.
  • NVM 120 can include multiple integrated circuits, where each integrated circuit may have multiple blocks. Memory locations (e.g., blocks or pages of blocks) from corresponding integrated circuits may form “super blocks”. Each memory location (e.g., page or block) of NVM 120 can be referenced using a physical address (e.g., a physical page address or physical block address).
  • System-on-a-chip 110 can include SoC control circuitry 112 , memory 114 , encryption module 116 , and NVM interface 118 .
  • SoC control circuitry 112 can control the general operations and functions of SoC 110 and the other components of SoC 110 or device 100 .
  • SoC control circuitry 112 can issue read or write commands to NVM interface 118 to obtain data from or store data in NVM 120 .
  • data that SoC control circuitry 112 may request for storage or retrieval may be referred to as “user data,” even though the data may not be directly associated with a user or user application. Rather, the user data can be any suitable sequence of digital information generated or obtained by SoC control circuitry 112 (e.g., via an application or operating system).
  • SoC control circuitry 112 can include any combination of hardware, software, and firmware, and any components, circuitry, or logic operative to drive the functionality of electronic device 100 .
  • SoC control circuitry 112 can include one or more processors that operate under the control of software/firmware stored in NVM 120 or memory 114 .
  • SoC control circuitry 112 can dispatch one or more commands to NVM 120 .
  • SoC control circuitry 112 can include a block device driver or wrapper that can be configured to dispatch application programming interface (“API”) operations to NVM 120 or a controller of NVM 120 .
  • SoC control circuitry 112 can modify one or more parameters of the block device driver or wrapper in order to transfer information to NVM 120 . For example, by modifying the one or more parameters, SoC control circuitry 112 can transfer information associated with commands used to access NVM 120 (e.g., read, program, erase, and/or write-multi commands).
  • Memory 114 can include any suitable type of volatile memory, such as random access memory (“RAM”) (e.g., static RAM (“SRAM”), dynamic random access memory (“DRAM”), synchronous dynamic random access memory (“SDRAM”), double-data-rate (“DDR”) RAM), cache memory, read-only memory (“ROM”), or any combination thereof.
  • Memory 114 can include a data source that can temporarily store user data for programming into or reading from non-volatile memory 120 .
  • memory 114 may act as the main memory for any processors implemented as part of SoC control circuitry 112 .
  • memory 114 can include one or more command queues (not shown in FIG. 1 ) for saving commands (e.g., read, write, and/or trim commands) received from a file system that have not yet been dispatched to NVM 120 .
  • the one or more command queues can be block-device level queues.
  • memory 114 can include a read queue for retaining read commands, a write queue for retaining write commands, and a trim queue for retaining trim commands. Consequently, SoC control circuitry 112 can scan one or more queues in order to select one or more commands that may be dispatched to NVM 120 . After the one or more commands have been dispatched, SoC control circuitry 112 can remove those commands from the queue.
  • memory 114 may instead include one queue configured to retain all types of commands (e.g., read, write, and trim commands).
  • SoC control circuitry 112 can have a set of fairness rules for servicing each of the separate queues. For example, SoC control circuitry 112 may attempt to service the queues such that each queue is serviced at a rate that corresponds to the number of commands currently included in that queue. The order in which the queues are serviced affects the quality of service of the system.
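  • As a minimal sketch of one possible fairness rule (an assumption for illustration; the patent does not specify the algorithm), the deepest queue can be serviced first, so that each queue is drained at a rate roughly proportional to its depth:

        #include <stddef.h>

        struct cmd_queue {
            size_t pending;  /* commands awaiting dispatch */
        };

        /* Pick the queue with the most pending commands. */
        static size_t pick_queue(const struct cmd_queue *queues, size_t n)
        {
            size_t best = 0;
            for (size_t i = 1; i < n; i++)
                if (queues[i].pending > queues[best].pending)
                    best = i;
            return best;
        }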
  • memory 114 can include a buffer (not shown in FIG. 1 ) that can provide for longer-term storage of commands.
  • SoC control circuitry 112 can select one or more write commands saved in a queue, and transfer the selected write commands to the buffer. After saving the write commands to the buffer, SoC control circuitry 112 can remove those write commands from the queue.
  • the buffer in memory 114 can have any suitable size.
  • the buffer may be a relatively small buffer capable of storing less than one page of data.
  • the buffer may be a relatively large buffer capable of storing one or more pages of data.
  • the buffer may have a size of one page, one stripe, multiple stripes, a super block, or any other suitable size.
  • Encryption module 116 can be or may include any hardware or software, or combination thereof, configured to perform encryption and decryption based on a suitable cipher.
  • encryption module 116 can be based on the Advanced Encryption Standard (“AES”), Data Encryption Standard (“DES”), or RSA.
  • Encryption module 116 can provide security for sensitive data, such as personal information or billing information, stored on NVM 120 or transmitted/received to/from SoC 110 .
  • although encryption module 116 is shown in FIG. 1 as included in SoC 110, persons skilled in the art will appreciate that encryption module 116 may instead be implemented in NVM 120.
  • encryption module 116 can encrypt and decrypt data using one or more “encryption seeds” provided by control circuitry (e.g., SoC control circuitry 112 or a controller on NVM 120 ), which may be required by the encryption algorithm to perform encryption or decryption.
  • the encryption seeds can include a key and an initialization vector (“IV”). To recover the original unencrypted data from encrypted data, the encryption seeds used for decryption may need to be the same as the seeds originally used for encryption.
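  • As a minimal illustration (the field sizes are assumptions; the patent does not specify a key length), the encryption seeds can be modeled as a key plus an initialization vector, and the same pair must be supplied again at decryption time:

        #include <stdint.h>

        /* AES-128-sized fields are assumed here for illustration only. */
        struct encryption_seeds {
            uint8_t key[16];  /* cipher key */
            uint8_t iv[16];   /* initialization vector */
        };
        /* To recover the original data, decryption must be given the
           same seeds that were originally used for encryption. */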
  • NVM interface 118 may include any suitable combination of hardware, software, and/or firmware configured to act as an interface or driver between SoC control circuitry 112 and NVM 120 .
  • corresponding program code may be stored in NVM 120 or memory 114 .
  • NVM interface 118 can perform a variety of functions that allow SoC control circuitry 112 to access NVM 120 and to manage the memory locations (e.g., pages, blocks, super blocks, integrated circuits) of NVM 120 and the data stored therein (e.g., user data). For example, NVM interface 118 can interpret the read or write requests from SoC control circuitry 112 , perform wear leveling, and generate read and program instructions compatible with the bus protocol of NVM 120 .
  • although NVM interface 118 and SoC control circuitry 112 are shown as separate modules, this is intended only to simplify the description of the embodiments of the invention. It should be understood that these modules may share hardware components, software components, or both.
  • SoC control circuitry 112 may execute a software-based memory driver for NVM interface 118 .
  • electronic device 100 can include a target device, such as a flash memory drive or Secure Digital (“SD”) card, that includes NVM 120 and some or all portions of NVM interface 118 (e.g., a translation layer, discussed below).
  • SoC 110 or SoC control circuitry 112 may act as the host controller for the target device. For example, as the host controller, SoC 110 can issue read and write requests to the target device.
  • FIG. 2 illustrates a block diagram of electronic device 200 , which may illustrate in greater detail some of the firmware, software, and/or hardware components of electronic device 100 ( FIG. 1 ) in accordance with various embodiments.
  • Electronic device 200 may have any of the features and functionalities described above in connection with FIG. 1, and vice versa. As shown, dashed lines demarcate the layers. It is understood that the depiction of which components fall within the demarcation lines is merely illustrative and that one or more components can be affiliated with a different layer.
  • Electronic device 200 can include file system 210 , NVM driver 212 , NVM bus controller 216 , and NVM 220 .
  • file system 210 and NVM driver 212 may be software or firmware modules
  • NVM bus controller 216 and NVM 220 may be hardware modules.
  • NVM driver 212 may represent the software or firmware aspect of NVM interface 218
  • NVM bus controller 216 may represent the hardware aspect of NVM interface 218 .
  • File system 210 can include any suitable type of file system, such as a File Allocation Table (“FAT”) file system or a Hierarchical File System Plus (“HFS+”), and may be part of the operating system of electronic device 200 (e.g., part of SoC control circuitry 112 of FIG. 1 ).
  • file system 210 may include a flash file system, which provides a logical to physical mapping of pages.
  • file system 210 may perform some or all of the functionalities of NVM driver 212 discussed below, and therefore file system 210 and NVM driver 212 may or may not be separate modules.
  • File system 210 may manage file and folder structures for the application and operating system.
  • File system 210 may operate under the control of an application or operating system running on electronic device 200 , and may provide write and read commands to NVM driver 212 when the application or operating system requests that information be read from or stored in NVM 220 .
  • file system 210 can provide a logical address to indicate where the user data should be read from or written to, such as a logical page address or a logical block address (“LBA”) with a page offset.
  • File system 210 may provide read and write requests to NVM driver 212 that are not directly compatible with NVM 220 .
  • the logical addresses may use conventions or protocols typical of hard-drive-based systems.
  • a hard-drive-based system, unlike flash memory, can overwrite a memory location without first performing a block erase.
  • hard drives may not need wear leveling to increase the lifespan of the device. Therefore, NVM interface 218 can perform any functions that are memory-specific, vendor-specific, or both to handle file system requests and perform other management functions in a manner suitable for NVM 220 .
  • NVM driver 212 can include translation layer 214 .
  • translation layer 214 may be or include a flash translation layer (“FTL”).
  • Translation layer 214 can perform any other suitable functions in addition to logical-to-physical address mapping. For example, translation layer 214 can perform any of the other functions that may be typical of flash translation layers, such as garbage collection and wear leveling.
  • NVM driver 212 may interface with NVM bus controller 216 to complete NVM access commands (e.g., program, read, and erase commands).
  • Bus controller 216 may act as the hardware interface to NVM 220 , and can communicate with NVM 220 using the bus protocol, data rate, and other specifications of NVM 220 .
  • upon receiving a command (e.g., a write command) from file system 210, NVM interface 218 can direct NVM bus controller 216 to dispatch a command (e.g., an API operation) to NVM 220.
  • NVM interface 218 may use translation layer 214 to obtain the physical addresses of one or more memory locations of NVM 220 .
  • NVM interface 218 may manage NVM 220 based on memory management data, sometimes referred to herein as “metadata”.
  • the metadata may be generated by NVM driver 212 or may be generated by a module operating under the control of NVM driver 212 .
  • metadata can include any information used for managing the mapping between logical and physical addresses, bad block management, wear leveling, error-correcting code (“ECC”) data used for detecting or correcting data errors, or any combination thereof.
  • the metadata may include data provided by file system 210 along with the user data, such as a logical address.
  • “metadata” may refer to any information about or relating to user data or used generally to manage the operation and memory locations of a non-volatile memory.
  • NVM interface 218 may be configured to store metadata in NVM 220 .
  • NVM interface 218 may store metadata associated with user data at the same memory location (e.g., page) in which the user data is stored.
  • NVM interface 218 may store user data, the associated logical address, and ECC data for the user data at one or more memory locations of NVM 220 .
  • NVM interface 218 may also store other types of metadata about the user data in the same memory location.
  • NVM interface 218 may store the logical address so that, on power-up of NVM 220 or during operation of NVM 220 , electronic device 200 can determine what data resides at that location.
  • file system 210 may reference the user data according to its logical address and not its physical address
  • NVM interface 218 may store the user data and logical address together to maintain their association. This way, even if a separate table maintaining the physical-to-logical mapping in NVM 220 becomes outdated, NVM interface 218 may still determine the proper mapping at power-up or reboot of electronic device 200 , for example.
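  • The following C sketch (an illustration only; the sizes and field names are assumptions, not the patent's layout) shows how user data, its logical address, and ECC data might be co-located in one page so the mapping can be rebuilt by scanning pages:

        #include <stdint.h>

        #define PAGE_DATA_BYTES 4096  /* assumed page payload size */
        #define ECC_BYTES       64    /* assumed ECC size */

        /* Storing the logical address next to the user data lets the
           device rebuild the logical-to-physical mapping at power-up,
           even if a separate mapping table has become outdated. */
        struct nvm_page {
            uint8_t  user_data[PAGE_DATA_BYTES];
            uint32_t logical_addr;   /* LBA provided by the file system */
            uint8_t  ecc[ECC_BYTES]; /* error-correcting code */
        };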
  • a NVM (e.g., NVM 120 of FIG. 1 or NVM 220 of FIG. 2 ) can be organized into dies, blocks, pages, super blocks, and the like.
  • FIG. 3 shows a schematic layout of NVM 300 .
  • NVM 300 can be the same as or similar to NVM 120 ( FIG. 1 ) or NVM 220 ( FIG. 2 ).
  • FIG. 3 is merely meant to illustrate the organizational layout of NVM 300 and does not indicate an actual, physical layout of the non-volatile memory.
  • although die 0 is illustrated as being next to die 1 in FIG. 3, this is merely for illustrating the functional relationship of these dies, and in the actual, physical layout of NVM 300, these dies may or may not be located near one another.
  • NVM 300 could include any suitable number of dies, blocks, and pages.
  • for example, each die of NVM 300 can include 4096 blocks, and each block may include 64, 128, or 192 pages based on whether the block is an SLC block, a 2-bit MLC block, or a 3-bit MLC block.
  • each page of NVM 300 can have any suitable size such as, for example, 2 KiB, 4 KiB, 8 KiB, or 16 KiB.
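  • As a worked example using these illustrative figures (the combination is an assumption; the patent does not specify a die capacity), a die of 4096 blocks in 2-bit MLC mode with 128 pages per block and 8 KiB per page would hold 4096 × 128 × 8 KiB = 4 GiB.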
  • NVM 300 can include one or more dies (i.e., integrated circuits), such as die 0, die 1, die 2, and die 3. Each die of NVM 300 may be organized into one or more “blocks”. For example, dies 0-3 are each illustrated as being organized into blocks 0-3.
  • Each block of the dies may be organized into one or more pages.
  • block 0 of each of dies 0 - 3 is illustrated as being organized into pages 0 - 3 .
  • Each page of NVM 300 can include any suitable information.
  • the pages can include user data, metadata, or both.
  • metadata such as ECC data can be included in the pages to provide for error detection and/or correction of the information stored in NVM 300 .
  • NVM 300 can also include one or more super blocks that include one block from each die that has the same position or “block number” in the die.
  • super block 0 of NVM 300 can include block 0 of each of dies 0 - 3 .
  • super block 1 of NVM 300 can include block 1 of each of dies 0 - 3
  • super block 2 of NVM 300 can include block 2 of each of dies 0 - 3 , and so forth.
  • Super blocks can be formed by virtually linking or “striping” together one or more blocks. Blocks need not be in the same row of each die to be virtually linked as a super block. In fact, blocks may be chosen randomly from two or more dies to form a super block. In some embodiments, a super block can include only one block. Super blocks provide operational parallelism, thereby enabling programming, reading, and erase operations to be performed on blocks located in different dies in parallel.
  • pages from two or more dies may be virtually linked together to form super pages or “stripes”.
  • stripe 302 can include page 0 from block 0 of each of dies 0 - 3 .
  • stripe 304 can include page 3 from block 0 of each of dies 0 - 3 .
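  • A minimal C sketch of this organization (illustrative only, and limited to the simple case described above in which a super block takes the same block number from each die; as noted, blocks may also be chosen from different rows):

        #define NUM_DIES 4

        /* A page is addressed by (die, block, page) in this simplified model. */
        struct page_addr {
            unsigned die, block, page;
        };

        /* Stripe m of super block n spans page m of block n in every die.
           For example, stripe_members(0, 0, out) describes stripe 302:
           page 0 of block 0 of each of dies 0-3. */
        static void stripe_members(unsigned block, unsigned page,
                                   struct page_addr out[NUM_DIES])
        {
            for (unsigned die = 0; die < NUM_DIES; die++) {
                out[die].die   = die;
                out[die].block = block;  /* same block number in each die */
                out[die].page  = page;   /* same page number in each block */
            }
        }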
  • because pages in a stripe can be programmed in parallel, from the perspective of a translation layer (e.g., translation layer 214 of FIG. 2), the programming time can remain the same for pages that are included in a particular stripe (e.g., stripe 302 or 304).
  • a file system of a system can operate in a logical space.
  • each file in the system can map to one or more LBAs.
  • the one or more LBAs can be mapped to one or more physical page addresses in the NVM.
  • if each small write command is programmed to the NVM individually, a control circuitry (e.g., SoC control circuitry 112 of FIG. 1) may occupy an entire page per command. This operation wastes space in the NVM, and may consequently increase the amount of garbage collection (“GC”) that needs to be performed on the NVM.
  • a system can be configured to dispatch multiple write commands to a NVM in a single transaction.
  • FIG. 4 shows illustrative timing diagrams for various systems.
  • the programming time (t_prog) is not substantially different if one page (as shown in timing diagram 400) or multiple pages (as shown in timing diagram 402) are being programmed in a single operation (e.g., as a write-multi command). This may be particularly true if the multiple pages are included in a single stripe.
  • the time for startup programming also does not vary substantially if one page or multiple pages are being programmed in a single operation.
  • time for startup programming refers to the time that it takes to activate the NVM interface (e.g., NVM interface 218 of FIG. 2 ).
  • the time that it takes to transfer contents to a page buffer (t_transfer) is relatively short as compared to t_prog.
  • even the combined t_transfer for programming a write-multi command can still be of shorter duration as compared to t_prog.
  • as shown in timing diagram 404, programming multiple pages separately is less efficient due to the combination of multiple t_progs. The efficiency of the system is especially impacted if the amount of user data that can be programmed each time is relatively small.
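  • As a worked example with purely illustrative numbers (the patent gives no timing values), suppose t_prog = 200 µs per program operation and t_transfer = 25 µs per page. Programming four pages separately costs roughly 4 × (25 + 200) = 900 µs, whereas programming the same four pages as one write-multi command across a stripe costs roughly 4 × 25 + 200 = 300 µs, since the four t_progs collapse into one parallel operation.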
  • a control circuitry can combine multiple write commands into a single write-multi command.
  • a write-multi command can be formed using any suitable approach.
  • the control circuitry can form a write-multi command by combining a set of logical addresses, a set of counters, and a set of buffers associated with the multiple commands into an array of logical addresses, an array of counters, and an array of buffers, respectively.
  • the control circuitry can dispatch the write-multi command to a NVM (e.g., in the form of an API operation).
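  • A minimal C sketch of the arrays described above (the names and the MAX_MULTI limit are assumptions for illustration):

        #include <stddef.h>

        #define MAX_MULTI 32  /* hypothetical cap on combinable commands */

        /* Parallel arrays built from the individual commands' logical
           addresses, counters, and data buffers. */
        struct write_multi {
            unsigned    lbas[MAX_MULTI];    /* array of logical addresses */
            unsigned    counts[MAX_MULTI];  /* array of counters */
            const void *bufs[MAX_MULTI];    /* array of data buffers */
            size_t      n;                  /* number of combined commands */
        };

        /* Append one write command; returns -1 when full, signalling
           that the write-multi command should be dispatched first. */
        static int write_multi_add(struct write_multi *m, unsigned lba,
                                   unsigned count, const void *buf)
        {
            if (m->n >= MAX_MULTI)
                return -1;
            m->lbas[m->n]   = lba;
            m->counts[m->n] = count;
            m->bufs[m->n]   = buf;
            m->n++;
            return 0;
        }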
  • FIG. 5 shows a graphical view of an illustrative programming flow 500 for a system.
  • a file system can issue one or more commands (e.g., read, write, and/or trim commands) that can be saved in one or more queues in the system.
  • queue 502 can represent a write queue that can be configured to retain one or more write commands.
  • the system may include additional queues for storing read and/or trim commands.
  • queue 502 can be configured to store write, read, and trim commands.
  • a control circuitry (e.g., SoC control circuitry 112 of FIG. 1) can select one of two paths for dispatching each write command saved in queue 502.
  • the control circuitry can select a write path (e.g., path 504 ) for dispatching the write command to NVM 510 .
  • NVM 510 can be the same as or similar to NVM 120 ( FIG. 1 ), NVM 220 ( FIG. 2 ), or NVM 300 ( FIG. 3 ).
  • hardware engine 506 can be, for example, a direct memory access (“DMA”) engine.
  • the control circuitry can encrypt the write command using an encryption module (e.g., encryption module 116 of FIG. 1 ).
  • the encryption module can encrypt the write command using any suitable encryption standard such as, for example, AES, DES, or RSA.
  • NVM interface 508 can be the same as or substantially similar to NVM interface 218 of FIG. 2.
  • a NVM bus controller (e.g., NVM bus controller 216 of FIG. 2) can then be directed to dispatch the write command to NVM 510 across bus 512.
  • the control circuitry can select a buffered path (e.g., path 520) for a particular write command. By selecting path 520, the control circuitry can transfer the write command from queue 502 to buffer 522.
  • buffer 522 can be stored in volatile memory (e.g., memory 114 of FIG. 1 ), and can be configured to store any suitable number of write commands.
  • the control circuitry can wait to receive additional write commands from the file system before directing a bus controller to concurrently dispatch multiple commands to NVM 510 via path 524.
  • the multiple commands can be combined as a write-multi command, which can then be dispatched to NVM 510 .
  • the control circuitry can direct an encryption module to encrypt the write command before the write command is transferred to buffer 522. After encrypting the write command, the control circuitry can store the encrypted write command in buffer 522.
  • alternatively, the control circuitry can transfer an unencrypted write command to buffer 522.
  • the control circuitry may need to copy one or more encryption seeds (e.g., one or more keys and IVs) to volatile memory (e.g., memory 114 of FIG. 1 ).
  • the control circuitry can direct an encryption module to use the stored encryption seeds to encrypt the write command. Consequently, for such a configuration, buffer 522 may be associated with unique encryption characteristics (e.g., a fixed set of encryption seeds).
  • side-band signals may be issued by an application, operating system, or file system of a system.
  • side-band signals can refer to any suitable signal that may be generated by a cache algorithm of an application, operating system, or file system to maintain coherency.
  • one type of side-band signal may be a synchronized cache signal, which may be generated by one or more applications in the system.
  • Another side-band signal may be a force unit access (“FUA”) signal, which can be generated by HFS+.
  • in response to receiving a side-band signal, the control circuitry can direct a bus controller to flush the buffer. That is, the control circuitry can direct the bus controller to dispatch all of the commands currently stored in the buffer to the NVM via path 524.
  • because path 520 involves an extra step of copying a write command from queue 502 to buffer 522, additional traffic may be generated in the system. As a result, in some scenarios, path 520 may be less efficient as compared to path 504. For example, if the transferred command is a large command, it may be more efficient to directly program the command to NVM 510 via path 504.
  • the control circuitry may therefore need to apply one or more heuristics to select the path that is best suited for dispatching a particular write command.
  • One of the heuristics that may be used is the size of a write command. Based at least in part on the size of the write command, the control circuitry can determine whether or not to buffer a write command.
  • the control circuitry can first determine the size of a write command that is saved in queue 502.
  • the control circuitry can then determine if the size of the write command is below a pre-determined threshold.
  • if the size of the write command is below the pre-determined threshold (e.g., the write command is a relatively small write command), the control circuitry can determine to perform buffering of the write command.
  • the control circuitry can then transfer the write command from queue 502 to buffer 522 via path 520. Once the write command has been stored in buffer 522, the command can be combined with other write commands at a later time.
  • if the control circuitry instead determines that the size of the write command is above the pre-determined threshold (e.g., the write command is a relatively large write command), the control circuitry can determine not to perform buffering of the write command. As a result, the control circuitry can direct a bus controller to dispatch the write command from queue 502 to NVM 510 via path 504.
  • in this manner, the control circuitry can reduce the total required bandwidth for executing write commands from an overall system perspective.
  • in addition to the size of a write command, the control circuitry can use other heuristics to determine which of path 504 or path 520 to select. For example, the control circuitry can use one or more heuristics to determine if system efficiency can be improved by combining multiple write commands into a write-multi command directly in queue 502. In particular, once the write commands have been combined into the write-multi command, the write-multi command can be dispatched directly from queue 502 via path 504.
  • One of the heuristics that can be used is the number of write commands saved in queue 502 .
  • the control circuitry can determine whether the number of write commands saved in queue 502 exceeds a pre-determined number. In response to determining that the number of write commands exceeds the pre-determined number, the control circuitry can combine the write commands into a write-multi command, and can direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510 . This way, the control circuitry can gain the benefits of concurrently dispatching multiple write commands, while also avoiding the extra step of copying a write command to buffer 522 .
  • another heuristic that may be used is I/O patterns.
  • the control circuitry can assume that I/Os generally arrive in batches. For instance, during a particular period of time, the control circuitry may be more likely to receive a burst of read commands or write commands rather than write commands interspersed with read commands. Consequently, if the control circuitry has already received a series of sequential writes, the control circuitry may expect to receive additional sequential writes in the near future.
  • the control circuitry can determine whether a set of the most recently received write commands in queue 502 is sequential (one possible sequentiality check is sketched below). In response to determining that the set of the most recently received write commands is sequential, the control circuitry can wait to receive additional write commands in queue 502.
  • once they arrive, the control circuitry can combine the additional write commands with the set of the most recently received commands into a write-multi command.
  • the control circuitry can then direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510 . Consequently, the control circuitry can bypass buffer 522 for sequential write commands.
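  • A minimal C sketch of one possible sequentiality check (an illustration; the patent does not define the test):

        #include <stdbool.h>
        #include <stddef.h>

        struct queued_write {
            unsigned lba;    /* starting logical block address */
            unsigned count;  /* number of LBAs written */
        };

        /* Returns true if each command starts where the previous one ended,
           i.e., the most recently received writes form one sequential run. */
        static bool writes_are_sequential(const struct queued_write *q, size_t n)
        {
            for (size_t i = 1; i < n; i++)
                if (q[i].lba != q[i - 1].lba + q[i - 1].count)
                    return false;
            return true;
        }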
  • while the control circuitry is waiting to receive additional write commands in queue 502, it can instead service a read queue and/or a trim queue stored in memory. As such, the control circuitry can ignore existing fairness rules, and allow write commands to stack up in queue 502.
  • the control circuitry can determine whether or not to buffer a small write command based on the number of dispatched write commands immediately preceding the write command and/or the amount of time that has elapsed since the last dispatched write command. That is, based on an observation of past access patterns, the control circuitry can determine that additional write commands may likely arrive in the near future. Consequently, it may be more efficient to hold on to a small write command in queue 502, and wait for additional write commands.
  • if the control circuitry determines that the number of dispatched write commands immediately preceding the write command exceeds a pre-determined number, the control circuitry can wait to receive one or more additional write commands in queue 502. Similarly, the control circuitry may wait to receive additional write commands if the amount of time that has elapsed since the last dispatched write command is less than a pre-determined time period (e.g., a few milliseconds).
  • once the additional write commands arrive, the control circuitry can combine them with the small write command into a write-multi command.
  • the control circuitry can then direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510 .
  • alternatively, the control circuitry can transfer one or more write commands from queue 502 to buffer 522 after a period of time has elapsed (e.g., a few milliseconds). That is, if the control circuitry does not actually receive the expected write commands, the control circuitry can transfer the small write command to buffer 522.
  • the control circuitry can also determine whether or not to buffer a small write command based on the number of remaining write commands saved in queue 502. For instance, if queue 502 only has a single write command or a small number of write commands, the control circuitry can transfer the small write command to buffer 522.
  • the control circuitry can determine whether to perform buffering based at least in part on the amount of available space in memory. For instance, if the amount of available space in memory is below a pre-determined threshold, the control circuitry can select path 504 and bypass buffering. On the other hand, if the amount of available space is above the pre-determined threshold, the control circuitry can select path 520, and buffer a small write command.
  • the control circuitry may use past performance of the I/O scheduling algorithm to determine whether or not to buffer. For instance, the control circuitry can maintain a scorecard of past performance of the scheduling algorithm, which can include the success rate of past buffering decisions. If the control circuitry determines that the success rate is relatively low, the control circuitry can adapt the heuristics used in determining whether or not to buffer.
  • the control circuitry can determine whether or not to buffer based on the size of the buffer, or, particularly, the size of the buffer relative to the size of a write command. For example, if the size of the write command is the same as or larger than the size of buffer 522, the control circuitry can determine not to transfer the write command to buffer 522, and can instead dispatch the write command directly to NVM 510 via path 504.
  • the control circuitry can apply one or more algorithms in order to reduce fragmentation across dies for a particular write span.
  • an assumption can be made that user data is generally read from a NVM with the same granularity as the granularity with which the user data was programmed (e.g., along similar boundaries). For example, a system may save a text document to a NVM. Subsequently, the system will likely load the entire document as well.
  • when a system programs a database in a NVM, the database is typically programmed using small, random updates. Likewise, at a later time, the database will likely be accessed using random reads. As such, the system can expect that user data associated with a sequence of small write commands will be read simultaneously.
  • a system may separately program each write command to a NVM.
  • the user data associated with the write commands may be split among multiple dies, which can increase the energy (e.g., the overall power consumption) that it will take to read the user data at a later time. This is because each die needs to be activated in order to read the user data.
  • the control circuitry can therefore attempt to re-order the write commands in a queue along page-aligned boundaries of a buffer.
  • the control circuitry can attempt to select one or more commands from a queue in order to fill a buffer along one or more page-aligned boundaries.
  • as shown in FIG. 6, buffer 600 includes eight page-aligned portions (e.g., page-aligned portions 0-7), where each page-aligned portion includes space to hold four LBAs.
  • Each page-aligned portion of buffer 600 may be configured to store the same amount of data as a page of a NVM (e.g., NVM 300 of FIG. 3 or NVM 510 of FIG. 5 ).
  • buffer 600 may include any suitable number of page-aligned portions.
  • page-aligned portions 0 and 1 of buffer 600 have been completely filled with write commands, and page-aligned portion 2 of buffer 600 has only been partially filled.
  • write commands 0 - 3 are stored in page-aligned portion 0
  • write commands 4 - 7 are stored in page-aligned portion 1
  • write commands 8 and 9 are stored in page-aligned portion 2 .
  • the remaining portions of buffer 600 are empty.
  • the control circuitry can determine that the write commands stored in buffer 600 are not currently aligned with at least one page boundary (e.g., page-aligned portion 2 has not been completely filled). In response to determining that the write commands are not aligned in buffer 600 , the control circuitry can detect a fill size that is needed to fill buffer 600 to at least one page boundary. For page-aligned portion 2 , for instance, the control circuitry can determine that the fill size is the size of two LBAs (e.g., sections 602 and 604 ).
  • the control circuitry can then determine if there is at least one write command in a queue (e.g., queue 502 of FIG. 5 ) that has a size equal to the fill size. For example, for page-aligned portion 2 of buffer 600 , the control circuitry can select one write command with a LBA size of two or two write commands each with a LBA size of one.
  • a suitable algorithm can be used to determine if there is at least one write command in a queue that has a size equal to the fill size such as, for example, a best-fit algorithm.
  • the control circuitry can use one or more of the heuristics discussed above to select write command(s) for filling the buffer.
  • the control circuitry can transfer the at least one write command from the queue to buffer 600 .
  • the control circuitry can completely fill page-aligned portion 2 of buffer 600 (e.g., fill sections 602 and 604 with one or more write commands).
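  • A minimal C sketch of the fill-size computation (illustrative only; the names are assumptions):

        #include <stddef.h>

        /* Returns how many LBAs are still needed to reach the next
           page-aligned boundary of the buffer; 0 means the buffer is
           already aligned. */
        static size_t fill_size(size_t lbas_in_buffer, size_t lbas_per_page)
        {
            size_t partial = lbas_in_buffer % lbas_per_page;
            return partial ? lbas_per_page - partial : 0;
        }

    With ten LBAs buffered and four LBAs per page, as in FIG. 6, fill_size(10, 4) returns 2, matching sections 602 and 604 of page-aligned portion 2.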
  • the control circuitry can combine the commands into a write-multi command. After generating the write-multi command, the control circuitry can direct a bus controller to dispatch the write-multi command from buffer 600 to a NVM.
  • the control circuitry can first direct an encryption module to encrypt the at least one command prior to transferring the command from a queue to buffer 600 . After the command has been encrypted, the control circuitry can store the at least one encrypted command in buffer 600 .
  • the control circuitry may receive one or more side-band signals (e.g., synchronized cache or FUA signals) before a page-aligned portion of buffer 600 has been completely filled.
  • the control circuitry can fill buffer 600 to the end of the page-aligned portion (e.g., page-aligned portion 2 ) with invalid data (e.g., fill sections 602 and 604 with invalid data).
  • the control circuitry can then combine the write commands stored in buffer 600 with the invalid data into a write-multi command, and direct the bus controller to dispatch the write-multi command from buffer 600 to the NVM.
  • the system can minimize read energy on the read path, minimize buffering overhead, and reduce fragmentation in a NVM.
  • a control circuitry can combine buffered write commands and queue write commands into the same batch or transaction (e.g., as a write-multi command) via path 530 .
  • the control circuitry can select at least a first write command from queue 502 and at least a second write command from buffer 522 , where the first and second write commands may be selected to minimize page crossings in NVM 510 .
  • the control circuitry can combine the commands to form a write-multi command.
  • the control circuitry can direct a bus controller to dispatch the write-multi command to the NVM via path 530 .
  • the control circuitry may need to combine multiple types of encryption characteristics in a single transaction. For instance, the control circuitry can apply at least a first encryption seed to the one or more write commands selected from queue 502 .
  • the control circuitry can retrieve one or more encryption seeds from memory (e.g., memory 114 of FIG. 1 ). After retrieving the one or more encryption seeds, the control circuitry can apply the one or more encryption seeds to the one or more write commands selected from buffer 522 . The control circuitry can then combine the first and second encrypted write commands into a write-multi command, and dispatch the write-multi command to NVM 510 .
  • FIGS. 7A-7B and 8 flowcharts of illustrative processes are shown in accordance with various embodiments of the invention. These processes may be executed by one or more components in a system (e.g., electronic device 100 of FIG. 1 or electronic device 200 of FIG. 2 ). For example, at least some of the steps in the processes of FIGS. 7A-7B and 8 may be performed by control circuitry (e.g., SoC control circuitry 112 of FIG. 1 ). Persons skilled in the art will appreciate that one or more of the steps may be performed by a NVM interface (e.g., NVM interface 118 of FIG. 1 or NVM interface 218 of FIG. 2 ). Dashed boxes are meant to illustrate steps that may or may not be performed based on the configuration of the system.
  • process 700 is shown for efficient buffering in a NVM (e.g., NVM 120 of FIG. 1 , NVM 220 of FIG. 2 , NVM 300 of FIG. 3 , or NVM 510 of FIG. 5 ).
  • Process 700 may begin at step 702 , and at step 704 , the control circuitry can receive a first write command from a file system (e.g., file system 210 of FIG. 2 ) for programming to a NVM.
  • at step 706, the control circuitry can save the first write command in a queue (e.g., queue 502 of FIG. 5).
  • at step 708, the control circuitry can apply one or more heuristics to the first write command in order to determine, at step 710, whether to transfer the first write command to a buffer (e.g., buffer 522 of FIG. 5 or buffer 600 of FIG. 6). That is, the control circuitry can determine whether to perform buffering of the first write command.
  • if the control circuitry determines not to perform buffering, process 700 may move to step 712.
  • at step 712, the control circuitry can direct an encryption module (e.g., encryption module 116 of FIG. 1) to encrypt the first write command.
  • the control circuitry can then direct a bus controller (e.g., NVM bus controller 216 of FIG. 2) to dispatch the encrypted write command to the NVM (e.g., via path 504 of FIG. 5).
  • the control circuitry may be configured to direct the bus controller to dispatch the encrypted write command to a hardware engine (e.g., hardware engine 506 of FIG. 5 ).
  • Process 700 may then end at step 716 .
  • if the control circuitry instead determines to perform buffering, process 700 may move to step 718 or, alternatively, to step 720.
  • at step 718, the control circuitry can direct an encryption module to encrypt the first write command.
  • alternatively, at step 720, the control circuitry can copy one or more encryption seeds to memory (e.g., memory 114 of FIG. 1).
  • in either case, the control circuitry can store either the encrypted command or the unencrypted command in the buffer (e.g., via path 520 of FIG. 5).
  • the control circuitry can determine if the write commands stored in the buffer are currently aligned with at least one page boundary. If, at step 722 , the control circuitry determines that the write commands stored in the buffer are currently aligned with at least one page boundary, process 700 may move to step 724 or step 726 . That is, if unencrypted commands are stored in the buffer, at step 724 , the control circuitry can direct an encryption module to encrypt the write commands using one or more stored encryption seeds. If, however, encrypted commands are stored in the buffer, process 700 may move directly to step 726 .
  • the control circuitry can combine the write commands in the buffer into a write-multi command. Then, at step 728 , the control circuitry can direct a bus controller to dispatch the write-multi command from the buffer to the NVM (e.g., via path 524 of FIG. 5 ). Process 700 may then end at step 716 .
  • process 700 may move to step 730 .
  • the control circuitry can detect a fill size that is needed to fill the buffer to the at least one page boundary.
  • control circuitry can determine if there is at least one write command in the queue that has a size equal to the fill size. If at step 732 , the control circuitry determines that there is at least one write command that has a size equal to the fill size, process 700 may move to step 734 .
  • control circuitry can transfer the at least one write command from the queue to the buffer.
  • Process 700 may then return to step 724 or step 726 (e.g., depending on whether or not the write commands in the buffer are encrypted).
  • process 700 may move to step 736 .
  • the control circuitry can wait for additional write commands to arrive (e.g., wait to receive additional write commands from a file system). Process 700 may then end at step 716 .
  • Turning now to FIG. 8, process 800 is shown for determining whether or not to buffer in a NVM (e.g., NVM 120 of FIG. 1, NVM 220 of FIG. 2, NVM 300 of FIG. 3, or NVM 510 of FIG. 5). In some embodiments, process 800 may represent a more detailed view of the applying at least one heuristic step 708 and the determining whether to transfer step 710 of process 700 (FIGS. 7A-7B).
  • Process 800 may begin at step 802, and at step 804, the control circuitry can determine the size of a write command saved in a queue (e.g., queue 502 of FIG. 5).
  • Continuing to step 806, the control circuitry can determine whether the size of the write command is below a pre-determined threshold. If, at step 806, the control circuitry determines that the size of the write command is not below the pre-determined threshold (e.g., the write command is a large write command), process 800 may move to step 808. At step 808, the control circuitry can direct a bus controller to dispatch the write command from the queue to a NVM (e.g., via path 504 of FIG. 5). Process 800 may then end at step 810.
  • If, at step 806, the control circuitry instead determines that the size of the write command is below the pre-determined threshold, process 800 may move to step 812. At step 812, the control circuitry can determine whether the number of write commands saved in the queue exceeds a pre-determined number. If, at step 812, the control circuitry determines that the number of write commands saved in the queue exceeds the pre-determined number, process 800 may move to step 814. At step 814, the control circuitry can combine the write commands into a write-multi command. Process 800 may then move to step 808, where the write-multi command can be dispatched from the queue to the NVM.
  • If, at step 812, the control circuitry instead determines that the number of write commands saved in the queue does not exceed the pre-determined number, process 800 may move to step 816. At step 816, the control circuitry can determine whether a set of most recently received write commands in the queue is sequential. If, at step 816, the control circuitry determines that the set of most recently received write commands is sequential, process 800 may move to step 818.
  • At step 818, the control circuitry can wait to receive additional write commands in the queue. Then, at step 820, the control circuitry can determine if at least one additional write command has been received from a file system (e.g., file system 210 of FIG. 2). For example, the control circuitry can make this determination after a short period of time (e.g., a few milliseconds). If, at step 820, the control circuitry determines that at least one additional write command has been received, process 800 may move to step 814, where the write commands can be combined into a write-multi command. If, at step 820, the control circuitry instead determines that no additional write command has been received, process 800 may move to step 822. At step 822, the control circuitry can transfer the write command to a buffer (e.g., via path 520 of FIG. 5). Process 800 may then end at step 810.
  • If, at step 816, the control circuitry instead determines that the set of most recently received write commands is not sequential, process 800 may move to step 824. At step 824, the control circuitry can determine whether the number of dispatched write commands immediately preceding the write command exceeds a pre-determined number. If so, process 800 may move to step 818, where the control circuitry can wait to receive additional commands in the queue. If, at step 824, the control circuitry instead determines that the number of dispatched write commands does not exceed the pre-determined number, process 800 may move to step 822, where the control circuitry can transfer the write command to the buffer.
  • Persons skilled in the art will appreciate that processes 700 and 800 of FIGS. 7A-7B and 8 are merely illustrative. Any of the steps may be removed, modified, or combined, and any additional steps may be added, without departing from the scope of the invention.
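  • As a rough illustration only, the following C sketch condenses the decision flow of process 800 into a single function. All identifiers (write_cmd_t, process_800) and threshold constants are hypothetical assumptions for illustration; the actual steps, ordering, and thresholds would depend on the system configuration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical descriptor for a queued write command. */
typedef struct {
    uint32_t lba;      /* starting logical block address */
    uint32_t nblocks;  /* length in logical blocks       */
} write_cmd_t;

enum action { DISPATCH_FROM_QUEUE, COMBINE_WRITE_MULTI,
              WAIT_FOR_MORE, TRANSFER_TO_BUFFER };

/* Illustrative thresholds; real values would be tuned per system. */
#define SMALL_CMD_BLOCKS   8  /* step 806: "small" if below this size  */
#define QUEUE_DEPTH_LIMIT  4  /* step 812: combine if queue holds more */
#define RECENT_DISPATCHES  3  /* step 824: recent-dispatch threshold   */

/* Mirrors steps 804-824: decide what to do with the command at queue[0]. */
enum action process_800(const write_cmd_t *queue, size_t depth,
                        size_t recent_dispatched, bool recently_sequential)
{
    if (queue[0].nblocks >= SMALL_CMD_BLOCKS)   /* steps 804-808 */
        return DISPATCH_FROM_QUEUE;
    if (depth > QUEUE_DEPTH_LIMIT)              /* steps 812-814 */
        return COMBINE_WRITE_MULTI;
    if (recently_sequential)                    /* steps 816-818 */
        return WAIT_FOR_MORE;
    if (recent_dispatched > RECENT_DISPATCHES)  /* step 824      */
        return WAIT_FOR_MORE;
    return TRANSFER_TO_BUFFER;                  /* step 822      */
}
```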

Abstract

Systems and methods are disclosed for efficient buffering for a system having non-volatile memory (“NVM”). In some embodiments, a control circuitry of a system can use heuristics to determine whether to perform buffering of one or more write commands received from a file system. In other embodiments, the control circuitry can minimize read energy and buffering overhead by efficiently re-ordering write commands in a queue along page-aligned boundaries of a buffer. In further embodiments, the control circuitry can optimally combine write commands from a buffer with write commands from a queue. After combining the commands, the control circuitry can dispatch the commands in a single transaction.

Description

    BACKGROUND OF THE DISCLOSURE
  • NAND flash memory, as well as other types of non-volatile memories (“NVMs”), is commonly used for mass storage. For example, consumer electronics such as portable media players often include flash memory to store music, videos, and other media.
  • Because a file system of a device may perform multiple small synchronous input/outputs (“I/Os”), die parallelism cannot be achieved in some cases. This consequently may reduce system bandwidth, and increase the latency for programming the pages of a NVM.
  • In some scenarios, the latency for programming a NVM can be improved by buffering one or more I/Os prior to programming the I/Os to the NVM. However, in other scenarios, if I/Os can instead be directly programmed to the NVM, indiscriminate buffering of all I/Os received from a file system may actually slow down the programming process.
  • SUMMARY OF THE DISCLOSURE
  • Systems and methods are disclosed for efficient buffering of a system having non-volatile memory (“NVM”). The system can select to buffer writes where the buffering will increase system performance, and select to program writes directly to a NVM where the buffering would be a net performance burden. In some embodiments, a system can use heuristics to determine whether to perform buffering of one or more write commands received from a file system. In other embodiments, the system can minimize read energy and buffering overhead by efficiently re-ordering write commands in a queue along page-aligned boundaries of a buffer. In further embodiments, the system can optimally combine write commands from a buffer with write commands from a queue. After combining the commands, the system can dispatch the commands in a single transaction.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other aspects and advantages of the invention will become more apparent upon consideration of the following detailed description, taken in conjunction with accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
  • FIGS. 1 and 2 are block diagrams of electronic devices configured in accordance with various embodiments of the invention;
  • FIG. 3 is a functional view of a non-volatile memory in accordance with various embodiments of the invention;
  • FIG. 4 is a graphical view of illustrative timing diagrams for various systems;
  • FIG. 5 is a graphical view of an illustrative programming flow for a system configured in accordance with various embodiments of the invention;
  • FIG. 6 is a graphical view of an illustrative buffer configured in accordance with various embodiments of the invention;
  • FIGS. 7A and 7B are flowcharts of an illustrative process for efficient buffering in a non-volatile memory in accordance with various embodiments of the invention; and
  • FIG. 8 is a flowchart of an illustrative process for determining whether to buffer in a non-volatile memory in accordance with various embodiments of the invention.
  • DETAILED DESCRIPTION OF THE DISCLOSURE
  • Systems and methods for efficient buffering of a system having non-volatile memory (“NVM”) are provided. A control circuitry of a system can use heuristics to determine whether or not to buffer write commands received from a file system. Any suitable heuristics can be used, such as, for example, the size of a write command, the number of dispatched write commands immediately preceding the write command, the number of remaining write commands saved in a queue, the amount of available space in memory, the size of a buffer, and input/output (“I/O”) patterns.
  • In some embodiments, the control circuitry can determine whether or not to buffer a write command based on the size of the command. For example, if the control circuitry determines that the write command is a relatively small command (e.g., the size of the write command is below a pre-determined threshold), the control circuitry can select a buffered path for the write command (e.g., transfer the write command to a buffer in memory). If the control circuitry instead determines that the write command is a relatively large command (e.g., the size of the write command exceeds a pre-determined threshold), the control circuitry can select a write path for the write command (e.g., direct a bus controller to dispatch the write command to a NVM).
  • In some embodiments, instead of automatically selecting a buffered path for small write commands, the control circuitry can use additional heuristics to determine if write commands saved in a queue can be combined into a write-multi command directly in the queue. Once the commands have been combined into the write-multi command, the control circuitry can dispatch the write-multi command directly from the queue to a NVM via a write path.
  • In other embodiments, in order to maximize concurrency, the control circuitry can attempt to re-order the write commands in a queue along page-aligned boundaries of a buffer. Consequently, read energy can be minimized on a subsequent read because user data is often read with the same alignments and granularity as the alignments and granularity with which the user data was programmed. In further embodiments, a control circuitry can combine buffered write commands and queue write commands into the same batch or transaction to improve system efficiency.
  • FIG. 1 illustrates a block diagram of electronic device 100. In some embodiments, electronic device 100 can be or can include a portable media player, a cellular telephone, a pocket-sized personal computer, a personal digital assistant (“PDA”), a desktop computer, a laptop computer, or any other suitable type of electronic device.
  • Electronic device 100 can include system-on-a-chip (“SoC”) 110 and non-volatile memory (“NVM”) 120. Non-volatile memory 120 can include a NAND flash memory based on floating gate or charge trapping technology, NOR flash memory, erasable programmable read only memory (“EPROM”), electrically erasable programmable read only memory (“EEPROM”), Ferroelectric RAM (“FRAM”), magnetoresistive RAM (“MRAM”), or any combination thereof.
  • NVM 120 can be organized into “blocks”, which can be the smallest erasable unit, and further organized into “pages”, which can be the smallest unit that can be programmed or read. In some embodiments, NVM 120 can include multiple integrated circuits, where each integrated circuit may have multiple blocks. Memory locations (e.g., blocks or pages of blocks) from corresponding integrated circuits may form “super blocks”. Each memory location (e.g., page or block) of NVM 120 can be referenced using a physical address (e.g., a physical page address or physical block address).
  • System-on-a-chip 110 can include SoC control circuitry 112, memory 114, encryption module 116, and NVM interface 118. SoC control circuitry 112 can control the general operations and functions of SoC 110 and the other components of SoC 110 or device 100. For example, responsive to user inputs and/or the instructions of an application or operating system, SoC control circuitry 112 can issue read or write commands to NVM interface 118 to obtain data from or store data in NVM 120. For clarity, data that SoC control circuitry 112 may request for storage or retrieval may be referred to as “user data,” even though the data may not be directly associated with a user or user application. Rather, the user data can be any suitable sequence of digital information generated or obtained by SoC control circuitry 112 (e.g., via an application or operating system).
  • SoC control circuitry 112 can include any combination of hardware, software, and firmware, and any components, circuitry, or logic operative to drive the functionality of electronic device 100. For example, SoC control circuitry 112 can include one or more processors that operate under the control of software/firmware stored in NVM 120 or memory 114.
  • SoC control circuitry 112 can dispatch one or more commands to NVM 120. In some embodiments, SoC control circuitry 112 can include a block device driver or wrapper that can be configured to dispatch application programming interface (“API”) operations to NVM 120 or a controller of NVM 120. In some embodiments, SoC control circuitry 112 can modify one or more parameters of the block device driver or wrapper in order to transfer information to NVM 120. For example, by modifying the one or more parameters, SoC control circuitry 112 can transfer information associated with commands used to access NVM 120 (e.g., read, program, erase, and/or write-multi commands).
  • Memory 114 can include any suitable type of volatile memory, such as random access memory (“RAM”) (e.g., static RAM (“SRAM”), dynamic random access memory (“DRAM”), synchronous dynamic random access memory (“SDRAM”), double-data-rate (“DDR”) RAM), cache memory, read-only memory (“ROM”), or any combination thereof. Memory 114 can include a data source that can temporarily store user data for programming into or reading from non-volatile memory 120. In some embodiments, memory 114 may act as the main memory for any processors implemented as part of SoC control circuitry 112.
  • In some embodiments, memory 114 can include one or more command queues (not shown in FIG. 1) for saving commands (e.g., read, write, and/or trim commands) received from a file system that have not yet been dispatched to NVM 120. The one or more command queues can be block-device level queues. For example, memory 114 can include a read queue for retaining read commands, a write queue for retaining write commands, and a trim queue for retaining trim commands. Consequently, SoC control circuitry 112 can scan one or more queues in order to select one or more commands that may be dispatched to NVM 120. After the one or more commands have been dispatched, SoC control circuitry 112 can remove those commands from the queue. Persons skilled in the art will appreciate that memory 114 may instead include one queue configured to retain all types of commands (e.g., read, write, and trim commands).
  • The one or more queues will generally be serviced until the queues are empty. In some embodiments, SoC control circuitry 112 can have a set of fairness rules for servicing each of the separate queues. For example, SoC control circuitry 112 may attempt to service the queues such that each of the queues is serviced at approximately the same rate as the number of commands that are currently included in the queue. The order in which the queues are serviced affects the quality of service of the system.
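  • One possible reading of such a fairness rule is sketched below in C, with hypothetical types (cmd_queue_t, pick_next_queue): always service the deepest queue, so each queue drains at a rate roughly proportional to its backlog. The disclosure does not fix a particular rule; this specific policy is an assumption for illustration.

```c
#include <stddef.h>

/* Hypothetical block-device level queue (one each for read/write/trim). */
typedef struct {
    int    cmds[64];   /* opaque command handles                        */
    size_t head, tail; /* consumer/producer counts (tail >= head)       */
} cmd_queue_t;

static size_t depth(const cmd_queue_t *q) { return q->tail - q->head; }

/* Service the deepest queue next, so queues with more pending commands
 * are serviced more often (one illustrative fairness rule). */
cmd_queue_t *pick_next_queue(cmd_queue_t *read_q, cmd_queue_t *write_q,
                             cmd_queue_t *trim_q)
{
    cmd_queue_t *qs[3] = { read_q, write_q, trim_q };
    cmd_queue_t *best = qs[0];
    for (int i = 1; i < 3; i++)
        if (depth(qs[i]) > depth(best))
            best = qs[i];
    return depth(best) ? best : NULL; /* NULL when all queues are empty */
}
```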
  • In some embodiments, because commands may not be able to be saved in the queues indefinitely (e.g., other threads or processes may be waiting for those I/Os to complete), memory 114 can include a buffer (not shown in FIG. 1) that can provide for longer-term storage of commands. For example, in some cases, SoC control circuitry 112 can select one or more write commands saved in a queue, and transfer the selected write commands to the buffer. After saving the write commands to the buffer, SoC control circuitry 112 can remove those write commands from the queue.
  • Persons skilled in the art will appreciate that the buffer in memory 114 can have any suitable size. For example, the buffer may be a relatively small buffer capable of storing less than one page of data. However, if the minimum program granularity is one page or larger, the buffer may be a relatively large buffer capable of storing one or more pages of data. For instance, the buffer may have a size of one page, one stripe, multiple stripes, a super block, or any other suitable size.
  • Encryption module 116 can be or may include any hardware or software, or combination thereof, configured to perform encryption and decryption based on a suitable cipher. For example, encryption module 116 can be based on the Advanced Encryption Standard (“AES”), Data Encryption Standard (“DES”), or RSA. Encryption module 116 can provide security for sensitive data, such as personal information or billing information, stored on NVM 120 or transmitted/received to/from SoC 110. Although encryption module 116 is shown in FIG. 1 as included in SoC 110, persons skilled in the art will appreciate that encryption module 116 may instead be implemented in NVM 120.
  • In some embodiments, encryption module 116 can encrypt and decrypt data using one or more “encryption seeds” provided by control circuitry (e.g., SoC control circuitry 112 or a controller on NVM 120), which may be required by the encryption algorithm to perform encryption or decryption. In some embodiments, and particularly for AES-based encryption modules, the encryption seeds can include a key and an initialization vector (“IV”). To recover the original unencrypted data from encrypted data, the encryption seeds used for decryption may need to be the same as the seeds originally used for encryption.
  • NVM interface 118 may include any suitable combination of hardware, software, and/or firmware configured to act as an interface or driver between SoC control circuitry 112 and NVM 120. For any software modules included in NVM interface 118, corresponding program code may be stored in NVM 120 or memory 114.
  • NVM interface 118 can perform a variety of functions that allow SoC control circuitry 112 to access NVM 120 and to manage the memory locations (e.g., pages, blocks, super blocks, integrated circuits) of NVM 120 and the data stored therein (e.g., user data). For example, NVM interface 118 can interpret the read or write requests from SoC control circuitry 112, perform wear leveling, and generate read and program instructions compatible with the bus protocol of NVM 120.
  • While NVM interface 118 and SoC control circuitry 112 are shown as separate modules, this is intended only to simplify the description of the embodiments of the invention. It should be understood that these modules may share hardware components, software components, or both. For example, SoC control circuitry 112 may execute a software-based memory driver for NVM interface 118.
  • In some embodiments, electronic device 100 can include a target device, such as a flash memory drive or Secure Digital (“SD”) card, that includes NVM 120 and some or all portions of NVM interface 118 (e.g., a translation layer, discussed below). In these embodiments, SoC 110 or SoC control circuitry 112 may act as the host controller for the target device. For example, as the host controller, SoC 110 can issue read and write requests to the target device.
  • FIG. 2 illustrates a block diagram of electronic device 200, which may illustrate in greater detail some of the firmware, software, and/or hardware components of electronic device 100 (FIG. 1) in accordance with various embodiments. Electronic device 200 may have any of the features and functionalities described above in connection with FIG. 1, and vice versa. As shown, dashed lines demarcate the layers. It is understood that the depiction of which components fall within the demarcation lines is merely illustrative and that one or more components can be affiliated with a different layer.
  • Electronic device 200 can include file system 210, NVM driver 212, NVM bus controller 216, and NVM 220. In some embodiments, file system 210 and NVM driver 212 may be software or firmware modules, and NVM bus controller 216 and NVM 220 may be hardware modules. Accordingly, in these embodiments, NVM driver 212 may represent the software or firmware aspect of NVM interface 218, and NVM bus controller 216 may represent the hardware aspect of NVM interface 218.
  • File system 210 can include any suitable type of file system, such as a File Allocation Table (“FAT”) file system or a Hierarchical File System Plus (“HFS+”), and may be part of the operating system of electronic device 200 (e.g., part of SoC control circuitry 112 of FIG. 1). In some embodiments, file system 210 may include a flash file system, which provides a logical to physical mapping of pages. In these embodiments, file system 210 may perform some or all of the functionalities of NVM driver 212 discussed below, and therefore file system 210 and NVM driver 212 may or may not be separate modules.
  • File system 210 may manage file and folder structures for the application and operating system. File system 210 may operate under the control of an application or operating system running on electronic device 200, and may provide write and read commands to NVM driver 212 when the application or operating system requests that information be read from or stored in NVM 220. Along with each read or write command, file system 210 can provide a logical address to indicate where the user data should be read from or written to, such as a logical page address or a logical block address (“LBA”) with a page offset.
  • File system 210 may provide read and write requests to NVM driver 212 that are not directly compatible with NVM 220. For example, the logical addresses may use conventions or protocols typical of hard-drive-based systems. A hard-drive-based system, unlike flash memory, can overwrite a memory location without first performing a block erase. Moreover, hard drives may not need wear leveling to increase the lifespan of the device. Therefore, NVM interface 218 can perform any functions that are memory-specific, vendor-specific, or both to handle file system requests and perform other management functions in a manner suitable for NVM 220.
  • NVM driver 212 can include translation layer 214. In some embodiments, translation layer 214 may be or include a flash translation layer (“FTL”). On a write command, translation layer 214 can map the provided logical address to a free, erased physical location on NVM 220. On a read command, translation layer 214 can use the provided logical address to determine the physical address at which the requested data is stored. Because each NVM may have a different layout depending on the size or vendor of the NVM, this mapping operation may be memory and/or vendor-specific. Translation layer 214 can perform any other suitable functions in addition to logical-to-physical address mapping. For example, translation layer 214 can perform any of the other functions that may be typical of flash translation layers, such as garbage collection and wear leveling.
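  • A minimal sketch of the logical-to-physical mapping a translation layer might keep, assuming a flat one-entry-per-LBA table and an append-only allocator; the identifiers (l2p, ftl_map_write) are hypothetical, and wear leveling, garbage collection, and bounds handling are omitted:

```c
#include <stdint.h>

#define N_LBAS   4096u        /* illustrative logical space  */
#define UNMAPPED 0xFFFFFFFFu

static uint32_t l2p[N_LBAS];    /* logical-to-physical map     */
static uint32_t next_free_page; /* next erased page to program */

void ftl_init(void)
{
    for (uint32_t i = 0; i < N_LBAS; i++)
        l2p[i] = UNMAPPED;
    next_free_page = 0;
}

/* On a write, map the LBA to a fresh erased page (flash cannot
 * overwrite in place; a real FTL also erases and garbage-collects). */
uint32_t ftl_map_write(uint32_t lba)
{
    l2p[lba] = next_free_page++;
    return l2p[lba];
}

/* On a read, look up where the data was last programmed. */
uint32_t ftl_map_read(uint32_t lba)
{
    return l2p[lba];  /* UNMAPPED if the LBA was never written */
}
```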
  • NVM driver 212 may interface with NVM bus controller 216 to complete NVM access commands (e.g., program, read, and erase commands). Bus controller 216 may act as the hardware interface to NVM 220, and can communicate with NVM 220 using the bus protocol, data rate, and other specifications of NVM 220.
  • In some embodiments, upon receiving a command (e.g., a write command) from file system 210, NVM interface 218 can direct NVM bus controller 216 to dispatch a command (e.g., an API operation) to NVM 220. In some embodiments, NVM interface 218 may use translation layer 214 to obtain the physical addresses of one or more memory locations of NVM 220.
  • NVM interface 218 may manage NVM 220 based on memory management data, sometimes referred to herein as “metadata”. The metadata may be generated by NVM driver 212 or may be generated by a module operating under the control of NVM driver 212. For example, metadata can include any information used for managing the mapping between logical and physical addresses, bad block management, wear leveling, error-correcting code (“ECC”) data used for detecting or correcting data errors, or any combination thereof. The metadata may include data provided by file system 210 along with the user data, such as a logical address. Thus, in general, “metadata” may refer to any information about or relating to user data or used generally to manage the operation and memory locations of a non-volatile memory.
  • NVM interface 218 may be configured to store metadata in NVM 220. In some embodiments, NVM interface 218 may store metadata associated with user data at the same memory location (e.g., page) in which the user data is stored. For example, NVM interface 218 may store user data, the associated logical address, and ECC data for the user data at one or more memory locations of NVM 220. NVM interface 218 may also store other types of metadata about the user data in the same memory location.
  • NVM interface 218 may store the logical address so that, on power-up of NVM 220 or during operation of NVM 220, electronic device 200 can determine what data resides at that location. In particular, because file system 210 may reference the user data according to its logical address and not its physical address, NVM interface 218 may store the user data and logical address together to maintain their association. This way, even if a separate table maintaining the physical-to-logical mapping in NVM 220 becomes outdated, NVM interface 218 may still determine the proper mapping at power-up or reboot of electronic device 200, for example.
  • As discussed above, a NVM (e.g., NVM 120 of FIG. 1 or NVM 220 of FIG. 2) can be organized into dies, blocks, pages, super blocks, and the like. For example, FIG. 3 shows a schematic layout of NVM 300. NVM 300 can be the same as or similar to NVM 120 (FIG. 1) or NVM 220 (FIG. 2). FIG. 3 is merely meant to illustrate the organizational layout of NVM 300 and does not indicate an actual, physical layout of the non-volatile memory. For instance, although die 0 is illustrated as being next to die 1 in FIG. 3, this is merely for illustrating the functional relationship of these dies, and in the actual, physical layout of NVM 300, these dies may or may not be located near one another.
  • Moreover, although a certain number of dies, blocks, and pages are shown in FIG. 3, this is merely for the purpose of illustration and one skilled in the art would appreciate that NVM 300 could include any suitable number of dies, blocks, and pages. As one illustration, each die of NVM 300 can include 4096 blocks, and each block may include 64, 128, or 192 pages based on whether the block is an SLC block, 2-bit MLC block, or 3-bit MLC block. In addition, each page of NVM 300 can have any suitable size such as, for example, 2 KiB, 4 KiB, 8 KiB, or 16 KiB.
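  • As a worked example of the geometry just described (illustrative numbers only, not values fixed by the disclosure), the raw capacity of a four-die NVM with 4096 blocks per die, 128 pages per 2-bit MLC block, and 8 KiB pages can be computed as follows:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint64_t dies            = 4;
    const uint64_t blocks_per_die  = 4096;
    const uint64_t pages_per_block = 128;   /* 2-bit MLC block */
    const uint64_t page_bytes      = 8192;  /* 8 KiB page      */

    uint64_t total = dies * blocks_per_die * pages_per_block * page_bytes;
    /* 4 * 4096 * 128 * 8 KiB = 16 GiB of raw storage */
    printf("raw capacity: %llu GiB\n", (unsigned long long)(total >> 30));
    return 0;
}
```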
  • As illustrated by FIG. 3, NVM 300 can include one or more dies (i.e., integrated circuits), such as die 0, die 1, die 2, and die 3. Each die of NVM 300 may be organized into one or more “blocks”. For example, dies 0-3 are each illustrated as being organized into blocks 0-3.
  • Each block of the dies may be organized into one or more pages. For example, block 0 of each of dies 0-3 is illustrated as being organized into pages 0-3. Each page of NVM 300 can include any suitable information. For example, the pages can include user data, metadata, or both. In some embodiments, metadata such as ECC data can be included in the pages to provide for error detection and/or correction of the information stored in NVM 300.
  • Over time, page sizes have increased in order to compensate for the longer time that it takes to program a NVM. That is, because memory cells of NVMs have decreased in size, more precise pulses have to be applied to memory cells of a NVM in order to program data in those cells. Accordingly, by increasing the size of pages, the same amount of user data can be programmed during any particular period of time. The net effect of this is that a user of an electronic device will not experience a reduction in the overall programming speed of a NVM.
  • NVM 300 can also include one or more super blocks that include one block from each die that has the same position or “block number” in the die. For example, super block 0 of NVM 300 can include block 0 of each of dies 0-3. Similarly, super block 1 of NVM 300 can include block 1 of each of dies 0-3, super block 2 of NVM 300 can include block 2 of each of dies 0-3, and so forth.
  • Super blocks can be formed by virtually linking or “striping” together one or more blocks. Blocks need not be in the same row of each die to be virtually linked as a super block. In fact, blocks may be chosen randomly from two or more dies to form a super block. In some embodiments, a super block can include only one block. Super blocks provide operational parallelism, thereby enabling programming, reading, and erase operations to be performed on blocks located in different dies in parallel.
  • In addition, pages from two or more dies may be virtually linked together to form super pages or “stripes”. For instance, stripe 302 can include page 0 from block 0 of each of dies 0-3. Similarly, stripe 304 can include page 3 from block 0 of each of dies 0-3. A translation layer (e.g., translation layer 214 of FIG. 2) being implemented in an NVM interface may keep track of super blocks or stripes. Generally, the programming time can remain the same for pages that are included in a particular stripe (e.g., stripe 302 or 304).
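  • The following sketch shows one hypothetical way to describe a stripe: the same (block, page) slot gathered across all dies, as with stripe 302. The stripe_t type and make_stripe helper are assumptions for illustration, not structures defined by the disclosure:

```c
#include <stdint.h>

#define N_DIES 4

/* A physical page address in (die, block, page) form. */
typedef struct { uint32_t die, block, page; } page_addr_t;

/* Hypothetical stripe descriptor: one page per die, virtually linked. */
typedef struct { page_addr_t pages[N_DIES]; } stripe_t;

stripe_t make_stripe(uint32_t block, uint32_t page)
{
    stripe_t s;
    for (uint32_t d = 0; d < N_DIES; d++) {
        s.pages[d].die   = d;
        s.pages[d].block = block; /* blocks need not share a row in general */
        s.pages[d].page  = page;
    }
    return s; /* all N_DIES pages can then be programmed in parallel */
}
```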
  • A file system of a system (e.g., file system 210 of FIG. 2) can operate in a logical space. Thus, as discussed above, each file in the system can map to one or more LBAs. Correspondingly, the one or more LBAs can be mapped to one or more physical page addresses in the NVM.
  • However, when a NVM is configured to have larger pages capable of storing multiple LBAs, the file system may sometimes issue a write command that may not completely fill up a physical page in a NVM. In conventional systems, a control circuitry (e.g., control circuitry 112 of FIG. 1) can pad out the unfilled portions of a page with invalid data, and program the write command along with the invalid data to the NVM. This operation wastes space in the NVM, and may consequently increase the amount of garbage collection (“GC”) that needs to be performed on the NVM.
  • Accordingly, in order to increase system efficiency, a system can be configured to dispatch multiple write commands to a NVM in a single transaction.
  • FIG. 4 shows illustrative timing diagrams for various systems. For example, as shown in FIG. 4, the programming time (t_prog) is not substantially different whether one page (as shown in timing diagram 400) or multiple pages (as shown in timing diagram 402) are being programmed in a single operation (e.g., as a write-multi command). This may be particularly true if the multiple pages are included in a single stripe.
  • In addition, the time for startup programming (t_startup) also does not vary substantially whether one page or multiple pages are being programmed in a single operation. As used herein, the “time for startup programming” refers to the time that it takes to activate the NVM interface (e.g., NVM interface 218 of FIG. 2).
  • Moreover, under most scenarios, the time that it takes to transfer contents to a page buffer (t_transfer) is relatively short as compared to t_prog. For example, as shown in timing diagram 402, t_transfer for programming a write-multi command can still be of shorter duration as compared to t_prog. However, as shown in timing diagram 404, programming multiple pages separately is less efficient due to the combination of multiple t_progs. The efficiency of the system is especially impacted if the amount of user data that can be programmed each time is relatively small.
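  • To make the comparison concrete, the sketch below plugs in invented, order-of-magnitude latencies (the disclosure gives no figures) and contrasts programming four pages separately against one write-multi operation:

```c
#include <stdio.h>

/* Illustrative (not measured) latencies in microseconds. */
#define T_STARTUP   20.0   /* activate the NVM interface     */
#define T_TRANSFER  50.0   /* fill the page buffer, per page */
#define T_PROG    1300.0   /* program pulse, per operation   */

int main(void)
{
    int pages = 4;
    double separate = pages * (T_STARTUP + T_TRANSFER + T_PROG);
    double multi    = T_STARTUP + pages * T_TRANSFER + T_PROG;
    /* e.g., 5480 us vs 1520 us for the numbers above: one shared
     * t_prog dominates the savings of the write-multi path. */
    printf("separate: %.0f us, write-multi: %.0f us\n", separate, multi);
    return 0;
}
```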
  • Consequently, to increase the amount of die programming that can be performed in parallel and the amount of user data that can be programmed in a particular I/O transaction, a control circuitry can combine multiple write commands into a single write-multi command. Persons skilled in the art will appreciate that a write-multi command can be formed using any suitable approach. For example, the control circuitry can form a write-multi command by combining a set of logical addresses, a set of counters, and a set of buffers associated with the multiple commands into an array of logical addresses, an array of counters, and an array of buffers, respectively.
  • After forming the write-multi command, the control circuitry can dispatch the write-multi command to a NVM (e.g., in the form of an API operation). By forming and dispatching a write-multi command, user data can be programmed more efficiently in the NVM without wasting valuable space.
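  • A minimal sketch of that formation step, assuming parallel fixed-size arrays as the text describes; the write_multi_t layout and MAX_MULTI bound are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_MULTI 32

/* A single write command: logical address, sector count, data buffer. */
typedef struct {
    uint32_t lba;
    uint32_t count;
    void    *buf;
} write_cmd_t;

/* Hypothetical write-multi command: the per-command logical addresses,
 * counters, and buffers gathered into parallel arrays. */
typedef struct {
    size_t   n;
    uint32_t lba[MAX_MULTI];
    uint32_t count[MAX_MULTI];
    void    *buf[MAX_MULTI];
} write_multi_t;

void write_multi_build(write_multi_t *m, const write_cmd_t *cmds, size_t n)
{
    m->n = n;
    for (size_t i = 0; i < n; i++) {
        m->lba[i]   = cmds[i].lba;
        m->count[i] = cmds[i].count;
        m->buf[i]   = cmds[i].buf;
    }
    /* The assembled command would then be dispatched to the NVM
     * as a single API operation. */
}
```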
  • In some embodiments, the system may have several options for splitting data writes into multiple write paths. FIG. 5 shows a graphical view of an illustrative programming flow 500 for a system.
  • As discussed, a file system (e.g., file system 210 of FIG. 2) can issue one or more commands (e.g., read, write, and/or trim commands) that can be saved in one or more queues in the system. For example, as shown in FIG. 5, queue 502 can represent a write queue that can be configured to retain one or more write commands. However, persons skilled in the art will appreciate that the system may include additional queues for storing read and/or trim commands. Alternatively, queue 502 can be configured to store write, read, and trim commands.
  • In some embodiments, after a write command is received in queue 502, a control circuitry (e.g., control circuitry 112 of FIG. 1) can select one of two paths for dispatching the write command. For example, the control circuitry can select a write path (e.g., path 504) for dispatching the write command to NVM 510. NVM 510 can be the same as or similar to NVM 120 (FIG. 1), NVM 220 (FIG. 2), or NVM 300 (FIG. 3).
  • In some cases, by selecting path 504, the write command can be dispatched directly to hardware engine 506. Hardware engine 506 can be, for example, a direct memory access (“DMA”) engine.
  • In some embodiments, prior to dispatching the write command to hardware engine 506, the control circuitry can encrypt the write command using an encryption module (e.g., encryption module 116 of FIG. 1). As discussed, the encryption module can encrypt the write command using any suitable encryption standard such as, for example, AES, DES, or RSA.
  • Subsequently, the control circuitry can pass the write command to NVM interface 508. NVM interface 508 can be the same as or substantially similar to NVM interface 218 of FIG. 2. Once NVM interface 508 receives the write command, a NVM bus controller (e.g., NVM bus controller 216 of FIG. 2) can be directed to dispatch the write command to NVM 510 across bus 512.
  • In some cases, instead of selecting path 504, the control circuitry can select a buffered path (e.g., path 520) for a particular write command. By selecting path 520, the control circuitry can transfer the write command from queue 502 to buffer 522. As discussed above, buffer 522 can be stored in volatile memory (e.g., memory 114 of FIG. 1), and can be configured to store any suitable number of write commands.
  • By storing one or more write commands in buffer 522, the control circuitry can wait to receive additional write commands from the file system before directing a bus controller to concurrently dispatch multiple commands to NVM 510 via path 524. For example, the multiple commands can be combined as a write-multi command, which can then be dispatched to NVM 510.
  • In some embodiments, the control circuitry can direct an encryption module to encrypt the write command before the write command is transferred to buffer 522. After encrypting the write command, the control circuitry can store the encrypted write command in buffer 522.
  • In other embodiments, the control circuitry can transfer an unencrypted write command to buffer 522. In such embodiments, however, the control circuitry may need to copy one or more encryption seeds (e.g., one or more keys and IVs) to volatile memory (e.g., memory 114 of FIG. 1). Then, when the write command is ready to be dispatched from buffer 522, the control circuitry can direct an encryption module to use the stored encryption seeds to encrypt the write command. Consequently, for such a configuration, buffer 522 may be associated with unique encryption characteristics (e.g., a fixed set of encryption seeds).
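  • The sketch below illustrates this deferred-encryption configuration: the seeds are copied alongside the unencrypted command at buffering time and applied at flush time. All names are hypothetical, and engine_encrypt is a trivial XOR stand-in (not real encryption) so the sketch stays self-contained; a real system would invoke an AES-class engine:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical AES-style seeds: a key and an initialization vector. */
typedef struct {
    uint8_t key[16];
    uint8_t iv[16];
} enc_seeds_t;

/* A buffered, still-unencrypted write command plus its saved seeds. */
typedef struct {
    uint32_t    lba;
    uint8_t     data[4096];
    enc_seeds_t seeds;  /* copied to memory when the command is buffered */
} buffered_cmd_t;

/* Placeholder only: stands in for a real cipher engine. */
static void engine_encrypt(uint8_t *data, size_t len, const enc_seeds_t *s)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= s->key[i % sizeof s->key];
}

/* Buffer the command unencrypted, retaining its seeds for later. */
void buffer_cmd(buffered_cmd_t *slot, uint32_t lba,
                const uint8_t *data, const enc_seeds_t *seeds)
{
    slot->lba = lba;
    memcpy(slot->data, data, sizeof slot->data); /* caller supplies 4 KiB */
    slot->seeds = *seeds;
}

/* At flush time, apply the stored seeds before dispatching to the NVM. */
void flush_cmd(buffered_cmd_t *slot)
{
    engine_encrypt(slot->data, sizeof slot->data, &slot->seeds);
}
```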
  • In order to retain system coherency, one or more “side-band signals” may be issued by an application, operating system, or file system of a system. As used herein, “side-band signals” can refer to any suitable signal that may be generated by a cache algorithm of an application, operating system, or file system to maintain coherency. For example, one type of side-band signal may be a synchronized cache signal, which may be generated by one or more applications in the system. Another side-band signal may be a force unit access (“FUA”) signal, which can be generated by HFS+.
  • In response to receiving one or more of these side-band signals, the control circuitry can direct a bus controller to flush the buffer. That is, the control circuitry can direct the bus controller to dispatch all of the commands currently stored in the buffer to the NVM via path 524.
  • Because path 520 involves an extra step of copying a write command from queue 502 to buffer 522, additional traffic may be generated in the system. As a result, in some scenarios, path 520 may be less efficient as compared to path 504. For example, if the transferred command is a large command, it may be more efficient to directly program the command to NVM 510 via path 504.
  • On the other hand, if the control circuitry indiscriminately selects path 504 for dispatching all commands (including small commands), system performance may also decrease. Accordingly, the control circuitry may need to apply one or more heuristics to select a path that is best suited for dispatching a particular write command.
  • One of the heuristics that may be used is the size of a write command. Based at least in part on the size of the write command, the control circuitry can determine whether or not to buffer a write command.
  • For example, the control circuitry can first determine the size of a write command that is saved in queue 502. The control circuitry can then determine if the size of the write command is below a pre-determined threshold. In response to determining that the size of the write command is below the pre-determined threshold (e.g., the write command is a relatively small write command), the control circuitry can determine to perform buffering of the write command.
  • Consequently, the control circuitry can transfer the write command from queue 502 to buffer 522 via path 520. Once the write command has been stored in buffer 522, the command can be combined with other write commands at a later time.
  • If the control circuitry instead determines that the size of the write command is above the pre-determined threshold (e.g., the write command is a relatively large write command), the control circuitry can determine not to perform buffering of the write command. As a result, the control circuitry can direct a bus controller to dispatch the write command from queue 502 to NVM 510 via path 504.
  • Accordingly, by selecting a path based on the size of a write command, the control circuitry can reduce the total required bandwidth for executing write commands from an overall system perspective.
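  • Reduced to code, the size heuristic might look like the following, where the one-page threshold is an assumption rather than a value given by the disclosure:

```c
#include <stdint.h>

typedef enum { PATH_WRITE_504, PATH_BUFFERED_520 } write_path_t;

/* Illustrative threshold: writes below one 8 KiB page count as "small". */
#define BUFFER_THRESHOLD_BYTES 8192u

/* Size heuristic: small writes take the buffered path (520),
 * large writes are dispatched directly to the NVM (504). */
write_path_t select_path_by_size(uint32_t write_bytes)
{
    return (write_bytes < BUFFER_THRESHOLD_BYTES) ? PATH_BUFFERED_520
                                                  : PATH_WRITE_504;
}
```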
  • In other embodiments, in addition to the size of a write command, the control circuitry can use other heuristics to determine which of path 504 or path 520 to select. For example, the control circuitry can use one or more heuristics to determine if system efficiency can be improved by combining multiple write commands into a write-multi command directly in queue 502. In particular, once the write commands have been combined into the write-multi command, the write-multi command can be dispatched directly from queue 502 via path 504.
  • One of the heuristics that can be used is the number of write commands saved in queue 502. For instance, the control circuitry can determine whether the number of write commands saved in queue 502 exceeds a pre-determined number. In response to determining that the number of write commands exceeds the pre-determined number, the control circuitry can combine the write commands into a write-multi command, and can direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510. This way, the control circuitry can gain the benefits of concurrently dispatching multiple write commands, while also avoiding the extra step of copying a write command to buffer 522.
  • Another heuristic that may be used is I/O patterns. For example, the control circuitry can assume that I/Os generally arrive in batches. For instance, during a particular period of time, the control circuitry may be more likely to receive a burst of read commands or write commands rather than write commands interspersed with read commands. Consequently, if the control circuitry has already received a series of sequential writes, the control circuitry may expect to receive additional sequential writes in the near future.
  • Under such an assumption, the control circuitry can determine whether a set of the most recently received write commands in queue 502 is sequential. In response to determining that the set of the most recently received write commands is sequential, the control circuitry can wait to receive additional write commands in queue 502.
  • Once the control circuitry receives those additional write commands, the control circuitry can combine the additional write commands with the set of the most recently received commands into a write-multi command. The control circuitry can then direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510. Consequently, the control circuitry can bypass buffer 522 for sequential write commands.
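  • A simple way to test sequentiality, assuming each queued command carries a starting LBA and a length: the run is sequential if each command begins where the previous one ends. The helper below is an illustrative sketch with hypothetical names:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uint32_t lba, count; } write_cmd_t;

/* True if the last n commands in the queue form one ascending,
 * contiguous run (each command starts where the previous one ended). */
bool recent_writes_sequential(const write_cmd_t *q, size_t depth, size_t n)
{
    if (depth < n || n < 2)
        return false;
    for (size_t i = depth - n; i + 1 < depth; i++)
        if (q[i].lba + q[i].count != q[i + 1].lba)
            return false;
    return true; /* more sequential writes likely: keep queueing */
}
```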
  • In some embodiments, while the control circuitry is waiting to receive additional write commands in queue 502, the control circuitry can instead service a read queue and/or a trim queue stored in memory. As such, the control circuitry can ignore existing fairness rules, and allow write commands to stack up in queue 502.
  • As another example, the control circuitry can determine whether or not to buffer a small write command based on the number of dispatched write commands immediately preceding the write command and/or the amount of time that has elapsed since the last dispatched write command. That is, based on an observation of past access patterns, the control circuitry can determine that additional write commands may likely arrive in the near future. Consequently, it may be more efficient to hold on to a small write command in queue 502, and wait for additional write commands.
  • For instance, if the control circuitry determines that the number of dispatched write commands immediately preceding the write command exceeds a pre-determined number, the control circuitry can wait to receive one or more additional write commands in queue 502. Similarly, the control circuitry may wait to receive additional write commands if the amount of time that has elapsed since the last dispatched write command is less than a pre-determined time period (e.g., a few milliseconds).
  • Once the control circuitry receives one or more additional write commands in queue 502, the control circuitry can combine the additional write commands with the small write command into a write-multi command. The control circuitry can then direct a bus controller to dispatch the write-multi command from queue 502 to NVM 510.
  • In the examples discussed above, persons skilled in the art will appreciate that the control circuitry can transfer one or more write commands from queue 502 to buffer 522 after a period of time has elapsed (e.g., a few milliseconds). That is, if the control circuitry does not actually receive the expected write commands, the control circuitry can transfer the small write command to buffer 522.
  • As yet another example, the control circuitry can determine whether or not to buffer a small write command based on the number of remaining write commands saved in queue 502. For instance, if queue 502 only has a single write command or a small number of write commands, the control circuitry can transfer the small write command to buffer 522.
  • As a further example, the control circuitry can determine whether to perform buffering based at least in part on the amount of available space in memory. For instance, if the amount of available space in memory is below a pre-determined threshold, the control circuitry can select path 504 and bypass buffering. On the other hand, if the amount of available space is above the pre-determined threshold, the control circuitry can select path 520, and buffer a small write command.
  • As yet a further example, the control circuitry may use past performance of the I/O scheduling algorithm to determine whether or not to buffer. For instance, the control circuitry can maintain a scorecard of past performance of the scheduling algorithm, which can include the success rate of past buffering decisions. If the control circuitry determines that the success rate is relatively low, the control circuitry can adapt the heuristics used in determining whether or not to buffer.
  • As another example, the control circuitry can determine whether or not to buffer based on the size of the buffer, or, particularly, the size of the buffer relative to the size of a write command. For example, if the size of the write command is the same as or larger than the size of buffer 522, the control circuitry can determine not to transfer the write command to buffer 522, and instead select to dispatch the write command directly to NVM 510 via path 504.
  • If, however, the size of the buffer is greater than the size of the write command, the control circuitry can apply one or more algorithms in order to reduce fragmentation across dies for a particular write span. In particular, an assumption can be made that user data is generally read from a NVM with the same granularity as the granularity with which the user data was programmed (e.g., along similar boundaries). For example, a system may save a text document to a NVM. Subsequently, the system will likely load the entire document as well. As another example, when a system programs a database in a NVM, the database is programmed using small, random updates. Likewise, at a later time, the database will likely be accessed using random reads. As such, the system can expect that user data associated with a sequence of small write commands will be read simultaneously.
  • Conventionally, a system may separately program each write command to a NVM. As a result, the user data associated with the write commands may be split among multiple dies, which can increase the energy (e.g., the overall power consumption) that it will take to read the user data at a later time. This is because each die needs to be activated in order to read the user data.
  • Accordingly, in order to maximize concurrency (e.g., minimize channel doubling) and minimize read energy, the control circuitry can attempt to re-order the write commands in a queue along page-aligned boundaries of a buffer. In particular, the control circuitry can attempt to select one or more commands from a queue in order to fill a buffer along one or more page-aligned boundaries.
  • Thus, under such a scenario, the system may take page alignment into consideration when storing write commands into a buffer. For example, as shown in FIG. 6, buffer 600 includes 8 page-aligned portions (e.g., page-aligned portions 0-7), where each page-aligned portion includes space to hold four LBAs. Each page-aligned portion of buffer 600 may be configured to store the same amount of data as a page of a NVM (e.g., NVM 300 of FIG. 3 or NVM 510 of FIG. 5). Persons skilled in the art will appreciate that buffer 600 may include any suitable number of page-aligned portions.
  • For example, as shown, page-aligned portions 0 and 1 of buffer 600 have been completely filled with write commands, and page-aligned portion 2 of buffer 600 has only been partially filled. In particular, write commands 0-3 are stored in page-aligned portion 0, write commands 4-7 are stored in page-aligned portion 1, and write commands 8 and 9 are stored in page-aligned portion 2. The remaining portions of buffer 600 are empty.
  • Based on this information, the control circuitry can determine that the write commands stored in buffer 600 are not currently aligned with at least one page boundary (e.g., page-aligned portion 2 has not been completely filled). In response to determining that the write commands are not aligned in buffer 600, the control circuitry can detect a fill size that is needed to fill buffer 600 to at least one page boundary. For page-aligned portion 2, for instance, the control circuitry can determine that the fill size is the size of two LBAs (e.g., sections 602 and 604).
  • The control circuitry can then determine if there is at least one write command in a queue (e.g., queue 502 of FIG. 5) that has a size equal to the fill size. For example, for page-aligned portion 2 of buffer 600, the control circuitry can select one write command with a LBA size of two or two write commands each with a LBA size of one. Persons skilled in the art will appreciate that any suitable algorithm can be used to determine if there is at least one write command in a queue that has a size equal to the fill size such as, for example, a best-fit algorithm. Persons skilled in the art will also appreciate that if there are multiple write commands in a queue that satisfy the fill size, the control circuitry can use one or more of the heuristics discussed above to select write command(s) for filling the buffer.
  • Thus, if the control circuitry determines that there is at least one write command that has a size equal to the fill size, the control circuitry can transfer the at least one write command from the queue to buffer 600. In the example shown in FIG. 6, for instance, the control circuitry can completely fill page-aligned portion 2 of buffer 600 (e.g., fill sections 602 and 604 with one or more write commands).
  • Once buffer 600 has been filled to at least one page-aligned portion (e.g., the write commands stored in buffer 600 are aligned with page-aligned portion 2), the control circuitry can combine the commands into a write-multi command. After generating the write-multi command, the control circuitry can direct a bus controller to dispatch the write-multi command from buffer 600 to a NVM.
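  • Two helpers sketch this alignment logic under the FIG. 6 assumption of four LBAs per page-aligned portion: fill_size computes how many LBAs are still needed to reach a page boundary, and best_fit performs one pass of a best-fit selection over the queue. Both names and the constants are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

#define LBAS_PER_PAGE 4u  /* one page-aligned portion holds four LBAs */

/* LBAs needed to round the buffer up to the next page boundary
 * (0 if already aligned); e.g., 2 for portion 2 in FIG. 6. */
uint32_t fill_size(uint32_t buffered_lbas)
{
    uint32_t rem = buffered_lbas % LBAS_PER_PAGE;
    return rem ? LBAS_PER_PAGE - rem : 0;
}

typedef struct { uint32_t lba, count; } write_cmd_t;

/* Best-fit pass over the queue: index of the largest command that still
 * fits within fill, or -1 if none fits; repeat until fill reaches 0. */
int best_fit(const write_cmd_t *q, size_t depth, uint32_t fill)
{
    int best = -1;
    for (size_t i = 0; i < depth; i++)
        if (q[i].count <= fill && (best < 0 || q[i].count > q[best].count))
            best = (int)i;
    return best;
}
```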
  • In some embodiments, if the write commands stored in buffer 600 are encrypted commands, the control circuitry can first direct an encryption module to encrypt the at least one command prior to transferring the command from a queue to buffer 600. After the command has been encrypted, the control circuitry can store the at least one encrypted command in buffer 600.
  • As discussed above, the control circuitry may receive one or more side-band signals (e.g., synchronized cache or FUA signals) before a page-aligned portion of buffer 600 has been completely filled. In such a scenario, the control circuitry can fill buffer 600 to the end of the page-aligned portion (e.g., page-aligned portion 2) with invalid data (e.g., fill sections 602 and 604 with invalid data). The control circuitry can then combine the write commands stored in buffer 600 with the invalid data into a write-multi command, and direct the bus controller to dispatch the write-multi command from buffer 600 to the NVM.
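  • A sketch of that flush-time padding, assuming 2 KiB LBAs, four LBAs per page-aligned portion, and 0xFF as the invalid filler pattern (all illustrative assumptions):

```c
#include <stdint.h>
#include <string.h>

#define LBA_BYTES     2048u
#define LBAS_PER_PAGE 4u

/* On a synchronize-cache / FUA signal, pad the partial page-aligned
 * portion with invalid (dummy) data so the buffer can be dispatched;
 * the caller guarantees the buffer spans the whole portion. */
void pad_to_page_boundary(uint8_t *buf, uint32_t buffered_lbas)
{
    uint32_t rem = buffered_lbas % LBAS_PER_PAGE;
    if (rem == 0)
        return;  /* already page aligned; nothing to pad */
    uint32_t pad_lbas = LBAS_PER_PAGE - rem;
    memset(buf + (size_t)buffered_lbas * LBA_BYTES, 0xFF,
           (size_t)pad_lbas * LBA_BYTES); /* 0xFF marks invalid filler */
    /* the write-multi command then covers buffered data plus filler */
}
```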
  • Accordingly, by optimally re-ordering write commands in a queue and fitting one or more of these write commands into page-aligned portions in a buffer, the system can minimize read energy on the read path, minimize buffering overhead, and reduce fragmentation in a NVM.
  • Referring back to FIG. 5, instead of selecting path 504 or path 520, a control circuitry can combine buffered write commands and queue write commands into the same batch or transaction (e.g., as a write-multi command) via path 530. For example, the control circuitry can select at least a first write command from queue 502 and at least a second write command from buffer 522, where the first and second write commands may be selected to minimize page crossings in NVM 510. After selecting the first and second write commands, the control circuitry can combine the commands to form a write-multi command. Finally, the control circuitry can direct a bus controller to dispatch the write-multi command to the NVM via path 530.
  • In some cases, in combining write commands from a buffer with write commands from a queue, the control circuitry may need to combine multiple types of encryption characteristics in a single transaction. For instance, the control circuitry can apply at least a first encryption seed to the one or more write commands selected from queue 502. In addition, if write commands stored in buffer 522 are unencrypted commands, the control circuitry can retrieve one or more encryption seeds from memory (e.g., memory 114 of FIG. 1). After retrieving the one or more encryption seeds, the control circuitry can apply the one or more encryption seeds to the one or more write commands selected from buffer 522. The control circuitry can then combine the first and second encrypted write commands into a write-multi command, and dispatch the write-multi command to NVM 510.
Referring now to FIGS. 7A-7B and 8, flowcharts of illustrative processes are shown in accordance with various embodiments of the invention. These processes may be executed by one or more components in a system (e.g., electronic device 100 of FIG. 1 or electronic device 200 of FIG. 2). For example, at least some of the steps in the processes of FIGS. 7A-7B and 8 may be performed by control circuitry (e.g., SoC control circuitry 112 of FIG. 1). Persons skilled in the art will appreciate that one or more of the steps may be performed by a NVM interface (e.g., NVM interface 118 of FIG. 1 or NVM interface 218 of FIG. 2). Dashed boxes are meant to illustrate steps that may or may not be performed based on the configuration of the system.
Turning first to FIGS. 7A and 7B, process 700 is shown for efficient buffering in a NVM (e.g., NVM 120 of FIG. 1, NVM 220 of FIG. 2, NVM 300 of FIG. 3, or NVM 510 of FIG. 5). Process 700 may begin at step 702, and at step 704, the control circuitry can receive a first write command from a file system (e.g., file system 210 of FIG. 2) for programming to a NVM.
At step 706, the control circuitry can save the first write command in a queue (e.g., queue 502 of FIG. 5). Continuing to step 708, the control circuitry can apply one or more heuristics to the first write command in order to determine whether to transfer the first write command to a buffer (e.g., buffer 522 of FIG. 5 or buffer 600 of FIG. 6). That is, the control circuitry can determine whether to perform buffering of the first write command.
If, at step 710, the control circuitry determines not to transfer the first write command, process 700 may move to step 712. At step 712, the control circuitry can direct an encryption module (e.g., encryption module 116 of FIG. 1) to encrypt the first write command. Then, at step 714, the control circuitry can direct a bus controller (e.g., NVM bus controller 216 of FIG. 2) to dispatch the encrypted write command to the NVM (e.g., via path 504 of FIG. 5). For example, the control circuitry may be configured to direct the bus controller to dispatch the encrypted write command to a hardware engine (e.g., hardware engine 506 of FIG. 5). Process 700 may then end at step 716.
If, at step 710, the control circuitry instead determines to transfer the first write command, process 700 may move to step 718 or, alternatively, to step 720. In particular, if the system is configured such that commands are encrypted prior to storage in a buffer, then, at step 718, the control circuitry can direct an encryption module to encrypt the first write command. On the other hand, if the system is configured such that unencrypted commands are stored in the buffer, process 700 may directly move to step 720. In such a configuration, the control circuitry can also copy one or more encryption seeds to memory (e.g., memory 114 of FIG. 1). Thus, at step 720, the control circuitry can store either the encrypted command or the unencrypted command in the buffer (e.g., via path 520 of FIG. 5).
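The two buffering configurations described here might be sketched as follows (hypothetical Python; the seed table, the `encrypt` callable, and the random seed are illustrative assumptions):

```python
import os
from typing import Callable, Dict, List, Tuple

def buffer_write(lba: int, data: bytes,
                 buffer: List[Tuple[int, bytes]],
                 seeds: Dict[int, bytes],
                 encrypt: Callable[[bytes, bytes], bytes],
                 encrypt_before_buffering: bool) -> None:
    """Store a write command in the buffer under either configuration.

    In one configuration the command is encrypted before storage; in the
    other it is buffered as plaintext and its encryption seed is copied
    to memory for use when the buffer is later flushed.
    """
    if encrypt_before_buffering:
        buffer.append((lba, encrypt(os.urandom(16), data)))  # steps 718, 720
    else:
        seeds[lba] = os.urandom(16)  # copy the seed to memory for later use
        buffer.append((lba, data))   # step 720: store the unencrypted command
```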
Continuing to step 722, the control circuitry can determine if the write commands stored in the buffer are currently aligned with at least one page boundary. If, at step 722, the control circuitry determines that the write commands stored in the buffer are currently aligned with at least one page boundary, process 700 may move to step 724 or step 726. That is, if unencrypted commands are stored in the buffer, at step 724, the control circuitry can direct an encryption module to encrypt the write commands using one or more stored encryption seeds. If, however, encrypted commands are stored in the buffer, process 700 may move directly to step 726.
At step 726, the control circuitry can combine the write commands in the buffer into a write-multi command. Then, at step 728, the control circuitry can direct a bus controller to dispatch the write-multi command from the buffer to the NVM (e.g., via path 524 of FIG. 5). Process 700 may then end at step 716.
If, at step 722, the control circuitry instead determines that the write commands stored in the buffer are not currently aligned with at least one page boundary, process 700 may move to step 730. At step 730, the control circuitry can detect a fill size that is needed to fill the buffer to the at least one page boundary.
Continuing to step 732, the control circuitry can determine if there is at least one write command in the queue that has a size equal to the fill size. If, at step 732, the control circuitry determines that there is at least one write command that has a size equal to the fill size, process 700 may move to step 734.
At step 734, the control circuitry can transfer the at least one write command from the queue to the buffer. Process 700 may then return to step 724 or step 726 (e.g., depending on whether or not the write commands in the buffer are encrypted).
If, at step 732, the control circuitry instead determines that there is not at least one write command in the queue that has a size equal to the fill size, process 700 may move to step 736. At step 736, the control circuitry can wait for additional write commands to arrive (e.g., wait to receive additional write commands from a file system). Process 700 may then end at step 716.
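The alignment-and-fill logic of steps 722-736 can be condensed into a short sketch (hypothetical Python; sector-based sizes and the default page size are assumptions, and the exact-fit scan stands in for whatever fit algorithm an implementation actually uses):

```python
from typing import List, Tuple, Union

def align_buffer(buffer_sizes: List[int], queue_sizes: List[int],
                 sectors_per_page: int = 8) -> Union[str, Tuple[str, int]]:
    """Decide the next action for the buffer (steps 722-736 of process 700).

    Sizes are in sectors. Returns "dispatch" when the buffered commands
    already end on a page boundary, ("transfer", i) when queue command i
    exactly fills the gap, and "wait" when no exact fit exists.
    """
    fill = (-sum(buffer_sizes)) % sectors_per_page  # step 730: fill size
    if fill == 0:
        return "dispatch"           # steps 726-728: combine into write-multi
    for i, size in enumerate(queue_sizes):
        if size == fill:            # step 732: exact-fit command found
            return ("transfer", i)  # step 734: move it from queue to buffer
    return "wait"                   # step 736: wait for more write commands
```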
Turning now to FIG. 8, process 800 is shown for determining whether or not to buffer in a NVM (e.g., NVM 120 of FIG. 1, NVM 220 of FIG. 2, NVM 300 of FIG. 3, or NVM 510 of FIG. 5). In some embodiments, process 800 may represent a more detailed view of the applying at least one heuristic step 708 and the determining whether to transfer step 710 of process 700 (FIGS. 7A and 7B).
Process 800 may begin at step 802, and at step 804, the control circuitry can determine the size of a write command saved in a queue (e.g., queue 502 of FIG. 5).
Then, at step 806, the control circuitry can determine whether the size of the write command is below a pre-determined threshold. If, at step 806, the control circuitry determines that the size of the write command is not below the pre-determined threshold (e.g., the write command is a large write command), process 800 may move to step 808.
At step 808, the control circuitry can direct a bus controller to dispatch the write command from the queue to a NVM (e.g., via path 504 of FIG. 5). Process 800 may then end at step 810.
If, at step 806, the control circuitry instead determines that the size of the write command is below the pre-determined threshold (e.g., the write command is a small write command), process 800 may move to step 812.
At step 812, the control circuitry can determine whether the number of write commands saved in the queue exceeds a pre-determined number. If, at step 812, the control circuitry determines that the number of write commands saved in the queue exceeds the pre-determined number, process 800 may move to step 814.
At step 814, the control circuitry can combine the write commands into a write-multi command. Process 800 may then move to step 808, where the write-multi command can be dispatched from the queue to the NVM.
If, at step 812, the control circuitry instead determines that the number of write commands saved in the queue does not exceed the pre-determined number, process 800 may move to step 816.
At step 816, the control circuitry can determine whether a set of most recently received write commands in the queue is sequential. If, at step 816, the control circuitry determines that the set of most recently received write commands is sequential, process 800 may move to step 818.
At step 818, the control circuitry can wait to receive additional write commands in the queue. Then, at step 820, the control circuitry can determine if at least one additional write command has been received from a file system (e.g., file system 210 of FIG. 2). For example, the control circuitry can make this determination after a short period of time (e.g., a few milliseconds).
If, at step 820, the control circuitry determines that it has received at least one additional write command from the file system, process 800 may move to step 814, where the write commands can be combined into a write-multi command.
If, at step 820, the control circuitry instead determines that it has not received at least one additional write command from the file system, process 800 may move to step 822. At step 822, the control circuitry can transfer the write command to a buffer (e.g., via path 520 of FIG. 5). Process 800 may then end at step 810.
If, at step 816, the control circuitry instead determines that the set of most recently received write commands in the queue is not sequential, process 800 may move to step 824. At step 824, the control circuitry can determine whether the number of dispatched write commands immediately preceding the write command exceeds a pre-determined number.
If, at step 824, the control circuitry determines that the number of dispatched write commands exceeds the pre-determined number, process 800 may move to step 818, where the control circuitry can wait to receive additional commands in the queue. If, at step 824, the control circuitry instead determines that the number of dispatched write commands does not exceed the pre-determined number, process 800 may move to step 822, where the control circuitry can transfer the write command to the buffer.
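Putting the branches of process 800 together, the routing decision might look like the following sketch (hypothetical Python; all three threshold constants are made-up values, since the specification leaves them as pre-determined parameters):

```python
SIZE_THRESHOLD = 32        # sectors; cutoff for a "small" write (assumed value)
MAX_QUEUED = 4             # pre-determined number of queued commands (assumed)
MAX_RECENT_DISPATCHES = 8  # pre-determined number of prior dispatches (assumed)

def route_write(cmd_size: int, queue_depth: int,
                recent_sequential: bool, recent_dispatches: int) -> str:
    """Pick a path for a write command, mirroring the branches of process 800."""
    if cmd_size >= SIZE_THRESHOLD:
        return "dispatch"              # step 808: large writes go straight out
    if queue_depth > MAX_QUEUED:
        return "combine-and-dispatch"  # steps 814/808: write-multi from queue
    if recent_sequential:
        return "wait"                  # step 818: more sequential data may follow
    if recent_dispatches > MAX_RECENT_DISPATCHES:
        return "wait"                  # step 824 routing to step 818
    return "buffer"                    # step 822: small random write is buffered
```

Under this split, only small, non-sequential writes that arrive during quiet periods end up in the buffer, which is consistent with the goal of amortizing per-transaction overhead for exactly the traffic that benefits from it.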
It should be understood that processes 700 and 800 of FIGS. 7A-7B and 8 are merely illustrative. Any of the steps may be removed, modified, or combined, and any additional steps may be added, without departing from the scope of the invention.
The described embodiments of the invention are presented for the purpose of illustration and not of limitation.

Claims (26)

1. A method for splitting data writes into multiple write paths, the method comprising:
receiving a first write command from a file system for programming to a non-volatile memory (“NVM”);
saving the first write command in a queue;
determining the size of the first write command;
determining whether to perform buffering based at least in part on the size of the first write command; and
in response to determining to perform buffering, transferring the first write command from the queue to a buffer.
2. The method of claim 1, further comprising, in response to determining not to perform buffering, dispatching the first write command from the queue to the NVM.
3. The method of claim 2, further comprising:
determining whether the amount of time that has elapsed since a last dispatched write command is less than a pre-determined time period; and
in response to determining that the amount of time that has elapsed since a last dispatched write command is less than the pre-determined time period, waiting to receive additional write commands in the queue before dispatching the first write command from the queue to the NVM.
4. The method of claim 1, wherein the determining whether to perform buffering further comprises:
determining if the size of the first write command is below a pre-determined threshold; and
in response to determining that the size of the first write command is below the pre-determined threshold, determining whether the number of write commands saved in the queue exceeds a pre-determined number.
5. The method of claim 4, further comprising:
in response to determining that the number of write commands saved in the queue exceeds the pre-determined number, combining the write commands into a write-multi command; and
dispatching the write-multi command from the queue to the NVM.
6. The method of claim 1, wherein the determining whether to perform buffering further comprises:
determining whether a set of most recently received write commands in the queue is sequential; and
in response to determining that the set of most recently received write commands is sequential, waiting to receive additional write commands in the queue.
7. The method of claim 6, further comprising:
receiving at least one additional write command from the file system;
combining the at least one additional write command with the set of most recently received write commands into a write-multi command; and
dispatching the write-multi command from the queue to the NVM.
8. A memory interface for accessing a non-volatile memory (“NVM”), the memory interface comprising:
a bus controller operative to communicate with the NVM; and
control circuitry operative to:
receive a first write command in a queue;
apply at least one heuristic to the first write command in order to determine whether to transfer the first write command to a buffer;
in response to determining not to transfer the first write command to the buffer, direct an encryption module to encrypt the first write command; and
direct the bus controller to dispatch the encrypted write command to the NVM.
9. The memory interface of claim 8, wherein the control circuitry is further operative to direct the bus controller to dispatch the encrypted write command to a direct memory access (“DMA”) engine.
10. The memory interface of claim 8, wherein in response to determining to transfer the first write command to the buffer, the control circuitry is further operative to:
direct the encryption module to encrypt the first write command; and
store the encrypted write command in the buffer.
11. The memory interface of claim 8, wherein in response to determining to transfer the first write command to the buffer, the control circuitry is further operative to:
copy at least one encryption seed to the volatile memory; and
store the first write command in the buffer.
12. The memory interface of claim 8, wherein the at least one heuristic comprises at least one of the size of the first write command, the number of dispatched write commands immediately preceding the first write command, the number of remaining write commands saved in the queue, the amount of available space in the volatile memory, a size of the buffer, and input/output patterns.
13. The memory interface of claim 12, wherein the control circuitry is further operative to:
determine if the size of the first write command is the same as or larger than the size of the buffer;
in response to determining that the size of the first write command is the same as or larger than the size of the buffer, determine not to transfer the first write command to the buffer.
14. The memory interface of claim 12, wherein the control circuitry is further operative to:
determine if the number of dispatched write commands immediately preceding the first write command exceeds a pre-determined number;
in response to determining that the number of dispatched write commands exceeds the pre-determined number, wait to receive at least one additional write command in the queue.
15. The memory interface of claim 14, wherein the control circuitry is further operative to:
receive at least one additional write command in the queue;
combine the at least one additional write command with the first write command into a write-multi command; and
direct the bus controller to dispatch the write-multi command from the queue to the NVM.
16. The memory interface of claim 8, wherein the control circuitry is further operative to:
receive at least one side-band signal; and
direct the bus controller to dispatch all of the commands stored in the buffer to the NVM.
17. A method for combining write commands for dispatch to a non-volatile memory (“NVM”), the method comprising:
selecting at least a first write command from a queue and at least a second write command from a buffer, wherein the at least the first and second write commands are selected to minimize page crossings in the NVM;
combining the at least the first and second write commands to form a write-multi command; and
dispatching the write-multi command to the NVM.
18. The method of claim 17, wherein the combining the at least the first and second write commands further comprises applying at least a first encryption seed to the at least the first write command.
19. The method of claim 18, wherein the at least the second write command is unencrypted.
20. The method of claim 19, wherein the combining the at least the first and second write commands further comprises:
retrieving at least a second encryption seed from volatile memory; and
applying the at least the second encryption seed to the at least the second write command.
21. A system comprising:
a non-volatile memory (“NVM”) comprising a plurality of pages;
a bus controller operative to communicate with the NVM;
volatile memory comprising a queue and a buffer for storing a set of write commands; and
control circuitry operative to:
determine if the set of write commands stored in the buffer is currently aligned with at least one page boundary;
in response to determining that the set of write commands stored in the buffer is not currently aligned with the at least one page boundary, detect a fill size that is needed to fill the buffer to the at least one page boundary;
determine if there is at least one write command in the queue that has a size equal to the fill size; and
in response to determining that there is at least one write command in the queue that has a size equal to the fill size, transfer the at least one write command from the queue to the buffer.
22. The system of claim 21, wherein the control circuitry is operative to use a best-fit algorithm to determine if there is at least one write command in the queue that has a size equal to the fill size.
23. The system of claim 21, wherein in response to determining that the set of write commands stored in the buffer is currently aligned with the at least one page boundary, the control circuitry is operative to:
combine the set of write commands into a write-multi command; and
direct the bus controller to dispatch the write-multi command from the buffer to the NVM.
24. The system of claim 21, wherein the control circuitry is operative to:
receive at least one side-band signal;
fill the buffer to the end of at least one page-aligned portion with invalid data;
combine the set of write commands with the invalid data into a write-multi command; and
direct the bus controller to dispatch the write-multi command from the buffer to the NVM.
25. The system of claim 21, wherein the set of write commands comprises encrypted commands.
26. The system of claim 25, further comprising an encryption module, and wherein in response to determining that there is at least one command in the queue with a size equal to the fill size, the control circuitry is operative to:
direct the encryption module to encrypt the at least one command; and
store the at least one encrypted command in the buffer.
US13/036,460 2011-02-28 2011-02-28 Efficient buffering for a system having non-volatile memory Abandoned US20120221767A1 (en)

Priority Applications (10)

Application Number Priority Date Filing Date Title
US13/036,460 US20120221767A1 (en) 2011-02-28 2011-02-28 Efficient buffering for a system having non-volatile memory
TW101106297A TWI503822B (en) 2011-02-28 2012-02-24 Efficient buffering for a system having non-volatile memory
TW101134054A TWI537952B (en) 2011-02-28 2012-02-24 Efficient buffering for a system having non-volatile memory
PCT/US2012/026752 WO2012118743A1 (en) 2011-02-28 2012-02-27 Efficient buffering for a system having non-volatile memory
KR1020120019692A KR20120098505A (en) 2011-02-28 2012-02-27 Efficient buffering for a system having non-volatile memory
EP12157312A EP2492817A3 (en) 2011-02-28 2012-02-28 Efficient buffering for a system having non-volatile memory
CN201210114701.9A CN102750226B (en) 2012-02-28 Efficient buffering for a system having non-volatile memory
KR1020120020909A KR101624007B1 (en) 2011-02-28 2012-02-29 Efficient buffering for a system having non-volatile memory
US14/693,260 US9703700B2 (en) 2011-02-28 2015-04-22 Efficient buffering for a system having non-volatile memory
US15/630,376 US9996457B2 (en) 2011-02-28 2017-06-22 Efficient buffering for a system having non-volatile memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/036,460 US20120221767A1 (en) 2011-02-28 2011-02-28 Efficient buffering for a system having non-volatile memory

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/693,260 Division US9703700B2 (en) 2011-02-28 2015-04-22 Efficient buffering for a system having non-volatile memory

Publications (1)

Publication Number Publication Date
US20120221767A1 true US20120221767A1 (en) 2012-08-30

Family

ID=45808178

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/036,460 Abandoned US20120221767A1 (en) 2011-02-28 2011-02-28 Efficient buffering for a system having non-volatile memory
US14/693,260 Active US9703700B2 (en) 2011-02-28 2015-04-22 Efficient buffering for a system having non-volatile memory
US15/630,376 Active US9996457B2 (en) 2011-02-28 2017-06-22 Efficient buffering for a system having non-volatile memory

Country Status (6)

Country Link
US (3) US20120221767A1 (en)
EP (1) EP2492817A3 (en)
KR (2) KR20120098505A (en)
CN (1) CN102750226B (en)
TW (2) TWI503822B (en)
WO (1) WO2012118743A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120221767A1 (en) * 2011-02-28 2012-08-30 Apple Inc. Efficient buffering for a system having non-volatile memory
KR101889298B1 (en) * 2011-11-08 2018-08-20 삼성전자주식회사 Memory device including nonvolatile memory and controling method of nonvolatile memory
CN103914397B (en) * 2013-01-09 2018-01-30 深圳市江波龙电子有限公司 Flash memory device and its management method
US10089224B2 (en) * 2013-03-15 2018-10-02 The Boeing Company Write caching using volatile shadow memory
CN105934747B (en) * 2013-11-07 2020-03-06 奈特力斯股份有限公司 Hybrid memory module and system and method for operating the same
CN103617101A (en) * 2013-12-12 2014-03-05 北京旋极信息技术股份有限公司 Power fail safeguard method and device
KR102301230B1 (en) * 2014-12-24 2021-09-10 삼성전자주식회사 Device and Method for performing scheduling for virtualized GPUs
KR102364381B1 (en) 2015-03-06 2022-02-18 에스케이하이닉스 주식회사 Memory system and operation method for the same
CN108139904B (en) * 2015-10-23 2022-07-05 惠普发展公司,有限责任合伙企业 Write command filtering
US10459855B2 (en) * 2016-07-01 2019-10-29 Intel Corporation Load reduced nonvolatile memory interface
KR102641107B1 (en) * 2016-07-29 2024-02-27 삼성전자주식회사 Storage device, system including the same and method of operating the same
CN106713450A (en) * 2016-12-21 2017-05-24 广州优视网络科技有限公司 Downloading acceleration method and apparatus based on read-write separation mode
KR20180083093A (en) * 2017-01-12 2018-07-20 에스케이하이닉스 주식회사 Memory system and operating method of memory system
TWI635391B (en) * 2017-02-14 2018-09-11 點序科技股份有限公司 Flash memory and management method thereof
US10296473B2 (en) * 2017-03-24 2019-05-21 Western Digital Technologies, Inc. System and method for fast execution of in-capsule commands
CN108572790B (en) * 2017-04-20 2021-11-26 深圳市得一微电子有限责任公司 Solid state disk writing instruction ordering method and solid state disk
US10014056B1 (en) * 2017-05-18 2018-07-03 Sandisk Technologies Llc Changing storage parameters
CN107506138B (en) * 2017-08-11 2020-12-18 东莞记忆存储科技有限公司 Method for prolonging service life of solid state disk
US10514748B2 (en) * 2017-09-27 2019-12-24 Western Digital Technologies, Inc. Reactive power management for non-volatile memory controllers
US11016890B2 (en) * 2018-02-05 2021-05-25 Micron Technology, Inc. CPU cache flushing to persistent memory
US10705747B2 (en) * 2018-03-21 2020-07-07 Micron Technology, Inc. Latency-based storage in a hybrid memory system
US10782916B2 (en) * 2018-08-08 2020-09-22 Micron Technology, Inc. Proactive return of write credits in a memory system
KR20200025184A (en) * 2018-08-29 2020-03-10 에스케이하이닉스 주식회사 Nonvolatile memory device, data storage apparatus including the same and operating method thereof
CN109445690B (en) * 2018-10-08 2020-10-23 苏州浪潮智能科技有限公司 RAID card performance optimization method, device, terminal and storage medium
KR20200057331A (en) * 2018-11-16 2020-05-26 에스케이하이닉스 주식회사 Memory system and operating method thereof
US11614889B2 (en) * 2018-11-29 2023-03-28 Advanced Micro Devices, Inc. Aggregating commands in a stream based on cache line addresses
CN111625180B (en) 2019-02-27 2021-05-28 英韧科技(上海)有限公司 Data writing method and device and storage medium
CN109992212B (en) * 2019-04-10 2020-03-27 苏州浪潮智能科技有限公司 Data writing method and data reading method
US11113198B2 (en) * 2019-05-06 2021-09-07 Micron Technology, Inc. Timed data transfer between a host system and a memory sub-system
CN112015527B (en) * 2019-05-29 2024-03-22 闪迪技术有限公司 Managing fetching and executing commands from a commit queue
US11669272B2 (en) * 2019-05-31 2023-06-06 Micron Technology, Inc. Predictive data transfer based on availability of media units in memory sub-systems
KR20210025836A (en) 2019-08-28 2021-03-10 에스케이하이닉스 주식회사 Memory controller, storage device including the same and operating method thereof
US11537510B2 (en) * 2020-04-24 2022-12-27 Western Digital Technologies, Inc. Storage devices having minimum write sizes of data
US11294598B2 (en) * 2020-04-24 2022-04-05 Western Digital Technologies, Inc. Storage devices having minimum write sizes of data
US11934695B2 (en) * 2021-06-23 2024-03-19 Western Digital Technologies, Inc. Aligned command based firmware architecture for unaligned write handling
EP4231132A1 (en) * 2022-02-18 2023-08-23 Samsung Electronics Co., Ltd. Memory system for tailoring data, host system for controlling the memory system, and operation method of the memory system

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4433374A (en) * 1980-11-14 1984-02-21 Sperry Corporation Cache/disk subsystem with cache bypass
US5724592A (en) 1995-03-31 1998-03-03 Intel Corporation Method and apparatus for managing active power consumption in a microprocessor controlled storage device
JP3821536B2 (en) 1997-05-16 2006-09-13 沖電気工業株式会社 Nonvolatile semiconductor disk device
US6233693B1 (en) 1998-05-06 2001-05-15 International Business Machines Corporation Smart DASD spin-up
US8266367B2 (en) * 2003-12-02 2012-09-11 Super Talent Electronics, Inc. Multi-level striping and truncation channel-equalization for flash-memory system
JP4694040B2 (en) 2001-05-29 2011-06-01 ルネサスエレクトロニクス株式会社 Semiconductor memory device
GB0123416D0 (en) 2001-09-28 2001-11-21 Memquest Ltd Non-volatile memory control
US6925573B2 (en) 2002-01-02 2005-08-02 Intel Corporation Method and apparatus to manage use of system power within a given specification
US7400062B2 (en) 2002-10-15 2008-07-15 Microsemi Corp. - Analog Mixed Signal Group Ltd. Rack level power management
US7121639B2 (en) * 2002-12-02 2006-10-17 Silverbrook Research Pty Ltd Data rate equalisation to account for relatively different printhead widths
US7146455B2 (en) * 2003-05-19 2006-12-05 Accton Technology Corporation System and method for optimized access to memory devices requiring block writing
US7512816B2 (en) 2003-11-28 2009-03-31 Panasonic Corporation Recording apparatus receiving current supply from an accessing apparatus
US8112574B2 (en) * 2004-02-26 2012-02-07 Super Talent Electronics, Inc. Swappable sets of partial-mapping tables in a flash-memory system with a command queue for combining flash writes
US7155623B2 (en) 2003-12-03 2006-12-26 International Business Machines Corporation Method and system for power management including local bounding of device group power consumption
US20070083491A1 (en) * 2004-05-27 2007-04-12 Silverbrook Research Pty Ltd Storage of key in non-volatile memory
US7305572B1 (en) 2004-09-27 2007-12-04 Emc Corporation Disk drive input sequencing for staggered drive spin-up
US7363442B2 (en) * 2004-11-12 2008-04-22 International Business Machines Corporation Separate handling of read and write of read-modify-write
JP2006185407A (en) 2004-12-01 2006-07-13 Matsushita Electric Ind Co Ltd Peak power-controlling apparatus and method
JP2006195569A (en) 2005-01-11 2006-07-27 Sony Corp Memory unit
US7440215B1 (en) 2005-03-30 2008-10-21 Emc Corporation Managing disk drive spin up
US7444526B2 (en) 2005-06-16 2008-10-28 International Business Machines Corporation Performance conserving method for reducing power consumption in a server system
US7562234B2 (en) 2005-08-25 2009-07-14 Apple Inc. Methods and apparatuses for dynamic power control
US20070211551A1 (en) 2005-11-25 2007-09-13 Yoav Yogev Method for dynamic performance optimization conforming to a dynamic maximum current level
US7609564B2 (en) * 2005-12-31 2009-10-27 Intel Corporation Systems and techniques for non-volatile memory buffering
US7793059B2 (en) 2006-01-18 2010-09-07 Apple Inc. Interleaving policies for flash memory
US7587559B2 (en) 2006-08-10 2009-09-08 International Business Machines Corporation Systems and methods for memory module power management
US20080107275A1 (en) * 2006-11-08 2008-05-08 Mehdi Asnaashari Method and system for encryption of information stored in an external nonvolatile memory
US7953954B2 (en) * 2007-01-26 2011-05-31 Micron Technology, Inc. Flash storage partial page caching
JP4356765B2 (en) * 2007-04-20 2009-11-04 ソニー株式会社 Information processing apparatus and method, and program
US7895397B2 (en) * 2007-09-12 2011-02-22 Intel Corporation Using inter-arrival times of data requests to cache data in a computing environment
US7961544B2 (en) 2008-08-05 2011-06-14 Sandisk Il Ltd. Storage system and method for managing a plurality of storage devices
JP5489434B2 (en) 2008-08-25 2014-05-14 株式会社日立製作所 Storage device with flash memory
US8601213B2 (en) * 2008-11-03 2013-12-03 Teradata Us, Inc. System, method, and computer-readable medium for spool cache management
US8386808B2 (en) 2008-12-22 2013-02-26 Intel Corporation Adaptive power budget allocation between multiple components in a computing system
CN101498994B (en) * 2009-02-16 2011-04-20 华中科技大学 Solid state disk controller
US8307258B2 (en) 2009-05-18 2012-11-06 Fusion-10, Inc Apparatus, system, and method for reconfiguring an array to operate with less storage elements
US8281227B2 (en) 2009-05-18 2012-10-02 Fusion-10, Inc. Apparatus, system, and method to increase data integrity in a redundant storage system
US20110173462A1 (en) 2010-01-11 2011-07-14 Apple Inc. Controlling and staggering operations to limit current spikes
US20110252187A1 (en) * 2010-04-07 2011-10-13 Avigdor Segal System and method for operating a non-volatile memory including a portion operating as a single-level cell memory and a portion operating as a multi-level cell memory
US9235530B2 (en) * 2010-05-31 2016-01-12 Sandisk Technologies Inc. Method and system for binary cache cleanup
US8826051B2 (en) 2010-07-26 2014-09-02 Apple Inc. Dynamic allocation of power budget to a system having non-volatile memory and a processor
US8555095B2 (en) 2010-07-26 2013-10-08 Apple Inc. Methods and systems for dynamically controlling operations in a non-volatile memory to limit power consumption
US8522055B2 (en) 2010-07-26 2013-08-27 Apple Inc. Peak power validation methods and systems for non-volatile memory
US20120159042A1 (en) * 2010-12-21 2012-06-21 Western Digital Technologies, Inc. Data storage device executing a unitary command comprising two cipher keys to access a sector spanning two encryption zones
US20120221767A1 (en) * 2011-02-28 2012-08-30 Apple Inc. Efficient buffering for a system having non-volatile memory

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5651137A (en) * 1995-04-12 1997-07-22 Intel Corporation Scalable cache attributes for an input/output bus
US6748493B1 (en) * 1998-11-30 2004-06-08 International Business Machines Corporation Method and apparatus for managing memory operations in a data processing system using a store buffer
US6748441B1 (en) * 1999-12-02 2004-06-08 Microsoft Corporation Data carousel receiving and caching
US6397301B1 (en) * 1999-12-29 2002-05-28 Intel Corporation Preventing access to secure area of a cache
US20080320209A1 (en) * 2000-01-06 2008-12-25 Super Talent Electronics, Inc. High Performance and Endurance Non-volatile Memory Based Storage Systems
US20040186961A1 (en) * 2003-03-19 2004-09-23 Shinji Kimura Cache control method for node apparatus
US20060015748A1 (en) * 2004-06-30 2006-01-19 Fujitsu Limited Secure processor and a program for a secure processor
US20060080553A1 (en) * 2004-10-08 2006-04-13 International Business Machines Corporation Secure memory caching structures for data, integrity and version values
US20080209203A1 (en) * 2007-02-15 2008-08-28 Fujitsu Limited Data encryption apparatus, data decryption apparatus, data encryption method, data decryption method, and data transfer controlling apparatus
US20080201574A1 (en) * 2007-02-15 2008-08-21 Fujitsu Limited Data encryption apparatus, data decryption apparatus, data encryption method, data decryption method, and data relay apparatus
US20100031056A1 (en) * 2007-07-27 2010-02-04 Hitachi, Ltd. Storage system to which removable encryption/decryption module is connected
US20120191982A1 (en) * 2007-12-06 2012-07-26 Levin Timothy Evert Non-volatile storage of encrypted data
US20090313416A1 (en) * 2008-06-16 2009-12-17 George Wayne Nation Computer main memory incorporating volatile and non-volatile memory
US20100174847A1 (en) * 2009-01-05 2010-07-08 Alexander Paley Non-Volatile Memory and Method With Write Cache Partition Management Methods
US20100293321A1 (en) * 2009-05-12 2010-11-18 Hanan Weingarten Systems and method for flash memory management
US20120005409A1 (en) * 2010-07-01 2012-01-05 Tsung-Chieh Yang Method for performing data shaping, and associated memory device and controller thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Brian Kaplan. "RAM is Key." May 2007. Carnegie Mellon University. *

Cited By (139)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9021178B2 (en) * 2011-05-02 2015-04-28 Western Digital Technologies, Inc. High performance path for command processing
US20120284460A1 (en) * 2011-05-02 2012-11-08 Western Digital Technologies, Inc. High performance path for command processing
US8874942B2 (en) 2011-05-11 2014-10-28 Apple Inc. Asynchronous management of access requests to control power consumption
US8769318B2 (en) 2011-05-11 2014-07-01 Apple Inc. Asynchronous management of access requests to control power consumption
US20140237167A1 (en) * 2011-06-24 2014-08-21 Sandisk Technologies Inc. Apparatus and Methods for Peak Power Management in Memory Systems
US20230280908A1 (en) * 2011-06-30 2023-09-07 Amazon Technologies, Inc. System and method for providing a committed throughput level in a data store
US9292209B2 (en) 2011-08-30 2016-03-22 International Business Machines Corporation Multiple I/O request processing in a storage system
US20130054841A1 (en) * 2011-08-30 2013-02-28 International Business Machines Corporation Multiple i/o request processing in a storage system
US9134909B2 (en) * 2011-08-30 2015-09-15 International Business Machines Corporation Multiple I/O request processing in a storage system
US9483188B2 (en) 2011-08-30 2016-11-01 International Business Machines Corporation Multiple I/O request processing in a storage system
US9841917B2 (en) 2011-09-16 2017-12-12 Apple Inc. Systems and methods for configuring non-volatile memory
US8938583B1 (en) * 2011-11-16 2015-01-20 Western Digital Technologies, Inc. System and method for concurrently executing data access requests
US20140372588A1 (en) 2011-12-14 2014-12-18 Level 3 Communications, Llc Request-Response Processing in a Content Delivery Network
US11838385B2 (en) 2011-12-14 2023-12-05 Level 3 Communications, Llc Control in a content delivery network
US10187491B2 (en) 2011-12-14 2019-01-22 Level 3 Communications, Llc Request-response processing an a content delivery network
US10841398B2 (en) 2011-12-14 2020-11-17 Level 3 Communications, Llc Control in a content delivery network
US11218566B2 (en) 2011-12-14 2022-01-04 Level 3 Communications, Llc Control in a content delivery network
US9516136B2 (en) 2011-12-14 2016-12-06 Level 3 Communications, Llc Customer-specific request-response processing in a content delivery network
US9451045B2 (en) 2011-12-14 2016-09-20 Level 3 Communications, Llc Content delivery network
US9456053B2 (en) 2011-12-14 2016-09-27 Level 3 Communications, Llc Content delivery network
WO2014039665A1 (en) * 2012-09-06 2014-03-13 Intel Corporation Mitigating unauthorized access to data traffic
US9769123B2 (en) 2012-09-06 2017-09-19 Intel Corporation Mitigating unauthorized access to data traffic
US9755914B2 (en) 2012-12-13 2017-09-05 Level 3 Communications, Llc Request processing in a content delivery network
US10791050B2 (en) 2012-12-13 2020-09-29 Level 3 Communications, Llc Geographic location determination in a content delivery framework
US20140173077A1 (en) * 2012-12-13 2014-06-19 Level 3 Communications, Llc Maintaining invalidation information
US11368548B2 (en) 2012-12-13 2022-06-21 Level 3 Communications, Llc Beacon services in a content delivery framework
US11121936B2 (en) 2012-12-13 2021-09-14 Level 3 Communications, Llc Rendezvous optimization in a content delivery framework
US10992547B2 (en) 2012-12-13 2021-04-27 Level 3 Communications, Llc Rendezvous systems, methods, and devices
US10931541B2 (en) 2012-12-13 2021-02-23 Level 3 Communications, Llc Devices and methods supporting content delivery with dynamically configurable log information
US9628344B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Framework supporting content delivery with reducer services network
US9628343B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Content delivery framework with dynamic service network topologies
US9628345B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Framework supporting content delivery with collector services network
US9628346B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Devices and methods supporting content delivery with reducer services
US9628347B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Layered request processing in a content delivery network (CDN)
US9628342B2 (en) 2012-12-13 2017-04-18 Level 3 Communications, Llc Content delivery framework
US9634904B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Framework supporting content delivery with hybrid content delivery services
US9634918B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Invalidation sequencing in a content delivery framework
US9634906B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9634905B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Invalidation systems, methods, and devices
US9634907B2 (en) 2012-12-13 2017-04-25 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US9641402B2 (en) 2012-12-13 2017-05-02 Level 3 Communications, Llc Configuring a content delivery network (CDN)
US9641401B2 (en) 2012-12-13 2017-05-02 Level 3 Communications, Llc Framework supporting content delivery with content delivery services
US9647900B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Devices and methods supporting content delivery with delivery services
US9647901B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Configuring a content delivery network (CDN)
US9647899B2 (en) 2012-12-13 2017-05-09 Level 3 Communications, Llc Framework supporting content delivery with content delivery services
US9654353B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with rendezvous services network
US9654354B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with delivery services network
US9654355B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Framework supporting content delivery with adaptation services
US9654356B2 (en) 2012-12-13 2017-05-16 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services
US9661046B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services
US9660876B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Collector mechanisms in a content delivery network
US9660875B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with rendezvous services having dynamically configurable log information
US9660874B2 (en) 2012-12-13 2017-05-23 Level 3 Communications, Llc Devices and methods supporting content delivery with delivery services having dynamically configurable log information
US9667506B2 (en) 2012-12-13 2017-05-30 Level 3 Communications, Llc Multi-level peering in a content delivery framework
US10862769B2 (en) 2012-12-13 2020-12-08 Level 3 Communications, Llc Collector mechanisms in a content delivery network
US10841177B2 (en) 2012-12-13 2020-11-17 Level 3 Communications, Llc Content delivery framework having autonomous CDN partitioned into multiple virtual CDNs to implement CDN interconnection, delegation, and federation
US9686148B2 (en) 2012-12-13 2017-06-20 Level 3 Communications, Llc Responsibility-based cache peering
US10826793B2 (en) 2012-12-13 2020-11-03 Level 3 Communications, Llc Verification and auditing in a content delivery framework
US10742521B2 (en) 2012-12-13 2020-08-11 Level 3 Communications, Llc Configuration and control in content delivery framework
US9705754B2 (en) 2012-12-13 2017-07-11 Level 3 Communications, Llc Devices and methods supporting content delivery with rendezvous services
US9722883B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Responsibility-based peering
US9722882B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with provisioning
US9722884B2 (en) 2012-12-13 2017-08-01 Level 3 Communications, Llc Event stream collector systems, methods, and devices
US9749191B2 (en) 2012-12-13 2017-08-29 Level 3 Communications, Llc Layered request processing with redirection and delegation in a content delivery network (CDN)
US9749190B2 (en) * 2012-12-13 2017-08-29 Level 3 Communications, Llc Maintaining invalidation information
US10708145B2 (en) 2012-12-13 2020-07-07 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback from health service
US9749192B2 (en) 2012-12-13 2017-08-29 Level 3 Communications, Llc Dynamic topology transitions in a content delivery framework
US10700945B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Role-specific sub-networks in a content delivery framework
US10701148B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having storage services
US9787551B2 (en) 2012-12-13 2017-10-10 Level 3 Communications, Llc Responsibility-based request processing
US10701149B2 (en) 2012-12-13 2020-06-30 Level 3 Communications, Llc Content delivery framework having origin services
US10652087B2 (en) 2012-12-13 2020-05-12 Level 3 Communications, Llc Content delivery framework having fill services
US9819554B2 (en) 2012-12-13 2017-11-14 Level 3 Communications, Llc Invalidation in a content delivery framework
US10608894B2 (en) 2012-12-13 2020-03-31 Level 3 Communications, Llc Systems, methods, and devices for gradual invalidation of resources
US10142191B2 (en) 2012-12-13 2018-11-27 Level 3 Communications, Llc Content delivery framework with autonomous CDN partitioned into multiple virtual CDNs
US9847917B2 (en) 2012-12-13 2017-12-19 Level 3 Communications, Llc Devices and methods supporting content delivery with adaptation services with feedback
US10135697B2 (en) 2012-12-13 2018-11-20 Level 3 Communications, Llc Multi-level peering in a content delivery framework
US9887885B2 (en) 2012-12-13 2018-02-06 Level 3 Communications, Llc Dynamic fill target selection in a content delivery framework
WO2014098839A1 (en) * 2012-12-19 2014-06-26 Hewlett-Packard Development Company Nvram path selection
US20150317095A1 (en) * 2012-12-19 2015-11-05 Hewlett-Packard Development Company, L.P. Nvram path selection
US10514855B2 (en) * 2012-12-19 2019-12-24 Hewlett Packard Enterprise Development Lp NVRAM path selection
US20140280759A1 (en) * 2013-03-15 2014-09-18 International Business Machines Corporation Data transmission for transaction processing in a networked environment
US9473561B2 (en) * 2013-03-15 2016-10-18 International Business Machines Corporation Data transmission for transaction processing in a networked environment
JP2014182792A (en) * 2013-03-15 2014-09-29 Samsung Electronics Co Ltd Operation method of memory system, memory system, and memory controller
US9473565B2 (en) * 2013-03-15 2016-10-18 International Business Machines Corporation Data transmission for transaction processing in a networked environment
US20140280680A1 (en) * 2013-03-15 2014-09-18 International Business Machines Corporation Data transmission for transaction processing in a networked environment
US9817710B2 (en) 2013-05-28 2017-11-14 Amazon Technologies, Inc. Self-describing data blocks stored with atomic write
US9465693B2 (en) 2013-05-28 2016-10-11 Amazon Technologies, Inc. Self-describing data blocks of a minimum atomic write size for a data store
US9047189B1 (en) * 2013-05-28 2015-06-02 Amazon Technologies, Inc. Self-describing data blocks of a minimum atomic write size for a data store
US9880926B1 (en) * 2013-08-20 2018-01-30 Seagate Technology Llc Log structured reserved zone for a data storage device
US9412455B2 (en) 2013-09-11 2016-08-09 Kabushiki Kaisha Toshiba Data write control device and data storage device
US20150169254A1 (en) * 2013-12-16 2015-06-18 International Business Machines Corporation Firmware bypass for medium-access commands
US9798493B2 (en) * 2013-12-16 2017-10-24 International Business Machines Corporation Firmware bypass for medium-access commands
US20160299687A1 (en) 2015-04-13 2016-10-13 SK Hynix Inc. Controller transmitting output commands and method of operating the same
US10564851B2 (en) 2015-04-13 2020-02-18 SK Hynix Inc. Controller transmitting output commands and method of operating the same
US9998436B2 (en) * 2015-12-15 2018-06-12 International Business Machines Corporation Management of encryption within processing elements
US20170171171A1 (en) * 2015-12-15 2017-06-15 International Business Machines Corporation Management of encryption within processing elements
US9948620B2 (en) * 2015-12-15 2018-04-17 International Business Machines Corporation Management of encryption within processing elements
US20170366522A1 (en) * 2015-12-15 2017-12-21 International Business Machines Corporation Management of encryption within processing elements
US9705897B2 (en) * 2015-12-15 2017-07-11 International Business Machines Corporation Management of encryption within processing elements
US20170171223A1 (en) * 2015-12-15 2017-06-15 International Business Machines Corporation Management of encryption within processing elements
US9985940B2 (en) * 2015-12-15 2018-05-29 International Business Machines Corporation Management of encryption within processing elements
US20170177061A1 (en) * 2015-12-17 2017-06-22 Pmc-Sierra Us, Inc. Power saving in multi-directional data transfer
US10007319B2 (en) * 2015-12-17 2018-06-26 Microsemi Solutions (U.S.), Inc. Power saving in multi-directional data transfer
US10642494B2 (en) * 2015-12-31 2020-05-05 Lenovo (Beijing) Limited Method, electronic device and computer program product for data processing
US20190065370A1 (en) * 2016-03-08 2019-02-28 Toshiba Memory Corporation Storage system and information processing system for controlling nonvolatile memory
US9749381B1 (en) 2016-04-11 2017-08-29 Level 3 Communications, Llc Invalidation in a content delivery network (CDN)
WO2017180218A1 (en) * 2016-04-11 2017-10-19 Level 3 Communications, Llc Invalidation in a content delivery network (cdn)
US9591047B1 (en) * 2016-04-11 2017-03-07 Level 3 Communications, Llc Invalidation in a content delivery network (CDN)
US10152241B2 (en) 2016-05-04 2018-12-11 SK Hynix Inc. Data processing system and operating method thereof
CN107622019A (en) * 2016-07-14 2018-01-23 爱思开海力士有限公司 Accumulator system and its operating method
US20180018111A1 (en) * 2016-07-14 2018-01-18 SK Hynix Inc. Memory system and operating method thereof
US10157007B2 (en) * 2016-07-14 2018-12-18 SK Hynix Inc. Memory system and operating method thereof
US10664167B2 (en) * 2017-01-17 2020-05-26 Phison Electronics Corp. Data transmitting method, memory storage device and memory control circuit unit
US20180203605A1 (en) * 2017-01-17 2018-07-19 Phison Electronics Corp. Data transmitting method, memory storage device and memory control circuit unit
US11029893B2 (en) 2017-09-08 2021-06-08 Samsung Electronics Co., Ltd. Storage device including nonvolatile memory device and controller, controller and operating method of nonvolatile memory device
US11693605B2 (en) 2017-09-08 2023-07-04 Samsung Electronics Co., Ltd. Storage device including nonvolatile memory device and controller, controller and operating method of nonvolatile memory device
US11289131B2 (en) 2017-10-05 2022-03-29 Advanced Micro Devices, Inc. Dynamic control of multi-region fabric
US10861504B2 (en) 2017-10-05 2020-12-08 Advanced Micro Devices, Inc. Dynamic control of multi-region fabric
US10558591B2 (en) * 2017-10-09 2020-02-11 Advanced Micro Devices, Inc. Method and apparatus for in-band priority adjustment forwarding in a communication fabric
KR102160512B1 (en) 2017-10-23 2020-10-15 Micron Technology, Inc. Host accelerated operations in managed NAND devices
US10936250B2 (en) 2017-10-23 2021-03-02 Micron Technology, Inc. Host accelerated operations in managed NAND devices
US10430117B2 (en) * 2017-10-23 2019-10-01 Micron Technology, Inc. Host accelerated operations in managed NAND devices
US11385838B2 (en) 2017-10-23 2022-07-12 Micron Technology, Inc. Host accelerated operations in managed NAND devices
KR20200057781A (en) * 2017-10-23 2020-05-26 Micron Technology, Inc. Host accelerated operations in managed NAND devices
US10725703B2 (en) * 2017-11-29 2020-07-28 Lapis Semiconductor Co., Ltd. Method for writing data, memory device and data writing system
US11196657B2 (en) 2017-12-21 2021-12-07 Advanced Micro Devices, Inc. Self identifying interconnect topology
US10969994B2 (en) * 2018-08-08 2021-04-06 Micron Technology, Inc. Throttle response signals from a memory system
US11074007B2 (en) 2018-08-08 2021-07-27 Micron Technology, Inc. Optimize information requests to a memory system
US20200050390A1 (en) * 2018-08-08 2020-02-13 Micron Technology, Inc. Throttle response signals from a memory system
US11740833B2 (en) * 2018-08-08 2023-08-29 Micron Technology, Inc. Throttle response signals from a memory system
CN112602049A (en) * 2018-08-08 2021-04-02 Micron Technology, Inc. Optimizing information requests to a memory system
US10719462B2 (en) * 2018-09-25 2020-07-21 Intel Corporation Technologies for computational storage via offload kernel extensions
US20190042501A1 (en) * 2018-09-25 2019-02-07 Intel Corporation Technologies for computational storage via offload kernel extensions
US11507522B2 (en) 2019-12-06 2022-11-22 Advanced Micro Devices, Inc. Memory request priority assignment techniques for parallel processors
US11223575B2 (en) 2019-12-23 2022-01-11 Advanced Micro Devices, Inc. Re-purposing byte enables as clock enables for power savings
US11379151B2 (en) 2020-03-31 2022-07-05 Western Digital Technologies, Inc. Flow control of large sequential host read commands based on a queue threshold
US11513736B2 (en) 2021-03-08 2022-11-29 Western Digital Technologies, Inc. Revised host command generation for unaligned access
WO2022212205A1 (en) * 2021-03-29 2022-10-06 Micron Technology, Inc. Sideband communication management

Also Published As

Publication number Publication date
CN102750226A (en) 2012-10-24
US20170286290A1 (en) 2017-10-05
KR20120098547A (en) 2012-09-05
TWI537952B (en) 2016-06-11
KR20120098505A (en) 2012-09-05
TWI503822B (en) 2015-10-11
EP2492817A2 (en) 2012-08-29
CN102750226B (en) 2015-11-25
US9996457B2 (en) 2018-06-12
TW201250691A (en) 2012-12-16
EP2492817A3 (en) 2012-11-14
TW201306034A (en) 2013-02-01
US20150227460A1 (en) 2015-08-13
WO2012118743A1 (en) 2012-09-07
KR101624007B1 (en) 2016-05-24
US9703700B2 (en) 2017-07-11

Similar Documents

Publication Title
US9996457B2 (en) Efficient buffering for a system having non-volatile memory
US8595414B2 (en) Selectively combining commands for a system having non-volatile memory
CN107346290B (en) Replaying partition logical to physical data address translation tables using parallelized log lists
US10671299B2 (en) Nonvolatile memory module having device controller that detects validity of data in RAM based on at least one of size of data and phase bit corresponding to the data, and method of operating the nonvolatile memory module
US10817415B2 (en) Data storage device and method including device or controller-based space allocation and host-based mapping table searching
US9697116B2 (en) Storage system and writing method thereof
US8463826B2 (en) Incremental garbage collection for non-volatile memories
US10802733B2 (en) Methods and apparatus for configuring storage tiers within SSDs
US8645723B2 (en) Asynchronous management of access requests to control power consumption
US20160062885A1 (en) Garbage collection method for nonvolatile memory device
US9690953B2 (en) Generating efficient reads for a system having non-volatile memory
US20130326113A1 (en) Usage of a flag bit to suppress data transfer in a mass storage system having non-volatile memory
US20130238833A1 (en) Heuristics for programming data in a non-volatile memory
US11681472B2 (en) Unmap data pattern for coarse mapping memory sub-system
US8886963B2 (en) Secure relocation of encrypted files
US11573891B2 (en) Memory controller for scheduling commands based on response for receiving write command, storage device including the memory controller, and operating method of the memory controller and the storage device
US11507289B2 (en) Storage device, controller and method for operating storage device
US8972650B2 (en) Methods and systems for performing efficient page reads in a non-volatile memory
US20120198124A1 (en) Methods and systems for optimizing read operations in a non-volatile memory
US11403011B1 (en) Host memory buffer allocation management
US8850160B2 (en) Adaptive write behavior for a system having non-volatile memory
US20120198126A1 (en) Methods and systems for performing selective block switching to perform read operations in a non-volatile memory

Legal Events

Date Code Title Description

AS Assignment
Owner name: APPLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: POST, DANIEL J.; WAKRAT, NIR; REEL/FRAME: 026105/0856
Effective date: 2011-03-28

STCB Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION