US20080201522A1 - Buffer management method and optical disc drive - Google Patents


Publication number
US20080201522A1
US20080201522A1 (application US12/032,722)
Authority
US
United States
Prior art keywords
write
data blocks
list
optical disc
buffer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/032,722
Inventor
Tse-Hong Wu
Shih-Hsin Chen
Shih-Ta Hung
KuanYu Lai
Tai-Liang Lin
Ping-Sheng Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MediaTek Inc
Original Assignee
MediaTek Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MediaTek Inc
Priority to US12/032,722
Assigned to MEDIATEK INC. Assignors: CHEN, PING-SHENG, CHEN, SHIH-HSIN, HUNG, SHIH-TA, LAI, KUANYU, LIN, TAI-LIANG, WU, TSE-HONG
Publication of US20080201522A1
Status: Abandoned

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 19/00 - Driving, starting, stopping record carriers not specifically of filamentary or web form, or of supports therefor; Control thereof; Control of operating function; Driving both disc and head
    • G11B 19/02 - Control of operating function, e.g. switching from recording to reproducing
    • G11B 19/04 - Arrangements for preventing, inhibiting, or warning against double recording on the same blank or against other recording or reproducing malfunctions
    • G11B 19/041 - Detection or prevention of read or write errors
    • G11B 19/044 - Detection or prevention of read or write errors by using a data buffer
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F 3/0605 - Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0656 - Data buffering arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 - In-line storage system
    • G06F 3/0673 - Single storage device
    • G06F 3/0674 - Disk device
    • G06F 3/0677 - Optical disk device, e.g. CD-ROM, DVD

Definitions

  • the invention relates to optical disc drives, and in particular, to buffer management in random access optical discs.
  • FIG. 1 shows a conventional optical disc drive 120 coupled to a host computer 110 .
  • the host computer 110 may issue certain read or write commands to access an optical disc (not shown) installed in the optical disc drive 120 .
  • a typical read command comprises one or more destination addresses where data blocks are requested, and a write command likewise comprises one or more specific destination addresses to which one or more data blocks are designated to be recorded. Data blocks to be recorded may be sent from the host computer 110 in conjunction with the write commands.
  • the optical disc drive 120 basically comprises a processor 122 , a memory device 124 and a driving module 126 .
  • the memory device 124 is usually separated into two areas, a read buffer 130 and a write buffer 132 .
  • the read buffer 130 buffers data blocks acquired from the optical disc in response to the read commands.
  • the write buffer 132 buffers data blocks to be recorded onto the optical disc.
  • the driving module 126 includes a mechanical unit comprising a pickup head (PUH), a motor and other controlling means (not shown) to perform physical data access of the optical disc.
  • the write buffer 132 may be divided into a plurality of sections 134 each corresponding to a destination address. Each section 134 serves as a ring buffer to cache data blocks of adjacent destination addresses. In other words, buffered data blocks should preferably have continuous destination addresses.
  • An embodiment of a buffer management method is provided, particularly adaptable in an optical disc drive to access an optical disc.
  • One or more data blocks are recorded to the optical disc in response to received write commands.
  • Data blocks corresponding to the write commands are first buffered in a buffer of the optical disc drive.
  • one or more write tasks may be organized based on the buffered write commands, each associated with a group of data blocks having consecutive destination addresses.
  • a recording operation can be scheduled based on those write tasks, and the recording operation is performed to record the data blocks to the optical disc.
  • a write list is provided, comprising an entry for each data block allocated in the buffer. Entries of data blocks having consecutive destination addresses are linked to form at least one linked list according to the write list, and write tasks are established from the linked list, each write task comprising the allocation of a first data block.
  • a free list may also be provided to maintain unallocated entries of the buffer.
  • the write list is scanned to determine whether an incoming data block has a previous copy in the buffer. If so, the previous copy is overwritten by the incoming data block. Otherwise, a free entry is acquired from the free list to store the incoming data block. Thereafter, it is determined whether the incoming data block has a destination address consecutive to those contained in an existing write task. If so, the existing write task associated with the incoming data block is updated. Otherwise, a new write task is created for the incoming data block.
  • a latest list is provided to maintain entries of data blocks associated with a certain number of the most recently received read and write commands. While buffering the data blocks, a read command may be received, designated to read a data block from a destination address of the optical disc. It is determined whether the latest list is hit according to the destination address. If the latest list is hit, the data block is output from the buffer to respond to the read command, and the latest list is renewed at the same time. Otherwise, the optical disc is read to acquire the data block, and the latest list is updated with an entry where the data block is buffered.
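The latest list described above behaves like a small bounded read cache keyed by destination address, renewed on hits and updated on fills. The patent gives no code; the sketch below is an illustrative reconstruction in Python, and the class name, capacity, and method names are all assumptions.

```python
from collections import OrderedDict

class LatestList:
    """Tracks buffer entries for the most recently accessed data blocks.

    Illustrative sketch of the 'latest list'; not code from the patent.
    """
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()  # destination address -> buffer index

    def lookup(self, address):
        """Return the buffer index on a hit (renewing recency), else None."""
        if address in self.entries:
            self.entries.move_to_end(address)  # renew the list on a hit
            return self.entries[address]
        return None

    def update(self, address, buffer_index):
        """Record where a freshly buffered data block lives."""
        self.entries[address] = buffer_index
        self.entries.move_to_end(address)
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry

latest = LatestList(capacity=2)
latest.update(0x100, 7)           # block read from disc, buffered at index 7
latest.update(0x200, 3)
assert latest.lookup(0x100) == 7  # hit: served from the buffer
latest.update(0x300, 5)           # capacity exceeded, 0x200 evicted
assert latest.lookup(0x200) is None
```

On a miss the caller would read the disc and then call `update()`, matching the flow in the bullet above.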
  • hit rates of each write task, the number of data blocks contained in each write task, and pickup head (PUH) loadings according to the destination address of the first data block of each write task are counted.
  • a priority order of the write tasks is arranged based on their hit rates, number of data blocks and PUH loadings.
  • the write operation is performed by executing the write tasks following the priority order. Execution of a write task comprises steps of encoding data blocks into error correction code (ECC) blocks and burning them onto a destination address on the optical disc. Upon completion of the write operation, successfully burnt data blocks are flushed from the buffer.
  • a defect list is provided, maintaining entries of data blocks having destination addresses where defects exist. It is determined whether an error is found when burning an ECC block. If an error is found, the entry where a data block corresponding to the ECC block is buffered is added into the defect list. Upon completion of all prioritized write tasks, a further write task may be processed to burn the data blocks listed in the defect list.
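The defect-handling flow above (burn prioritized tasks, collect failing entries in a defect list, then process the defects in a further write task) can be sketched as follows. This is an illustrative reconstruction, not the patented firmware; `burn_ecc_block` is an assumed callback standing in for the drive's ECC encode-and-burn step, returning True on success.

```python
def record_with_defect_handling(write_tasks, burn_ecc_block):
    """Burn prioritized write tasks; entries that fail go to a defect
    list and are retried in a final pass. Returns entries still failing."""
    defect_list = []
    for task in write_tasks:                # tasks in priority order
        for entry in task:
            if not burn_ecc_block(entry):   # error found while burning
                defect_list.append(entry)   # remember the failed entry
    # Upon completion of all prioritized tasks, retry the defects once.
    return [e for e in defect_list if not burn_ecc_block(e)]

# Usage with a burner that fails once on entry 5 (e.g. a transient defect):
attempts = {}
def flaky_burn(entry):
    attempts[entry] = attempts.get(entry, 0) + 1
    return not (entry == 5 and attempts[entry] == 1)

remaining = record_with_defect_handling([[1, 2], [5, 6]], flaky_burn)
assert remaining == []  # entry 5 succeeded on the retry pass
```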
  • Another embodiment is the optical disc drive implementing the buffer management method.
  • FIG. 1 shows a conventional optical disc drive
  • FIG. 2 a shows an embodiment of an optical disc drive according to the invention
  • FIG. 2 b is a flowchart of an embodiment of buffer management method according to the invention.
  • FIG. 3 a shows an embodiment of a write list and a free list
  • FIG. 3 b shows another embodiment of a write list
  • FIG. 4 shows an embodiment of a link list
  • FIG. 5 a shows an embodiment of a buffer using a forward type linked list
  • FIG. 5 b shows another embodiment of a write list
  • FIG. 6 a is a flowchart of an embodiment of a buffering operation
  • FIG. 6 b is a flowchart of data block reception when performing the buffering operation
  • FIG. 6 c is a flowchart of mode detection when performing the buffering operation
  • FIG. 6 d is a flowchart of priority determination when performing the buffering operation
  • FIG. 6 e is an embodiment of calculating a priority value according to the invention.
  • FIG. 7 is a flowchart of a read command handling process
  • FIG. 8 is a flowchart of recording start condition determination
  • FIG. 9 is an exemplary flowchart of a recording operation
  • FIG. 10 is a flowchart of a conventional defect handling process
  • FIG. 11 is a flowchart of an embodiment of a defect handling process.
  • FIG. 12 shows an embodiment of a defect list.
  • FIG. 2 a shows an embodiment of an optical disc drive according to the invention. While a processor 122 processes read and write commands #R and #W issued from the host computer 110 , a buffer 140 is deployed in the memory device 124 to temporarily store associated data blocks. Data blocks requested by the read command #R are referred to as read data block #D R , whereas those associated with write commands #W are write data blocks #D W .
  • the buffer 140 is partitioned into several blocks. Each block serves as a unit for data storage, and several blocks are collected as a section.
  • the buffer 140 serves as a cache to store the read data block #D R and the write data block #D W , and in the embodiment, a buffer management system and approach is disclosed to optimize a recording operation using management tables, such as a write list 136 , a latest list, a defect list and a free list 138 .
  • the driving module 126 is accordingly controlled to perform the recording operation. When a start recording condition is met, the data blocks in the buffer 140 are transferred and recorded to the optical disc.
  • the write data blocks #D W are first buffered in the memory device 124 before the physical recording operation is performed. And the write list 136 , the latest list and free list 138 are updated accordingly.
  • the write list 136 serves as a lookup table for maintaining relationship of all write data blocks #D W buffered in the buffer 140 .
  • the free list 138 serves as another lookup table containing unallocated blocks of the buffer 140 that direct to free spaces or available spaces.
  • a latest list 137 is provided to maintain entries of the most recently accessed data blocks in the buffer 140
  • a defect list 139 is used to maintain entries of data blocks that failed to be recorded onto the optical disc.
  • the write list 136 , latest list 137 , free list 138 and defect list 139 may be established as tables, but other data structures such as linked lists are also adaptable. Implementations of the proposed architecture of FIG. 2 a are further described in the embodiments hereafter.
  • FIG. 2 b is a flowchart of an embodiment of buffer management method according to the invention.
  • the fundamental steps are summarized into steps 201 to 207 .
  • step 201 the optical disc drive 120 is initialized.
  • a buffering operation is recursively processed in step 203 .
  • the buffering operation buffers write data blocks #D W transferred from the host 110 according to the write command #W.
  • the buffering operation also buffers read data blocks #D R transferred from the optical disc according to the read command #R.
  • step 205 a start recording condition is checked. Only when the start recording condition is met, the optical disc drive 120 enters a physical recording operation in step 207 . Otherwise the process loops back to step 203 .
  • the host computer 110 may randomly issue read commands #R or write commands #W designated to request certain read data blocks #D R from the optical disc, or to record write data blocks #D W onto the optical disc.
  • the read and write data blocks #D R and #D W may be buffered into buffer 140 .
  • the write list 136 , latest list 137 and free list 138 are updated accordingly for maintenance thereof. As is well known, continuity of data blocks is highly desirable when performing the recording operation.
  • a recording operation which successively records at least one write data block #D W onto a consecutive destination area of the optical disc is defined as a disc write task.
  • the processor 122 collects unrecorded data blocks having consecutive destination addresses and successively records those collected data blocks onto the optical disc in a disc write task.
  • the write list 136 is created from the buffer 140 , and contents of the write list 136 are utilized to assist in establishing the disc write tasks.
  • FIG. 3 a shows an embodiment of a write list 136 a and a free list 142 .
  • a plurality of data blocks are stored in the buffer 140 .
  • the labels A, B and C in each block denote destination addresses of certain write data blocks #D W .
  • a plurality of write data blocks #D W are stored in the buffer 140 , in which those of consecutive destination addresses are categorized into one disc write task.
  • addresses denoted as A, A+1 and A+2 are discovered and categorized into a first disc write task.
  • the write data blocks #D W of destination addresses denoted as B and B+1, and C, C+1 and C+2 construct two other disc write tasks. The write list 136 a includes buffer indices and corresponding destination addresses of the write data blocks #D W , although the write data blocks #D W may be distributed randomly among different blocks of the buffer 140 . With the write list 136 a, when an incoming write data block #D W is received, it can easily be determined whether the incoming write data block #D W corresponds to any of the existing disc write tasks. As shown, free blocks or available blocks of the buffer 140 , denoted as “FREE”, are maintained by the free list 138 .
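The grouping of buffered blocks into disc write tasks by consecutive destination address can be sketched in Python. This is an illustrative reconstruction only; the dictionary-shaped write list and the concrete addresses are assumptions standing in for A, B and C of FIG. 3 a.

```python
def build_disc_write_tasks(write_list):
    """Group buffered blocks whose destination addresses are consecutive.

    `write_list` maps buffer index -> destination address, mirroring the
    write list of FIG. 3a. Returns one task (a list of buffer indices
    ordered by address) per run of consecutive addresses.
    """
    by_address = sorted(write_list.items(), key=lambda kv: kv[1])
    tasks, current = [], []
    prev_addr = None
    for index, addr in by_address:
        if prev_addr is not None and addr == prev_addr + 1:
            current.append(index)            # extends the current run
        else:
            if current:
                tasks.append(current)
            current = [index]                # starts a new disc write task
        prev_addr = addr
    if current:
        tasks.append(current)
    return tasks

# Buffer of FIG. 3a: runs A..A+2, B..B+1, C..C+2 with A=100, B=200, C=300,
# scattered across buffer indices 0..7.
wl = {0: 100, 3: 101, 1: 102, 5: 200, 2: 201, 4: 300, 6: 301, 7: 302}
assert build_disc_write_tasks(wl) == [[0, 3, 1], [5, 2], [4, 6, 7]]
```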
  • FIG. 3 b shows another embodiment of a write list 136 b.
  • the write list 136 b is a sorted version of write list 136 a in FIG. 3 a, in which elements are rearranged based on destination addresses of the write data blocks #D W . Since the write list 136 b is implemented in the memory device 124 , the cost of sorting the contents is negligible, while manageability of the write list 136 b is thereby increased.
  • when an incoming write data block #D W denoted as “A+3” is input, one of the free entries in the free list 138 , such as “FREE1”, is assigned for its storage, and in the write list 136 b, an additional entry is appended to record its destination address “A+3” and a pointer pointing to its newly assigned block.
  • the write list 136 b with the newly added entry “A+3” could be further sorted to be an updated write list.
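Because the write list lives in RAM, keeping it sorted on every insertion is cheap, as the text notes. A sketch of buffering the incoming "A+3" block into a sorted write list follows; the concrete addresses, buffer indices, and function name are illustrative assumptions.

```python
import bisect

# Sorted write list of FIG. 3b: (destination address, buffer index) pairs
# kept ordered by address. A=100, B=200 are stand-in addresses.
write_list = [(100, 0), (101, 3), (102, 1), (200, 5), (201, 2)]
free_list = [6, 7]  # unallocated buffer blocks ("FREE1", "FREE2")

def buffer_incoming_block(address):
    """Assign a free block to an incoming write data block and insert its
    entry so the write list stays sorted by destination address."""
    index = free_list.pop(0)                     # e.g. take "FREE1"
    bisect.insort(write_list, (address, index))  # keeps address order
    return index

idx = buffer_incoming_block(103)   # incoming block "A+3"
assert idx == 6
assert write_list == [(100, 0), (101, 3), (102, 1), (103, 6),
                      (200, 5), (201, 2)]
```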
  • FIG. 4 shows an embodiment of a link list 400 .
  • the data structure of the buffer 140 can be implemented with a linked list.
  • a linked list has various types, basically a forward type and a backward type.
  • in a linked list of forward type, each element is associated with a next index pointing to an address where the next element is located.
  • in a linked list of backward type, each element is bound with a previous index indicating where the previous element is located.
  • the advantage of a linked list is that there is no need to sort the elements; in addition, the cost of adding or removing an element is almost negligible since only the relative indices need to be changed. Practically, the forward and backward types can be simultaneously implemented to form a bi-directional linked list.
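The bi-directional linked list over buffer blocks can be sketched as below. This is an illustrative model only; the field names and the three-block chain are assumptions, not the patent's layout.

```python
class BufferBlock:
    """One block of the buffer, carrying both a next and a previous index
    so forward and backward traversal are both available."""
    def __init__(self, address):
        self.address = address
        self.next = None   # forward type: index of the following block
        self.prev = None   # backward type: index of the preceding block

buffer = [BufferBlock(addr) for addr in (100, 101, 102)]

def link(buf, i, j):
    """Link block i ahead of block j; only two indices change, which is
    why insertion and removal cost almost nothing."""
    buf[i].next, buf[j].prev = j, i

link(buffer, 0, 1)
link(buffer, 1, 2)

# Forward traversal from the task entry (first block of the task):
chain, i = [], 0
while i is not None:
    chain.append(buffer[i].address)
    i = buffer[i].next
assert chain == [100, 101, 102]
```

Traversing via `prev` from the last block gives the backward-type behavior of FIG. 5 b.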
  • FIG. 5 a shows an embodiment of a buffer 140 using a forward type linked list.
  • the write list 150 a maintains several disc write tasks by recording their task entries.
  • a task entry indicates where a first write data block #D W of the disc write task is buffered.
  • each block is bound with a pointer linking to another block. For example, for the disc write task A, its task entry points to where the write data block #D W with a beginning destination address A is located, and the write data block #D W A has a pointer linking to a successive write data block #D W with destination address A+1.
  • the pointer in write data block #D W A+1 links to a following write data block #D W with another destination address A+2.
  • Free spaces in the buffer 140 can also be managed in this way.
  • the free list 144 only records an entry indicating a first free block, and its successors are linked through pointers.
  • the linked list structure facilitates data additions and removals, while the complexity of managing the write list 150 a and free list 144 is also reduced.
  • FIG. 5 b shows another embodiment of a write list 150 b.
  • a backward type linked list is used, and the mechanism is very similar to the embodiment of FIG. 5 a except for the pointer directions.
  • the task entry in write list 150 b indicates where a last write data block #D W of the disc write task is buffered. Taking disc write task A as an example, the last write data block #D W with destination address A+2 is located at block index “2”, and the write data block #D W A+2 has a pointer linking to a previous write data block #D W A+1. Likewise, the pointer in the write data block #D W A+1 links to write data block #D W A.
  • FIG. 6 a is a flowchart of an embodiment of a buffering operation.
  • the buffering operation in step 203 of FIG. 2 b further comprises a plurality of steps.
  • step 601 when the buffering operation of step 203 is initialized, write commands #W are randomly issued from the host computer 110 and handled in different procedures.
  • when a specific write command #W is received in step 603 , a block reception procedure is performed in step 605 to store its corresponding write data blocks #D W into the buffer 140 . A detailed embodiment of the block reception is described in FIG. 6 b.
  • a mode detection procedure is triggered in step 607 .
  • the optical disc drive 120 supports two modes when buffering the write data blocks #D W and the read data blocks #D R .
  • One is the conventional sequential mode, and the other is a random mode.
  • if the arrangement of all buffered write data blocks #D W conforms to a conventional sequential structure, it is more efficient to record the write data blocks #D W in sequential access mode.
  • if destination addresses of the buffered write data blocks #D W are not continuous, the recording operation is more complex; thus, it is processed in random mode, in which various approaches such as disc write tasks are used to optimize performance.
  • the determination of the modes is described in an embodiment in FIG. 6 c.
  • step 607 If random mode is set in step 607 , a plurality of disc write tasks will be established. To schedule the disc write tasks, priorities of each disc write task are required. A priority calculation process is therefore executed in step 609 to prioritize all disc write tasks. The priorities may be determined by various buffer statuses of each disc write task, and a detailed embodiment is described in FIG. 6 d.
  • One write command #W may be associated with more than one write data block #D W .
  • in step 611 , it is determined whether any write data blocks #D W corresponding to a write command #W are still pending to be buffered in the buffer 140 . If so, the process loops back to step 605 for buffering other data blocks. Otherwise, the buffering operation is concluded, followed by the start recording condition determination process as described in step 205 of FIG. 2 b.
  • FIG. 6 b is a flowchart of data block reception when performing the buffering operation.
  • the block reception procedure as described in step 605 of FIG. 6 a is initialized in step 621 to handle an incoming write data block #D W .
  • the processor checks the write list 136 to determine whether the incoming write data block #D W has a previous copy in the buffer 140 . If so, overwriting is required, so step 625 is processed, whereby the processor overwrites the previous copy with the incoming write data block #D W . Otherwise, a free block should be allocated to store the incoming write data block #D W . Before allocating the free block, the capacity of the memory device 124 is checked in step 627 .
  • if there is not enough space left for further storage, a release procedure is triggered in step 629 to release more space for storing data.
  • a cache policy may be previously defined, whereby the processor releases certain blocks accordingly to acquire additional capacity. Various algorithms already exist to release cached data depending on usage factors such as hit rates or idle time, so a detailed example is not introduced herein.
  • step 627 is followed by step 631 , the block allocation step.
  • step 631 the processor 122 acquires a free block from the free list 138 to store the incoming write data block #D W .
  • step 633 it is determined whether the incoming write data block #D W hits an existing disc write task.
  • a particular destination address to which the write data block #D W is bound can be deduced.
  • the processor 122 can identify whether the particular destination address is consecutive to or precedes whatever was previously buffered in the buffer 140 . For example, if the incoming write data block #D W has a destination address consecutive to those contained in an existing disc write task, step 637 is processed, in which the existing disc write task is updated to include the incoming write data block #D W .
  • step 635 if there is no adjacency detected, a new disc write task may be created in the write list 136 to handle the incoming write data block #D W .
  • the write list may not need an update in this case, but its last access time may be refreshed in order to track statistics such as time-outs or the hit rate of the disc write task.
  • a latest list 137 is also updated in step 639 .
  • a latest list 137 is established as a read cache, recording entries of data blocks associated with a certain number of the most recently received read and write commands #R and #W.
  • the latest list 137 may utilize the linked list architecture described in FIG. 5 a, with additional pointers implemented in the buffer 140 to link certain write data blocks #D W and read data blocks #D R . Therefore, the write list 136 and the latest list 137 are both deployed on the basis of the buffer 140 . In other words, the architecture allows one buffer 140 to function as read and write caches at the same time. In step 640 , the block reception is concluded.
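The block reception flow of FIG. 6 b (check for a previous copy, overwrite or allocate, then extend or create a disc write task) can be sketched as follows. This is an illustrative reconstruction, not the drive's firmware; the `state` dictionary, the step mapping in the comments, and the raised error standing in for the release procedure are all assumptions.

```python
def receive_block(address, payload, state):
    """Sketch of block reception (steps 621-640). Returns the buffer
    index used for the incoming write data block."""
    # Step 623: does the incoming block have a previous copy?
    if address in state["write_list"]:
        index = state["write_list"][address]
        state["buffer"][index] = payload          # step 625: overwrite it
    else:
        if not state["free_list"]:                # step 627: capacity check
            raise MemoryError("release procedure needed (step 629)")
        index = state["free_list"].pop(0)         # step 631: allocate
        state["buffer"][index] = payload
        state["write_list"][address] = index
        # Steps 633-637: extend an adjacent disc write task if one exists.
        for task in state["tasks"]:
            if address == task[-1] + 1:
                task.append(address)              # hits an existing task
                break
            if address == task[0] - 1:
                task.insert(0, address)
                break
        else:
            state["tasks"].append([address])      # step 635: new task
    state["latest"].append(index)                 # step 639: renew latest list
    return index

state = {"buffer": {}, "write_list": {}, "free_list": [0, 1, 2],
         "tasks": [], "latest": []}
receive_block(100, b"a", state)
receive_block(101, b"b", state)   # consecutive: joins the same task
receive_block(300, b"c", state)   # not adjacent: a new task is created
assert state["tasks"] == [[100, 101], [300]]
```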
  • FIG. 6 c is a flowchart of mode detection when performing the buffering operation.
  • the mode detection procedure as described in step 607 of FIG. 6 a is initialized. Various conditions are considered to decide which mode to set.
  • the processor 122 determines the current mode. If the current mode is the sequential mode, the process jumps to step 649 . Otherwise, step 645 is processed, in which the total number of disc write tasks is counted. If there is more than one disc write task, random mode is set in step 651 . In step 647 , if there are no disc write tasks left in the buffer 140 , the sequential mode is set in step 653 .
  • in step 649 , if there is only one disc write task left, it is checked whether the last write data block #D W buffered in the block reception procedure belongs to that disc write task. If not, a new disc write task is created, so the mode should be set to random mode in step 651 .
  • in step 651 , if the previous mode is sequential mode, the processor 122 creates the write list 136 , the latest list 137 and the free list 138 accordingly. Otherwise, step 649 is still followed by step 653 .
  • the buffer reception may be a continuous process, so steps 605 and 607 may be executed in parallel. In this case, the mode to be set should depend on the latest status of the buffer 140 .
  • Steps 651 and 653 are followed by step 655 , in which the mode detection procedure is concluded after the mode is set.
  • in step 645 , if there is more than one disc write task, the process goes to step 651 , where random mode is set.
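The mode decision of FIG. 6 c can be condensed into a small function. This is a sketch under stated assumptions: tasks are lists of consecutive destination addresses, `last_block_addr` is the most recently buffered block, and the mapping of branches to the flowchart steps is my reading of the text, not a literal transcription.

```python
def detect_mode(current_mode, tasks, last_block_addr):
    """Sketch of mode detection (FIG. 6c). Returns the mode to set."""
    if current_mode == "sequential":
        # Step 649: stay sequential only while the last buffered block
        # still belongs to the single existing disc write task.
        if len(tasks) == 1 and last_block_addr in tasks[0]:
            return "sequential"
        return "random"                 # a second task appeared (step 651)
    # Previously random: count remaining tasks (steps 645/647).
    if len(tasks) == 0:
        return "sequential"             # nothing left to record randomly
    if len(tasks) > 1:
        return "random"
    # Exactly one task left: same membership check as step 649.
    return "sequential" if last_block_addr in tasks[0] else "random"

assert detect_mode("sequential", [[100, 101]], 101) == "sequential"
assert detect_mode("sequential", [[100, 101], [300]], 300) == "random"
assert detect_mode("random", [], None) == "sequential"
```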
  • FIG. 6 d is a flowchart of priority determination when performing the buffering operation.
  • priority calculation is required to schedule all of the disc write tasks, determining the sequence in which those disc write tasks are recorded.
  • the priority calculation procedure is initialized.
  • hit rates of each disc write task are counted. Any action involving any write data block #D W in a disc write task counts as a hit, such as overwriting, reading or adding a write data block #D W .
  • a buffered write data block #D W may be requested by a read command #R before it is recorded, so the reading operation is also counted in the hit rate.
  • the hit rates can further be categorized into write and read types. In the write list 136 , write hit rates are counted per disc write task, and for latest list 137 , read hit rates may be counted per read data block #D R .
  • step 665 for each disc write task, the total number of data blocks is considered as a factor to determine the priority.
  • one disc write task corresponds to one sequential recording operation for the driving module 126 , in which track seeking and locking are performed once, so it is more preferable and efficient to have more data blocks recorded at one time.
  • the counted numbers can directly indicate potential performance of a disc write task, thus is taken as a factor for establishing priority.
  • in step 667 , distances between the current position of the PUH and the task destination areas on the optical disc are also considered as a factor in their priorities.
  • a task destination area is exactly the destination physical address of the first write data block #D W in a disc write task.
  • the distance the PUH moves also affects performance. It is desirable to schedule an optimized recording operation so that the PUH moves as little as possible to complete all disc write tasks.
  • the PUH distances are factors of their priorities.
  • priority values of each disc write task are calculated based on hit rates, numbers of data blocks #D W , and PUH distances. The way these factors are combined can depend on predetermined performance policies defined in the firmware of the optical disc drive 120 , and the implementation is not limited to that described in the embodiment.
  • FIG. 6 e is an embodiment of calculating priority value of the invention.
  • the factors of hit rates, task length and PUH distances are respectively multiplied with weighting factors Wa, Wb, Wc, and then summed together to generate the priority value.
  • the weighting factors Wa, Wb, Wc are adjustable depending on the behavior of the host computer 110 . For example, if the host computer 110 issues many write commands #W with consecutive destination addresses, whereby the number of data blocks #D W of a disc write task is large enough, the weighting factor Wb could be set equal to the weighting factor Wc, and the weighting factor Wb may be greater than the weighting factor Wa.
  • the weighting factors Wa, Wb, Wc can be modified by the processor 122 , and can be optimized by checking the data throughput of the optical pickup head.
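The weighted sum of FIG. 6 e can be written out directly. The sum itself follows the text; the sample weights, the inversion of the distance term (so that a nearer task scores higher), and the concrete task values are assumptions for illustration.

```python
def priority(hit_rate, task_length, puh_distance, wa=1.0, wb=2.0, wc=1.0):
    """Weighted-sum priority of FIG. 6e: Wa*hits + Wb*length + Wc*distance
    term. The distance factor is inverted here so a shorter PUH move
    yields a higher priority (an assumed convention)."""
    return wa * hit_rate + wb * task_length + wc * (1.0 / (1.0 + puh_distance))

# Scheduling example: the larger task sitting under the PUH outranks a
# smaller, frequently hit task far across the disc.
tasks = {
    "A": priority(hit_rate=3, task_length=10, puh_distance=0),
    "B": priority(hit_rate=8, task_length=2,  puh_distance=500),
}
order = sorted(tasks, key=tasks.get, reverse=True)
assert order == ["A", "B"]
```

Raising `wb` relative to `wa`, as the bullet above suggests for hosts issuing many consecutive writes, biases the schedule further toward long tasks.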
  • FIG. 7 is a flowchart of a read command handling process.
  • step 203 of FIG. 2 b the buffering operation is introduced, and step 603 of FIG. 6 a already discussed a write command #W handling process.
  • the buffering operation corresponding to the read command #R is introduced in step 701 .
  • the buffering operation as step 203 is initialized in step 701 .
  • step 703 a read command #R is received by the optical disc drive 120 , requesting for a certain read data block #D R from a specific address on the optical disc.
  • step 705 the processor 122 first checks whether the read data block #D R is already cached in the buffer 140 .
  • if the read data block #D R is not hit in the buffer 140 , it shall be directly acquired from the optical disc.
  • a reading operation is performed to acquire the read data block #D R from the optical disc, and the block is stored in the buffer 140 .
  • the latest list 137 is updated accordingly.
  • the capacity of the buffer 140 may be checked in step 711 . If capacity is not enough, a cache release procedure is performed in step 713 . In other words, if the capacity is not enough for buffering the data blocks currently being read from the disc, the processor 122 searches the blocks according to the latest list 137 and the write list 136 to release blocks that are not write data blocks.
  • a read command #R may request more than one read data block #D R , so in step 719 , it is determined whether all requested read data blocks #D R have been acquired. If not, the process loops back to step 705 . Upon acquiring all requested read data blocks #D R , the buffering operation is concluded in step 721 .
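The read flow of FIG. 7, including the rule that the release procedure may only evict blocks that are not write data blocks (write data must survive until it is recorded), can be sketched as follows. An illustrative reconstruction only: the `state` layout and the `read_from_disc` callback are assumptions.

```python
def handle_read(addresses, state, read_from_disc):
    """Sketch of read handling (FIG. 7). On a miss the block is read from
    the disc and buffered; when capacity runs out, a cached block that is
    NOT a write data block is released to make room (step 713)."""
    results = []
    for addr in addresses:
        index = state["latest"].get(addr)          # step 705: cache check
        if index is None:                          # miss: go to the disc
            if not state["free_list"]:             # step 711: capacity check
                # Release a read-cache block, never a pending write block.
                victim = next(a for a, i in state["latest"].items()
                              if i not in state["write_blocks"])
                state["free_list"].append(state["latest"].pop(victim))
            index = state["free_list"].pop(0)
            state["buffer"][index] = read_from_disc(addr)
            state["latest"][addr] = index          # step 709: update list
        results.append(state["buffer"][index])
    return results

state = {"buffer": {}, "free_list": [0], "latest": {}, "write_blocks": set()}
out = handle_read([100, 200], state, read_from_disc=lambda a: ("data", a))
assert out == [("data", 100), ("data", 200)]
assert 100 not in state["latest"]   # released to make room for 200
```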
  • FIG. 8 is a flowchart of recording start condition determination. As described in FIG. 2 b, step 205 determines whether a recording operation can be initialized.
  • The recording start condition comprises considerations of various factors, such as the capacity usage of the buffer 140, the idle time since the last activity of the buffer 140, the duration since the last recording operation, and the total number of disc write tasks.
  • In step 801, the recording start condition determination of step 205 is triggered.
  • In step 803, the available capacity of the buffer 140 is compared with a capacity threshold.
  • A recording operation may be triggered if the buffered write data blocks #DW are sufficient for recording, so the start recording condition is deemed satisfied when the available capacity of the buffer 140 is smaller than the capacity threshold, and the process jumps to step 813.
  • In step 813, the processor determines that the disc drive 120 is ready to perform the recording operation.
  • The capacity threshold varies with the mode. Generally, in random mode it is desirable to gather more write data blocks #DW before recording because consecutiveness may thereby be increased, so the capacity threshold is set to a smaller value in random mode than in sequential mode.
  • The idle time is compared with an idle threshold.
  • The idle time specifically refers to the period since the last activity of the buffer 140, such as data buffering or data output, was conducted.
  • In sequential mode, there is logically only one disc write task, so the buffered write data blocks #DW are ready to be recorded at any time.
  • In random mode, since the complexity of a recording operation is higher, it is desirable to wait longer to allow more write data blocks #DW to be collected.
  • The idle threshold is therefore set to a higher value in random mode than in sequential mode.
  • In step 807, the duration since the last recording operation is compared with a duration threshold. Normally, the buffered write data blocks #DW are periodically flushed to the optical disc if no other specific event occurs.
  • The duration threshold value is also mode dependent. In the embodiment, the duration threshold is set to a higher value in random mode than in sequential mode.
  • In step 809, the number of disc write tasks is counted.
  • The number is irrelevant in sequential mode because there is only one disc write task. In random mode, however, the number of tasks is proportional to the randomness of the buffer 140.
  • The capacity of the write list 136 may be limited to manage a certain number of disc write tasks, so a task threshold is set. When the number of disc write tasks exceeds the task threshold, the recording operation is triggered in step 813.
  • If none of the conditions is met, step 811 determines that the disc drive 120 is not yet ready to perform the recording operation. Then, step 815 concludes the criterion determination.
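The four checks of steps 803 through 809 can be modeled as a simple predicate. The threshold values below are invented purely for illustration; they only follow the stated directions, with random mode using a smaller free-capacity threshold and looser idle and duration thresholds, and a task-count check that applies only in random mode.

```python
# Hypothetical threshold table: random mode waits longer (higher idle and
# duration thresholds) and gathers more data (smaller free-capacity threshold).
THRESHOLDS = {
    "sequential": {"capacity": 32, "idle": 1.0, "duration": 5.0,  "tasks": None},
    "random":     {"capacity": 16, "idle": 3.0, "duration": 10.0, "tasks": 64},
}

def recording_ready(mode, free_capacity, idle_time, since_last_rec, task_count):
    """Model of steps 803-813: any one satisfied condition triggers recording."""
    t = THRESHOLDS[mode]
    if free_capacity < t["capacity"]:       # step 803: buffer nearly full
        return True
    if idle_time > t["idle"]:               # step 805: drive has been idle
        return True
    if since_last_rec > t["duration"]:      # step 807: periodic flush is due
        return True
    if t["tasks"] is not None and task_count > t["tasks"]:
        return True                         # step 809: too many disc write tasks
    return False                            # step 811: not ready yet
```

Any single condition suffices; otherwise the flow returns to buffering, as in the loop of FIG. 2 b.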
  • FIG. 9 is an exemplary flowchart of a recording operation.
  • The recording operation of step 207 is initialized in step 901.
  • The mode is then detected. For sequential mode, the case is simpler, and a conventional sequential recording operation is performed in step 913.
  • The buffered write data blocks #DW in the buffer 140 are recorded and then flushed if no error is detected.
  • In random mode, the disc write tasks are handled one by one in steps 905 to 911.
  • In step 905, the disc write task having the highest priority value is selected first for recording.
  • Alternatively, disc write tasks having priority values exceeding a threshold are selected for recording.
  • The threshold is adjustable according to the status of the buffer 140, such as the available capacity of the buffer 140 and/or the total number of existing disc write tasks. If the available capacity of the buffer 140 is low, the threshold should be adjusted lower. If the total number of existing disc write tasks is high, the threshold should also be adjusted lower.
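The selection policy of step 905 together with the adjustable threshold can be sketched as below. The formula by which the threshold is lowered from buffer status is hypothetical, chosen only to follow the stated direction: a lower threshold when free capacity is low or when many disc write tasks exist, so that more tasks become eligible and the buffer drains faster.

```python
def select_tasks(tasks, free_capacity, base_threshold=10):
    """Pick disc write tasks for recording: every task whose priority exceeds
    an adjustable threshold, ordered highest priority first (step 905)."""
    # Hypothetical adjustment: low free capacity or a high task count lowers
    # the bar, draining the buffer 140 more aggressively.
    threshold = base_threshold
    if free_capacity < 8:
        threshold -= 4
    if len(tasks) > 16:
        threshold -= 2
    eligible = [t for t in tasks if t["priority"] > threshold]
    return sorted(eligible, key=lambda t: t["priority"], reverse=True)
```

With ample free capacity only high-priority tasks are recorded; under pressure the threshold drops and lower-priority tasks join the batch.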
  • Step 907 is an optional step, in which a ring buffer may be provided in the memory device 124 as a second level cache.
  • Write data blocks #DW of the selected disc write task may be copied to the ring buffer, from which the subsequent steps are processed.
  • Alternatively, the ring buffer may be omitted, and the write data blocks #DW are processed directly in the buffer 140.
  • In step 909, the write data blocks #DW are individually encoded into error correction code (ECC) blocks and sequentially recorded onto the destination area of the optical disc.
  • In step 911, upon completion of a disc write task, the processor 122 determines whether more disc write tasks are to be processed. If so, the process loops back to step 905 to select and process the disc write task of highest priority among the unprocessed ones. If all disc write tasks are done, the recording operation is concluded in step 915.
  • In step 909, when recording the write data blocks #DW, defects may be found on the optical disc where data cannot be correctly recorded.
  • The write data blocks #DW are written one by one.
  • Conventionally, the PUH moves to a spare area to record the failed write data block #DW, and then moves back to an address successive to the defective address to record further write data blocks #DW.
  • Alternatively, the write data blocks #DW are copied to another buffer, and another disc write task is scheduled to rewrite them.
  • FIG. 10 is a flowchart of a conventional defect handling process.
  • In step A01, a recording procedure for a disc write task is initialized.
  • Write data blocks #DW of a disc write task are sequentially processed through steps A03 to A09.
  • In step A03, one write data block #DW is recorded to the optical disc, and in step A05, the recorded write data block #DW is checked. If an error is found, step A07 is processed, in which the PUH moves to a spare area to rewrite the write data block #DW.
  • Alternatively, the write data block #DW may be copied to another buffer to wait for rewriting.
  • The spare area is space reserved for defect management during the recording procedure, and its implementation varies with standards.
  • In step A09, it is determined whether all write data blocks #DW in the disc write task have been recorded. If not, the process loops back to step A03. Otherwise, the recording procedure is concluded in step A11.
  • Step A07 becomes a performance bottleneck because the PUH must move to and from the spare area. If defects are multiple, complex mechanical burdens are induced by frequent track seeking and locking, which seriously degrades performance. Alternatively, additional buffer space may be consumed to buffer the write data blocks #DW in need of rewriting.
  • FIG. 11 is a flowchart of an embodiment of a defect handling process. Steps A01, A03 and A05 are similar to those in FIG. 10, whereby a write data block #DW is recorded and verified. In step A08, if a defect is found on the destination area, the PUH is not moved to the spare area. Instead, the processor 122 adds the block of the write data block #DW to the defect list 139. Thereafter, step A09 proceeds, continuing to process all of the write data blocks #DW in the disc write task.
  • The PUH thus continuously processes all write data blocks #DW of a disc write task without the interruption and overhead induced by moving to and from the spare area. All the write data blocks #DW that fail to be recorded due to defects are collected in the defect list 139 to form an extra disc write task. The write data blocks #DW that failed to record on their destination area are reallocated to the spare area with continuity.
  • In step A10, an additional recording operation, as in step 909, can be triggered to record the write data blocks #DW to the spare area according to the extra disc write task. Thereafter, the recording procedure is concluded in step A11. In this way, no matter how badly the optical disc is damaged, the continuity of the recording operation is almost unaffected.
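The contrast with FIG. 10 can be sketched as follows: instead of seeking to the spare area on every defect, the FIG. 11 embodiment records straight through, collects the failed blocks into a defect list, and replays them as one extra disc write task. The `write_block` callback returning success or failure is a hypothetical stand-in for steps A03/A05 (record and verify), and `spare_write` stands in for recording to the spare area.

```python
def record_task_with_defect_list(blocks, write_block, spare_write):
    """FIG. 11 style defect handling: record every block of the disc write
    task without interrupting the PUH, queue failures in a defect list, then
    service the list as one extra task aimed at the spare area (step A10)."""
    defect_list = []
    for addr, data in blocks:                 # steps A03/A05: record and verify
        if not write_block(addr, data):       # step A08: defect at destination
            defect_list.append((addr, data))  # queue it; no seek to spare area
    for addr, data in defect_list:            # step A10: one consecutive pass
        spare_write(addr, data)               # over the spare area
    return defect_list                        # blocks that were reallocated
```

The main loop never leaves the destination track, and only one burst of spare-area writes occurs, however many defects the disc has.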
  • FIG. 12 shows an embodiment of a defect list 139.
  • the link list structure may also be used to construct the defect list 139 .
  • When a first defect is found, the defect list 139 creates an entry pointing to the write data block #DW of address A+1.
  • Another defect is found when recording a write data block #DW to address C+1, and the defect list 139 links the write data block #DW of address A+1 to the write data block #DW of address C+1.
  • Address C+2 is then found defective, so the link list is further extended.
  • The basic unit of a so-called data block may be a sector or a cluster, and is not particularly limited.
  • the write list 136 , latest list 137 , free list 138 and defect list 139 may be stored in the memory device 124 or other devices. While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Abstract

A buffer management method is provided, particularly adaptable in an optical disc drive to access an optical disc. One or more data blocks are recorded to the optical disc in response to received write commands. Data blocks corresponding to the write commands are first buffered in a buffer of the optical disc drive. Thereafter, one or more write tasks may be organized based on the buffered write commands, each associated with a group of data blocks having consecutive destination addresses. A recording operation can be scheduled based on those write tasks, and the recording operation is performed to record the data blocks to the optical disc.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 60/890,204 filed on Feb. 16, 2007.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention relates to optical disc drives, and in particular, to buffer management in random access optical discs.
  • 2. Description of the Related Art
  • Writable optical disc technologies have been highly developed, and there are various standards such as CD-R, CD-RW, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, HD DVD and Blu-ray that allow data to be recorded onto a disc. FIG. 1 shows a conventional optical disc drive 120 coupled to a host computer 110. The host computer 110 may issue read or write commands to access an optical disc (not shown) installed in the optical disc drive 120. A typical read command comprises one or more destination addresses where data blocks are requested, and a write command likewise comprises one or more specific destination addresses where one or more data blocks are designated to be recorded. Data blocks to be recorded may be sent from the host computer 110 in conjunction with the write commands. The optical disc drive 120 basically comprises a processor 122, a memory device 124 and a driving module 126. The memory device 124 is usually separated into two areas, a read buffer 130 and a write buffer 132. The read buffer 130 buffers data blocks acquired from the optical disc in response to the read commands. On the other hand, the write buffer 132 buffers data blocks to be recorded onto the optical disc. The driving module 126 includes a mechanical unit comprising a pick up head (PUH), a motor and other controlling means (not shown) to perform physical data access of the optical disc.
  • Due to the spinning nature of the optical disc, a conventional recording operation can be performed easily in sequential mode, whereby data blocks buffered in the write buffer 132 are recorded sequentially according to their destination addresses. Some random access technologies have been proposed, allowing random recording of the optical disc. However, random recording is very inefficient for the driving module 126 because track seeking and locking consume significant time. To improve efficiency, various buffer management methods have been provided. For example, the write buffer 132 may be divided into a plurality of sections 134, each corresponding to a destination address. Each section 134 serves as a ring buffer to cache data blocks of adjacent destination addresses. In other words, it is better that buffered data blocks have continuous destination addresses. In this way, data blocks with consecutive destination addresses have a higher probability of being gathered, so the mechanical operations of track seeking and locking can be reduced to smooth the randomness of PUH movement. However, since the scale of disc addresses is much larger than the buffer size, the effect is limited under very random and frequent disc access operations. It is therefore desirable to propose an enhanced buffer management method.
  • BRIEF SUMMARY OF THE INVENTION
  • An embodiment of a buffer management method is provided, particularly adaptable in an optical disc drive to access an optical disc. One or more data blocks are recorded to the optical disc in response to received write commands. Data blocks corresponding to the write commands are first buffered in a buffer of the optical disc drive. Thereafter, one or more write tasks may be organized based on the buffered write commands, each associated with a group of data blocks having consecutive destination addresses. A recording operation can be scheduled based on those write tasks, and the recording operation is performed to record the data blocks to the optical disc.
  • To organize the write commands, a write list is provided, comprising entries for each data block allocated in the buffer. Entries of data blocks having consecutive destination addresses are linked to form at least one link list according to the write list, and write tasks are established from the link list, each write task comprising the allocation of a first data block.
  • Furthermore, a free list may also be provided to maintain unallocated entries of the buffer. When buffering data blocks, the write list is scanned to determine whether an incoming data block has a previous copy in the buffer. If so, the previous copy is overwritten by the incoming data block. Otherwise, a free entry is acquired from the free list to store the incoming data block. Thereafter, it is determined whether the incoming data block has a destination address consecutive to those contained in an existing write task. If so, the existing write task associated with the incoming data block is updated. Otherwise, a new write task is created for the incoming data block.
  • Furthermore, a latest list is provided to maintain entries of data blocks associated with the latest certain amount of read and write commands received by the optical disc drive. While buffering the data blocks, a read command may be received, and designated to read a data block from a destination address of the optical disc. It is determined whether the latest list is hit according to the destination address. If the latest list is hit, the data block is output from the buffer to respond to the read command, and the latest list is renewed at the same time. On the contrary, if the latest list is not hit, the optical disc is read to acquire the data block, and the latest list is updated with an entry where the data block is buffered.
  • In a further embodiment, to schedule the write operation, the hit rate of each write task, the number of data blocks contained in each write task, and the pick up head (PUH) loading implied by the first data block's destination address of each write task are counted. A priority order of the write tasks is arranged based on their hit rates, numbers of data blocks and PUH loadings. The write operation is performed by executing the write tasks following the priority order. Execution of a write task comprises the steps of encoding data blocks into error correction code (ECC) blocks and burning them onto a destination address on the optical disc. Upon completion of the write operation, successfully burnt data blocks are flushed from the buffer.
  • In a further embodiment, when a write task is executed, a defect list is provided, maintaining entries of data blocks having destination addresses where defects exist. It is determined whether an error is found when burning an ECC block. If an error is found, the entry where a data block corresponding to the ECC block is buffered is added into the defect list. Upon completion of all prioritized write tasks, a further write task may be processed to burn the data blocks listed in the defect list.
  • Another embodiment is the optical disc drive implementing the buffer management method. A detailed description is given in the following embodiments with reference to the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
  • FIG. 1 shows a conventional optical disc drive;
  • FIG. 2 a shows an embodiment of an optical disc drive according to the invention;
  • FIG. 2 b is a flowchart of an embodiment of buffer management method according to the invention;
  • FIG. 3 a shows an embodiment of a write list and a free list;
  • FIG. 3 b shows another embodiment of a write list;
  • FIG. 4 shows an embodiment of a link list;
  • FIG. 5 a shows embodiments of a buffer using a forward type link list;
  • FIG. 5 b shows another embodiment of a write list;
  • FIG. 6 a is a flowchart of an embodiment of a buffering operation;
  • FIG. 6 b is a flowchart of data block reception when performing the buffering operation;
  • FIG. 6 c is a flowchart of mode detection when performing the buffering operation;
  • FIG. 6 d is a flowchart of priority determination when performing the buffering operation;
  • FIG. 6 e is an embodiment of calculating priority value of the invention;
  • FIG. 7 is a flowchart of a read command handling process;
  • FIG. 8 is a flowchart of recording start condition determination;
  • FIG. 9 is an exemplary flowchart of a recording operation;
  • FIG. 10 is a flowchart of a conventional defect handling process;
  • FIG. 11 is a flowchart of an embodiment of a defect handling process; and
  • FIG. 12 shows an embodiment of a defect list.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
  • FIG. 2 a shows an embodiment of an optical disc drive according to the invention. While a processor 122 processes read and write commands #R and #W issued from the host computer 110, a buffer 140 is deployed in the memory device 124 to temporarily store the associated data blocks. Data blocks requested by a read command #R are referred to as read data blocks #DR, whereas those associated with write commands #W are write data blocks #DW. The buffer 140 is partitioned into several blocks, each serving as a unit for data storage, and several blocks are collected as a section. The buffer 140 serves as a cache to store the read data blocks #DR and the write data blocks #DW, and in the embodiment, a buffer management system and approach is disclosed to optimize the recording operation using management tables, such as a write list 136, a latest list 137, a defect list 139 and a free list 138. The driving module 126 is accordingly controlled to perform the recording operation. When a start recording condition is met, the data blocks in the buffer 140 are transferred and recorded to the optical disc.
  • When the optical disc drive 120 receives a write command #W designated to record one or more write data blocks #DW onto the optical disc, the write data blocks #DW are first buffered in the memory device 124 before the physical recording operation is performed, and the write list 136, the latest list 137 and the free list 138 are updated accordingly. The write list 136 serves as a lookup table maintaining the relationship of all write data blocks #DW buffered in the buffer 140. Likewise, the free list 138 serves as another lookup table containing the unallocated blocks of the buffer 140 that direct to free or available spaces. Furthermore, a latest list 137 is provided to maintain blocks of the latest accessed data blocks in the buffer 140, and a defect list 139 is used to maintain blocks of those that failed to be recorded onto the optical disc. The write list 136, latest list 137, free list 138 and defect list 139 may be established by tables, but other data structures such as a link list are also adaptable. Implementations of the proposed architecture of FIG. 2 a are further described in the embodiments hereafter.
  • FIG. 2 b is a flowchart of an embodiment of a buffer management method according to the invention. The fundamental steps are summarized into steps 201 to 207. In step 201, the optical disc drive 120 is initialized. After initialization, a buffering operation is recursively processed in step 203. The buffering operation buffers write data blocks #DW transferred from the host 110 according to the write command #W. Meanwhile, the buffering operation also buffers read data blocks #DR transferred from the optical disc according to the read command #R. In step 205, a start recording condition is checked. Only when the start recording condition is met does the optical disc drive 120 enter a physical recording operation in step 207; otherwise the process loops back to step 203.
  • While the buffering operation is being processed, the host computer 110 may randomly issue read commands #R or write commands #W designated to request certain read data blocks #DR from the optical disc, or to record write data blocks #DW onto the optical disc. The read and write data blocks #DR and #DW may be buffered into the buffer 140, and the write list 136, latest list 137 and free list 138 are updated accordingly for their maintenance. It is well known that continuity of data blocks is highly desirable when performing a recording operation. In the embodiment, a recording operation which successively records at least one write data block #DW onto a consecutive destination area of the optical disc is defined as a disc write task. To minimize seeking operations and to maximize the performance of a recording operation, the processor 122 collects unrecorded data blocks having consecutive destination addresses and successively records those collected data blocks onto the optical disc in a disc write task.
  • Specifically, the write list 136 is created from the buffer 140, and the contents of the write list 136 are utilized to assist in establishing the disc write tasks. FIG. 3 a shows an embodiment of a write list 136 a and a free list 142. In FIG. 3 a, a plurality of data blocks are stored in the buffer 140. The labels A, B and C in each block denote the destination addresses of the corresponding write data blocks #DW. As shown, there is a plurality of write data blocks #DW stored in the buffer 140, in which those of consecutive destination addresses are categorized into one disc write task. As an example, the addresses denoted A, A+1 and A+2 are discovered and categorized into a first disc write task. Likewise, the write data blocks #DW of destination addresses denoted B and B+1, and C, C+1 and C+2, construct two other disc write tasks. As shown, the write list 136 a includes buffer indices and the corresponding destination addresses of the write data blocks #DW. Although the write data blocks #DW may be distributed randomly among different blocks of the buffer 140, with the write list 136 a it can be easily determined whether an incoming write data block #DW corresponds to any of the existing disc write tasks. As shown, free or available blocks of the buffer 140, denoted as "FREE", are maintained by the free list 138.
  • FIG. 3 b shows another embodiment of a write list 136 b. The write list 136 b is a sorted version of the write list 136 a in FIG. 3 a, in which the elements are rearranged based on the destination addresses of the write data blocks #DW. Since the write list 136 b is implemented in the memory device 124, the cost of sorting the contents is negligible, while the manageability of the write list 136 b is thereby increased. For example, if an incoming write data block #DW denoted "A+3" is input, one of the free entries in the free list 138, such as "FREE1", is assigned to store it, and in the write list 136 b, an additional column is appended to record its destination address "A+3" and a pointer pointing to its newly assigned entry. In another embodiment, the write list 136 b with the newly added entry "A+3" could be further sorted to produce an updated write list.
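The tables of FIGS. 3 a and 3 b can be modeled as a plain mapping from buffer index to destination address; grouping consecutive destination addresses into disc write tasks then falls out of a single scan over the sorted entries. The names below are illustrative, not taken from the patent.

```python
def build_write_tasks(write_list):
    """Group buffered write data blocks into disc write tasks, one task per
    run of consecutive destination addresses (like tasks A, B and C in
    FIG. 3 a). write_list maps buffer index -> destination address."""
    # Sort by destination address, as in the sorted write list of FIG. 3 b.
    entries = sorted(write_list.items(), key=lambda kv: kv[1])
    tasks, current = [], []
    for buf_idx, addr in entries:
        if current and addr == current[-1][1] + 1:
            current.append((buf_idx, addr))   # extends the current run
        else:
            if current:
                tasks.append(current)
            current = [(buf_idx, addr)]       # starts a new disc write task
    if current:
        tasks.append(current)
    return tasks
```

Each returned task lists (buffer index, destination address) pairs, mirroring how blocks scattered across the buffer 140 still form consecutive recording runs.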
  • FIG. 4 shows an embodiment of a link list 400. In practice, the data structure of the buffer 140 can be implemented with a link list. A link list has various types, basically a forward type and a backward type. In a link list of forward type, each element is associated with a next index pointing to the address where the next element is located. Alternatively, in a link list of backward type, each element is bound with a previous index indicating where the previous element is located. The advantages of a link list are that there is no need to sort the elements and that the cost of adding or removing an element is almost negligible, since only the relative indices need to be changed. Practically, the forward and backward types can be implemented simultaneously to form a bi-directional link list.
  • The architecture of the link list can be adapted to enhance the embodiments in FIGS. 3 a and 3 b. FIG. 5 a shows embodiments of a buffer 140 using a forward type link list. The write list 150 a maintains several disc write tasks by recording their task entries. A task entry indicates where the first write data block #DW of the disc write task is buffered. In the buffer 140, each block is bound with a pointer linking to another block. For example, for the disc write task A, its task entry points to where the write data block #DW with the beginning destination address A is located, and the write data block #DW A has a pointer linking to the successive write data block #DW with destination address A+1. Likewise, the pointer in the write data block #DW A+1 links to the following write data block #DW with destination address A+2. Free spaces in the buffer 140 can also be managed in this way. The free list 144 records only an entry indicating the first free block, and its successors are linked through pointers. The link list structure facilitates data additions and removals, while the complexity of managing the write list 150 a and the free list 144 is also reduced.
  • FIG. 5 b shows another embodiment of a write list 150 b. A backward type link list is used, and the mechanism is very similar to the embodiment of FIG. 5 a except for the pointer directions. The task entry in the write list 150 b indicates where the last write data block #DW of the disc write task is buffered. Taking disc write task A as an example, the last write data block #DW with destination address A+2 is located at block index "2", and the write data block #DW A+2 has a pointer linking to the previous write data block #DW A+1. Likewise, the pointer in the write data block #DW A+1 links to the write data block #DW A.
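The forward type link list of FIG. 5 a can be sketched with a next-index array kept alongside the block buffer: the write list keeps only the task entry (the first block of a disc write task), and traversal follows the pointers. This is an illustrative model of the structure, not firmware code; the class and method names are invented.

```python
class LinkedBuffer:
    """Forward type link list over a block buffer, as in FIG. 5 a: each block
    carries a 'next' index, and a task entry points at its first block."""
    def __init__(self, size):
        self.data = [None] * size
        self.next = [None] * size
        # Free list as in FIG. 5 a: one entry to the first free block, with
        # the remaining free blocks chained through their pointers.
        for i in range(size - 1):
            self.next[i] = i + 1
        self.free_head = 0

    def alloc(self, value):
        """Take a block from the free list; O(1), no sorting needed."""
        idx = self.free_head
        self.free_head = self.next[idx]
        self.data[idx], self.next[idx] = value, None
        return idx

    def append(self, tail_idx, value):
        """Link a new block after an existing one (extending a disc write task)."""
        idx = self.alloc(value)
        self.next[tail_idx] = idx
        return idx

    def walk(self, head_idx):
        """Traverse a disc write task from its task entry."""
        out, idx = [], head_idx
        while idx is not None:
            out.append(self.data[idx])
            idx = self.next[idx]
        return out
```

Adding or removing a block only rewires indices, which is the advantage the text ascribes to the link list structure.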
  • FIG. 6 a is a flowchart of an embodiment of a buffering operation. The buffering operation in step 203 of FIG. 2 b, in detail, further comprises a plurality of steps. In step 601, when the buffering operation of step 203 is initialized, write commands #W are randomly issued from the host computer 110 and handled in different procedures. In step 603, when a specific write command #W is received, a block reception procedure is performed in step 605 to store its corresponding write data blocks #DW into the buffer 140. A detailed embodiment of the block reception is described in FIG. 6 b.
  • Upon completion of receiving a write data block #DW, a mode detection procedure is triggered in step 607. In the embodiment, the optical disc drive 120 supports two modes when buffering the write data blocks #DW and the read data blocks #DR. One is the conventional sequential mode, and the other is a random mode. If the arrangement of all buffered write data blocks #DW conforms to a conventional sequential structure, it is more efficient to record the write data blocks #DW in sequential access mode. However, when the destination addresses of the buffered write data blocks #DW are not continuous, the recording operation is more complex; thus, it is processed in random mode, in which various approaches such as disc write tasks are used to optimize performance. The determination of the modes is described in an embodiment in FIG. 6 c.
  • If random mode is set in step 607, a plurality of disc write tasks will be established. To schedule the disc write tasks, priorities of each disc write task are required. A priority calculation process is therefore executed in step 609 to prioritize all disc write tasks. The priorities may be determined by various buffer statuses of each disc write task, and a detailed embodiment is described in FIG. 6 d.
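The detailed calculation belongs to FIG. 6 d; as the brief summary notes, the priority of a disc write task may weigh its hit rate, its number of data blocks, and the PUH loading implied by its first destination address. A possible combination, with entirely invented weights, might look like:

```python
def task_priority(hit_rate, block_count, puh_distance,
                  w_hit=2.0, w_blocks=1.0, w_seek=0.5):
    """Hypothetical priority value for a disc write task: favor tasks that
    are hit often and hold many consecutive blocks, and penalize tasks whose
    first destination address implies a long PUH seek. The weights are
    illustrative only, not taken from the patent."""
    return w_hit * hit_rate + w_blocks * block_count - w_seek * puh_distance
```

Under this sketch, a large task near the current PUH position outranks a small task far away, which matches the goal of minimizing seek overhead.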
  • One write command #W may be associated with more than one write data block #DW. In step 611, it is determined whether any write data blocks #DW corresponding to the write command #W are still pending to be buffered in the buffer 140. If so, the process loops back to step 605 to buffer another data block. Otherwise, the buffering operation is concluded, followed by the start recording condition determination process as described in step 205 of FIG. 2 b.
  • FIG. 6 b is a flowchart of data block reception when performing the buffering operation. The block reception procedure as described in step 605 of FIG. 6 a is initialized in step 621 to handle an incoming write data block #DW. In step 623, the processor checks the write list 136 to determine whether the incoming write data block #DW has a previous copy in the buffer 140. If so, overwriting is required, and step 625 is processed, whereby the processor overwrites the previous copy with the incoming write data block #DW. Otherwise, a free block should be allocated to store the incoming write data block #DW. Before allocating the free block, the capacity of the memory device 124 is checked in step 627. If there is not enough space left for further storage, a release procedure is triggered in step 629 to release more space for storing data. A cache policy may be previously defined, whereby the processor releases certain blocks accordingly to acquire additional capacity. Various algorithms already exist to release cached data depending on usage factors such as hit rates or idle time, so a detailed example is not introduced herein. After sufficient capacity is assured, step 627 is followed by step 631, the block allocation step. In step 631, the processor 122 acquires a free block from the free list 138 to store the incoming write data block #DW.
  • In step 633, it is determined whether the incoming write data block #DW hits an existing disc write task. According to the write command #W transmitted with the incoming write data block #DW, the particular destination address to which the write data block #DW is bound can be deduced. By checking the write list 136, the processor 122 can identify whether the particular destination address is successive to or precedes whatever was previously buffered in the buffer 140. For example, if the incoming write data block #DW has a destination address consecutive to those contained in an existing disc write task, step 637 is processed, in which the existing disc write task is updated to include the incoming write data block #DW.
  • If the incoming write data block #DW has a destination address located between the end of one existing disc write task and the beginning of another, the two disc write tasks are merged into one new disc write task. On the other hand, in step 635, if no adjacency is detected, a new disc write task may be created in the write list 136 to handle the incoming write data block #DW. As a supplemental example, in step 625 the write list may not need an update, but its last access time may be refreshed in order to track statistics such as time-outs or the hit rate of the disc write task. Upon completion of buffering the incoming write data block #DW, the latest list 137 is also updated in step 639.
  • Similar to the maintenance of the write list 136 and free list 138, a latest list 137 is established as a read cache, recording entries of data blocks associated with a certain number of the most recently received read and write commands #R and #W. As an example, the latest list 137 may utilize the link list architecture described in FIG. 5 a, with additional pointers implemented in the buffer 140 to link certain write data blocks #DW and read data blocks #DR. Therefore, the write list 136 and the latest list 137 are both deployed on the basis of the buffer 140. In other words, the architecture allows one buffer 140 to function as read and write caches at the same time. In step 640, the block reception is concluded.
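The block reception flow of steps 621 through 639 can be sketched as follows. This is a simplified Python illustration, not the firmware implementation: the write list is modeled as a dictionary keyed by destination address, the cache-release policy is supplied by the caller, and all names are hypothetical.

```python
# Simplified sketch of steps 621-639: handle one incoming write block #DW.
# `write_list` maps destination address -> buffered data; a "disc write task"
# is implicitly a run of consecutive addresses.
def receive_block(write_list, addr, data, capacity, release):
    if addr in write_list:                 # steps 623/625: previous copy exists
        write_list[addr] = data            # overwrite it in place
        return "overwrite"
    if len(write_list) >= capacity:        # steps 627/629: not enough space left
        release(write_list)                # caller-defined cache-policy release
    write_list[addr] = data                # step 631: allocate a free block
    # steps 633-637: adjacency check against already-buffered addresses
    if addr - 1 in write_list or addr + 1 in write_list:
        return "merged"                    # extends (or merges) existing task(s)
    return "new_task"                      # step 635: create a new disc write task
```

For example, buffering addresses 10 and 11 in turn yields a new task and then a merge, while re-sending address 10 overwrites the earlier copy.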
  • FIG. 6 c is a flowchart of mode detection when performing the buffering operation. In step 641, the mode detection procedure described in step 607 of FIG. 6 a is initialized. Various conditions are considered to decide which mode to set. In step 643, the processor 122 determines the current mode. If the current mode is the sequential mode, the process jumps to step 649. Otherwise, step 645 is processed, in which the total number of disc write tasks is counted. If there is more than one disc write task, random mode is set in step 651. In step 647, if there are no disc write tasks left in the buffer 140, the sequential mode is set in step 653. In step 649, if there is only one disc write task left, the last write data block #DW buffered in the block reception procedure is checked to determine whether it belongs to that only disc write task. If not, a new disc write task is created, so the mode should be set to random mode in step 651. In step 651, if the previous mode is sequential mode, the processor 122 creates the write list 136, the latest list 137 and the free list 138 accordingly. Otherwise, step 649 is still followed by step 653. However, buffer reception may be a continuous process, so steps 605 and 607 may be executed in parallel. In this case, the mode setting should depend on the latest status of the buffer 140. Steps 651 and 653 are followed by step 655, in which the mode detection procedure is concluded after the mode is set. In another embodiment, in step 645, if there are one or more disc write tasks, the process goes to step 651, in which random mode is set.
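The decision points of steps 641 through 655 can be condensed into one function. This is a sketch under simplifying assumptions: the buffer state is reduced to a task count and a flag indicating whether the last buffered block #DW belongs to the single remaining task; the parameter names are hypothetical.

```python
# Condensed sketch of the FIG. 6c mode detection (steps 643-653).
def detect_mode(current_mode, task_count, last_block_in_only_task):
    if current_mode != "sequential":
        if task_count > 1:                 # steps 645/651: several tasks buffered
            return "random"
        if task_count == 0:                # steps 647/653: buffer has no task
            return "sequential"
    # step 649: exactly one task left (or already in sequential mode) -
    # check whether the last buffered block extends that task
    if task_count == 1 and not last_block_in_only_task:
        return "random"                    # step 651: a new task was created
    return "sequential"                    # step 653
```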
  • FIG. 6 d is a flowchart of priority determination when performing the buffering operation. As described in step 609 of FIG. 6 a, priority calculation is required for scheduling all of the disc write tasks to determine the sequence in which those disc write tasks are recorded. In step 661, the priority calculation procedure is initialized. In step 663, the hit rate of each disc write task is counted. Any action involving any write data block #DW in a disc write task counts as a hit, such as overwriting, reading or adding a write data block #DW. A buffered write data block #DW may be requested by a read command #R before it is recorded, so the reading operation is also counted in the hit rate. In one embodiment, the hit rates can further be categorized into write and read types. In the write list 136, write hit rates are counted per disc write task, and in the latest list 137, read hit rates may be counted per read data block #DR.
  • In step 665, for each disc write task, the total number of data blocks is considered as a factor in determining the priority. Physically, one disc write task corresponds to one sequential recording operation of the driving module 126, in which track seeking and locking are performed once, so it is preferable and more efficient to have more data blocks recorded at one time. The counted numbers can directly indicate the potential performance of a disc write task, and are thus taken as a factor for establishing priority.
  • In step 667, the distances between the current position of the PUH and the task destination areas on the optical disc are also considered as a priority factor. A task destination area is the destination physical address of the first write data block #DW in a disc write task. When a disc write task is recorded, the distance the PUH moves also affects the performance. It is desirable to schedule an optimized recording operation so that the PUH moves as little as possible to complete all disc write tasks. Thus, the PUH distances are factors in their priorities. In step 669, the priority value of each disc write task is calculated based on hit rates, number of data blocks #DW, and PUH distances. How these factors are combined can depend on predetermined performance policies defined in firmware of the optical disc drive 120, and the implementation is not limited to that described in the embodiment.
  • FIG. 6 e illustrates an embodiment of priority value calculation of the invention. The factors of hit rate, task length and PUH distance are respectively multiplied by weighting factors Wa, Wb and Wc, and then summed together to generate the priority value. The weighting factors Wa, Wb and Wc are adjustable depending on the actions of the host computer 110. For example, if the host computer 110 issues many write commands #W with consecutive destination addresses, whereby the number of data blocks #DW of a disc write task is sufficiently large, the weighting factor Wb could be set equal to weighting factor Wc, and the weighting factor Wb may be greater than weighting factor Wa. In another embodiment, the weighting factors Wa, Wb and Wc can be modified by the processor 122, and can be optimized by checking the data throughput of the optical pickup head.
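The weighted sum of FIG. 6 e might be sketched as below. The default weight values and the inversion of the PUH distance (so that a shorter seek yields a higher score) are illustrative choices for this sketch, not values taken from the embodiment.

```python
# Illustrative weighted-sum priority (FIG. 6e): Wa * hit_rate + Wb * length
# + Wc * f(distance), where f inverts the distance so a nearer destination
# scores higher. Weights here are arbitrary example defaults.
def priority(hit_rate, task_length, puh_distance, wa=1.0, wb=2.0, wc=2.0):
    nearness = 1.0 / (1.0 + puh_distance)   # shorter seek -> larger factor
    return wa * hit_rate + wb * task_length + wc * nearness
```

With these defaults, a longer task, a hotter task, or a nearer destination each raises the priority value, matching the three factors counted in steps 663 to 667.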
  • FIG. 7 is a flowchart of a read command handling process. In step 203 of FIG. 2 b, the buffering operation is introduced, and step 603 of FIG. 6 a already discussed a write command #W handling process. Here, the buffering operation corresponding to a read command #R is described. The buffering operation of step 203 is initialized in step 701. In step 703, a read command #R is received by the optical disc drive 120, requesting a certain read data block #DR from a specific address on the optical disc. In step 705, the processor 122 first checks whether the read data block #DR is already cached in the buffer 140 by examining items maintained in the latest list 137. If the read data block #DR is hit, step 707 is processed, in which the read data block #DR is acquired from the buffer 140 and transferred to the host computer 110. Generally, hit rates and time-outs are factors used by cache policies. When a block is hit, its usage history, such as last access time or access frequency, is renewed. Therefore, after step 707, the entry corresponding to the read data block #DR in the latest list 137 is renewed in step 709.
  • On the other hand, if the read data block #DR is not hit in the buffer 140, it shall be directly acquired from the optical disc. In step 715, a reading operation is performed to acquire the read data block #DR from the optical disc and store it in the buffer 140. Then, in step 717, the latest list 137 is updated accordingly. Before buffering the accessed read data blocks #DR, the capacity of the buffer 140 may be checked in step 711. If the capacity is not enough, a cache release procedure is performed in step 713. In other words, if the capacity is not enough for buffering the read data blocks currently being read from the disc, the processor 122 searches the blocks according to the latest list 137 and the write list 136 and releases the blocks that are not write data blocks. A read command #R may request more than one read data block #DR, so in step 719, it is determined whether all requested read data blocks #DR have been acquired. If not, the process loops back to step 705. Upon acquisition of all requested read data blocks #DR, the buffering operation is concluded in step 721.
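The hit/miss paths of FIG. 7 can be sketched for a single block as follows. This is a simplified model: the buffer and latest list are dictionaries, the disc access is an injected callable, and the capacity check of steps 711/713 is omitted; all names are hypothetical.

```python
# Sketch of one iteration of the FIG. 7 loop for a single block #DR.
def handle_read(latest_list, buffer, addr, read_from_disc):
    if addr in buffer:                 # step 705: block already cached
        latest_list[addr] = "renewed"  # step 709: refresh usage history
        return buffer[addr], "hit"     # step 707: serve from the buffer
    data = read_from_disc(addr)        # step 715: read from the optical disc
    buffer[addr] = data                # ... and buffer the accessed block
    latest_list[addr] = "new"          # step 717: update the latest list
    return data, "miss"
```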
  • FIG. 8 is a flowchart of recording start condition determination. As described in FIG. 2 b, step 205 determines whether a recording operation can be initialized. The recording start condition comprises considerations of various factors, such as capacity usages of the buffer 140, an idle time since last activity of the buffer 140, duration since the last recording operation, and total number of disc write tasks.
  • In step 801, the recording start condition determination of step 205 is triggered. In step 803, the available capacity of the buffer 140 is compared with a capacity threshold. A recording operation may be triggered if the write data blocks #DW buffered therein are sufficient for recording, so the recording start condition is deemed satisfied when the available capacity of the buffer 140 is smaller than the capacity threshold, and the process jumps to step 813. In step 813, the processor determines that the disc drive 120 is ready to perform the recording operation. The capacity threshold varies with the mode. Generally, in random mode, it is desirable to gather more write data blocks #DW before recording because the consecutiveness may thereby be increased, so the capacity threshold is set to a smaller value in random mode than in sequential mode.
  • In step 805, the idle time is compared with an idle threshold. The idle time refers to the period since the last activity of the buffer 140, such as data buffering or data output, was conducted. In sequential mode, logically there is only one disc write task, so the buffered write data blocks #DW are ready to be recorded at any time. In random mode, however, since the complexity of a recording operation is higher, it is desirable to wait longer to allow more write data blocks #DW to be collected. Thus, the idle threshold is set to a higher value in random mode than in sequential mode.
  • In step 807, the duration since the last recording operation is compared with a duration threshold. Normally, the buffered write data blocks #DW are periodically flushed to the optical disc if no other specific event occurs. The duration threshold value is also dependent on the mode. In the embodiment, the duration threshold is set to a higher value in random mode than in sequential mode.
  • In step 809, the number of disc write tasks is counted. The number is irrelevant in sequential mode because there is only one disc write task. In random mode, however, the task count is proportional to the randomness of the buffer 140. Also, the capacity of the write list 136 may be limited to managing a certain number of disc write tasks, so a task threshold is set. When the number of disc write tasks exceeds the task threshold, the recording operation is triggered in step 813.
  • If none of the criteria from steps 803 to 809 are met, the processor 122, in step 811, determines that the disc drive 120 is not yet ready to perform the recording operation. Then, step 815 concludes the criterion determination step.
  • FIG. 9 is an exemplary flowchart of a recording operation. When the buffering operation is complete, and at least one of the recording start conditions is met, the recording operation is initialized in step 901. In step 903, the mode is detected. In sequential mode, the case is simpler: a conventional sequential recording operation is performed in step 913. The buffered write data blocks #DW in the buffer 140 are recorded and flushed if no error is detected.
  • If the mode is random mode, the disc write tasks are handled one by one in steps 905 to 911. In step 905, the disc write task having the highest priority value is first selected for recording. In another embodiment, disc write tasks having priority values exceeding a threshold are selected for recording, and the threshold is adjustable according to the status of the buffer 140, such as the available capacity of the buffer 140 and/or the total number of existing disc write tasks. If the available capacity of the buffer 140 is low, the threshold should be lowered; likewise, if the total number of existing disc write tasks is high, the threshold should be lowered. Step 907 is an optional step, in which a ring buffer may be provided in the memory device 124 as a second level cache. Write data blocks #DW of the selected disc write task to be recorded may be copied to the ring buffer, whereby further steps are processed. Alternatively, the ring buffer may be omitted, and the write data blocks #DW are processed directly in the buffer 140. In step 909, the write data blocks #DW are individually encoded into error correction code (ECC) blocks and sequentially recorded onto the destination area of the optical disc. The encoding of the ECC blocks varies with standards, and the details are well known to persons skilled in the art, so they are not described herein.
  • In step 911, upon completion of a disc write task, the processor 122 determines whether more disc write tasks are to be processed. If so, the process loops to step 905 to select and process a disc write task of highest priority among the unprocessed ones. If all disc write tasks are done, the recording operation is concluded in step 915.
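The random-mode loop of steps 905 to 911 repeatedly selects and records the highest-priority unprocessed task. The sketch below assumes tasks are pre-scored (e.g., by the FIG. 6 e weighted sum) and abstracts the ECC encoding and burning of steps 907 to 909 behind a caller-supplied function; both names are hypothetical.

```python
# Sketch of the FIG. 9 random-mode loop (steps 905-915).
# `tasks` maps task id -> priority value; `record_task(tid)` stands in
# for the ring-buffer copy, ECC encoding, and burning of steps 907-909.
def record_random(tasks, record_task):
    order = []
    pending = dict(tasks)
    while pending:                              # step 911: more tasks remain?
        tid = max(pending, key=pending.get)     # step 905: highest priority
        record_task(tid)                        # steps 907-909: record it
        order.append(tid)
        del pending[tid]
    return order                                # step 915: all tasks done
```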
  • In step 909, when recording the write data blocks #DW, defects may be found on the optical disc where data cannot be correctly recorded. Conventionally, write data blocks #DW are written one by one. When a defect is found where a write data block #DW should be recorded, the PUH moves to a spare area to record the write data block #DW, and then moves back to an address successive to the defective address to record further write data blocks #DW. Alternatively, when defects are detected, the write data blocks #DW are copied to another buffer, and another disc write task is scheduled to rewrite them.
  • FIG. 10 is a flowchart of a conventional defect handling process. In step A01, a recording procedure for a disc write task is initialized. Write data blocks #DW of a disc write task are sequentially processed through steps A03 to A09. In step A03, one write data block #DW is recorded to the optical disc, and in step A05, the recorded write data block #DW is checked. If an error is found, step A07 is processed, in which the PUH moves to a spare area to rewrite the write data block #DW. Alternatively, the write data block #DW may be copied to another buffer to wait for rewriting. The spare area is space reserved for defect management during the recording procedure, and its implementation varies with standards. When the write data block #DW is successfully recorded onto the spare area, the PUH moves back to an address successive to where the defect was detected to process the next write data block #DW. In step A09, it is determined whether all write data blocks #DW in the disc write task have been recorded. If not, the process loops to step A03. Otherwise, the recording procedure is concluded in step A11.
  • Obviously, step A07 becomes a performance bottleneck because the PUH moves to and from the spare area. If there are multiple defects, heavy mechanical burdens are induced by frequent track seeking and locking, seriously degrading the performance. Alternatively, additional buffer space may be consumed to buffer the write data blocks #DW in need of rewriting.
  • To improve on this inefficient design, a defect list 139 is provided in the invention to maintain entries of blocks that failed to be recorded onto the optical disc. FIG. 11 is a flowchart of an embodiment of a defect handling process. Steps A01, A03 and A05 are similar to those in FIG. 10, whereby a write data block #DW is recorded and verified. In step A08, if a defect is found on the destination area, the PUH is not moved to the spare area. Instead, the processor 122 adds the block of the write data block #DW to the defect list 139. Thereafter, step A09 proceeds, continuing to process all of the write data blocks #DW in the disc write task. In this way, the PUH continuously processes all write data blocks #DW of a disc write task without the interruption and overhead induced by moving to and from the spare area. All the write data blocks #DW failing to be recorded due to defects are collected in the defect list 139 to form an extra disc write task. The write data blocks #DW that failed to record on their destination areas are reallocated to the spare area with continuity. In step A10, an additional recording operation, like step 909, can be triggered to record the write data blocks #DW to the spare area according to the extra disc write task. Thereafter, the recording procedure is concluded in step A11. In this way, no matter how badly the optical disc is damaged, the continuity of the recording operation is almost unaffected.
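The defect-list flow of FIG. 11 can be sketched as two passes: one uninterrupted pass over the task, then one contiguous pass over the collected failures. The helpers `burn` and `burn_spare` below are hypothetical stand-ins for the real drive operations.

```python
# Sketch of the FIG. 11 defect handling flow (hypothetical helpers).
# `burn(addr, data)` records one block at its destination and returns True
# on success; `burn_spare(data)` records a block to the spare area.
def record_with_defect_list(blocks, burn, burn_spare):
    defect_list = []
    for addr, data in blocks:                  # steps A03-A09: one pass, no seeks
        if not burn(addr, data):               # step A05: verification fails
            defect_list.append((addr, data))   # step A08: list it, keep going
    # step A10: the collected blocks form an extra disc write task that is
    # recorded to the spare area in one contiguous pass
    for addr, data in defect_list:
        burn_spare(data)
    return defect_list
```

Compared with the conventional FIG. 10 flow, the PUH never interrupts the main pass, so the seek cost of defects is paid once, at the end.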
  • FIG. 12 shows an embodiment of a defect list 139. The link list structure may also be used to construct the defect list 139. When a defect is found at address A+1, the defect list 139 creates an entry pointing to the write data block #DW of address A+1. Thereafter, another defect is found when recording a write data block #DW to address C+1, and the defect list 139 links the write data block #DW of address A+1 to the write data block #DW of address C+1. Then address C+2 is found defective, so the link list is further extended. Although the concept of a link list is visualized in FIG. 12, a practical implementation need not be identical to what is shown.
  • In the embodiments, a so-called data block may use sectors or clusters as its basic unit, which is not strictly limited. The write list 136, latest list 137, free list 138 and defect list 139 may be stored in the memory device 124 or other devices. While the invention has been described by way of example and in terms of preferred embodiments, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (24)

1. A buffer management method adaptable in an optical disc drive to access an optical disc, comprising:
receiving write commands designated to record one or more data blocks onto the optical disc;
buffering the data blocks in a buffer of the optical disc drive;
organizing the write commands to establish at least one write task each associated with a group of the buffered data blocks, the group having consecutive destination addresses;
scheduling a recording operation according to the write tasks;
performing the recording operation to record the group of data blocks onto the optical disc.
2. The buffer management method as claimed in claim 1, wherein the step of organizing the write commands comprises:
maintaining a write list of the buffer comprising entries of where each data block is allocated;
scanning the write list to link entries of data blocks having consecutive destination addresses into at least one link list; and
establishing write tasks from the link lists, each write task comprising allocation of a first data block.
3. The buffer management method as claimed in claim 2, further comprising maintaining a free list containing entries of free spaces in the buffer.
4. The buffer management method as claimed in claim 2, wherein the step of buffering data blocks comprises:
scanning the write list to determine whether an incoming data block has a previous copy in the buffer;
if so, overwriting the previous copy by the incoming data block; and
if not, acquiring a free entry from the free list to store the incoming data block.
5. The buffer management method as claimed in claim 4, further comprising, when buffering the data blocks, releasing a certain amount of data blocks from the buffer based on a cache policy if capacity of the buffer runs out.
6. The buffer management method as claimed in claim 3, wherein the step of buffering data blocks further comprises:
determining whether the incoming data block has a destination address consecutive to those contained in an existing write task;
if so, updating the existing write task; and
if not, creating a new write task for the incoming data block.
7. The buffer management method as claimed in claim 2, further comprising maintaining a latest list comprising entries of data blocks associated with the latest certain amount of received read and write commands.
8. The buffer management method as claimed in claim 7, wherein the step of buffering data blocks further comprises:
receiving a read command designated to acquire a data block from a destination address of the optical disc;
determining whether the latest list is hit according to the destination address;
if the latest list is hit, outputting the data block from the buffer to respond to the read command, and renewing the latest list;
if the latest list is not hit, outputting the data block from the optical disc to respond to the read command, allocating an entry to buffer the data block, and adding the entry to the latest list.
9. The buffer management method as claimed in claim 2, wherein the step of scheduling the recording operation comprises:
counting hit rates of each write task;
counting numbers of data blocks contained in each write task;
counting pick up head (PUH) distance to destination addresses of each write task; and
prioritizing the write tasks based on their hit rates, number of data blocks and PUH distances.
10. The buffer management method as claimed in claim 9, wherein:
the recording operation comprises processing the write tasks by their priorities, the processing of a write task comprising encoding data blocks into error correction code (ECC) blocks and burning them onto their destination addresses of the optical disc; and
upon completion of the recording operation, flushing successfully burnt data blocks from the buffer.
11. The buffer management method as claimed in claim 10, wherein the execution of the write task further comprises:
maintaining a defect list comprising entries of data blocks having destination addresses where defects are found;
detecting whether an error is found when burning a data block; and
if the error is found, adding entry of the data block into the defect list.
12. The buffer management method as claimed in claim 11, wherein the recording operation further comprises, upon completion of all prioritized write tasks, executing a further write task to burn up data blocks listed in the defect list.
13. An optical disc drive operative to access an optical disc, comprising:
a memory device, comprising a buffer for storing data blocks associated with incoming read or write commands;
a processor, processing the read and write commands and scheduling a recording operation;
a driver unit, controlled by the processor to perform the recording operation to record the data blocks to the optical disc; wherein:
the optical disc drive receives write commands designated to record one or more data blocks on the optical disc, and buffers data blocks corresponding to the write commands in the buffer;
the processor organizes the write commands to establish at least one write task each associated with a group of data blocks having consecutive destination addresses, and schedules the recording operation based on the write tasks.
14. The optical disc drive as claimed in claim 13, wherein:
the processor maintains a write list in the memory device, and the write list comprises entries of each data block allocated in the buffer; and
the processor links entries of data blocks having consecutive destination addresses to form at least one link list according to the write list, such that one or more write tasks are established from the link lists, each write task comprising allocation of a first data block.
15. The optical disc drive as claimed in claim 14, wherein the processor further maintains a free list in the memory device, and the free list contains unallocated entries of the buffer.
16. The optical disc drive as claimed in claim 15, wherein when buffering the data blocks:
the processor scans the write list to determine whether an incoming data block has a previous copy in the buffer;
if so, the processor overwrites the previous copy by the incoming data block, and if not, the processor acquires a free entry from the free list to store the incoming data block.
17. The optical disc drive as claimed in claim 16, wherein when buffering the data blocks, the processor releases a certain amount of data blocks from the buffer based on a cache policy if capacity of the buffer runs out.
18. The optical disc drive as claimed in claim 15, wherein when buffering data blocks, the processor determines whether the incoming data block has a destination address consecutive to those in an existing write task: if so, the processor updates the existing write task associated with the incoming data block; and if not, the processor creates a new write task in the memory device for the incoming data block.
19. The optical disc drive as claimed in claim 14, wherein the processor further maintains a latest list in the memory device, and the latest list comprises entries of data blocks associated with the latest certain amount of read and write commands.
20. The optical disc drive as claimed in claim 19, wherein when buffering data blocks:
the processor receives a read command designated to read a data block from a destination address of the optical disc, and determines whether the latest list is hit according to the destination address;
if the latest list is hit, the processor outputs the data block from the buffer to respond to the read command, and renews the latest list;
if the latest list is not hit, the driver unit acquires the data block from the optical disc to respond to the read command and stores it in the buffer, and the processor updates the latest list with an entry where the data block is buffered.
21. The optical disc drive as claimed in claim 14, wherein when scheduling the recording operation:
the processor counts hit rates of each write task, numbers of data blocks contained in each write task, and pick up head (PUH) distances according to first data blocks' destination addresses of each write task; and
the processor calculates a priority order of the write tasks based on their hit rates, numbers of data blocks and PUH distances.
22. The optical disc drive as claimed in claim 21, wherein when performing the recording operation:
the processor follows the priority order to execute the write tasks, whereby data blocks corresponding to a write task are encoded into error correction code (ECC) blocks and burnt onto a destination address on the optical disc; and
upon completion of the recording operation, the processor flushes successfully burnt data blocks from the buffer.
23. The optical disc drive as claimed in claim 22, wherein when executing the write task:
the processor maintains a defect list in the memory device, and the defect list comprises entries of data blocks having destination addresses where defects exist;
the processor detects whether an error is found when burning an ECC block; and
if an error is found, the processor adds the entry where a data block corresponding to the ECC block is buffered into the defect list.
24. The optical disc drive as claimed in claim 23, wherein when performing the recording operation, the processor executes a further write task to burn the data blocks listed in the defect list upon completion of all prioritized write tasks.
US12/032,722 2007-02-16 2008-02-18 Buffer management method and optical disc drive Abandoned US20080201522A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/032,722 US20080201522A1 (en) 2007-02-16 2008-02-18 Buffer management method and optical disc drive

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US89020407P 2007-02-16 2007-02-16
US12/032,722 US20080201522A1 (en) 2007-02-16 2008-02-18 Buffer management method and optical disc drive

Publications (1)

Publication Number Publication Date
US20080201522A1 true US20080201522A1 (en) 2008-08-21

US20070283086A1 (en) * 2006-06-06 2007-12-06 Seagate Technology Llc Write caching random data and sequential data simultaneously
US7392340B1 (en) * 2005-03-21 2008-06-24 Western Digital Technologies, Inc. Disk drive employing stream detection engine to enhance cache management policy
US7738329B2 (en) * 2007-02-16 2010-06-15 Mediatek Inc. Random access control method and optical disc drive
US7913034B2 (en) * 2004-07-27 2011-03-22 International Business Machines Corporation DRAM access command queuing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4026518B2 (en) 2003-03-12 2007-12-26 ソニー株式会社 Recording medium, recording apparatus, and recording method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100142350A1 (en) * 2008-12-04 2010-06-10 Byung-Hoon Chung Hybrid optical disk drive, method of operating the same, and electronic system adopting the hybrid optical disk drive
US8626985B2 (en) * 2008-12-04 2014-01-07 Toshiba Samsung Storage Technology Korea Corporation Hybrid optical disk drive, method of operating the same, and electronic system adopting the hybrid optical disk drive

Also Published As

Publication number Publication date
TWI360113B (en) 2012-03-11
CN101246727A (en) 2008-08-20
CN101246726A (en) 2008-08-20
TW200836179A (en) 2008-09-01
US20120233362A1 (en) 2012-09-13
US8205059B2 (en) 2012-06-19
CN100583277C (en) 2010-01-20
US20080198706A1 (en) 2008-08-21
CN101246727B (en) 2011-05-04
TW200836180A (en) 2008-09-01
TWI360114B (en) 2012-03-11

Similar Documents

Publication Publication Date Title
US8205059B2 (en) Buffer management method and optical disc drive
JP3699166B2 (en) Method for monitoring data loss in hierarchical data storage
CN102576293B (en) Data management in solid storage device and Bedding storage system
US7137038B2 (en) System and method for autonomous data scrubbing in a hard disk drive
US5864655A (en) Managing removable media in raid and rail environments
CN103135940B (en) Implementing enhanced fragmented stream handling in a shingled disk drive
US20070118695A1 (en) Decoupling storage controller cache read replacement from write retirement
JPH0877073A (en) Collective optical disk device
CN1512353A (en) Performance improved data storage and method
JP4490451B2 (en) Request scheduling method, request scheduling apparatus, and program in hierarchical storage management system
US8086793B2 (en) Optical disc recorder and buffer management method thereof
CN1145488A (en) Recoverable disk control system with nonvolatile memory
JPH06110772A (en) Method and apparatus for assignment of direct access memory device for storing computer data
US6336164B1 (en) Method and system for preventing deadlock in a log structured array
US20100232048A1 (en) Disk storage device
JP2003196032A (en) Write cache control method of storage device, and storage device
JP4933722B2 (en) Disk control device, disk patrol method, and disk patrol program
JP3431581B2 (en) Disk control system and data relocation method
RU2004125865A (en) OPTICAL DISK OF UNRESIGNABLE TYPE AND METHOD AND DEVICE FOR MANAGING DEFECTIVE ZONES ON THE OPTICAL DISK OF UNRESIGNABLE TYPE
US20090190448A1 (en) Data managing method for an optical disc drive writing user data into an optical disc having defects
US7613867B2 (en) Information recording apparatus, information recording method and recording medium recording program
US20080022060A1 (en) Data recording apparatus, program product, and data recording method
US20080013418A1 (en) Method for defect management in rewritable optical storage media
US10503651B2 (en) Media cache band cleaning
KR100979938B1 (en) Method for managing a defect area on recordable optical disc

Legal Events

Date Code Title Description
AS Assignment

Owner name: MEDIATEK INC., TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WU, TSE-HONG;CHEN, SHIH-HSIN;HUNG, SHIH-TA;AND OTHERS;REEL/FRAME:020521/0250

Effective date: 20080107

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION