US5907672A - System for backing up computer disk volumes with error remapping of flawed memory addresses - Google Patents

System for backing up computer disk volumes with error remapping of flawed memory addresses

Info

Publication number
US5907672A
US5907672A (application number US08/539,315)
Authority
US
United States
Prior art keywords
storage means
sectors
sector
backup
primary storage
Prior art date
Legal status
Expired - Lifetime
Application number
US08/539,315
Inventor
John E. G. Matze
Douglas L. Whiting
Current Assignee
Veritas Technologies LLC
Original Assignee
STAC Inc
Priority date
Filing date
Publication date
Application filed by STAC Inc filed Critical STAC Inc
Priority to US08/539,315 priority Critical patent/US5907672A/en
Assigned to STAC ELECTRONICS reassignment STAC ELECTRONICS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MATZE, JOHN E.G., WHITING, DOUGLAS L.
Priority to EP96307287A priority patent/EP0767431A1/en
Priority to JP8264578A priority patent/JPH1055298A/en
Assigned to STAC, INC. reassignment STAC, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: STAC ELECTRONICS, INC.
Application granted granted Critical
Publication of US5907672A publication Critical patent/US5907672A/en
Assigned to ALTIRIS, INC. reassignment ALTIRIS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PREVIO, INC.
Assigned to PREVIO, INC. reassignment PREVIO, INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: STAC SOFTWARE, INC.
Assigned to STAC, INC. reassignment STAC, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: STAC ELECTRONICS
Assigned to STAC SOFTWARE, INC. reassignment STAC SOFTWARE, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: STAC, INC.
Assigned to SYMANTEC CORPORATION reassignment SYMANTEC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ALTIRIS, INC.
Anticipated expiration legal-status Critical
Assigned to VERITAS US IP HOLDINGS LLC reassignment VERITAS US IP HOLDINGS LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SYMANTEC CORPORATION
Assigned to WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS US IP HOLDINGS LLC
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS US IP HOLDINGS LLC
Assigned to VERITAS TECHNOLOGIES LLC reassignment VERITAS TECHNOLOGIES LLC MERGER AND CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: VERITAS TECHNOLOGIES LLC, VERITAS US IP HOLDINGS LLC
Assigned to VERITAS US IP HOLDINGS, LLC reassignment VERITAS US IP HOLDINGS, LLC TERMINATION AND RELEASE OF SECURITY IN PATENTS AT R/F 037891/0726 Assignors: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT
Expired - Lifetime legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1446Point-in-time backing up or restoration of persistent data
    • G06F11/1448Management of the data involved in backup or backup restore
    • G06F11/1451Management of the data involved in backup or backup restore by selection of backup contents

Definitions

  • the present invention relates to a system for backing up data at high speed from a computer disk volume onto a backup medium and subsequently restoring some or all of said data in the event of data loss or corruption.
  • the backup device is a tape drive, although floppy disk drives and other removable disk drive technologies are also used.
  • Tape has the advantage of having a lower cost per byte of storage and thus is preferred in most applications, particularly those where large amounts of data are involved (e.g., network file servers, such as those running Novell's NetWare software).
  • tape also has several inherent limitations which must be addressed in order to make its performance acceptable to a user.
  • tape is a sequential access medium, with any attempt at random access requiring times on the order of tens of seconds (if not minutes), as opposed to milliseconds for a disk drive.
  • Second, and somewhat related, the time to stop a tape drive and back up a little is on the order of seconds, which is again very large compared to disk times.
  • a third problem can arise, dealing with the transfer rate of the tape.
  • One of the most critical parameters of a backup system is the amount of time (known as the "backup window") required to back up a given disk volume. This is particularly true in multi-user systems or network file servers, where the system may be effectively shut down while the backup is occurring.
  • the backup time is by far the most important criterion to a user, since restore is by definition a somewhat extraordinary event (although the restore time is nonetheless of some interest). If the tape data rate is too slow, it will be easy to keep the drive supplied with enough data so that the tape can stream, but a backup and/or restore operation will take too much time.
  • disk drive transfer rates have been much higher than tape transfer rates for mass-market devices.
  • a DAT (digital audio tape) 4 mm tape drive using the DDS-2 format has a native transfer rate of 366K bytes/second, and current Exabyte 8 mm tape drives have a 500 K byte/second transfer rate.
  • recent advances in tape drive technology are pushing the tape transfer rates higher.
  • current Quantum DLT (digital linear tape) drives achieve transfer rates of 1.25-1.5 M bytes/second, and the next generation of 4 mm and 8 mm tape drives promises to increase transfer rates substantially over current capabilities.
  • an "incremental" backup can fairly easily be performed, in which only those files which have changed since the last backup are written to tape.
  • changed files represent only a small fraction of the overall disk contents, in which case an incremental backup can be completed relatively quickly, and most operating systems maintain an "archive" bit that can easily be used to tell whether each file has changed or not.
  • a typical scenario involves performing a full backup once per week (often over the weekend on a network file server), with daily incremental backups to minimize the backup window. Full backups still need to be performed fairly regularly, because recovering the current file contents from an initial full backup and a large set of incrementals can be very time consuming.
  • the file system component of the computer's operating system gets involved in each step, which adds overhead time. Even worse, in general the files are not pulled from the disk in an optimal order with respect to their physical location on disk. Thus, the disk seek time required to move the disk head to read the file contents usually significantly degrades the overall data rate from the disk, particularly in the case of smaller files where much more time is spent moving the head to the right location than actually reading data.
  • An alternate backup method that has been used in the past to minimize backup time is to perform the backup on an "image" basis instead of a file-by-file basis.
  • the disk image is read sequentially on a sector-by-sector basis, resulting in disk transfer times that match the drive's rated throughput and are thus much faster than current tape drive technology, and this speed advantage appears to be sustainable as technology improves.
  • an image backup can thus easily keep a tape drive streaming.
  • image backup has never become popular.
  • One major historical problem with image backup is that the only option for restoring has almost always been an image restore, wherein the entire disk image is copied from tape back to disk. While such an approach makes sense in the case of catastrophic failure, it is extremely inconvenient for the most frequent purpose of restore: to retrieve copies of a few lost or corrupted files. In order to perform such a partial restore, the user must either overwrite his entire existing disk (including any files modified since the backup), which is totally unacceptable, or he must have available an extra empty disk to which the image can be restored, which is expensive and often impractical. Clearly, the complete image restore may take considerably longer in general than would a selective file restore in a file-by-file system.
  • the disk to which the image is to be restored must have a flaw map which is identical to (or a subset of) the flaw map of the original disk. While most modern disks perform some level of defect mapping inside the drive, this approach cannot handle all flaws which develop after production test (e.g., during shipment), and such flaw mapping is normally handled by the operating system's file system code. Often, image restore software has required the physical disk geometries of the original backup disk and the restore disk to match, which is also problematic in the case of catastrophic failure, because it may not be possible to purchase an identical disk given the rapid change in the capacity (and thus geometry) of disk drives.
  • SnapBack from Columbia Data Systems (see LAN Times, Feb. 13, 1995, p. 89), has attempted to make image backup more acceptable to the user.
  • This product performs image backups of one or more physical disk partitions to tape and allows subsequent image restores to the same (or larger) partitions.
  • SnapBack runs its backups and restores under the MS-DOS operating system, although it also contains a scheduler for NetWare which will shut down the NetWare server code at a user-selected time, exit to MS-DOS to perform the backup, and then reenter NetWare.
  • Each hard disk on a personal computer contains a partition table, typically on the first sector of the disk, which identifies the locations, sizes, and types of all the physical partitions on the disk.
  • these partition types include MS-DOS FAT partitions, Novell NetWare partitions, OS/2 HPFS partitions, Microsoft Windows NT NTFS partitions, Unix partitions, etc. SnapBack claims to be able to back up these partition types, and it works at the physical level by reading the physical disk sectors and saving this "image" to tape.
  • SnapBack includes the typical full image restore mechanism, along with the concomitant flaw map problem, although it does allow the restore target disk to have different physical geometry than the backup source disk (as long as it is no smaller than the source). SnapBack includes no way to perform any type of incremental backup, but it does include a feature whereby a Novell NetWare partition image tape can be "mounted" as a read-only drive, allowing the user to access individual files on the tape for restore.
  • The physical nature of SnapBack's operation allows it to function after a fashion for a wide variety of operating system disk partitions, but its lack of operating system specific knowledge also places some severe limits on functionality. For example, to use SnapBack, the operating system (e.g., NetWare) must be entirely shut down during the backup process, which is totally unacceptable for many users. Further, because SnapBack operates at a physical level instead of a logical level, it is not aware of any logical information contained within the partition. Thus, the backup process will always back up the entire disk image even if the disk is largely empty, slowing performance considerably. Also, the tape image mount mechanism suffers from the same severe performance problem discussed previously.
  • the slowness is exacerbated by the fact that, during the mount process, NetWare actually reads in the entire set of directory and control structures for the entire disk. Since these structures are not guaranteed to be contiguous on the disk, the mount process from tape can easily take tens of minutes, which is particularly disconcerting if the user only wishes to restore a small handful of files. In fact, this mount time may well be longer than the time required for a full image restore.
  • An additional limitation caused by the physical nature of the SnapBack image backup is that a NetWare volume which is split into segments on multiple physical disks (a configuration commonly used to increase volume size and performance) cannot easily be restored except to a set of physically identical disks, since there are logical and physical pointers included in the NetWare disk structures which specify where the segments reside, and SnapBack is unaware of such pointers. Similarly, a multi-segment volume cannot be mounted for file-by-file restore in SnapBack. These limitations are quite severe for the NetWare market, which currently has by far the largest number of file server installations and constitutes the dominant market for network backup software. While the SnapBack product contains some significant advances in image backup, it still leaves some very significant barriers to user acceptance.
  • the backup process of the present invention reads sectors from the source disk at the logical sector level, thus removing any reliance on the underlying physical characteristics of the disk or its interface. Because the sectors are read sequentially, a backup performed using the present invention is capable of sustaining a data rate high enough to insure streaming of even very high-speed tape devices.
  • the system does not have to be shut down during the backup operation: the software can allow for the operating system and file system to continue operation, although access to portions of the disk volume will be temporarily delayed.
  • a log is kept of all files which are opened for write since those files may not contain consistent information at restore time.
  • the backup image takes advantage of any flaw management performed by the operating system's file system software, thus making it possible to restore the logical image later to a disk with a different flaw map.
  • Another advantage of using logical sectors in the present invention is that a disk volume which spans multiple physical disk segments is saved as a single logical image and thus can easily be restored to an entirely different physical disk configuration.
  • the backup software may exclude unused or deleted areas of the disk to minimize backup time significantly.
  • a backup image of the present invention may be restored by completely restoring the logical image to a disk, but it may also be "mounted" as a read-only volume directly on the backup tape, allowing the user to restore only selected files from the backup.
  • the time required for this mount process is substantially minimized by saving all the volume control and directory sectors at the beginning of the tape, so that only a single tape seek is required to complete the mount.
  • the backup process of the present invention can determine which sectors need to be included at the beginning of the tape by understanding the on-disk volume format, or it can use a pseudo-volume technique for determining this sector set automatically without having any knowledge of the on-disk volume format.
  • An incremental image backup, which supports all the functionality and performance of a full image backup, can also be performed as part of the present invention.
  • a software module is kept resident at all times to monitor which parts of the disk volume have been updated, thus allowing only those changed portions of the disk volume to be backed up.
  • This approach speeds up the backup for a largely unchanged volume: instead of being limited by tape transfer speeds, it is limited by the (typically much higher) disk transfer speeds.
  • a checksum method is used on the contents of the disk sectors to detect changes without requiring any resident software.
  • FIG. 1 is a diagram of the layout of a typical NetWare disk drive
  • FIG. 2 is a table of sample NetWare FAT (File Allocation Table) entries
  • FIG. 3 is a diagram of the contents of a NetWare disk volume and a diagram of the image of the disk volume stored on tape in accordance with the present invention
  • FIG. 4 is a block diagram of a pseudo-volume mount in accordance with the present invention.
  • FIG. 5 is a diagram of the format on tape of a full image backup and an incremental image backup stored in accordance with the present invention
  • FIG. 6 is a block diagram of a file-by-file restore from an image backup, in accordance with the present invention.
  • FIG. 7 is a flowchart illustrating the file-by-file restore process of the present invention.
  • FIG. 8 is a flowchart illustrating the servicing of logical sector requests during file-by-file restore, in accordance with the present invention
  • FIG. 9 is a flowchart illustrating the image restore process of the present invention.
  • FIG. 10 is an outline of the format of a Novell NetWare volume table.
  • the preferred embodiment of the present invention is a backup software package for a file server running the Novell NetWare operating system.
  • NetWare has by far the largest installed base of network file servers, and the market for NetWare backup software is therefore quite substantial.
  • the general techniques described below can be readily applied to other operating systems, such as Microsoft Windows NT, IBM OS/2, MS-DOS (or compatible operating system, hereafter referred to generically as DOS), etc., so the discussion here is not meant to limit the scope of the present invention to any particular operating system or file system.
  • the file server has one or more physical disks attached, as well as a tape drive.
  • each physical disk contains a physical partition table 100, typically placed on the first sector(s) of the disk. This table identifies the physical partitions located on each disk, including the starting point, size, and partition type (e.g., DOS, NetWare, etc.).
  • the system first boots from a DOS diskette or a DOS bootable partition 101 on the hard disk.
  • the NetWare server code is loaded as a DOS application which takes over the entire computer, effectively taking control away from DOS.
  • NetWare then loads its device drivers, including those drivers which allow sector reading and writing of the disk and tape drives, mounts any NetWare disk volumes found on the NetWare partition(s) 102, and offers its file services on the network.
  • NetWare currently allows only one physical NetWare disk partition 102 per physical drive. This physical partition is broken up into two logical regions.
  • the first region 103 contains the "hotfix" sectors. These are sectors set aside to map out bad sectors in the main data region and typically constitute a small percentage (1-2%) of the overall physical partition.
  • the second region 105 which comprises the remainder of the physical partition, is used for storing volume data.
  • Each time a write occurs to a NetWare disk, the operating system performs a physical write-with-verify operation to the disk. If the verify fails, the bad portion of the disk is then mapped out by assigning a portion of the hotfix area 103 to replace it at the logical level. Obviously, if enough disk flaws develop over time, the pool of unused sectors in the hotfix area could be exhausted, but presumably Novell has enough experience in selecting the appropriate amount to allocate to the hotfix area that such an occurrence is extremely unlikely.
  • This technique of dynamically mapping out bad areas of the disk costs a little in performance, since the verify pass requires an extra rotation of the disk, but it has several notable advantages.
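  • As an illustrative sketch of the write-with-verify remapping just described (the class, interface, and method names below are hypothetical stand-ins, not Novell's actual code), the following C++ fragment writes a logical sector, reads it back to verify, and on a mismatch permanently redirects that logical sector into the spare hotfix pool:

    #include <cstdint>
    #include <cstring>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Hypothetical sector-level device interface (a stand-in, not a NetWare API).
    struct Disk {
        virtual bool writeSector(uint32_t physical, const uint8_t* data) = 0;
        virtual bool readSector(uint32_t physical, uint8_t* data) = 0;
        virtual ~Disk() = default;
    };

    // Hotfix layer sketch: logical sectors normally map 1:1 onto the data
    // region; a sector that fails write-with-verify is redirected into a
    // small pool of spare ("hotfix") sectors.
    class HotfixLayer {
    public:
        HotfixLayer(Disk& d, uint32_t dataStart, std::vector<uint32_t> spares)
            : disk_(d), dataStart_(dataStart), spares_(std::move(spares)) {}

        // Write, read back, compare; on mismatch remap to a spare and retry there.
        bool writeLogical(uint32_t logical, const uint8_t* data) {
            uint32_t phys = resolve(logical);
            if (writeVerify(phys, data)) return true;
            if (spares_.empty()) return false;         // hotfix pool exhausted
            uint32_t spare = spares_.back();
            spares_.pop_back();
            remap_[logical] = spare;                   // permanent redirection
            return writeVerify(spare, data);
        }

        bool readLogical(uint32_t logical, uint8_t* data) {
            return disk_.readSector(resolve(logical), data);
        }

    private:
        uint32_t resolve(uint32_t logical) const {
            auto it = remap_.find(logical);
            return it != remap_.end() ? it->second : dataStart_ + logical;
        }

        bool writeVerify(uint32_t phys, const uint8_t* data) {
            uint8_t check[512];
            return disk_.writeSector(phys, data) &&
                   disk_.readSector(phys, check) &&
                   std::memcmp(check, data, sizeof(check)) == 0;
        }

        Disk& disk_;
        uint32_t dataStart_;
        std::vector<uint32_t> spares_;
        std::unordered_map<uint32_t, uint32_t> remap_;
    };
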
  • a volume resides in one or more segments, and the mapping between volumes and segments is established by a volume table 104, which resides at the beginning of the main volume region 105. There is one entry in the volume table 104 for each segment 106 of the partition 102.
  • NetWare stores multiple copies of the volume table 104. Since there may be multiple physical drives in a system, each with a NetWare partition containing multiple segments, a NetWare volume can easily be spread across physical drives. NetWare has utilities that allow an existing volume to be extended onto other segments, so it is fairly easy (and quite common) to add a new disk drive and grow an existing volume onto the new drive, which has its own hotfix region.
  • the NetWare volume table format is not currently documented by Novell, although Novell has indicated that it will be documented in the near future.
  • the exact format of this structure is outlined in FIG. 10 and was determined during the development of the present invention by examining the contents of the logical partitions, with ready help from Novell engineers.
  • the definition is given as a C++ language structure statement, with the double slash (//) indicating a comment to the end of line.
  • the definition starts at 300 for the VOLUME_TABLE_ENTRY array (319).
  • the volume header is a single sector (512 bytes), which is an array of up to 8 of these records, as shown at 319. Each record describes one segment of one logical volume.
  • the volume header is placed at logical sector number 160 in the logical partition, and it is replicated three more times for robustness at 32-sector intervals thereafter.
  • the NameLen field 301 contains the length of the Name string 302, which is the volume name and must be unique across volumes.
  • the Sync field 304 and the Flags field 308 are unused in the present invention.
  • the NumberOfSegments field 305 indicates how many segments are contained in the volume.
  • the SegmentPosition field 306 indicates which segment of the volume is described by this entry; for example, a value of zero here indicates the first segment of a given volume, the value one indicates the second segment, etc.
  • the StartingSector field 309 indicates which physical sector number in the partition corresponds to the start of this segment.
  • the SectorsInSegment field 310 contains the number of sectors in this segment.
  • the FirstBlockInSegment field 312 indicates the first logical block number contained in this segment. The remaining fields are identical across all segment entries of a given volume.
  • the BlockShiftFactor field 305 is the base 2 logarithm of the number of sectors per logical block in the volume.
  • the BlocksInVolume field 311 indicates the total number of blocks in the volume.
  • the FirstPrimaryFATBlock 313 indicates which logical block in the volume contains the first FAT block of the volume; the FirstMirrorFATBlock field 314 indicates the start of the mirror copy of the FAT.
  • the FirstPrimaryDirectoryBlock field 315 indicates the start of the directory blocks for the volume and the FirstMirrorDirectoryBlock 316 indicates the start of the mirror copy of the directory blocks. All of these fields can easily be used to identify the segments of all volumes on the partition, as well as their FAT and directory block chains.
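  • The field descriptions above can be collected into a single illustrative C++ record. The field names follow the text and FIG. 10, but the widths, ordering, and padding shown here are assumptions made for readability; the actual on-disk layout (eight 64-byte records per 512-byte volume-table sector) is not reproduced exactly.

    #include <cstdint>

    // Illustrative reconstruction of one volume-table record (one per segment).
    struct VOLUME_TABLE_ENTRY {
        uint8_t  NameLen;                    // 301: length of the volume name
        char     Name[19];                   // 302: volume name (size assumed)
        uint32_t Sync;                       // 304: unused here
        uint8_t  NumberOfSegments;           // 305: segments in the volume
        uint8_t  SegmentPosition;            // 306: 0 = first segment, 1 = second, ...
        uint8_t  BlockShiftFactor;           // log2(sectors per block)
        uint8_t  Flags;                      // 308: unused here
        uint32_t StartingSector;             // 309: physical start of this segment
        uint32_t SectorsInSegment;           // 310: sectors in this segment
        uint32_t BlocksInVolume;             // 311: total blocks in the volume
        uint32_t FirstBlockInSegment;        // 312: first logical block in segment
        uint32_t FirstPrimaryFATBlock;       // 313: first FAT block of the volume
        uint32_t FirstMirrorFATBlock;        // 314: start of the mirrored FAT
        uint32_t FirstPrimaryDirectoryBlock; // 315: first directory block
        uint32_t FirstMirrorDirectoryBlock;  // 316: mirrored directory blocks
    };

    // Derived quantities used throughout the backup logic.
    inline uint32_t sectorsPerBlock(const VOLUME_TABLE_ENTRY& e) {
        return 1u << e.BlockShiftFactor;
    }
    inline uint64_t volumeSizeInSectors(const VOLUME_TABLE_ENTRY& e) {
        return static_cast<uint64_t>(e.BlocksInVolume) * sectorsPerBlock(e);
    }
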
  • NetWare views each volume as a linear group of sectors. All the mapping of parts of volumes to segments and the flaw mapping into the hotfix region 103 are transparent at this level.
  • the preferred embodiment performs its sector reads and writes at this logical level, using an internal NetWare call (LogicalPartitionIO).
  • a file allocation table, similar in spirit to the well-known DOS FAT, is used to record which logical blocks of the volume are currently in use. All space in the volume is allocated in terms of blocks, which are analogous to a DOS cluster.
  • a block is a logically contiguous set of M sectors, where M is always chosen to be a power of two. Typical block sizes range from 8 sectors (4096 bytes) to 128 sectors (65536 bytes).
  • the NetWare FAT is itself spread throughout the volume in blocks, which are linked together using FAT entries.
  • Each segment of the volume contains enough FAT blocks to manage its own blocks, thus allowing for simple extension of existing volumes to new segments. For data integrity purposes, multiple copies of the FAT are stored.
  • the volume table entry for each segment contains pointers to the first FAT block of that segment, but all FAT blocks of the volume are logically linked together into a single chain via FAT entries. Thus, space for the FAT is effectively allocated as if the FAT itself were a file.
  • FIG. 2 shows a table of sample NetWare FAT entries.
  • Each FAT entry consists of eight bytes.
  • the first four bytes indicate the sequence number 110 of the FAT entry, or the block number within the file of the associated block. Normally, these sequence numbers are sequential (0,1,2, . . . ) for sequential blocks in the FAT chain of each file, but they may not be sequential if sparse files are used, as illustrated at entry 114.
  • the second four bytes, which correspond most closely to the DOS FAT entry, contain the block number 111 of the next FAT entry in this file.
  • a zero 112 in a FAT entry indicates that the associated block is unallocated (i.e., available), while a value of all ones (hex FFFFFFFF) 113 indicates the end of the FAT chain for this file.
  • sub-block allocation is also used to minimize the unused ("slack") space in the last block of a file. This is indicated by setting the most significant bit of the next block number without the entire entry being FFFFFFFF; the remaining bits indicate where the final partial block is stored. For our purposes, this fact is not relevant other than to know that the upper bit of a next block number being set indicates that the block in question is in use and is the end of a FAT chain.
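  • The decode rules for these 8-byte FAT entries can be summarized in a short illustrative C++ fragment (the constant and type names below are ours, not Novell's):

    #include <cstdint>

    struct FatEntry {
        uint32_t sequence;   // block number within the file (0, 1, 2, ...)
        uint32_t nextBlock;  // next block in the chain, or a special value
    };

    constexpr uint32_t kFree        = 0x00000000;  // block is unallocated
    constexpr uint32_t kEndOfChain  = 0xFFFFFFFF;  // last block of the chain
    constexpr uint32_t kSubAllocBit = 0x80000000;  // sub-block allocation marker

    enum class BlockState { Free, EndOfChain, SubAllocatedEnd, Chained };

    // Zero means free, all ones means end of chain, and a set top bit (with
    // the entry not being all ones) marks a sub-allocated final partial
    // block, which also terminates the chain; anything else is a link.
    inline BlockState classify(const FatEntry& e) {
        if (e.nextBlock == kFree)        return BlockState::Free;
        if (e.nextBlock == kEndOfChain)  return BlockState::EndOfChain;
        if (e.nextBlock & kSubAllocBit)  return BlockState::SubAllocatedEnd;
        return BlockState::Chained;      // nextBlock is the next block number
    }
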
  • Directory entries for files and subdirectories are also stored in blocks. Each directory entry contains the file name, size, and other attributes, including pointers that indicate its position in the volume directory tree. All directory blocks are linked together using FAT entries as if the directory blocks themselves were a file.
  • the volume table entry for the first segment of the volume contains the logical block number of the first directory block of the volume.
  • Both the FAT and the directory entry blocks for the entire volume can thus be identified by reading the volume table entry for the first segment of the volume, then using the FAT entries to follow the singly linked chains for each set of blocks.
  • When mounting a volume, NetWare does just this. It reads the entire FAT into memory, then reads in all the directory blocks, performing checks on the integrity of the volume structure. If a problem is found during the mount process, NetWare informs the system administrator that a repair utility (VREPAIR) should be run in order to enable mounting the volume.
  • the directory blocks are cached in memory, but are not required to fit in memory all at once.
  • the backup software of the preferred embodiment runs as a NetWare Loadable Module (NLM), with an accompanying disk driver.
  • the user specifies when a backup is to occur, either by scheduling a backup operation in advance or by manually invoking an immediate backup.
  • the NLM then backs up each physical partition in turn to the tape (or to whatever backup medium is used).
  • the disk contains a small DOS (boot) partition followed by a NetWare partition.
  • Physical disk devices other than the boot drive usually contain only a NetWare partition.
  • If a DOS FAT partition exists on the drive, it is backed up using an image backup in the preferred embodiment. As discussed in the next section, this approach greatly facilitates a complete restoration of both partitions of a failed disk drive, which is otherwise a very painful and time-consuming process on a NetWare system.
  • the DOS partition may be backed up on a file-by-file basis.
  • each NetWare volume is backed up as a single logical image.
  • the volume table is read and interpreted to understand which segments correspond to each volume, and the volume table is also saved at the beginning of the tape to allow a restoration to an identical physical segment/volume configuration if desired.
  • each volume image can also be independently restored to any physical disk configuration with enough space to hold the image. Because each volume is read via the internal NetWare call (LogicalPartitionIO) that reads logical sectors, the hotfix map is automatically and transparently used by NetWare to present an image which is (normally) error-free and independent of any physical flaws.
  • the logical sector image of the volume is not stored in linear sector order on the tape. Instead, as shown in FIG. 3, all the logical sectors necessary for NetWare to perform the mount are saved in a FAT/directory header 122 at the beginning of the volume image on tape. Control information 120 identifying these sectors, as well as other information such as the time of the backup, is written along with the header.
  • the set of sectors saved in the FAT/directory header 122 includes all the FAT blocks and the directory blocks of the volume. These blocks are identified by reading the volume table entry for the first segment of the volume, which contains pointers to the first FAT and directory blocks of the volume, and the FAT chain is then followed to identify all subsequent FAT and directory blocks.
  • NetWare stores a duplicate ("mirror") copy of both the FAT and directory blocks, but these mirror copies are not included in the header, although they are backed up as part of the main volume data. After this header, the remaining logical sectors, comprising the file data 123, are appended in a monotonically increasing sector order.
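  • A sketch of the chain-following step used to gather the FAT and directory blocks for the header is shown below, assuming the "next block" halves of the FAT entries have already been read into a simple in-memory array; all names here are illustrative only.

    #include <cstdint>
    #include <vector>

    constexpr uint32_t kEndOfChain  = 0xFFFFFFFF;   // as in the FAT sketch above
    constexpr uint32_t kSubAllocBit = 0x80000000;

    // nextBlock[b] holds the "next block" half of block b's FAT entry.
    std::vector<uint32_t> collectChain(const std::vector<uint32_t>& nextBlock,
                                       uint32_t firstBlock) {
        std::vector<uint32_t> chain;
        for (uint32_t b = firstBlock;;) {
            chain.push_back(b);
            uint32_t next = nextBlock.at(b);
            if (next == kEndOfChain || (next & kSubAllocBit) != 0) break;
            b = next;
        }
        return chain;
    }

    // Usage sketch: the header set is the FAT chain plus the directory chain,
    // both rooted in the first segment's volume table entry, e.g.
    //   auto fatBlocks = collectChain(nextBlock, entry.FirstPrimaryFATBlock);
    //   auto dirBlocks = collectChain(nextBlock, entry.FirstPrimaryDirectoryBlock);
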
  • In order to minimize backup time of partially full volumes, the preferred embodiment by default excludes logical blocks (and thus the associated logical sectors) which do not contain any file data, such as 124 in FIG. 3.
  • the "empty" blocks are identified by scanning the FAT to see which FAT entries are zero. The user may override this operation to force all sectors to be included in the backup if desired.
  • the backup software will scan the directory entries for deleted files, which are retained by NetWare on a temporary basis. The data blocks, such as 125, associated with those deleted files will be excluded from the backup image to minimize backup time, unless the user overrides this default behavior.
  • a block map table 121 is pre-computed using the FAT/directory information and stored along with the header, with one entry per logical block.
  • Each entry 126 in this table indicates which tape block in the backup image corresponds to a given logical block. The table thus allows for instant lookup of the position of each logical block on the tape at restore time.
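  • The block map table can be pictured with the following illustrative fragment, which assigns tape positions to the header blocks first and then to the remaining included blocks in ascending logical order; kNotOnTape marks blocks excluded as empty or deleted (all names are ours):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr uint32_t kNotOnTape = 0xFFFFFFFF;

    // inHeader[b] and included[b] are per-logical-block flags of equal length.
    std::vector<uint32_t> buildBlockMap(const std::vector<bool>& inHeader,
                                        const std::vector<bool>& included) {
        std::vector<uint32_t> map(included.size(), kNotOnTape);
        uint32_t tape = 0;
        // FAT and directory blocks are written first, in the header region.
        for (size_t b = 0; b < inHeader.size(); ++b)
            if (inHeader[b]) map[b] = tape++;
        // Remaining included blocks follow in ascending logical order.
        for (size_t b = 0; b < included.size(); ++b)
            if (included[b] && !inHeader[b]) map[b] = tape++;
        return map;
    }

    // At restore time the tape position of any logical block is a single
    // lookup, map[logicalBlock], with kNotOnTape meaning the block was never
    // written (empty or deleted at backup time).
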
  • the backup software can identify all the sectors required for mount (and save them in the tape FAT/directory header 122) using the technique shown in the block diagram of FIG. 4.
  • the backup process presents a "pseudo-volume" 139 to the operating system 133 to be mounted read-only.
  • the "disk driver" logic for the pseudo-volume 139 performs the read by instead reading logical sectors from the actual logical volume to be backed up 135.
  • the pseudo-volume disk driver 139 maintains a log of which logical sectors are read during the mount process.
  • the backup application 138 uses this log to build the header 122 and proceeds to back up in a manner basically identical to that of the preferred embodiment.
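  • A minimal sketch of the pseudo-volume logging driver follows (the class and callback names are hypothetical): every logical sector read issued by the operating system during the mount is satisfied from the real volume and recorded, and the recorded set then defines the sectors to place in the FAT/directory header 122.

    #include <cstdint>
    #include <set>
    #include <vector>

    class LoggingPseudoVolume {
    public:
        using SectorReader = std::vector<uint8_t> (*)(uint32_t sector);

        explicit LoggingPseudoVolume(SectorReader readFromRealVolume)
            : read_(readFromRealVolume) {}

        // Called for each logical sector the OS reads while mounting.
        std::vector<uint8_t> readSector(uint32_t sector) {
            touched_.insert(sector);      // remember every sector the mount needs
            return read_(sector);
        }

        const std::set<uint32_t>& sectorsReadDuringMount() const { return touched_; }

    private:
        SectorReader read_;
        std::set<uint32_t> touched_;
    };
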
  • the preferred embodiment also includes a mechanism to perform "incremental" image backups.
  • a list of modified ("dirty") blocks is maintained by a separate NLM which tracks block write calls. With this technique only the blocks of the disk which have changed are read during an incremental backup and stored on the tape. It is absolutely imperative that this NLM be present at all times when the volume is mounted, or some writes may be missed, totally negating the integrity of all subsequent incremental backups until a new full backup is performed.
  • a complete block map table 151 together with all directory and FAT blocks 152, whether they have changed or not, are included in an incremental backup image 150, so that mounting the tape image is still fast.
  • Each block map table entry points to the modified block in the incremental backup 154 if that block has changed, else it points to the original block 153 in the previous backup.
  • the NLM simply maintains a bitmap (one bit per block) indicating which blocks in each volume have been written. For a 10 GB volume with 4 KB blocks, this amounts to only 320 Kbytes of bitmap, which can easily be kept in memory.
  • the bitmap file, which is protected by a cyclic redundancy check (CRC) to verify that its contents have not been corrupted, is read from the DOS partition at startup (before any writes to the NetWare volume can have occurred) and then immediately deleted. At shutdown, after all volumes have been dismounted so that no further writes can occur, a new bitmap file is written back out to the DOS partition. Thus, if a power failure or some other disorderly shutdown occurs, the absence of a valid bitmap file indicates that the next backup must be a full backup. Otherwise, the bitmap indicates exactly which blocks have changed and therefore which blocks need to be included in the incremental backup.
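  • A sketch of the resident dirty-block tracker is shown below (names ours; the CRC-protected save and restore of the bitmap file is omitted): one bit per logical block, set by the write hook. For a 10 GB volume with 4 KB blocks this is 10*2^30 / 4096 = 2,621,440 bits, i.e., the 320 Kbytes cited above.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    class DirtyBlockMap {
    public:
        explicit DirtyBlockMap(uint32_t blocksInVolume)
            : bits_((blocksInVolume + 7) / 8, 0) {}

        // Called from the (hypothetical) block-write hook for every write.
        void markDirty(uint32_t block) {
            bits_[block / 8] |= uint8_t(1u << (block % 8));
        }

        bool isDirty(uint32_t block) const {
            return (bits_[block / 8] >> (block % 8)) & 1u;
        }

        // Cleared after a successful full backup.
        void clearAll() { std::fill(bits_.begin(), bits_.end(), uint8_t(0)); }

    private:
        std::vector<uint8_t> bits_;
    };
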
  • a checksum or CRC is also stored in a table which is appended to the backup image.
  • Each checksum is large enough to provide a very high level of confidence that blocks with matching checksums are identical. For example, if each checksum consists of 128 bits, the probability of a false match for any given block is approximately 10^-38; this actually gives much better reliability than the underlying tape and disk storage media, which typically have error rates on the order of 10^-20.
  • On high-end CPUs such as a 486 or Pentium, such checksums can be computed much faster than data can be read from disk, assuming that the backup process is allowed to consume a significant fraction of the available CPU bandwidth.
  • the checksums are used as follows. On backups subsequent to the original full backup, the checksums for each block are computed and compared to that of the original backup image. If the two checksums match, it is assumed that the two blocks match, so the new block is not stored on tape, but a pointer to the old block is saved in the block map table for this backup, which cannot be pre-computed and is therefore appended to the tape image. If the two checksums do not match, the new block is included in the image backup. Note that this method does require that the entire disk image be read and thus is slower than the preferred embodiment.
  • this technique allows the incremental backup to proceed at speeds limited only by the disk read time, which is considerably faster than the tape write throughput which limits the speed of a full backup. While it has some obvious disadvantages, this embodiment is probably somewhat easier to implement than the preferred embodiment because it only involves application level code while the latter requires system-level resident code.
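  • The checksum comparison at the heart of this alternate embodiment can be sketched as follows. The text calls for roughly 128-bit checksums but does not name an algorithm, so the digest below is a simple illustrative placeholder (two FNV-1a style accumulators), not the product's actual checksum; the other names are ours as well.

    #include <array>
    #include <cstdint>
    #include <vector>

    using Digest128 = std::array<uint64_t, 2>;

    // Placeholder 128-bit digest; any checksum with a negligible collision
    // rate would serve the same role.
    Digest128 digestBlock(const std::vector<uint8_t>& block) {
        uint64_t a = 0xcbf29ce484222325ULL, b = 0x84222325cbf29ce4ULL;
        for (uint8_t byte : block) {
            a = (a ^ byte) * 0x100000001b3ULL;
            b = (b ^ a)    * 0x100000001b3ULL;
        }
        return {a, b};
    }

    struct MapEntry { bool inThisBackup; uint32_t tapeBlock; };

    // Per block: if the digest matches the one recorded for the previous
    // backup, only a pointer to the old tape block is kept in the block map;
    // otherwise the block is appended to the new image.
    MapEntry decide(const Digest128& previousDigest,
                    const std::vector<uint8_t>& block,
                    uint32_t oldTapeBlock, uint32_t& nextNewTapeBlock) {
        if (digestBlock(block) == previousDigest)
            return {false, oldTapeBlock};        // unchanged: reference old copy
        return {true, nextNewTapeBlock++};       // changed: write to new image
    }
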
  • In any backup system, file system consistency and integrity issues can arise if files on the disk are modified while the backup is in progress.
  • When a file is open for write at the time it is to be backed up, the backup application typically skips that file and adds its name to an exception list that can be perused by the administrator. This situation alone is normally tolerable, although there are often files that are nearly always kept open (e.g., some database files) and therefore would never be backed up, which would clearly make the backup useless with respect to those files.
  • An even more insidious situation can arise when dealing with files whose contents are inter-related, such as a database data file and its index file(s).
  • the driver maintains a small separate cache which is filled with "original" copies of blocks which are written during the backup. These original copies are then written to tape instead of the modified versions on disk, at which point the original block copy can be discarded to free up space in the cache. As long as the cache never fills up, no write operations will ever block, so this alternate approach may significantly limit (or even eliminate) the amount of time spent with blocked write calls in many cases, although clearly this depends on the size of the cache and the amount of write activity.
  • By monitoring system file status and file calls, the backup software of the preferred embodiment also keeps a list of files which were opened for write at the time the backup began and those which are created or opened for write during the backup. This list becomes the exception log, similar to that of a conventional file-by-file backup, which identifies those files whose contents on the backup may be invalid or inconsistent. There are, however, two significant differences between this exception log and that of a "conventional" exception log. First, the bad news: the time "window" during which a file will be added to the exception log in the preferred embodiment is considerably longer than in the conventional case, where the window for each file consists only of the time required to back up that one file. In other words, the exception log will tend to be somewhat longer in the preferred embodiment, all other things being equal.
  • the backup image of the preferred embodiment contains at least a version (albeit possibly invalid) of the contents of files on the exception list. In many instances, this version is actually perfectly good, but it almost always allows for partial recovery of the file contents which is often quite welcome after a catastrophic failure. By contrast, in the conventional case there is not even an inconsistent version available.
  • the preferred embodiment provides two simple methods for the user to recover individual files from tape without performing a full image restore. Both mechanisms are based on mounting the tape image as a NetWare volume, using a pseudo-disk driver. This is accomplished as shown in the block diagram of FIG. 6 and the flow charts of FIG. 7 and FIG. 8.
  • the entire tape header is read from the tape drive 171 via the tape driver software 170 into memory and entered into the cache 169. Since the header may be too large to fit into the memory allocated for the cache 169, the cache logic writes any excess data to a cache file on a NetWare volume 165 via calls to the operating system 163 and maintains data structures that can be used to locate the appropriate cache blocks in the cache file.
  • the logical read/write logic of FIG. 8 is enabled, as discussed below.
  • the restore software creates a (pseudo) internal NetWare drive 168 which is somewhat larger (by 50% in the preferred embodiment) than the original volume size.
  • the software "disk” driver for this new drive is added to the system using the NetWare AddDiskDevice call; the driver effectively reads from the tape image to process logical read requests 161 from the file system, but the cache 169 is used for the tape image header to minimize tape seek time.
  • when a requested block has been displaced from memory to the cache file, an access to the cache file on disk drive 166 is required, which is much faster than accessing the same block on tape would be.
  • Because a NetWare disk driver cannot make file i/o calls directly, access to the cache file is achieved by posting a request to a separate cooperative thread 172 which does not operate at the driver level and thus can fulfill the request.
  • the driver also loads in the block map table 121 from tape 171 and holds it in memory so that the location of each block on the tape can be instantly determined.
  • Logical sector reads and writes 161 are handled by the pseudo-disk driver 168 as outlined in FIG. 8.
  • the disk driver continually polls at 210 and 215 for any pending read or write requests from the operating system 164. When a read request is found, processing continues at block 216. At this point, if the requested disk blocks are in the cache, processing continues at block 217, where the blocks are read directly from the cache 169, which may result in an access to the disk volume 165 via the cooperative thread 172. If the requested disk blocks are not in the cache at 216, processing continues at block 218, where the blocks are read from tape. After blocks 217 and 218, processing continues back to the beginning at block 215.
  • processing continues to block 211, where a check is made for the presence of the disk blocks in the cache. If the disk blocks are already in the cache, processing continues at block 213. If the disk blocks are not already in the cache, processing continues to block 212, where any partial disk blocks of the request are read from tape into the cache. Note that full disk blocks to be written do not need to be fetched from tape into the cache, since the entire disk block contents will be overwritten in the cache anyway. From block 212, processing continues to block 213, where the requested disk block writes are posted to the cache. All of these cache operations may result in blocks being read from or written to the disk volume 165 via the cooperative thread 172. Such cache operations are well understood in the art, and there are well-known caching strategies that may be employed without affecting the scope of the invention. From block 213, processing continues back to the beginning at block 215.
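  • The read/write decision flow of FIG. 8 is condensed in the following self-contained sketch. Cache and Tape here are toy in-memory stand-ins for the real cache (169/167) and tape driver (171); only the control flow mirrors the flowchart, with the corresponding block numbers noted in comments, and requests are assumed not to cross block boundaries.

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    constexpr size_t kBlockSize = 4096;          // assumed block size
    using Block = std::vector<uint8_t>;

    struct Cache {                               // stand-in for cache 169/167
        std::unordered_map<uint32_t, Block> blocks;
        bool contains(uint32_t b) const { return blocks.count(b) != 0; }
        Block read(uint32_t b) const { return blocks.at(b); }
        void write(uint32_t b, Block d) { blocks[b] = std::move(d); }
    };

    struct Tape {                                // stand-in for tape 171
        std::vector<Block> image;                // indexed by tape block
        Block readBlock(uint32_t t) const { return image.at(t); }
    };

    struct Request { bool isWrite; uint32_t block; size_t offset; Block data; };

    void service(Request& rq, Cache& cache, Tape& tape,
                 const std::vector<uint32_t>& blockMap) {
        if (!rq.isWrite) {
            // Read: serve from the cache if present (217), else from tape (218).
            rq.data = cache.contains(rq.block)
                          ? cache.read(rq.block)
                          : tape.readBlock(blockMap[rq.block]);
            return;
        }
        assert(rq.offset + rq.data.size() <= kBlockSize);
        // Write: a partial block not yet cached is fetched from tape first (212)
        // so its untouched bytes survive; a full-block write skips the fetch.
        bool fullBlock = (rq.offset == 0 && rq.data.size() == kBlockSize);
        Block current;
        if (cache.contains(rq.block))
            current = cache.read(rq.block);
        else if (!fullBlock)
            current = tape.readBlock(blockMap[rq.block]);
        else
            current.assign(kBlockSize, 0);
        current.resize(kBlockSize);
        std::copy(rq.data.begin(), rq.data.end(), current.begin() + rq.offset);
        cache.write(rq.block, std::move(current));   // post the write (213)
    }
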
  • the driver next creates a NetWare partition (using the MM_CreatePartition NetWare call) large enough to hold a default hotfix size and the volume size. Creation of this (pseudo) partition will result in writes to initialize the hotfix and volume table areas of the partition. These writes are also cached by the cache logic 169, and will effectively be discarded when the tape volume is eventually dismounted.
  • the driver issues calls at 204 to create a NetWare volume (writing the volume information using the LogicalPartitionIO call) with a size matching the size of the volume that was backed up, which results in a new volume table entry being written to the partition (and cached by the driver).
  • a command-line request is issued to NetWare to mount the new volume at 205.
  • the driver for the "tape" volume 168 enters a loop 206 processing logical sector i/o requests 161; since the driver knows the exact location of each block (in the cache memory 169, in the cache file on disk 167, or on the tape 171), it can easily satisfy all read/write requests, as shown in FIG. 8. Only reads/writes of file contents will result in accessing the tape 171 at blocks 218 and 212, since all the directory and FAT information is in the cache (169 or 167). Note that, if the header blocks were not consolidated in one contiguous region at the beginning of the tape image, this mounting process could require many minutes of tape seeking. Given the way the header blocks are stored in the preferred embodiment, only a single tape seek is required, to the beginning of the tape image, so the additional overhead beyond that required for mounting a similar disk volume is usually measured in seconds (or tens of seconds) instead of minutes.
  • NetWare file read accesses to the "tape" volume 168 often result in sector-level write accesses. For example, NetWare maintains a last-accessed date for each file which is updated (i.e., written) each time a file is accessed. Similarly, under NetWare version 4, files may be compressed, and read accesses may result in the file contents being decompressed and written to disk. Thus, the cache 169 and its associated logic allow for arbitrary write access, since the cache can grow dynamically (limited by the amount of free space on the disk volume 165).
  • the user is not given write access to the volume 168, simply because of the possible confusion caused by the transient nature of such writing, but in an alternate embodiment this somewhat arbitrary restriction can be removed to allow the user to modify the transient image of the mounted volume 168.
  • the user may access files on the "tape volume" using any of his normal file tools, such as Windows file manager. Applications can even be run from the tape volume just as if they resided on disk.
  • Although retrieving files from the tape volume is very slow compared to retrieval times from a disk volume, the time required to restore only a few files or a single subdirectory seems to be quite acceptable; i.e., comparable to the restore time from a conventional file-by-file backup. In fact, often the total restore time is less, because the user can easily peruse the file/directory tree using his own tools to decide which files to restore instead of using a "foreign" restore tool.
  • the preferred embodiment offers an alternate method for restoring individual files which, from the user's perspective, operates identically to a conventional restore from a file-by-file backup.
  • a dedicated restore application allows the user to select ("tag") the files he wishes to restore. This application then examines the volume structure, looking at the FAT and directory entries for the tagged files to determine an optimal ordering for restore.
  • the restore application can guarantee that the entire tagged file set is restored with no more than a single pass over the tape, which is as good as the guarantee of any file-by-file system.
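  • The ordering step can be illustrated with a small sketch (types and names ours): collect the tape position of every block belonging to the tagged files, then sort by tape position so the entire set is restored in one forward pass over the tape.

    #include <algorithm>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct TaggedBlock {
        uint32_t    tapeBlock;        // position of the block on tape
        std::string file;             // file the block belongs to
        uint32_t    fileBlockIndex;   // block number within that file
    };

    // Sorting by tape position guarantees strictly forward tape motion, so
    // every tagged file is recovered in a single pass.
    std::vector<TaggedBlock> singlePassOrder(std::vector<TaggedBlock> blocks) {
        std::sort(blocks.begin(), blocks.end(),
                  [](const TaggedBlock& a, const TaggedBlock& b) {
                      return a.tapeBlock < b.tapeBlock;
                  });
        return blocks;
    }
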
  • the present invention allows greater flexibility in restoring individual files than a conventional file-by-file approach, while at the same time offering comparable (or better) restore performance.
  • a set of disaster-recovery floppy disks can be created which allow the user to boot DOS from floppy and load enough of NetWare to access the original tape drivers so that the disk partitions can be restored.
  • This set of boot floppies typically only needs to be built once, or at most every time the NetWare device driver configuration is changed.
  • the user invokes the restore procedure shown in the flow chart of FIG. 9 by installing a new (unformatted) hard disk and inserting the disaster-recovery floppies, allowing a full restore of the entire disk configuration and contents as they were at the time of the last backup.
  • the next step in restoring the volume image from tape is to partition the disk into a DOS and a NetWare partition, as shown in block 221. From block 221, processing continues to block 222, where the contents of the DOS partition are restored. Since the on-disk structure for a DOS FAT volume is entirely documented, the methods described here for allowing mount of a volume tape image could easily be applied to allow a file-by-file restore from the image backup of the DOS partition. However, the DOS partition on a NetWare system is typically quite small and does not contain many files that are accessed directly by the administrator, so in the preferred embodiment this functionality is not implemented.
  • Because the DOS partition is so small, usually no disk flaws are encountered during a conventional image restore of the DOS partition, particularly given that a replacement disk would almost certainly be a modern disk drive in which initial flaw mapping can be performed automatically and transparently.
  • If flaws were encountered, the DOS restore logic would have to interpret the disk structure from the tape image to pull off the DOS files and restore them to the newly formatted partition, avoiding the flaws.
  • the system is rebooted from the DOS partition at block 223 to bring up the full NetWare environment that existed at the time of the image backup.
  • the restore software calls NetWare (MM_CreatePartition, MM_InitializePartitionTable) at block 224 to initialize the NetWare partition(s) on the physical disk drive(s); this step builds the hotfix area and an empty volume table.
  • the restore software calls NetWare (using LogicalPartitionIO) at 225 to create a new (empty) volume of equal or greater size, which may span multiple segments, depending on the disk configuration and the user's preferences.
  • the logical sector image of the original volume is then read from tape at 226 and written to the appropriate segment(s) via the internal NetWare logical sector i/o call (LogicalPartitionIO).
  • the restore software issues a NetWare command-line call at 227 to mount the restored volume. At this point, the volume is available for access.
  • the restore software exits and the system is back in its original state at the time of the backup.
  • this entire process, including booting from floppy and restoring the DOS and Novell volumes, is totally automatic, other than the fact that the user must specify which volumes get restored and remove the boot diskette to allow the final reboot to occur.
  • the process is so much simpler than a full system restore from a conventional file-by-file backup that several interesting applications of this type of restore become feasible. For example, it is possible to restore the volumes to a separate ("spare") server computer just to peruse and use the backup data without affecting the original server.
  • this technique can be used to transfer the file contents of an existing server to a new server, presumably with higher performance and capacity, which is to replace the existing server.
  • an image backup tape would allow a vendor or technician to install a new server containing a pre-configured set of network applications at a customer site.
  • Today such an operation usually involves the painful procedure of partitioning the disk, installing DOS, installing NetWare, then installing the applications, and this process must be repeated for each new customer.
  • the vendor could perform the installation once at his headquarters, then have a technician simply perform the image restore at each customer site, resulting in a considerable savings in time and money.

Abstract

A system for backing up data from a computer disk volume at very high speed by saving a logical image copy of the volume to a backup medium such as magnetic tape. This logical image copy can later be restored in its entirety to a disk volume with a different physical geometry and flaw map in a disaster recovery mode, significantly reducing the time required for such disaster recovery compared to other backup techniques. In addition, the logical image copy on the backup medium also allows selective file restore with performance comparable to that achievable using traditional file-by-file backup/restore methods. The backup process can thus run considerably faster than conventional approaches without sacrificing the restore flexibility normally associated with those approaches.

Description

FIELD OF THE INVENTION
The present invention relates to a system for backing up data at high speed from a computer disk volume onto a backup medium and subsequently restoring some or all of said data in the event of data loss or corruption.
BACKGROUND OF THE INVENTION
Backing up data and program files (often together referred to as "data" here) from computer disks has been a well-known practice for many years. There are two major reasons why data needs to be backed up. The first reason is that the disk hardware may fail, resulting in an inability to access any of the valuable data stored on the disk. This disastrous type of event is often referred to as a catastrophic failure; in this case, assuming that backups have been performed, the computer operator typically "restores" all his files from the most recent backup. Fortunately, new computer disks and controllers have become more reliable over the years, but the possibility of such a disaster still cannot be ignored. The second reason for backup is that users may inadvertently delete or overwrite important data files. This type of problem occurs much more frequently than a catastrophic hardware failure, and the computer operator typically restores only the destroyed files from the backup medium (e.g., tapes or floppy disks) to the original disk.
In general, the backup device is a tape drive, although floppy disk drives and other removable disk drive technologies are also used. Tape has the advantage of having a lower cost per byte of storage and thus is preferred in most applications, particularly those where large amounts of data are involved (e.g., network file servers, such as those running Novell's NetWare software). However, tape also has several inherent limitations which must be addressed in order to make its performance acceptable to a user. First, tape is a sequential access medium, with any attempt at random access requiring times on the order of tens of seconds (if not minutes), as opposed to milliseconds for a disk drive. Second, and somewhat related, the time to stop a tape drive and back up a little is on the order of seconds, which is again very large compared to disk times. The result of all this is that, once the tape drive starts moving the tape, any attempt to stop, back up, or skip forward will result in a very large time penalty. Thus, the most desirable way to use a tape drive is to keep it "streaming"--in other words, to read or write very large sequential blocks of data.
In this context, a third problem can arise, dealing with the transfer rate of the tape. One of the most critical parameters of a backup system is the amount of time (known as the "backup window") required to back up a given disk volume. This is particularly true in multi-user systems or network file servers, where the system may be effectively shut down while the backup is occurring. Normally, the backup time is by far the most important criterion to a user, since restore is by definition a somewhat extraordinary event (although the restore time is nonetheless of some interest). If the tape data rate is too slow, it will be easy to keep the drive supplied with enough data so that the tape can stream, but a backup and/or restore operation will take too much time. On the other hand, if the data rate is too high, the disk drive will not be able to keep up with the tape, which will then fall out of streaming and backup time will increase unacceptably. While most tape drives include memory buffers to attempt to smooth out any loss of streaming due to instantaneous variations in the rate of data coming from the disk, such buffers only mildly alleviate the problem. In a word, a tape drive should be just fast enough but no faster, or performance will suffer. This balancing act can lead to problems as technology evolves, as discussed below.
Historically, disk drive transfer rates have been much higher than tape transfer rates for mass-market devices. For example, a DAT (digital audio tape) 4 mm tape drive using the DDS-2 format has a native transfer rate of 366K bytes/second, and current Exabyte 8 mm tape drives have a 500 K byte/second transfer rate. By contrast, it is not uncommon for disk drive raw transfer rates to be on the order of 3-5 M bytes/second (although this number does not take into account any seek latency, as discussed below). However, recent advances in tape drive technology are pushing the tape transfer rates higher. For example, current Quantum DLT (digital linear tape) drives achieve transfer rates of 1.25-1.5 M bytes/second, and the next generation of 4 mm and 8 mm tape drives promises to increase transfer rates substantially over current capabilities.
Unfortunately, using conventional backup techniques, such tape technology advances are not always good news. Almost all popular backup programs, such as Cheyenne's ArcServe and Arcada's Backup Exec, work on a file-by-file basis. In other words, during the backup process, the backup program copies one file at a time from the disk to the tape. This approach collects pieces of each file, which may not be contiguous on the disk, into a single sequential block that is stored on the tape, thus simplifying and speeding up a future restore process. One useful consequence of this method is that the data is thus stored on the tape in a format that may allow files to be transported between computers with different operating systems. With current technologies, it is not uncommon in a file-by-file approach on network servers for a full backup (i.e., a backup of all files on the disk) to consume more time than is available overnight. Fortunately, an important benefit of file-by-file backup is that an "incremental" backup can fairly easily be performed, in which only those files which have changed since the last backup are written to tape. Normally, changed files represent only a small fraction of the overall disk contents, in which case an incremental backup can be completed relatively quickly, and most operating systems maintain an "archive" bit that can easily be used to tell whether each file has changed or not. A typical scenario involves performing a full backup once per week (often over the weekend on a network file server), with daily incremental backups to minimize the backup window. Full backups still need to be performed fairly regularly, because recovering the current file contents from an initial full backup and a large set of incrementals can be very time consuming.
As each file is opened and read from the disk in a file-by-file backup, the file system component of the computer's operating system gets involved in each step, which adds overhead time. Even worse, in general the files are not pulled from the disk in an optimal order with respect to their physical location on disk. Thus, the disk seek time required to move the disk head to read the file contents usually significantly degrades the overall data rate from the disk, particularly in the case of smaller files where much more time is spent moving the head to the right location than actually reading data. The net result is that, while the disk has a raw (i.e., sequential) transfer rate of several megabytes per second, once the file system software and disk seek overheads come into play, the average disk read data rate can easily fall below that of the tape drive, which then falls out of streaming, slowing down the backup process substantially. The paradoxical conclusion is that a doubling of the tape data rate may in fact slow down the backup time considerably. Current trends indicate that tape drive transfer rates are increasing faster than the disk seek times are decreasing, making it even harder for file-by-file backup methods to keep future tape drives streaming. For example, using Cheyenne Arcserve on a NetWare server with a Quantum DLT drive, which inherently is capable of storing 90 MB/minute, typically results in throughputs which are only a fraction of the theoretical speed, meaning that the tape drive is constantly stopping and starting instead of streaming.
An alternate backup method that has been used in the past to minimize backup time is to perform the backup on an "image" basis instead of a file-by-file basis. In this approach, the disk image is read sequentially on a sector-by-sector basis, resulting in disk transfer times that match the drive's rated throughput and are thus much faster than current tape drive technology, and this speed advantage appears to be sustainable as technology improves. Without the extra file system software overhead and without extraneous disk head movements, an image backup can thus easily keep a tape drive streaming. However, for several notable reasons, image backup has never become popular.
One major historical problem with image backup is that the only option for restoring has almost always been an image restore, wherein the entire disk image is copied from tape back to disk. While such an approach makes sense in the case of catastrophic failure, it is extremely inconvenient for the most frequent purpose of restore: to retrieve copies of a few lost or corrupted files. In order to perform such a partial restore, the user must either overwrite his entire existing disk (including any files modified since the backup), which is totally unacceptable, or he must have available an extra empty disk to which the image can be restored, which is expensive and often impractical. Clearly, the complete image restore may take considerably longer in general than would a selective file restore in a file-by-file system. Also, the disk to which the image is to be restored must have a flaw map which is identical to (or a subset of) the flaw map of the original disk. While most modern disks perform some level of defect mapping inside the drive, this approach cannot handle all flaws which develop after production test (e.g., during shipment), and such flaw mapping is normally handled by the operating system's file system code. Often, image restore software has required the physical disk geometries of the original backup disk and the restore disk to match, which is also problematic in the case of catastrophic failure, because it may not be possible to purchase an identical disk given the rapid change in the capacity (and thus geometry) of disk drives.
Another problem is that, from a bottom-line perspective, for several reasons the speed of image backup has not even always been faster than that of a file-by-file backup. For example, with typical image backup it is not possible to perform an incremental backup, so that each backup session is a full image backup and thus may be slower than a file-by-file incremental backup. Also, if the disk is only partially full, an image backup may be slower than a file-by-file backup because the former will write a lot of "unused" disk sectors to tape. Most importantly, in the past the tape drive transfer rates have often been low enough that file-by-file backups were able to keep the tape streaming, removing the one major objection to the file-by-file approach.
Some attempts have been made to allow file-by-file restore from an image backup, normally by "mounting" the tape image as a disk drive (often in a read-only mode). A few such products have been commercialized without meeting any significant market acceptance, mainly because the tape seek times incurred in reading the disk control and directory structures are so painfully slow compared to disk drive speeds. These structures are in general not physically contiguous on the disk, which costs milliseconds when looking through the directory structures on the disk, but this same discontiguity costs tens of seconds when performing the same operation on the tape image.
Recently, one software backup product, SnapBack from Columbia Data Systems (see LAN Times, Feb. 13, 1995, p. 89), has attempted to make image backup more acceptable to the user. This product performs image backups of one or more physical disk partitions to tape and allows subsequent image restores to the same (or larger) partitions. SnapBack runs its backups and restores under the MS-DOS operating system, although it also contains a scheduler for NetWare which will shut down the NetWare server code at a user-selected time, exit to MS-DOS to perform the backup, and then reenter NetWare. Each hard disk on a personal computer contains a partition table, typically on the first sector of the disk, which identifies the locations, sizes, and types of all the physical partitions on the disk. On an IBM-compatible personal computer, these partition types include MS-DOS FAT partitions, Novell NetWare partitions, OS/2 HPFS partitions, Microsoft Windows NT NTFS partitions, Unix partitions, etc. SnapBack claims to be able to back up these partition types, and it works at the physical level by reading the physical disk sectors and saving this "image" to tape.
For restore, SnapBack includes the typical full image restore mechanism, along with the concomitant flaw map problem, although it does allow the restore target disk to have different physical geometry than the backup source disk (as long as it is no smaller than the source). SnapBack includes no way to perform any type of incremental backup, but it does include a feature whereby a Novell NetWare partition image tape can be "mounted" as a read-only drive, allowing the user to access individual files on the tape for restore.
The physical nature of SnapBack's operation allows it to function after a fashion for a wide variety of operating system disk partitions, but its lack of operating system specific knowledge also places some severe limits on functionality. For example, to use SnapBack, the operating system (e.g., NetWare) must be entirely shut down during the backup process, which is totally unacceptable for many users. Further, because SnapBack operates at a physical level instead of a logical level, it is not aware of any logical information contained within the partition. Thus, the backup process will always back up the entire disk image even if the disk is largely empty, slowing performance considerably. Also, the tape image mount mechanism suffers from the same severe performance problem discussed previously. In this case, the slowness is exacerbated by the fact that, during the mount process, NetWare actually reads in the entire set of directory and control structures for the entire disk. Since these structures are not guaranteed to be contiguous on the disk, the mount process from tape can easily take tens of minutes, which is particularly disconcerting if the user only wishes to restore a small handful of files. In fact, this mount time may well be longer than the time required for a full image restore!
An additional limitation caused by the physical nature of the SnapBack image backup is that a NetWare volume which is split into segments on multiple physical disks (a configuration commonly used to increase volume size and performance) cannot easily be restored except to a set of physically identical disks, since there are logical and physical pointers included in the NetWare disk structures which specify where the segments reside, and SnapBack is unaware of such pointers. Similarly, a multi-segment volume cannot be mounted for file-by-file restore in SnapBack. These limitations are quite severe for the NetWare market, which currently has by far the largest number of file server installations and constitutes the dominant market for network backup software. While the SnapBack product contains some significant advances in image backup, it still leaves some very significant barriers to user acceptance.
Thus, there are two well-known backup strategies: file-by-file, which has well-accepted usability characteristics but whose performance is proving extremely difficult to maintain as technology advances, and image, whose performance can keep up with technology but which has met with almost universal rejection in the market for the reasons discussed above.
SUMMARY OF THE INVENTION
It is the goal of the present invention to overcome the problems historically associated with image backup, thus allowing for new high-speed tape devices to stream during the backup process, without forcing the user to accept compromises in the flexibility or performance of the restore process.
The backup process of the present invention reads sectors from the source disk at the logical sector level, thus removing any reliance on the underlying physical characteristics of the disk or its interface. Because the sectors are read sequentially, a backup performed using the present invention is capable of sustaining a data rate high enough to ensure streaming of even very high-speed tape devices. The system does not have to be shut down during the backup operation: the software can allow for the operating system and file system to continue operation, although access to portions of the disk volume will be temporarily delayed. During the backup process, a log is kept of all files which are opened for write, since those files may not contain consistent information at restore time. By saving logical sectors, the backup image takes advantage of any flaw management performed by the operating system's file system software, thus making it possible to restore the logical image later to a disk with a different flaw map. Another advantage of using logical sectors in the present invention is that a disk volume which spans multiple physical disk segments is saved as a single logical image and thus can easily be restored to an entirely different physical disk configuration. In addition, by understanding the on-disk volume format, the backup software may exclude unused or deleted areas of the disk to minimize backup time significantly.
A backup image of the present invention may be restored by completely restoring the logical image to a disk, but it may also be "mounted" as a read-only volume directly on the backup tape, allowing the user to restore only selected files from the backup. The time required for this mount process is substantially minimized by saving all the volume control and directory sectors at the beginning of the tape, so that only a single tape seek is required to complete the mount. The backup process of the present invention can determine which sectors need to be included at the beginning of the tape by understanding the on-disk volume format, or it can use a pseudo-volume technique for determining this sector set automatically without having any knowledge of the on-disk volume format.
An incremental image backup, which supports all the functionality and performance of a full image backup, can also be performed as part of the present invention. A software module is kept resident at all times to monitor which parts of the disk volume have been updated, thus allowing only those changed portions of the disk volume to be backed up. This approach speeds up the backup for a largely unchanged volume: instead of being limited by tape transfer speeds, it is limited by the (typically much higher) disk transfer speeds. In an alternate embodiment, a checksum method is used on the contents of the disk sectors to detect changes without requiring any resident software.
BRIEF DESCRIPTION OF THE DRAWINGS
A preferred embodiment of the present invention is illustrated in and by the following drawings, in which like reference numerals indicate like parts and in which:
FIG. 1 is a diagram of the layout of a typical NetWare disk drive;
FIG. 2 is a table of sample NetWare FAT (File Allocation Table) entries;
FIG. 3 is a diagram of the contents of a NetWare disk volume and a diagram of the image of the disk volume stored on tape in accordance with the present invention;
FIG. 4 is a block diagram of a pseudo-volume mount in accordance with the present invention;
FIG. 5 is a diagram of the format on tape of a full image backup and an incremental image backup stored in accordance with the present invention;
FIG. 6 is a block diagram of a file-by-file restore from an image backup, in accordance with the present invention;
FIG. 7 is a flowchart illustrating the file-by-file restore process of the present invention;
FIG. 8 is a flowchart illustrating the servicing of logical sector requests during file-by-file restore, in accordance with the present invention;
FIG. 9 is a flowchart illustrating the image restore process of the present invention; and
FIG. 10 is an outline of the format of a Novell NetWare volume table.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The preferred embodiment of the present invention is a backup software package for a file server running the Novell NetWare operating system. NetWare has by far the largest installed base of network file servers, and the market for NetWare backup software is therefore quite substantial. However, the general techniques described below can be readily applied to other operating systems, such as Microsoft Windows NT, IBM OS/2, MS-DOS (or compatible operating system, hereafter referred to generically as DOS), etc., so the discussion here is not meant to limit the scope of the present invention to any particular operating system or file system.
1. NetWare File System
In the preferred embodiment, the file server has one or more physical disks attached, as well as a tape drive. As shown in FIG. 1 (where the layout of a typical NetWare disk drive is shown in a diagram), in a NetWare system, each physical disk contains a physical partition table 100, typically placed on the first sector(s) of the disk. This table identifies the physical partitions located on each disk, including the starting point, size, and partition type (e.g., DOS, NetWare, etc.). In a NetWare server, the system first boots from a DOS diskette or a DOS bootable partition 101 on the hard disk. After DOS has booted, the NetWare server code is loaded as a DOS application which takes over the entire computer, effectively taking control away from DOS. NetWare then loads its device drivers, including those drivers which allow sector reading and writing of the disk and tape drives, mounts any NetWare disk volumes found on the NetWare partition(s) 102, and offers its file services on the network.
NetWare currently allows only one physical NetWare disk partition 102 per physical drive. This physical partition is broken up into two logical regions. The first region 103 contains the "hotfix" sectors. These are sectors set aside to map out bad sectors in the main data region and typically constitute a small percentage (1-2%) of the overall physical partition. The second region 105, which comprises the remainder of the physical partition, is used for storing volume data. Each time a write occurs to a NetWare disk, the operating system performs a physical write-with-verify operation to the disk. If the verify fails, the bad portion of the disk is then mapped out by assigning a portion of the hotfix area 103 to replace it at the logical level. Obviously, if enough disk flaws develop over time, the pool of unused sectors in the hotfix area could be exhausted, but presumably Novell has enough experience in selecting the appropriate amount to allocate to the hotfix area that such an occurrence is extremely unlikely.
This technique of dynamically mapping out bad areas of the disk costs a little in performance, since the verify pass requires an extra rotation of the disk head, but it has several notable advantages. First, it allows instant, reliable use of a new NetWare partition without any extended burn-in periods that attempt to map out present (and future) disk flaws, as were required in older versions of NetWare. Second, it significantly reduces the (already low) probability of failure on subsequent reads, since the sectors are guaranteed to have been read successfully from the disk at least once. Further, the performance disadvantage mentioned above is mitigated by the fact that most disk accesses are reads, so the overhead from effectively slowing down write operations is not very noticeable.
As shown in FIG. 1, the main volume region 105 of a NetWare partition 102 may be split into multiple segments 106 (currently up to M=8 segments per NetWare partition). A volume resides in one or more segments, and the mapping between volumes and segments is established by a volume table 104, which resides at the beginning of the main volume region 105. There is one entry in the volume table 104 for each segment 106 of the partition 102. To improve data integrity, NetWare stores multiple copies of the volume table 104. Since there may be multiple physical drives in a system, each with a NetWare partition containing multiple segments, a NetWare volume can easily be spread across physical drives. NetWare has utilities that allow an existing volume to be extended onto other segments, so it is fairly easy (and quite common) to add a new disk drive and grow an existing volume onto the new drive, which has its own hotfix region.
The NetWare volume table format is not currently documented by Novell, although Novell has indicated that it will be documented in the near future. The exact format of this structure is outlined in FIG. 10 and was determined during the development of the present invention by examining the contents of the logical partitions, with ready help from Novell engineers. The definition is given as a C++ language structure statement, with the double slash (//) indicating a comment to the end of line. The definition starts at 300 for the VOLUME_TABLE_ENTRY array (319). The volume header is a single sector (512 bytes), which is an array of up to 8 of these records, as shown at 319. Each record describes one segment of one logical volume. The volume header is placed at logical sector number 160 in the logical partition, and it is replicated three more times for robustness at 32-sector intervals thereafter. The NameLen field 301 contains the length of the Name string 302, which is the volume name and must be unique across volumes. The Sync field 304 and the Flags field 308 are unused in the present invention. The NumberOfSegments field 305 indicates how many segments are contained in the volume. The SegmentPosition field 306 indicates which segment of the volume is described by this entry; for example, a value of zero here indicates the first segment of a given volume, the value one indicates the second segment, etc. The StartingSector field 309 indicates which physical sector number in the partition corresponds to the start of this segment. The SectorsInSegment field 310 contains the number of sectors in this segment. The FirstBlockInSegment field 312 indicates the first logical block number contained in this segment. The remaining fields are all identical for each entry of segments contained in a given volume. The BlockShiftFactor field 305 is the base 2 logarithm of the number of sectors per logical block in the volume. The BlocksInVolume field 311 indicates the total number of blocks in the volume. The FirstPrimaryFATBlock field 313 indicates which logical block in the volume contains the first FAT block of the volume; the FirstMirrorFATBlock field 314 indicates the start of the mirror copy of the FAT. Similarly, the FirstPrimaryDirectoryBlock field 315 indicates the start of the directory blocks for the volume and the FirstMirrorDirectoryBlock field 316 indicates the start of the mirror copy of the directory blocks. All of these fields can easily be used to identify the segments of all volumes on the partition, as well as their FAT and directory block chains.
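By way of illustration only, the following C++ fragment sketches such a record using the field names described above. The field widths, ordering, and any padding shown here are assumptions made for readability (32-bit unsigned values are assumed); they are not taken from FIG. 10 and will not necessarily add up to the actual on-disk record size.

struct VOLUME_TABLE_ENTRY {                   // sketch only; widths and order assumed
    unsigned char NameLen;                    // length of the Name string
    char          Name[19];                   // volume name, unique across volumes
    unsigned long Sync;                       // unused in the present invention
    unsigned long Flags;                      // unused in the present invention
    unsigned long NumberOfSegments;           // segments contained in the volume
    unsigned long SegmentPosition;            // 0 = first segment, 1 = second, ...
    unsigned long StartingSector;             // physical sector where this segment starts
    unsigned long SectorsInSegment;           // number of sectors in this segment
    unsigned long FirstBlockInSegment;        // first logical block in this segment
    unsigned long BlockShiftFactor;           // log2(sectors per logical block)
    unsigned long BlocksInVolume;             // total blocks in the volume
    unsigned long FirstPrimaryFATBlock;       // first FAT block of the volume
    unsigned long FirstMirrorFATBlock;        // start of the mirrored FAT
    unsigned long FirstPrimaryDirectoryBlock; // first directory block of the volume
    unsigned long FirstMirrorDirectoryBlock;  // start of the mirrored directory blocks
};
// The volume header is a single 512-byte sector holding up to 8 such records,
// one per segment, placed at logical sector 160 and replicated three more times.
VOLUME_TABLE_ENTRY VolumeTable[8];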
At a logical level, NetWare views each volume as a linear group of sectors. All the mapping of parts of volumes to segments and the flaw mapping into the hotfix region 103 are transparent at this level. The preferred embodiment performs its sector reads and writes at this logical level, using an internal NetWare call (LogicalPartitionIO).
At this logical level, a file allocation table (FAT), similar in spirit to the well-known DOS FAT, is used to record which logical blocks of the volume are currently in use. All space in the volume is allocated in terms of blocks, which are analogous to a DOS cluster. A block is a logically contiguous set of M sectors, where M is always chosen to be a power of two. Typical block sizes range from 8 sectors (4096 bytes) to 128 sectors (65536 bytes). In contrast with DOS, where the FAT is a table of fixed size stored at the beginning of the volume, the NetWare FAT is itself spread throughout the volume in blocks, which are linked together using FAT entries. Each segment of the volume contains enough FAT blocks to manage its own blocks, thus allowing for simple extension of existing volumes to new segments. For data integrity purposes, multiple copies of the FAT are stored. The volume table entry for each segment contains pointers to the first FAT block of that segment, but all FAT blocks of the volume are logically linked together into a single chain via FAT entries. Thus, space for the FAT is effectively allocated as if the FAT itself were a file.
FIG. 2 shows a table of sample NetWare FAT entries. Each FAT entry consists of eight bytes. The first four bytes indicate the sequence number 110 of the FAT entry, or the block number within the file of the associated block. Normally, these sequence numbers are sequential (0,1,2, . . . ) for sequential blocks in the FAT chain of each file, but they may not be sequential if sparse files are used, as illustrated at entry 114. The second four bytes, which correspond most closely to the DOS FAT entry, contain the block number 111 of the next FAT entry in this file. A zero 112 in a FAT entry indicates that the associated block is unallocated (i.e., available), while a value of all ones (hex FFFFFFFF) 113 indicates the end of the FAT chain for this file. Note: In NetWare 4.x, sub-block allocation is also used to minimize the unused ("slack") space in the last block of a file. This is indicated by setting the most significant bit of the next block number without the entire entry being FFFFFFFF; the remaining bits indicate where the final partial block is stored. For our purposes, this fact is not relevant other than to know that the upper bit of a next block number being set indicates that the block in question is in use and is the end of a FAT chain.
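For concreteness, a FAT entry and the chain-following step just described can be sketched in C++ as follows. The type and function names are illustrative only (32-bit unsigned values are assumed), and error handling such as validating block numbers is omitted.

struct NW_FAT_ENTRY {                         // eight bytes per entry on disk
    unsigned long Sequence;                   // block number within the file (110)
    unsigned long NextBlock;                  // next block in the chain (111):
};                                            //   0 = free, FFFFFFFF = end of chain

// Visit every block in one FAT chain (e.g., the chain of FAT blocks or of
// directory blocks), starting from the block named in the volume table entry.
void WalkFatChain(const NW_FAT_ENTRY *fat, unsigned long firstBlock,
                  void (*visit)(unsigned long block))
{
    unsigned long b = firstBlock;
    for (;;) {
        visit(b);                             // e.g., add the block to the header set
        unsigned long next = fat[b].NextBlock;
        if (next == 0xFFFFFFFFul ||           // end of chain
            (next & 0x80000000ul) != 0)       // NetWare 4.x sub-allocated final block
            break;
        b = next;
    }
}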
Directory entries for files and subdirectories are also stored in blocks. Each directory entry contains the file name, size, and other attributes, including pointers that indicate its position in the volume directory tree. All directory blocks are linked together using FAT entries as if the directory blocks themselves were a file. The volume table entry for the first segment of the volume contains the logical block number of the first directory block of the volume.
Both the FAT and the directory entry blocks for the entire volume can thus be identified by reading the volume table entry for the first segment of the volume, then using the FAT entries to follow the singly linked chains for each set of blocks. When a volume is mounted, NetWare does just this. It reads the entire FAT into memory, then reads in all the directory blocks, performing checks on the integrity of the volume structure. If a problem is found during the mount process, NetWare informs the system administrator that a repair utility (VREPAIR) should be run in order to enable mounting the volume. The directory blocks are cached in memory, but are not required to fit in memory all at once.
2. Backup
The backup software of the preferred embodiment runs as a NetWare Loadable Module (NLM), with an accompanying disk driver. Either from the NetWare console or from an application running on a network workstation, the user specifies when a backup is to occur, either by scheduling a backup operation in advance or by manually invoking an immediate backup. The NLM then backs up each physical partition in turn to the tape (or to whatever backup medium is used). In most instances, the disk contains a small DOS (boot) partition followed by a NetWare partition. Physical disk devices other than the boot drive usually contain only a NetWare partition.
It is possible, although rare, that other types of disk partitions (e.g., OS/2 HPFS, Windows NT NTFS) exist on a NetWare drive. The preferred embodiment will perform a "dumb" (i.e., conventional) image backup of such partitions, without using any knowledge of the native operating system or file system associated with that partition, and with all the concomitant problems of a traditional image backup. This limitation in the preferred embodiment is accepted as a conscious decision solely because such partitions are so rare in the NetWare environment as to be of little commercial interest, but clearly the techniques of the present invention could be applied to these other partition types if desired.
If a DOS FAT partition exists on the drive, it is backed up using an image backup in the preferred embodiment. As discussed in the next section, this approach greatly facilitates a complete restoration of both partitions of a failed disk drive, which is otherwise a very painful and time-consuming process on a NetWare system. In an alternate embodiment, the DOS partition may be backed up on a file-by-file basis.
In the preferred embodiment, each NetWare volume is backed up as a single logical image. The volume table is read and interpreted to understand which segments correspond to each volume, and the volume table is also saved at the beginning of the tape to allow a restoration to an identical physical segment/volume configuration if desired. However, each volume image can also be independently restored to any physical disk configuration with enough space to hold the image. Because each volume is read via the internal NetWare call (LogicalPartitionIO) that reads logical sectors, the hotfix map is automatically and transparently used by NetWare to present an image which is (normally) error-free and independent of any physical flaws.
To minimize the time required for subsequently mounting the backup image, the logical sector image of the volume is not stored in linear sector order on the tape. Instead, as shown in FIG. 3, all the logical sectors necessary for NetWare to perform the mount are saved in a FAT/directory header 122 at the beginning of the volume image on tape. Control information 120 identifying these sectors, as well as other information such as the time of the backup, is written along with the header. The set of sectors saved in the FAT/directory header 122 includes all the FAT blocks and the directory blocks of the volume. These blocks are identified by reading the volume table entry for the first segment of the volume, which contains pointers to the first FAT and directory blocks of the volume, and the FAT chain is then followed to identify all subsequent FAT and directory blocks. Actually, NetWare stores a duplicate ("mirror") copy of both the FAT and directory blocks, but these mirror copies are not included in the header, although they are backed up as part of the main volume data. After this header, the remaining logical sectors, comprising the file data 123, are appended in a monotonically increasing sector order.
Note that, in the preferred embodiment, it is not the case that all logical sectors are always included somewhere in the backup image. For example, in order to minimize backup time of partially full volumes, the preferred embodiment by default excludes logical blocks (and thus the associated logical sectors) which do not contain any file data, such as 124 in FIG. 3. The "empty" blocks are identified by scanning the FAT to see which FAT entries are zero. The user may override this operation to force all sectors to be included in the backup if desired. Similarly, in the preferred embodiment, the backup software will scan the directory entries for deleted files, which are retained by NetWare on a temporary basis. The data blocks, such as 125, associated with those deleted files will be excluded from the backup image to minimize backup time, unless the user overrides this default behavior.
Because the blocks are not logically ordered on tape, a block map table 121 is pre-computed using the FAT/directory information and stored along with the header, with one entry per logical block. Each entry 126 in this table indicates which tape block in the backup image corresponds to a given logical block. The table thus allows for instant lookup of the position of each logical block on the tape at restore time.
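One possible way to pre-compute such a block map table is sketched below in C++. The helper predicates are hypothetical stand-ins for the FAT and directory analysis described above, and are not part of any actual product interface.

const unsigned long NOT_ON_TAPE = 0xFFFFFFFFul;  // marks an excluded (empty/deleted) block

// Assign a tape block position to every logical block: header (FAT/directory)
// blocks first, then the remaining included blocks in increasing logical order.
void BuildBlockMap(unsigned long blocksInVolume,
                   bool (*isHeaderBlock)(unsigned long block),  // FAT or directory block?
                   bool (*isExcluded)(unsigned long block),     // empty or deleted-file block?
                   unsigned long *blockMap)                     // one entry per logical block
{
    unsigned long tapeBlock = 0;
    for (unsigned long b = 0; b < blocksInVolume; ++b)
        if (isHeaderBlock(b))
            blockMap[b] = tapeBlock++;                          // FAT/directory header region
    for (unsigned long b = 0; b < blocksInVolume; ++b)
        if (!isHeaderBlock(b))
            blockMap[b] = isExcluded(b) ? NOT_ON_TAPE : tapeBlock++;  // file data region
}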
In an alternate embodiment, where the on-disk structure for the file system is not known, the backup software can identify all the sectors required for mount (and save them in the tape FAT/directory header 122) using the technique shown in the block diagram of FIG. 4. First, the backup process presents a "pseudo-volume" 139 to the operating system 133 to be mounted read-only. Whenever a logical sector read call 131 is issued by the file system 134, the "disk driver" logic for the pseudo-volume 139 performs the read by instead reading logical sectors from the actual logical volume to be backed up 135. The pseudo-volume disk driver 139 maintains a log of which logical sectors are read during the mount process. If the file system mount process automatically reads all the directory and control structures for the disk (as in NetWare), after the pseudo-volume mount is completed this sector log identifies all the necessary sectors to be included in the tape image header. Otherwise, the backup application 138 will need to issue file system calls to force all such areas of the disk to be accessed so that these areas can be logged. For example, it may be necessary to "walk" over the entire directory tree structure of the disk using the normal findfirst/findnext file calls. Once the sector logging is complete, the backup application 138 uses this log to build the header 122 and proceeds to back up in a manner basically identical to that of the preferred embodiment. While this pseudo-volume approach does require a knowledge of the operating system entry points for logical sector reads 131, these entry points are normally well-documented as part of the device driver interface specifications, so this method requires much less effort than trying to understand completely an undocumented on-disk format.
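A minimal sketch of the pseudo-volume logging hook follows. RealVolumeLogicalRead is a placeholder for whatever documented logical-read entry point the operating system provides; it is not an actual NetWare call name.

#include <set>

extern int RealVolumeLogicalRead(unsigned long sector,      // placeholder for the OS's
                                 unsigned long count,       //   logical sector read call
                                 void *buffer);

static std::set<unsigned long> g_mountSectors;              // sectors touched during the mount

// "Disk driver" read routine for the pseudo-volume: forward the read to the
// real volume being backed up, but log which logical sectors were requested.
int PseudoVolumeRead(unsigned long sector, unsigned long count, void *buffer)
{
    for (unsigned long s = sector; s < sector + count; ++s)
        g_mountSectors.insert(s);                           // remember for the tape header
    return RealVolumeLogicalRead(sector, count, buffer);
}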
The preferred embodiment also includes a mechanism to perform "incremental" image backups. A list of modified ("dirty") blocks is maintained by a separate NLM which tracks block write calls. With this technique only the blocks of the disk which have changed are read during an incremental backup and stored on the tape. It is absolutely imperative that this NLM be present at all times when the volume is mounted, or some writes may be missed, totally negating the integrity of all subsequent incremental backups until a new full backup is performed.
As shown in FIG. 5, a complete block map table 151, together with all directory and FAT blocks 152, whether they have changed or not, are included in an incremental backup image 150, so that mounting the tape image is still fast. Each block map table entry points to the modified block in the incremental backup 154 if that block has changed, else it points to the original block 153 in the previous backup. To keep track of the modified blocks, the NLM simply maintains a bitmap (one bit per block) indicating which blocks in each volume have been written. For a 10 GB volume with 4 KB blocks, this amounts to only 320 Kbytes of bitmap, which can easily be kept in memory. The bitmap file, which is protected by a cyclic redundancy check (CRC) to verify that its contents have not been corrupted, is read from the DOS partition at startup (before any writes to the NetWare volume can have occurred) and then immediately deleted. At shutdown, after all the volumes have been dismounted so that no further writes can occur, a new bitmap file is written back out to the DOS partition. Thus, if a power failure or some other disorderly shutdown occurs, the absence of a valid bitmap file indicates that the next backup must be a full backup. Otherwise, the bitmap indicates exactly which blocks have changed and therefore which blocks need to be included in the incremental backup. Note that using this incremental backup technique does not significantly affect restore time, although there is a small performance degradation on restore due to having what would otherwise be contiguous parts of the image on discontiguous portions of the tape. It is therefore recommended that full backups be performed regularly, perhaps on a weekly basis, to minimize the small cumulative performance degradation on restore.
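The dirty-block bitmap itself can be as simple as the following C++ sketch; the class and method names are illustrative. The resident module would call MarkWritten from its block-write hook, and the backup would consult IsDirty when deciding which blocks to include in the incremental image.

#include <vector>

class DirtyBitmap {                                // one bit per block of the volume
public:
    explicit DirtyBitmap(unsigned long blocksInVolume)
        : bits_((blocksInVolume + 7) / 8, 0) {}
    void MarkWritten(unsigned long block)          // called on every block write
        { bits_[block >> 3] |= (unsigned char)(1u << (block & 7)); }
    bool IsDirty(unsigned long block) const        // consulted at incremental-backup time
        { return (bits_[block >> 3] >> (block & 7)) & 1u; }
private:
    std::vector<unsigned char> bits_;              // about 320 Kbytes for a 10 GB volume
};                                                 //   with 4 KB blocks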
In an alternate embodiment of incremental image backup, for each block stored on the tape, a checksum or CRC is also stored in a table which is appended to the backup image. Each checksum is large enough to provide a very high level of confidence that blocks with matching checksums are identical. For example, if each checksum consists of 128 bits, the probability of a false match for any given block is approximately 10^-38; this actually gives much better reliability than the underlying tape and disk storage media, which typically have error rates on the order of 10^-20. Fortunately, on high end CPUs such as a 486 or Pentium, such checksums can be computed much faster than data can be read from disk, assuming that the backup process is allowed to consume a significant fraction of the available CPU bandwidth. The checksums are used as follows. On backups subsequent to the original full backup, the checksums for each block are computed and compared to that of the original backup image. If the two checksums match, it is assumed that the two blocks match, so the new block is not stored on tape, but a pointer to the old block is saved in the block map table for this backup, which cannot be pre-computed and is therefore appended to the tape image. If the two checksums do not match, the new block is included in the image backup. Note that this method does require that the entire disk image be read and thus is slower than the preferred embodiment. However, assuming that only a small fraction of the blocks on the disk has changed, this technique allows the incremental backup to proceed at speeds limited only by the disk read time, which is considerably faster than the tape write throughput which limits the speed of a full backup. While it has some obvious disadvantages, this embodiment is probably somewhat easier to implement than the preferred embodiment because it only involves application level code while the latter requires system-level resident code.
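The per-block decision in this alternate embodiment might look like the following sketch. The digest routine and the tape-writing callbacks are hypothetical placeholders, not part of any actual API, and the choice of 128-bit digest is left open.

struct Checksum128 { unsigned long w[4]; };        // any 128-bit digest of a block's contents

static bool SameChecksum(const Checksum128 &a, const Checksum128 &b)
{
    return a.w[0] == b.w[0] && a.w[1] == b.w[1] &&
           a.w[2] == b.w[2] && a.w[3] == b.w[3];
}

// For every block of the volume: if its new checksum matches the checksum table
// appended to the previous image, point the new block map at the old tape copy;
// otherwise the changed block is written to this backup image.
void ChecksumIncrementalPass(unsigned long blocksInVolume,
                             const Checksum128 *previousSums,
                             Checksum128 (*computeSum)(unsigned long block),
                             void (*reuseOldBlock)(unsigned long block),
                             void (*writeNewBlock)(unsigned long block))
{
    for (unsigned long b = 0; b < blocksInVolume; ++b) {
        Checksum128 now = computeSum(b);           // requires reading the entire disk image
        if (SameChecksum(now, previousSums[b]))
            reuseOldBlock(b);                      // assumed unchanged
        else
            writeNewBlock(b);                      // changed: include in the incremental
    }
}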
During any backup, file system consistency and integrity issues can arise if any files on the disk are modified. For example, in conventional file-by-file backup, if a file is open for write, the backup application typically skips that file and adds its name to an exception list that can be perused by the administrator. This situation alone is normally tolerable, although there are often files that are nearly always kept open (e.g., some database files) and therefore would never be backed up, which would clearly make the backup useless with respect to those files. An even more insidious situation can arise when dealing with files whose contents are inter-related, such as a database data file and its index file(s). If some of the files are backed up and then all the files are updated before the remaining files are backed up, the set of files on the backup tape are at best inconsistent and at worst dangerous to system integrity should they ever be restored and used. There is no perfect solution to all these problems other than to dismount the volume during backup, but only after each application responds to a broadcast of the impending dismount by updating and closing all its files in a consistent manner. However, such a solution is problematic because there are in general no such broadcasts or protocols used in NetWare, and because in many installations it is unacceptable to dismount the volume since some applications are required to be on-line at all times. Note that merely dismounting the volume without cooperation from applications is also an imperfect solution, because the applications may need to write some data to close their files in a consistent state.
In the preferred embodiment, there are two different user-selectable ways to handle this problem. Neither solution is perfect, but the combination of the two gives the user flexibility comparable to that of conventional file-by-file backup systems. The first option forces the volume being backed up to be dismounted while the image backup takes place. This approach has the potential disadvantages discussed above, but in some environments it provides a very acceptable solution. The second and more novel option is to "freeze" the volume during the image backup. In this case, the volume is kept on-line at all times, but all writes to the volume are temporarily suspended. Under NetWare 3.12, this suspension is implemented at the logical sector i/o call level (LogicalPartitionIO), which is already hooked by the backup software to read logical sectors. In NetWare 4.1, in order to support Directory Services properly, the WriteFile and ModifyDirectoryEntry calls also need to be suspended in a similar fashion. Any application, including the operating system itself, which attempts to write to the drive will have its operation temporarily blocked, which does not hang the system since NetWare is a multi-tasking operating system. However, instead of suspending all writes to the volume during the entire backup process, which could be quite lengthy for large volumes, each write is suspended only until the point at which the logical sector number being read for backup exceeds the logical sector range of the requested write. Using this approach, the backup image is guaranteed to be identical to the disk image at the time when the backup started, but the system can resume somewhat normal operation before the backup is complete. In an alternate embodiment of this second approach, the driver maintains a small separate cache which is filled with "original" copies of blocks which are written during the backup. These original copies are then written to tape instead of the modified versions on disk, at which point the original block copy can be discarded to free up space in the cache. As long as the cache never fills up, no write operations will ever block, so this alternate approach may significantly limit (or even eliminate) the amount of time spent with blocked write calls in many cases, although clearly this depends on the size of the cache and the amount of write activity.
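The "freeze" rule can be summarized by the small sketch below. The cursor variable and the test function are illustrative; in practice the check and the blocking of the requesting thread would live inside the hooked write path described above.

extern volatile unsigned long g_backupCursor;      // next logical sector the backup will read

// A write to logical sectors [first, first+count) may proceed only once the
// backup's sequential read position has moved past the end of that range, so
// the tape image still reflects the volume as it was when the backup started.
bool MayWriteNow(unsigned long first, unsigned long count)
{
    return g_backupCursor >= first + count;
}

// Inside the hooked write path (in outline): while !MayWriteNow(first, count),
// the requesting thread is blocked and rescheduled, which NetWare's
// multi-tasking tolerates; the write is then passed through to the real driver.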
By monitoring system file status and file calls, the backup software of the preferred embodiment also keeps a list of files which were opened for write at the time the backup began and those which are created or opened for write during the backup. This list becomes the exception log, similar to that of a conventional file-by-file backup, which identifies those files whose contents on the backup may be invalid or inconsistent. There are, however, two significant differences between this exception log and that of a "conventional" exception log. First, the bad news: the time "window" during which a file will be added to the exception log in the preferred embodiment is considerably longer than in the conventional case, where the window for each file consists only of the time required to back up that one file. In other words, the exception log will tend to be somewhat longer in the preferred embodiment, all other things being equal. While this is a disadvantage of the present invention, it is not a very significant one in most cases. Second, the good news: the backup image of the preferred embodiment contains at least a version (albeit possibly invalid) of the contents of files on the exception list. In many instances, this version is actually perfectly good, but it almost always allows for partial recovery of the file contents which is often quite welcome after a catastrophic failure. By contrast, in the conventional case there is not even an inconsistent version available.
3. File-By-File Restore
Once a backup image has been written to tape, the preferred embodiment provides two simple methods for the user to recover individual files from tape without performing a full image restore. Both mechanisms are based on mounting the tape image as a NetWare volume, using a pseudo-disk driver. This is accomplished as shown in the block diagram of FIG. 6 and the flow charts of FIG. 7 and FIG. 8.
During pseudo-disk driver initialization at blocks 200 and 201 of FIG. 7, the entire tape header is read from the tape drive 171 via the tape driver software 170 into memory and entered into the cache 169. Since the header may be too large to fit into the memory allocated for the cache 169, the cache logic writes any excess data to a cache file on a NetWare volume 165 via calls to the operating system 163 and maintains data structures that can be used to locate the appropriate cache blocks in the cache file. After block 201, the logical read/write logic of FIG. 8 is enabled, as discussed below. At block 202 of FIG. 7, the restore software creates a (pseudo) internal NetWare drive 168 which is somewhat larger (by 50% in the preferred embodiment) than the original volume size. As shown in FIG. 6, the software "disk" driver for this new drive is added to the system using the NetWare AddDiskDevice call; the driver effectively reads from the tape image to process logical read requests 161 from the file system, but the cache 169 is used for the tape image header to minimize tape seek time. When a block in the header is requested, in most cases it will be in cache memory 169, but in the worst case an access to the cache file on disk drive 166 is required, which is much faster than accessing the same block on tape would be. In the preferred embodiment, since a NetWare disk driver cannot make file i/o calls directly, access to the cache file is achieved by posting a request to a separate cooperative thread 172 which does not operate at the driver level and thus can fulfill the request. During its initialization, the driver also loads in the block map table 121 from tape 171 and holds it in memory so that the location of each block on the tape can be instantly determined.
Logical sector reads and writes 161 are handled by the pseudo-disk driver 168 as outlined in FIG. 8. Starting at block 214, the disk driver continually polls at 210 and 215 for any pending read or write requests from the operating system 164. When a read request is found, processing continues at block 216. At this point, if the requested disk blocks are in the cache, processing continues at block 217, where the blocks are read directly from the cache 169, which may result in an access to the disk volume 165 via the cooperative thread 172. If the requested disk blocks are not in the cache at 216, processing continues at block 218, where the blocks are read from tape. After blocks 217 and 218, processing continues back to the beginning at block 215. When a write request is found at block 210, processing continues to block 211, where a check is made for the presence of the disk blocks in the cache. If the disk blocks are already in the cache, processing continues at block 213. If the disk blocks are not already in the cache, processing continues to block 212, where any partial disk blocks of the request are read from tape into the cache. Note that full disk blocks to be written do not need to be fetched from tape into the cache, since the entire disk block contents will be overwritten in the cache anyway. From block 212, processing continues to block 213, where the requested disk block writes are posted to the cache. All of these cache operations may result in blocks being read from or written to the disk volume 165 via the cooperative thread 172. Such cache operations are well understood in the art, and there are well-known caching strategies that may be employed without affecting the scope of the invention. From block 213, processing continues back to the beginning at block 215.
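The polling loop of FIG. 8 can be paraphrased in C++ roughly as follows. The request structure and the cache/tape helpers are hypothetical stand-ins for the driver's internal interfaces, and the block map lookup is assumed to happen inside the tape read helpers.

struct IoRequest {
    bool          isRead;        // true for a read, false for a write
    bool          isPartial;     // write covers only part of a disk block
    unsigned long block;         // logical disk block number
    void         *buffer;        // requester's data buffer
};

extern IoRequest WaitForRequest();                             // blocks 210/215: poll for i/o
extern bool CacheHas(unsigned long block);                     // blocks 216/211
extern void CacheRead(unsigned long block, void *buf);         // block 217
extern void CacheWrite(unsigned long block, const void *buf);  // block 213
extern void TapeRead(unsigned long block, void *buf);          // block 218: uses the block map
extern void TapeReadIntoCache(unsigned long block);            // block 212

void PseudoDiskDriverLoop()
{
    for (;;) {
        IoRequest r = WaitForRequest();
        if (r.isRead) {
            if (CacheHas(r.block))
                CacheRead(r.block, r.buffer);    // may touch the cache file on disk
            else
                TapeRead(r.block, r.buffer);     // only file data ever comes from tape
        } else {
            if (!CacheHas(r.block) && r.isPartial)
                TapeReadIntoCache(r.block);      // fetch the partially overwritten block
            CacheWrite(r.block, r.buffer);       // writes are absorbed by the cache
        }
    }
}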
As shown in block 203 of FIG. 7, the driver next creates a NetWare partition (using the MM_CreatePartition NetWare call) large enough to hold a default hotfix size and the volume size. Creation of this (pseudo) partition will result in writes to initialize the hotfix and volume table areas of the partition. These writes are also cached by the cache logic 169, and will effectively be discarded when the tape volume is eventually dismounted. Once this partition is created, the driver issues calls at 204 to create a NetWare volume (writing the volume information using the LogicalPartitionIO call) with a size matching the size of the volume that was backed up, which results in a new volume table entry being written to the partition (and cached by the driver). Finally, a command-line request is issued to NetWare to mount the new volume at 205. At this point, the driver for the "tape" volume 168 enters a loop 206 processing logical sector i/o requests 161; since the driver knows the exact location of each block (in the cache memory 169, in the cache file on disk 167, or on the tape 171), it can easily satisfy all read/write requests, as shown in FIG. 8. Only reads/writes of file contents will result in accessing the tape 171 at blocks 218 and 212, since all the directory and FAT information is in the cache (169 or 167). Note that, if the header blocks were not consolidated in one contiguous region at the beginning of the tape image, this mounting process could require many minutes of tape seeking. Given the way the header blocks are stored in the preferred embodiment, only a single tape seek is required, to the beginning of the tape image, so the additional overhead beyond that required for mounting a similar disk volume is usually measured in seconds (or tens of seconds) instead of minutes.
Observe that, under NetWare, file read accesses to the "tape" volume 168 often result in sector-level write accesses. For example, NetWare maintains a last-accessed date for each file which is updated (i.e., written) each time a file is accessed. Similarly, under NetWare version 4, files may be compressed, and read accesses may result in the file contents being decompressed and written to disk. Thus, the cache 169 and its associated logic allow for arbitrary write access, since the cache can grow dynamically (limited by the amount of free space on the disk volume 165). In the preferred embodiment, the user is not given write access to the volume 168, simply because of the possible confusion caused by the transient nature of such writing, but in an alternate embodiment this somewhat arbitrary restriction can be removed to allow the user to modify the transient image of the mounted volume 168.
Once the new volume 168 is mounted, the user may access files on the "tape volume" using any of his normal file tools, such as Windows file manager. Applications can even be run from the tape volume just as if they resided on disk. In practice, although retrieving files from the tape volume is very slow compared to retrieval times from a disk volume, the time required to restore only a few files or a single subdirectory seems to be quite acceptable; i.e., comparable to the restore time from a conventional file-by-file backup. In fact, often the total restore time is less, because the user can easily peruse the file/directory tree using his own tools to decide which files to restore instead of using a "foreign" restore tool.
However, in the worst case of a large set of files or a set of files which is fragmented (i.e., spread all over the tape), the extra tape seeks can significantly degrade restore performance. To handle this case, the preferred embodiment offers an alternate method for restoring individual files which, from the user's perspective, operates identically to a conventional restore from a file-by-file backup. Instead of giving the user direct access to the mounted volume, a dedicated restore application allows the user to select ("tag") the files he wishes to restore. This application then examines the volume structure, looking at the FAT and directory entries for the tagged files to determine an optimal ordering for restore. In fact, simply by ordering the restore process at the block level instead of the file level, the restore application can guarantee that the entire tagged file set is restored with no more than a single pass over the tape, which is as good as the guarantee of any file-by-file system.
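A sketch of the block-level ordering used by this dedicated restore application is given below; the structure and function names are illustrative only. Sorting the planned reads by tape position is what guarantees the single forward pass.

#include <algorithm>
#include <vector>

struct PlannedRead {
    unsigned long tapeBlock;      // position on tape, taken from the block map table
    unsigned long logicalBlock;   // logical block of the tagged file being restored
};

static bool ByTapePosition(const PlannedRead &a, const PlannedRead &b)
{
    return a.tapeBlock < b.tapeBlock;
}

// Order every block of every tagged file by its position on the tape, so that
// reading the plan front to back restores the whole tagged set in one pass.
void OrderForSinglePass(std::vector<PlannedRead> &plan)
{
    std::sort(plan.begin(), plan.end(), ByTapePosition);
}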
Thus, the present invention allows greater flexibility in restoring individual files than a conventional file-by-file approach, while at the same time offering comparable (or better) restore performance.
4. Image Restore
As part of the backup process in the preferred embodiment, a set of disaster-recovery floppy disks can be created which allow the user to boot DOS from floppy and load enough of NetWare to access the original tape drivers so that the disk partitions can be restored. This set of boot floppies typically only needs to be built once, or at most every time the NetWare device driver configuration is changed. In the case of a catastrophic hardware failure, the user invokes the restore procedure shown in the flow chart of FIG. 9 by installing a new (unformatted) hard disk, inserting the disaster-recovery floppies, and allowing a full restore of the entire disk configuration and contents as they were at the time of the last backup. Using conventional file-by-file backup, such a recovery process requires the user first to re-install DOS, then to re-install NetWare, including all the customizations and drivers which are particular to the given server's configuration, then finally to restore all the files from tape. It is not uncommon for such a procedure to consume days of experimentation to re-configure the system properly. By contrast, use of the disaster-recovery floppies in the preferred embodiment reduces the time to minutes or hours at most, depending on the backup image size, without any manual intervention or configuring.
Normally, after rebooting from the disaster recovery diskettes at block 220 of FIG. 9, the next step in restoring the volume image from tape is to partition the disk into a DOS and a NetWare partition, as shown in block 221. From block 221, processing continues to block 222, where the contents of the DOS partition are restored. Since the on-disk structure for a DOS FAT volume is entirely documented, the methods described here for allowing mount of a volume tape image could easily be applied to allow a file-by-file restore from the image backup of the DOS partition. However, the DOS partition on a NetWare system is typically quite small and does not contain many files that are accessed directly by the administrator, so in the preferred embodiment this functionality is not implemented. Also, because the DOS partition is so small, usually no disk flaws are encountered during a conventional image restore of the DOS partition, particularly given that a replacement disk would almost certainly be a modern disk drive in which initial flaw mapping can be performed automatically and transparently. In the extremely rare event that the flaw map on the new partition is incompatible with the original image backup and cannot be fixed by internal drive flaw management, the DOS restore logic would have to interpret the disk structure from the tape image to pull off the DOS files and restore them to the newly formatted partition, avoiding the flaws. Those of ordinary skill in the art will understand the steps necessary to implement this. However, because the probability of encountering this potential problem is so small, as explained above, the functionality to handle this worst-case eventuality is unlikely to ever be necessary.
The alternate embodiment discussed in the above section on backup, in which a file-by-file backup is performed on the DOS partition, allows file-by-file restore if desired, as well as the ability to resize the DOS partition on a new disk on restore. Unlike the preferred embodiment, this alternate embodiment would also require some software to format the DOS partition logically before restoring all of the files.
Once the DOS partition is restored, in the preferred embodiment the system is rebooted from the DOS partition at block 223 to bring up the full NetWare environment that existed at the time of the image backup. The restore software calls NetWare (MM_CreatePartition, MM_InitializePartitionTable) at block 224 to initialize the NetWare partition(s) on the physical disk drive(s); this step builds the hotfix area and an empty volume table. For each volume selected by the user to be restored from the tape, the restore software calls NetWare (using LogicalPartitionIO) at 225 to create a new (empty) volume of equal or greater size, which may span multiple segments, depending on the disk configuration and the user's preferences. The logical sector image of the original volume is then read from tape at 226 and written to the appropriate segment(s) via the internal NetWare logical sector i/o call (LogicalPartitionIO). Once all the sectors have been restored to the disk, the restore software issues a NetWare command-line call at 227 to mount the restored volume. At this point, the volume is available for access. When all the requested volumes have been restored, the restore software exits and the system is back in its original state at the time of the backup.
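The per-volume restore loop of blocks 224 through 227 can be summarized by the sketch below. The wrapper functions are hypothetical and merely stand in for the NetWare facilities named above (MM_CreatePartition, MM_InitializePartitionTable, LogicalPartitionIO, and the mount command), whose actual prototypes are not reproduced here; handling of the FAT/directory header and the block map table on tape is also omitted for brevity.

// Hypothetical wrappers around the NetWare facilities named in the text.
extern void InitializeNetWarePartitions();                          // block 224
extern void CreateEmptyVolume(const char *name,
                              unsigned long sizeInSectors);         // block 225
extern unsigned long ReadImageSectors(void *buf,
                                      unsigned long maxSectors);    // tape read, block 226
extern void WriteLogicalSectors(const char *volume,
                                unsigned long firstSector,
                                unsigned long count,
                                const void *buf);                   // via LogicalPartitionIO
extern void MountVolume(const char *name);                          // block 227

void RestoreVolumeImage(const char *name, unsigned long sizeInSectors)
{
    CreateEmptyVolume(name, sizeInSectors);           // equal or greater size, any segment layout
    static unsigned char buf[64 * 512];               // transfer buffer (size is arbitrary)
    unsigned long sector = 0, got;
    while ((got = ReadImageSectors(buf, 64)) != 0) {  // copy the logical image from tape
        WriteLogicalSectors(name, sector, got, buf);
        sector += got;
    }
    MountVolume(name);                                // the volume is now available for access
}
// InitializeNetWarePartitions() (block 224) is called once, before any volume
// is restored, to build the hotfix area and an empty volume table.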
In the preferred embodiment, this entire process, including booting from floppy and restoring the DOS and Novell volumes, is totally automatic, apart from the user specifying which volumes are to be restored and removing the boot diskette to allow the final reboot to occur. The process is so much simpler than a full system restore from a conventional file-by-file backup that several interesting applications of this type of restore become feasible. For example, it is possible to restore the volumes to a separate ("spare") server computer just to peruse and use the backup data without affecting the original server. Similarly, this technique can be used to transfer the file contents of an existing server to a new server, presumably with higher performance and capacity, which is to replace the existing server. As another example, an image backup tape would allow a vendor or technician to install a new server containing a pre-configured set of network applications at a customer site. Today such an operation usually involves the painful procedure of partitioning the disk, installing DOS, installing NetWare, then installing the applications, and this process must be repeated for each new customer. Using the present invention, the vendor could perform the installation once at its headquarters, then have a technician simply perform the image restore at each customer site, resulting in considerable savings of time and money.
In the case where only the NetWare partition(s) need to be restored (but not the DOS partition), the basic flowchart of FIG. 9 is used, but blocks 220, 222, 223, and part of 221 (creating the DOS partition) are skipped. This case occurs for example when the contents of the NetWare partition are lost or deleted through user error or a system crash, but the DOS partition is not corrupted.
The invention has been described in an exemplary and preferred embodiment, but is not limited thereto. Those skilled in the art will recognize that a number of additional modifications and improvements can be made to the invention without departure from the essential spirit and scope. The scope of the invention should only be limited by the appended set of claims.

Claims (43)

We claim:
1. A method for backing up data in a computer system from a primary storage means to a backup storage means on a sector-by-sector basis and restoring data in a computer system from said backup storage means to a restore storage means on a sector-by-sector basis, said method comprising the steps of:
reading a set of logically contiguous sectors from the primary storage means using a software call of the operating system that provides access to the files stored on said primary storage means, said call of said operating system performing any physical level remapping necessary to avoid previously detected physical flaws on said primary storage means,
writing said set of logically contiguous sectors to said backup storage means,
creating a partition on said restore storage means of a size at least as large as the size of said primary storage means,
reading a set of logically contiguous sectors from a location on said backup storage means,
writing said set of logically contiguous sectors to said partition of said restore storage means using a software call to the operating system that provides access to the files stored on said partition of said restore storage means, said call of said operating system performing any physical level remapping necessary to detect and avoid physical flaws on said restore storage means.
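The backup-then-restore cycle recited in claim 1 can be summarized by the short Python sketch below. It is a simplified illustration only: read_logical_sectors and write_logical_sectors are hypothetical wrappers around whatever operating-system call exposes the logical sectors of a partition (and therefore hides physical flaw remapping), and a plain in-memory buffer stands in for the backup storage means.

```python
"""Sketch of the sector-by-sector backup and restore of claim 1."""
import io

SECTOR = 512
RUN = 64                     # logically contiguous sectors transferred per call

def backup(read_logical_sectors, total_sectors, backup_file):
    """Copy every logical sector of the primary storage to the backup image."""
    for first in range(0, total_sectors, RUN):
        count = min(RUN, total_sectors - first)
        backup_file.write(read_logical_sectors(first, count))

def restore(backup_file, total_sectors, write_logical_sectors):
    """Write the saved sectors into a freshly created partition of equal size."""
    backup_file.seek(0)
    for first in range(0, total_sectors, RUN):
        count = min(RUN, total_sectors - first)
        write_logical_sectors(first, backup_file.read(count * SECTOR))

if __name__ == "__main__":
    primary = bytearray(b"\xab" * 256 * SECTOR)     # pretend primary partition
    restored = bytearray(len(primary))              # partition on the new disk

    read_fn = lambda s, n: bytes(primary[s * SECTOR:(s + n) * SECTOR])
    def write_fn(s, data):
        restored[s * SECTOR:s * SECTOR + len(data)] = data

    image = io.BytesIO()
    backup(read_fn, 256, image)
    restore(image, 256, write_fn)
    assert restored == primary
```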
2. The method of claim 1, further including the steps of
writing on said backup storage means a sector directory table containing information sufficient to indicate the size of said primary storage means and the location of each logical sector written to said backup storage means,
reading said sector directory table from said backup storage means,
using said sector directory table to determine the sector numbers and locations of said logically contiguous blocks to be read.
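One way the sector directory table of claim 2 might be organized is sketched below: one record per run of logical sectors giving its starting sector, length, and offset on the backup medium, together with the size of the primary storage. The layout (a JSON-like dictionary) is purely illustrative, not the format used by the disclosed embodiment.

```python
"""Sketch of a sector directory table mapping logical sectors to backup offsets."""
import io, json

SECTOR = 512

def write_backup(read_sectors, total_sectors, group=64):
    """Write sector groups to the backup image and record where each landed."""
    image, directory = io.BytesIO(), []
    for first in range(0, total_sectors, group):
        count = min(group, total_sectors - first)
        directory.append({"first": first, "count": count, "offset": image.tell()})
        image.write(read_sectors(first, count))
    table = {"total_sectors": total_sectors, "groups": directory}
    return table, image

def read_sector_via_table(table, image, logical_sector):
    """Use the directory to locate the group holding a given logical sector."""
    for g in table["groups"]:
        if g["first"] <= logical_sector < g["first"] + g["count"]:
            image.seek(g["offset"] + (logical_sector - g["first"]) * SECTOR)
            return image.read(SECTOR)
    raise KeyError("sector not present on backup storage")

if __name__ == "__main__":
    disk = bytes(range(256)) * (100 * SECTOR // 256)
    read_fn = lambda s, n: disk[s * SECTOR:(s + n) * SECTOR]
    table, image = write_backup(read_fn, 100)
    assert read_sector_via_table(table, image, 70) == disk[70 * SECTOR:71 * SECTOR]
    print(json.dumps(table["groups"][0]))
```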
3. The method of claim 2 wherein said primary storage means consists of one or more disk drive partition(s) in said computer system, and wherein said operating system call to read said logically contiguous sectors performs the mapping necessary to locate said logically contiguous sectors on said disk drive partition(s).
4. The method of claim 2 wherein said partition created on said restore storage means is larger than the size of said original primary storage means.
5. The method of claim 2 wherein said partition created on said restore storage means spans multiple physical disk drives.
6. The method of claim 2 wherein unused sectors that do not contain file data are not read from said primary storage means and are not stored on said backup storage means, and wherein the absence of said unused sectors on the backup storage means is indicated in said sector directory table.
7. The method of any of claims 1-6 wherein deleted sectors that contain data from deleted files are not read from said primary storage means and are not stored on said backup storage means, and wherein the absence of said deleted sectors on the backup storage means is indicated in said sector directory table.
8. The method of any of claims 1-6 wherein an open file log is maintained of all files which are opened for write while the backup method is in process.
9. The method of claim 8 wherein said open file log is written to said backup storage means.
10. The method of any of claims 1-6 wherein said operating system allows multi-tasking, further including the step of:
temporarily suspending execution of any tasks that attempt to write a set of sectors to said primary storage means until said set of sectors has been read from said primary storage means by the backup task in preparation for writing said sectors to said backup storage means.
11. The method of claim 10, further including the steps of:
maintaining a cache of sectors read from said primary storage means to be written to said backup storage means,
detecting an attempted write by a task to a set of sectors of said primary storage means which has not yet been backed up,
operative when said sector cache is full, temporarily suspending execution of said task,
operative when said sector cache is not full, reading said set of sectors and adding said set of sectors to said sector cache and then allowing said task to continue execution without suspension,
checking said sector cache for the presence of any portion of said set of logically contiguous sectors to be read from said primary storage means,
operative when no such portion is found in said sector cache, reading said set of logically contiguous sectors from said primary storage means,
operative when such portion is found in said sector cache, reading said portion(s) of said set of logically contiguous sectors from said sector cache, and reading remaining portions not found in said sector cache from said primary storage means,
whereby no said tasks attempting to write to said primary storage means will be suspended unless said sector cache is full.
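The sector cache of claim 11 can be illustrated by the following simplified Python sketch. The class name, the single-sector granularity, and the suspension signal are all illustrative assumptions; the point shown is that pre-write contents of not-yet-backed-up sectors are captured so writer tasks continue running, and the backup path prefers those cached copies.

```python
"""Sketch of a sector cache that avoids suspending writer tasks (claims 11-12)."""

class SectorCache:
    def __init__(self, capacity_sectors):
        self.capacity = capacity_sectors
        self.cache = {}                       # logical sector -> pre-write bytes

    def intercept_write(self, disk, sector, new_data, already_backed_up):
        """Called before a task's write reaches the primary storage."""
        if sector not in already_backed_up and sector not in self.cache:
            if len(self.cache) >= self.capacity:
                return False                  # cache full: caller must suspend task
            self.cache[sector] = bytes(disk[sector])   # preserve pre-write image
        disk[sector] = new_data
        return True                           # write allowed, task keeps running

    def read_for_backup(self, disk, sector):
        """Prefer the cached pre-write image; free the slot afterwards (claim 12)."""
        if sector in self.cache:
            return self.cache.pop(sector)
        return bytes(disk[sector])

if __name__ == "__main__":
    disk = [b"old" for _ in range(8)]
    cache, backed_up = SectorCache(capacity_sectors=4), set()
    assert cache.intercept_write(disk, 5, b"new", backed_up)   # task not suspended
    assert cache.read_for_backup(disk, 5) == b"old"            # backup sees old data
    assert cache.read_for_backup(disk, 2) == b"old"            # unmodified: from disk
```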
12. The method of claim 11 wherein said portions of said set of logically contiguous sectors found in said sector cache are removed from said sector cache after said portions are read from said sector cache, whereby portions of said sector cache may be re-used in order to minimize the number of times that tasks are suspended.
13. The method of any of claims 1-6 wherein only sectors that have changed since the last backup are written to the backup storage means.
14. The method of claim 13 wherein detection of changed sectors further includes the following steps:
computing a checksum (or similar type of function) on groups of sectors read from said primary storage means,
comparing said checksum with the corresponding checksum stored from the previous backup,
operative when the two checksums do not match,
writing said group of sectors to said backup storage means,
writing said checksum to said backup storage means,
operative when the two checksums do match,
setting the entry (or entries) in said sector directory table corresponding to said group of sectors to point to the corresponding group of sectors from said previous backup.
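A compact sketch of the checksum comparison of claim 14 follows. zlib.crc32 stands in for the unspecified "checksum (or similar type of function)", and the directory entry tuples are an illustrative encoding of "point to the corresponding group of sectors from said previous backup".

```python
"""Sketch of checksum-based detection of changed sector groups (claim 14)."""
import zlib

def incremental_backup(groups, prev_checksums, prev_locations):
    """groups: byte strings read from the primary storage, in order.
    Returns the new directory, the new checksums, and the data actually written."""
    directory, checksums, written = [], [], []
    for i, data in enumerate(groups):
        csum = zlib.crc32(data)
        checksums.append(csum)
        if i < len(prev_checksums) and csum == prev_checksums[i]:
            # Unchanged: point at the copy already on the previous backup.
            directory.append(("previous", prev_locations[i]))
        else:
            # Changed (or new): write the group and its checksum to this backup.
            directory.append(("current", len(written)))
            written.append(data)
    return directory, checksums, written

if __name__ == "__main__":
    old = [b"a" * 512, b"b" * 512, b"c" * 512]
    _, old_sums, _ = incremental_backup(old, [], [])
    new = [b"a" * 512, b"B" * 512, b"c" * 512]            # only group 1 changed
    directory, _, written = incremental_backup(new, old_sums, list(range(len(old))))
    assert written == [b"B" * 512]
    assert directory[0] == ("previous", 0) and directory[2] == ("previous", 2)
```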
15. The method of claim 13 wherein detection of changed sectors further includes the following steps:
activating monitor software to detect all writes to said primary storage means,
maintaining a dirty sector table indicating which groups of sectors on said primary storage means have been modified,
using said dirty sector table to determine which groups of sectors have been changed,
operative when said dirty sector table indicates that said group of sectors to be backed up has been modified,
writing said group of sectors to said backup storage means,
operative when said dirty sector table indicates that said group of sectors to be backed up has not been modified,
setting the entry in said sector directory table corresponding to said group of sectors to point to the corresponding group of sectors from said previous backup,
saving said dirty sector table to an auxiliary storage means when said monitor software is deactivated at system shutdown.
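The write monitor and dirty sector table of claim 15 might look like the sketch below, which tracks modifications at a per-group granularity and saves the table to auxiliary storage when the monitor is deactivated. The group size, file format, and file name are assumptions made for illustration; redirecting unchanged groups to the previous backup is omitted here (it parallels the claim 14 sketch above).

```python
"""Sketch of a dirty sector table maintained by a write monitor (claim 15)."""
import json

GROUP = 64                                  # sectors tracked per table entry

class WriteMonitor:
    def __init__(self, total_sectors):
        self.dirty = [False] * ((total_sectors + GROUP - 1) // GROUP)

    def note_write(self, first_sector, count):
        """Mark every group touched by a write to the primary storage."""
        last = (first_sector + count - 1) // GROUP
        for g in range(first_sector // GROUP, last + 1):
            self.dirty[g] = True

    def deactivate(self, path):
        # At system shutdown the table is saved to auxiliary storage.
        with open(path, "w") as f:
            json.dump(self.dirty, f)

def groups_to_back_up(dirty):
    """Only groups flagged as modified need to be written to the backup."""
    return [g for g, modified in enumerate(dirty) if modified]

if __name__ == "__main__":
    mon = WriteMonitor(total_sectors=1000)
    mon.note_write(130, 10)                 # touches group 2
    mon.note_write(700, 1)                  # touches group 10
    assert groups_to_back_up(mon.dirty) == [2, 10]
    mon.deactivate("dirty_table.json")      # hypothetical auxiliary location
```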
16. The method of claim 15 wherein said auxiliary storage means is the same as said primary storage means.
17. The method of claim 15 further including the following steps:
operative when said monitor software is deactivated,
computing a checksum on the contents of said dirty sector table,
saving said checksum on said auxiliary storage means,
operative when said monitor software is activated,
performing a validity check on said contents of said dirty sector table using said checksum,
invalidating said checksum on said auxiliary storage means.
18. The method of claim 17 further including the steps of:
operative when said monitor software is deactivated,
saving an indicator of the time of said deactivation on said auxiliary storage means,
operative when said monitor software is activated,
verifying that the operating system has not been active to allow writes to said primary storage means since the last time a valid dirty sector table was written to said auxiliary storage means,
operative when said verification fails, invalidating the contents of said dirty sector table.
19. The method of claim 18 wherein the failure of any checks on the validity of the contents of said dirty sector table results in all sectors being marked as having been modified, whereby a complete backup is performed.
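The validity checks of claims 17-19 can be sketched as follows: the saved dirty sector table carries a checksum and a shutdown timestamp, and any failed check when the monitor is re-activated causes the caller to treat every group as modified so that a complete backup is performed. The record fields and file handling are illustrative assumptions.

```python
"""Sketch of the dirty-sector-table validity checks of claims 17-19."""
import json, time, zlib

def save_table(path, dirty):
    """At monitor deactivation: store the table, its checksum, and the time."""
    blob = json.dumps(dirty).encode()
    record = {"table": dirty, "crc": zlib.crc32(blob), "shutdown_time": time.time()}
    with open(path, "w") as f:
        json.dump(record, f)

def load_table(path, last_os_write_time):
    """At monitor activation: validate the table; None means 'assume all dirty'."""
    try:
        with open(path) as f:
            record = json.load(f)
        blob = json.dumps(record["table"]).encode()
        if zlib.crc32(blob) != record["crc"]:
            raise ValueError("checksum mismatch")
        if last_os_write_time > record["shutdown_time"]:
            raise ValueError("writes may have occurred while the monitor was off")
        return record["table"]
    except (OSError, ValueError, KeyError):
        # Any failed check: caller marks every group modified (complete backup).
        return None

if __name__ == "__main__":
    save_table("dirty_table.json", [False, True, False])
    assert load_table("dirty_table.json", last_os_write_time=0) == [False, True, False]
    assert load_table("dirty_table.json", last_os_write_time=time.time() + 1) is None
```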
20. The method of any of claims 1-6, further including the steps of:
creating a removable disk which contains all files necessary to boot said computer system into said operating system, including software drivers that allow access to said primary storage means and said backup storage means,
booting said computer system using said removable disk.
21. A method for backing up data in a computer system from a primary storage means to a backup storage means on a sector-by-sector basis and for providing file-by-file access to said data on said backup storage means, said method comprising the steps of:
reading a set of logically contiguous sectors from the primary storage means using a software call of the operating system that provides access to the files stored on said primary storage means, said call of said operating system performing any physical level remapping necessary to avoid previously detected physical flaws on said primary storage means,
writing said set of logically contiguous sectors to said backup storage means,
identifying a control set of logical sectors of said primary storage means, said control set including sectors required to mount said primary storage means for file access by said operating system or to traverse the directory structure of the files on said primary storage means,
re-ordering the sequence of writing said sets of logically contiguous sectors on said backup storage means in order to group sectors of said control set in closer physical proximity to one another on said backup storage means than would occur if said sequence were ordered strictly by logical sector number,
caching said control set of logical sectors from said backup storage means to allow fast random access to said control set,
creating a virtual disk partition of said operating system,
servicing logical sector read requests on said virtual disk partition,
operative when a sector of said read request is part of said control set, reading said sector from said control cache,
operative when a sector of said read request is not part of said control set, reading said sector from said backup storage means,
mounting said virtual disk partition as a disk volume of said operating system,
whereby files on said disk volume may be accessed using normal operating system calls and utilities.
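The read-dispatch half of claim 21 (servicing virtual-partition reads from a control-set cache or from the backup medium) might be realized along the lines of the sketch below. The class and its names are illustrative, not an operating-system API, and the re-ordering of control-set sectors on the backup medium is assumed to have already happened so that pre-loading the cache is a short sequential pass.

```python
"""Sketch of a virtual partition backed by a tape image with a cached control set."""

class VirtualBackupPartition:
    def __init__(self, backup_reader, control_set):
        self.backup_reader = backup_reader            # callable: sector -> bytes
        self.control_set = set(control_set)
        # Pre-load the control set once; these sectors were grouped together on
        # the backup medium, so this is a short sequential read.
        self.control_cache = {s: backup_reader(s) for s in sorted(self.control_set)}

    def read_sector(self, sector):
        if sector in self.control_set:
            return self.control_cache[sector]          # fast path: mount/directory data
        return self.backup_reader(sector)              # slow path: file data from backup

if __name__ == "__main__":
    tape_reads = []
    def tape_reader(sector):
        tape_reads.append(sector)
        return bytes([sector % 256]) * 512

    vp = VirtualBackupPartition(tape_reader, control_set=[0, 1, 2])
    vp.read_sector(1); vp.read_sector(1); vp.read_sector(9)
    assert tape_reads == [0, 1, 2, 9]                  # control sectors read only once
```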
22. The method of claim 21, further including the steps of:
writing on said backup storage means a sector directory table containing information sufficient to indicate the size of said primary storage means and the location of each logical sector written to said backup storage means,
reading said sector directory table from said backup storage means,
using said sector directory table to determine the locations of said sectors when servicing logical sector read requests.
23. The method of claim 22 wherein said primary storage means consists of one or more disk drive partition(s) in said computer system, and wherein said operating system call to read said logically contiguous sectors performs the mapping necessary to locate said logically contiguous sectors on said disk drive partition(s).
24. The method of claim 22 wherein unused sectors that do not contain file data are not read from said primary storage means and are not stored on said backup storage means, and wherein the absence of said unused sectors on the backup storage means is indicated in said sector directory table.
25. The method of claim 24 wherein deleted sectors that contain data from deleted files are not read from said primary storage means and are not stored on said backup storage means, and wherein the absence of said deleted sectors on the backup storage means is indicated in said sector directory table.
26. The method of claim 21 wherein an open file log is maintained of all files which are opened for write while the backup method is in process.
27. The method of any of claims 21-26 wherein said open file log is written to said backup storage means.
28. The method of any of claims 21-26 wherein said operating system allows multi-tasking and further including the step of:
temporarily suspending execution of any tasks that attempt to write a set of sectors to said primary storage means until said set of sectors has been read from said primary storage means by the backup task in preparation for writing said sectors to said backup storage means.
29. The method of claim 28, further including the steps of:
maintaining a cache of sectors read from said primary storage means to be written to said backup storage means,
detecting an attempted write by a task to a set of sectors of said primary storage means which has not yet been backed up,
operative when said sector cache is full, temporarily suspending execution of said task,
operative when said sector cache is not full, reading said set of sectors and adding said set of sectors to said sector cache and then allowing said task to continue execution without suspension,
checking said sector cache for the presence of any portion of said set of logically contiguous sectors to be read from said primary storage means,
operative when no such portion is found in said sector cache, reading said set of logically contiguous sectors from said primary storage means,
operative when such portion is found in said sector cache, reading said portion(s) of said set of logically contiguous sectors from said sector cache, and reading remaining portions not found in said sector cache from said primary storage means,
whereby no said tasks attempting to write to said primary storage means will be suspended unless said sector cache is full.
30. The method of claim 29 wherein said portions of said set of logically contiguous sectors found in said sector cache are removed from said sector cache after said portions are read from said sector cache, whereby portions of said sector cache may be re-used in order to minimize the number of times that tasks are suspended.
31. The method of any of claims 21-26 wherein only sectors that have changed since the last backup are written to the backup storage means.
32. The method of claim 31 wherein detection of changed sectors further includes the following steps:
computing a checksum (or similar type of function) on groups of sectors read from said primary storage means,
comparing said checksum with the corresponding checksum stored from the previous backup,
operative when the two checksums do not match,
writing said group of sectors to said backup storage means,
writing said checksum to said backup storage means,
operative when the two checksums do match,
setting the entry (or entries) in said sector directory table corresponding to said group of sectors to point to the corresponding group of sectors from said previous backup.
33. The method of claim 31 wherein detection of changed sectors further includes the following steps:
activating monitor software to detect all writes to said primary storage means,
maintaining a dirty sector table indicating which groups of sectors on said primary storage means have been modified,
using said dirty sector table to determine which groups of sectors have been changed,
operative when said dirty sector table indicates that said group of sectors to be backed up has been modified,
writing said group of sectors to said backup storage means,
operative when said dirty sector table indicates that said group of sectors to be backed up has not been modified,
setting the entry in said sector directory table corresponding to said group of sectors to point to the corresponding group of sectors from said previous backup,
saving said dirty sector table to an auxiliary storage means when said monitor software is deactivated at system shutdown.
34. The method of claim 33 wherein said auxiliary storage means is the same as said primary storage means.
35. The method of claim 33 further including the following steps:
operative when said monitor software is deactivated,
computing a checksum on the contents of said dirty sector table,
saving said checksum on said auxiliary storage means,
operative when said monitor software is activated,
performing a validity check on said contents of said dirty sector table using said checksum,
invalidating said checksum on said auxiliary storage means.
36. The method of claim 35 further including the steps of:
operative when said monitor software is deactivated,
saving an indicator of the time of said deactivation on said auxiliary storage means,
operative when said monitor software is activated,
verifying that the operating system has not been active to allow writes to said primary storage means since the last time a valid dirty sector table was written to said auxiliary storage means,
operative when said verification fails, invalidating the contents of said dirty sector table.
37. The method of claim 36 wherein the failure of any checks on the validity of the contents of said dirty sector table results in all sectors being marked as having been modified, whereby a complete backup is performed.
38. The method of any of claims 21-26 wherein said control set is identified by using a knowledge of the file and allocation format of said primary storage means under said operating system.
39. The method of any of claims 21-26 wherein said control set is identified without a complete knowledge of said file and allocation format of said primary storage means, using a pseudo-drive technique which includes the following steps:
creating a temporary virtual disk partition of said operating system,
servicing logical sector read requests on said temporary virtual disk partition by performing reads of the corresponding sectors of said primary storage means,
monitoring the set of logical sectors that are read from said temporary virtual disk partition and adding each sector read to said control set,
mounting said temporary virtual disk partition as a temporary disk volume of said operating system.
40. The method of claim 39, further including the step of using operating system calls to traverse the entire directory tree of said temporary disk volume.
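The pseudo-drive technique of claims 39-40 can be pictured as follows: a temporary virtual partition forwards every sector read to the primary storage while recording the sector numbers, so that mounting it and walking its entire directory tree yields the control set without full knowledge of the on-disk format. The mount and traversal callables in this sketch are hypothetical stand-ins for the corresponding operating-system operations.

```python
"""Sketch of identifying the control set with a recording pseudo-drive (claims 39-40)."""

class RecordingPseudoDrive:
    def __init__(self, primary_reader):
        self.primary_reader = primary_reader
        self.control_set = set()

    def read_sector(self, sector):
        self.control_set.add(sector)          # remember every sector the OS touched
        return self.primary_reader(sector)

def identify_control_set(primary_reader, mount, traverse_directory_tree):
    drive = RecordingPseudoDrive(primary_reader)
    mount(drive)                              # OS reads the structures needed to mount
    traverse_directory_tree(drive)            # claim 40: touch every directory block
    return drive.control_set

if __name__ == "__main__":
    primary = {n: bytes([n]) * 512 for n in range(32)}
    fake_mount = lambda d: [d.read_sector(s) for s in (0, 1)]      # "volume header"
    fake_walk = lambda d: [d.read_sector(s) for s in (4, 5, 9)]    # "directory blocks"
    assert identify_control_set(primary.__getitem__, fake_mount, fake_walk) == {0, 1, 4, 5, 9}
```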
41. The method of any of claims 21-26 wherein some knowledge of said file and allocation format is used to eliminate duplicate copies of structures in said primary storage means from said control set, whereby the size of said control set is minimized.
42. The method of any of claims 21-26 wherein writes to said disk volume are allowed by caching said writes to a temporary storage means.
43. The method of claim 31 wherein said backup storage means can also be used to perform a sector-by-sector restore as in claim 1.
US08/539,315 1995-10-04 1995-10-04 System for backing up computer disk volumes with error remapping of flawed memory addresses Expired - Lifetime US5907672A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US08/539,315 US5907672A (en) 1995-10-04 1995-10-04 System for backing up computer disk volumes with error remapping of flawed memory addresses
EP96307287A EP0767431A1 (en) 1995-10-04 1996-10-04 System for backing up computer disk volumes
JP8264578A JPH1055298A (en) 1995-10-04 1996-10-04 System for backing up disk volume of computer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/539,315 US5907672A (en) 1995-10-04 1995-10-04 System for backing up computer disk volumes with error remapping of flawed memory addresses

Publications (1)

Publication Number Publication Date
US5907672A true US5907672A (en) 1999-05-25

Family

ID=24150707

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/539,315 Expired - Lifetime US5907672A (en) 1995-10-04 1995-10-04 System for backing up computer disk volumes with error remapping of flawed memory addresses

Country Status (3)

Country Link
US (1) US5907672A (en)
EP (1) EP0767431A1 (en)
JP (1) JPH1055298A (en)

Cited By (294)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6041334A (en) * 1997-10-29 2000-03-21 International Business Machines Corporation Storage management system with file aggregation supporting multiple aggregated file counterparts
US6065053A (en) 1997-10-01 2000-05-16 Micron Electronics, Inc. System for resetting a server
US6073255A (en) 1997-05-13 2000-06-06 Micron Electronics, Inc. Method of reading system log
US6088816A (en) 1997-10-01 2000-07-11 Micron Electronics, Inc. Method of displaying system status
US6101585A (en) * 1997-11-04 2000-08-08 Adaptec, Inc. Mechanism for incremental backup of on-line files
US6108697A (en) * 1997-10-06 2000-08-22 Powerquest Corporation One-to-many disk imaging transfer over a network
US6119212A (en) * 1997-04-23 2000-09-12 Advanced Micro Devices, Inc. Root size decrease on a UNIX based computer system
US6122758A (en) 1997-05-13 2000-09-19 Micron Electronics, Inc. System for mapping environmental resources to memory for program access
WO2000055735A1 (en) * 1999-03-15 2000-09-21 Powerquest Corporation Manipulation of computer volume segments
US6134668A (en) 1997-05-13 2000-10-17 Micron Electronics, Inc. Method of selective independent powering of portion of computer system through remote interface from remote interface power supply
US6134673A (en) 1997-05-13 2000-10-17 Micron Electronics, Inc. Method for clustering software applications
US6138179A (en) 1997-10-01 2000-10-24 Micron Electronics, Inc. System for automatically partitioning and formatting a primary hard disk for installing software in which selection of extended partition size is not related to size of hard disk
US6138250A (en) 1997-05-13 2000-10-24 Micron Electronics, Inc. System for reading system log
US6145098A (en) 1997-05-13 2000-11-07 Micron Electronics, Inc. System for displaying system status
US6154835A (en) 1997-10-01 2000-11-28 Micron Electronics, Inc. Method for automatically configuring and formatting a computer system and installing software
US6163849A (en) 1997-05-13 2000-12-19 Micron Electronics, Inc. Method of powering up or powering down a server to a maintenance state
US6163853A (en) 1997-05-13 2000-12-19 Micron Electronics, Inc. Method for communicating a software-generated pulse waveform between two servers in a network
US6170028B1 (en) 1997-05-13 2001-01-02 Micron Electronics, Inc. Method for hot swapping a programmable network adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6170067B1 (en) 1997-05-13 2001-01-02 Micron Technology, Inc. System for automatically reporting a system failure in a server
US6173346B1 (en) 1997-05-13 2001-01-09 Micron Electronics, Inc. Method for hot swapping a programmable storage adapter using a programmable processor for selectively enabling or disabling power to adapter slot in response to respective request signals
US6179486B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Method for hot add of a mass storage adapter on a system including a dynamically loaded adapter driver
US6182180B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Apparatus for interfacing buses
US6185666B1 (en) 1999-09-11 2001-02-06 Powerquest Corporation Merging computer partitions
US6189109B1 (en) 1997-05-13 2001-02-13 Micron Electronics, Inc. Method of remote access and control of environmental conditions
US6192434B1 (en) 1997-05-13 2001-02-20 Micron Electronics, Inc System for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6195717B1 (en) 1997-05-13 2001-02-27 Micron Electronics, Inc. Method of expanding bus loading capacity
US6199173B1 (en) 1997-10-01 2001-03-06 Micron Electronics, Inc. Method for mapping environmental resources to memory for program access
US6202111B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. Method for the hot add of a network adapter on a system including a statically loaded adapter driver
US6202160B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. System for independent powering of a computer system
US6205503B1 (en) 1998-07-17 2001-03-20 Mallikarjunan Mahalingam Method for the hot swap and add of input/output platforms and devices
US6212585B1 (en) 1997-10-01 2001-04-03 Micron Electronics, Inc. Method of automatically configuring a server after hot add of a device
US6219734B1 (en) 1997-05-13 2001-04-17 Micron Electronics, Inc. Method for the hot add of a mass storage adapter on a system including a statically loaded adapter driver
US6223234B1 (en) 1998-07-17 2001-04-24 Micron Electronics, Inc. Apparatus for the hot swap and add of input/output platforms and devices
US6243838B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Method for automatically reporting a system failure in a server
US6243773B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Configuration management system for hot adding and hot replacing devices
US6247079B1 (en) 1997-05-13 2001-06-12 Micron Electronics, Inc Apparatus for computer implemented hot-swap and hot-add
US6247080B1 (en) 1997-05-13 2001-06-12 Micron Electronics, Inc. Method for the hot add of devices
US6249834B1 (en) 1997-05-13 2001-06-19 Micron Technology, Inc. System for expanding PCI bus loading capacity
US6249828B1 (en) 1997-05-13 2001-06-19 Micron Electronics, Inc. Method for the hot swap of a mass storage adapter on a system including a statically loaded adapter driver
US6249885B1 (en) 1997-05-13 2001-06-19 Karl S. Johnson Method for managing environmental conditions of a distributed processor system
US6253300B1 (en) 1997-08-20 2001-06-26 Powerquest Corporation Computer partition manipulation during imaging
US6253334B1 (en) 1997-05-13 2001-06-26 Micron Electronics, Inc. Three bus server architecture with a legacy PCI bus and mirrored I/O PCI buses
US6263387B1 (en) 1997-10-01 2001-07-17 Micron Electronics, Inc. System for automatically configuring a server after hot add of a device
US6266784B1 (en) * 1998-09-15 2001-07-24 International Business Machines Corporation Direct storage of recovery plan file on remote server for disaster recovery and storage management thereof
US6269412B1 (en) 1997-05-13 2001-07-31 Micron Technology, Inc. Apparatus for recording information system events
US6269417B1 (en) 1997-05-13 2001-07-31 Micron Technology, Inc. Method for determining and displaying the physical slot number of an expansion bus device
US6279011B1 (en) * 1998-06-19 2001-08-21 Network Appliance, Inc. Backup and restore for heterogeneous file server environment
US6282673B1 (en) 1997-05-13 2001-08-28 Micron Technology, Inc. Method of recording information system events
US6292905B1 (en) 1997-05-13 2001-09-18 Micron Technology, Inc. Method for providing a fault tolerant network using distributed server processes to remap clustered network resources to other servers during server failure
US6304929B1 (en) 1997-05-13 2001-10-16 Micron Electronics, Inc. Method for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6324608B1 (en) 1997-05-13 2001-11-27 Micron Electronics Method for hot swapping of network components
US6330690B1 (en) 1997-05-13 2001-12-11 Micron Electronics, Inc. Method of resetting a server
US6330653B1 (en) 1998-05-01 2001-12-11 Powerquest Corporation Manipulation of virtual and live computer storage device partitions
US6341341B1 (en) 1999-12-16 2002-01-22 Adaptec, Inc. System and method for disk control with snapshot feature including read-write snapshot half
US6341322B1 (en) 1997-05-13 2002-01-22 Micron Electronics, Inc. Method for interfacing two buses
US6349356B2 (en) * 1997-12-10 2002-02-19 International Business Machines Corporation Host-available device block map for optimized file retrieval from serpentine tape drives
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US20020056031A1 (en) * 1997-07-18 2002-05-09 Storactive, Inc. Systems and methods for electronic data storage management
US6418492B1 (en) 1997-05-13 2002-07-09 Micron Electronics Method for computer implemented hot-swap and hot-add
WO2002061737A2 (en) * 2001-01-29 2002-08-08 Snap Appliance Inc. Dynamically distributed file system
US20020124137A1 (en) * 2001-01-29 2002-09-05 Ulrich Thomas R. Enhancing disk array performance via variable parity based load balancing
US20020138559A1 (en) * 2001-01-29 2002-09-26 Ulrich Thomas R. Dynamically distributed file system
US6460054B1 (en) * 1999-12-16 2002-10-01 Adaptec, Inc. System and method for data storage archive bit update after snapshot backup
US20020156840A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. File system metadata
US20020156974A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. Redundant dynamically distributed file system
US20020156891A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. Enhancing file system performance
US6473655B1 (en) * 2000-05-02 2002-10-29 International Business Machines Corporation Data processing system and method for creating a virtual partition within an existing partition in a hard disk drive
US20020169934A1 (en) * 2001-03-23 2002-11-14 Oliver Krapp Methods and systems for eliminating data redundancies
US20020194523A1 (en) * 2001-01-29 2002-12-19 Ulrich Thomas R. Replacing file system processors by hot swapping
US20020194528A1 (en) * 2001-05-22 2002-12-19 Nigel Hart Method, disaster recovery record, back-up apparatus and RAID array controller for use in restoring a configuration of a RAID device
US6499073B1 (en) 1997-05-13 2002-12-24 Micron Electronics, Inc. System using programmable processor for selectively enabling or disabling power to adapter in response to respective request signals
US6510491B1 (en) 1999-12-16 2003-01-21 Adaptec, Inc. System and method for accomplishing data storage migration between raid levels
US20030033051A1 (en) * 2001-08-09 2003-02-13 John Wilkes Self-disentangling data storage technique
US6535998B1 (en) * 1999-07-26 2003-03-18 Microsoft Corporation System recovery by restoring hardware state on non-identical systems
US6542975B1 (en) * 1998-12-24 2003-04-01 Roxio, Inc. Method and system for backing up data over a plurality of volumes
US6560615B1 (en) * 1999-12-17 2003-05-06 Novell, Inc. Method and apparatus for implementing a highly efficient, robust modified files list (MFL) for a storage system volume
US20030097454A1 (en) * 2001-11-02 2003-05-22 Nec Corporation Switching method and switch device
US6574591B1 (en) 1998-07-31 2003-06-03 Network Appliance, Inc. File systems image transfer between dissimilar file systems
US6574705B1 (en) 2000-11-16 2003-06-03 International Business Machines Corporation Data processing system and method including a logical volume manager for storing logical volume data
US20030110157A1 (en) * 2001-10-02 2003-06-12 Nobuhiro Maki Exclusive access control apparatus and method
US20030126327A1 (en) * 2001-12-28 2003-07-03 Pesola Troy Raymond Volume translation apparatus and method
US20030126247A1 (en) * 2002-01-02 2003-07-03 Exanet Ltd. Apparatus and method for file backup using multiple backup devices
US20030145180A1 (en) * 2002-01-31 2003-07-31 Mcneil Daniel D. Method and system for providing direct access recovery using seekable tape device
US6604118B2 (en) 1998-07-31 2003-08-05 Network Appliance, Inc. File system image transfer
US6615365B1 (en) * 2000-03-11 2003-09-02 Powerquest Corporation Storing a computer disk image within an imaged partition
US20030172158A1 (en) * 2001-06-28 2003-09-11 Pillai Ananthan K. Information replication system mounting partial database replications
US20030177324A1 (en) * 2002-03-14 2003-09-18 International Business Machines Corporation Method, system, and program for maintaining backup copies of files in a backup storage device
US20030196052A1 (en) * 2002-04-10 2003-10-16 International Business Machines Corporation Method, system, and program for grouping objects
US6636879B1 (en) 2000-08-18 2003-10-21 Network Appliance, Inc. Space allocation in a write anywhere file system
US20030200482A1 (en) * 2002-04-23 2003-10-23 Gateway, Inc. Application level and BIOS level disaster recovery
US6640233B1 (en) * 2000-08-18 2003-10-28 Network Appliance, Inc. Reserving file system blocks
US6643741B1 (en) * 2000-04-19 2003-11-04 International Business Machines Corporation Method and apparatus for efficient cache management and avoiding unnecessary cache traffic
US6654912B1 (en) 2000-10-04 2003-11-25 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes
US6665779B1 (en) * 1998-12-24 2003-12-16 Roxio, Inc. Image backup method for backing up disk partitions of a storage device
US6668264B1 (en) 2001-04-03 2003-12-23 Network Appliance, Inc. Resynchronization of a target volume with a source volume
US20040002999A1 (en) * 2002-03-25 2004-01-01 David Leroy Rand Creating a backup volume using a data profile of a host volume
WO2004010242A2 (en) * 2002-07-23 2004-01-29 Object Interactive Technologies Limited Software tool to detect and restore damaged or lost software components
US20040030668A1 (en) * 2002-08-09 2004-02-12 Brian Pawlowski Multi-protocol storage appliance that provides integrated support for file and block access protocols
US6701450B1 (en) * 1998-08-07 2004-03-02 Stephen Gold System backup and recovery
US6701453B2 (en) 1997-05-13 2004-03-02 Micron Technology, Inc. System for clustering software applications
US20040078704A1 (en) * 2002-10-22 2004-04-22 Malueg Michael D. Transaction-safe FAT file system
US6728922B1 (en) 2000-08-18 2004-04-27 Network Appliance, Inc. Dynamic data space
US6728735B1 (en) 2001-03-12 2004-04-27 Network Appliance, Inc. Restartable dump that produces a consistent filesystem on tapes
US6732244B2 (en) 2002-01-22 2004-05-04 International Business Machines Corporation Instant virtual copy technique with expedited creation of backup dataset inventory from source dataset inventory
US6732125B1 (en) * 2000-09-08 2004-05-04 Storage Technology Corporation Self archiving log structured volume with intrinsic data protection
US20040123031A1 (en) * 2002-12-19 2004-06-24 Veritas Software Corporation Instant refresh of a data volume copy
US6785219B1 (en) * 1999-03-10 2004-08-31 Matsushita Electric Industrial Co., Ltd. Information recording medium, information recording/reproducing method, and information recording/reproducing device
US6785789B1 (en) 2002-05-10 2004-08-31 Veritas Operating Corporation Method and apparatus for creating a virtual data copy
US6804690B1 (en) * 2000-12-27 2004-10-12 Emc Corporation Method for physical backup in data logical order
US20040210792A1 (en) * 2003-04-17 2004-10-21 International Business Machines Corporation Method and apparatus for recovering logical partition configuration data
US20040210793A1 (en) * 2003-04-21 2004-10-21 International Business Machines Corporation Method and apparatus for recovery of partitions in a logical partitioned data processing system
US6820214B1 (en) * 1999-07-26 2004-11-16 Microsoft Corporation Automated system recovery via backup and restoration of system state
US20040230863A1 (en) * 2001-06-19 2004-11-18 Christoffer Buchhorn Copying procedures including verification in data networks
US20040236984A1 (en) * 2003-05-20 2004-11-25 Yasuo Yamasaki Data backup method in a network storage system
US20040250033A1 (en) * 2002-10-07 2004-12-09 Anand Prahlad System and method for managing stored data
US20040255183A1 (en) * 2003-05-30 2004-12-16 Toshinari Takahashi Data management method and apparatus and program
US20050015415A1 (en) * 2003-07-14 2005-01-20 International Business Machines Corporation Method, system, and program for performing an input/output operation with respect to a logical storage device
US6848037B2 (en) * 2002-04-08 2005-01-25 International Business Machines Corporation Data processing arrangement and method
US6851073B1 (en) * 1999-07-26 2005-02-01 Microsoft Corporation Extensible system recovery architecture
US20050033748A1 (en) * 2000-12-18 2005-02-10 Kazar Michael L. Mechanism for handling file level and block level remote file accesses using the same server
US20050038836A1 (en) * 2001-07-06 2005-02-17 Jianxin Wang Systems and methods of information backup
US20050055512A1 (en) * 2003-09-05 2005-03-10 Kishi Gregory Tad Apparatus, system, and method flushing data from a cache to secondary storage
US20050076063A1 (en) * 2001-11-08 2005-04-07 Fujitsu Limited File system for enabling the restoration of a deffective file
US6898669B2 (en) * 2001-12-18 2005-05-24 Kabushiki Kaisha Toshiba Disk array apparatus and data backup method used therein
US20050114297A1 (en) * 2002-03-22 2005-05-26 Edwards John K. System and method for performing an on-line check of a file system
US6901493B1 (en) * 1998-02-24 2005-05-31 Adaptec, Inc. Method for protecting data of a computer system
US6907507B1 (en) 2002-12-19 2005-06-14 Veritas Operating Corporation Tracking in-progress writes through use of multi-column bitmaps
US6910111B1 (en) 2002-12-20 2005-06-21 Veritas Operating Corporation Volume restoration using an accumulator map
US6912631B1 (en) 2002-09-25 2005-06-28 Veritas Operating Corporation Method and apparatus for restoring a corrupted data volume
US20050144514A1 (en) * 2001-01-29 2005-06-30 Ulrich Thomas R. Dynamic redistribution of parity groups
US6931523B1 (en) * 1999-12-09 2005-08-16 Gateway Inc. System and method for re-storing stored known-good computer configuration via a non-interactive user input device without re-booting the system
US6938135B1 (en) 2002-10-04 2005-08-30 Veritas Operating Corporation Incremental backup of a data volume
US6938180B1 (en) * 2001-12-31 2005-08-30 Emc Corporation Logical restores of physically backed up data
US20050193026A1 (en) * 2003-11-13 2005-09-01 Anand Prahlad System and method for performing an image level snapshot and for restoring partial volume data
US20050246401A1 (en) * 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file system layout
US20050246382A1 (en) * 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file layout write allocation
US20050246397A1 (en) * 2004-04-30 2005-11-03 Edwards John K Cloning technique for efficiently creating a copy of a volume in a storage system
US6973553B1 (en) * 2000-10-20 2005-12-06 International Business Machines Corporation Method and apparatus for using extended disk sector formatting to assist in backup and hierarchical storage management
US6978354B1 (en) 2002-12-20 2005-12-20 Veritas Operating Corporation Method for creating a virtual data copy of a volume being restored
US20060026432A1 (en) * 2004-07-30 2006-02-02 Weirauch Charles R Drive tracking system for removable media
US6996687B1 (en) 2002-12-20 2006-02-07 Veritas Operating Corporation Method of optimizing the space and improving the write performance of volumes with multiple virtual copies
US7024527B1 (en) * 2003-07-18 2006-04-04 Veritas Operating Corporation Data restore mechanism
US7039661B1 (en) 2003-12-29 2006-05-02 Veritas Operating Corporation Coordinated dirty block tracking
US7072916B1 (en) 2000-08-18 2006-07-04 Network Appliance, Inc. Instant snapshot
US20060190680A1 (en) * 2000-08-04 2006-08-24 Delbosc Jean-Marc Virtual storage system
US7103737B1 (en) 2003-07-01 2006-09-05 Veritas Operating Corporation Flexible hierarchy of relationships and operations in data volumes
US20060224642A1 (en) * 2005-04-01 2006-10-05 Microsoft Corporation Production server to data protection server mapping
US20070022138A1 (en) * 2005-07-22 2007-01-25 Pranoop Erasani Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US20070027933A1 (en) * 2005-07-28 2007-02-01 Advanced Micro Devices, Inc. Resilient system partition for personal internet communicator
US20070046791A1 (en) * 2002-10-09 2007-03-01 Xpoint Technologies, Inc. Method and system for deploying a software image
US20070101113A1 (en) * 2005-10-31 2007-05-03 Evans Rhys W Data back-up and recovery
CN1316779C (en) * 2002-12-05 2007-05-16 华为技术有限公司 A data disaster recovery solution method producing no interlinked data reproduction
US20070112820A1 (en) * 2002-06-28 2007-05-17 Witt Wesley A Transporting Image Files
CN1329838C (en) * 2004-05-13 2007-08-01 国际商业机器公司 Method and apparatus to eliminate interpartition covert storage channel and partition analysis
US7254682B1 (en) 2004-07-28 2007-08-07 Symantec Corporation Selective file and folder snapshot image creation
US20070186068A1 (en) * 2005-12-19 2007-08-09 Agrawal Vijay H Network redirector systems and methods for performing data replication
US20070255758A1 (en) * 2006-04-28 2007-11-01 Ling Zheng System and method for sampling based elimination of duplicate data
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US20070294465A1 (en) * 2006-06-20 2007-12-20 Lenovo (Singapore) Pte. Ltd. IT administrator initiated remote hardware independent imaging technology
US20080005141A1 (en) * 2006-06-29 2008-01-03 Ling Zheng System and method for retrieving and using block fingerprints for data deduplication
US20080005201A1 (en) * 2006-06-29 2008-01-03 Daniel Ting System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US7318135B1 (en) * 2003-07-22 2008-01-08 Acronis Inc. System and method for using file system snapshots for online data backup
US20080016304A1 (en) * 2002-06-28 2008-01-17 Microsoft Corporation Method and System For Creating and Restoring An Image File
US7340645B1 (en) 2001-12-28 2008-03-04 Storage Technology Corporation Data management with virtual recovery mapping and backward moves
US7363540B2 (en) 2002-10-22 2008-04-22 Microsoft Corporation Transaction-safe FAT file system improvements
US20080147755A1 (en) * 2002-10-10 2008-06-19 Chapman Dennis E System and method for file system snapshot of a virtual logical disk
US7401093B1 (en) 2003-11-10 2008-07-15 Network Appliance, Inc. System and method for managing file data during consistency points
US20080172425A1 (en) * 2007-01-16 2008-07-17 Microsoft Corporation FAT directory structure for use in transaction safe file system
US20080172426A1 (en) * 2007-01-16 2008-07-17 Microsoft Corporation Storage system format for transaction safe file system
US20080183775A1 (en) * 2001-09-28 2008-07-31 Anand Prahlad System and method for generating and managing quick recovery volumes
US20080189343A1 (en) * 2006-12-29 2008-08-07 Robert Wyckoff Hyer System and method for performing distributed consistency verification of a clustered file system
US20080270690A1 (en) * 2007-04-27 2008-10-30 English Robert M System and method for efficient updates of sequential block storage
US20080301134A1 (en) * 2007-05-31 2008-12-04 Miller Steven C System and method for accelerating anchor point detection
US20090015735A1 (en) * 2005-11-10 2009-01-15 Michael David Simmonds Display source
US20090034377A1 (en) * 2007-04-27 2009-02-05 English Robert M System and method for efficient updates of sequential block storage
US7523276B1 (en) * 2003-06-30 2009-04-21 Veritas Software Corporation Synchronization of selected data from snapshots stored on different storage volumes
US20090164539A1 (en) * 2004-12-17 2009-06-25 Microsoft Corporation Contiguous file allocation in an extensible file system
US7558840B1 (en) * 2001-01-25 2009-07-07 Emc Corporation Data backup system having a flexible restore architecture
US7590660B1 (en) 2006-03-21 2009-09-15 Network Appliance, Inc. Method and system for efficient database cloning
US7657717B1 (en) 2004-02-02 2010-02-02 Symantec Operating Corporation Coherently sharing any form of instant snapshots separately from base volumes
US7664793B1 (en) 2003-07-01 2010-02-16 Symantec Operating Corporation Transforming unrelated data volumes into related data volumes
US7669064B2 (en) 1997-05-13 2010-02-23 Micron Technology, Inc. Diagnostic and managing distributed processor system
US20100057755A1 (en) * 2008-08-29 2010-03-04 Red Hat Corporation File system with flexible inode structures
US20100070726A1 (en) * 2004-11-15 2010-03-18 David Ngo Using a snapshot as a data source
US7689861B1 (en) 2002-10-09 2010-03-30 Xpoint Technologies, Inc. Data processing recovery system and method spanning multiple operating system
US20100082714A1 (en) * 2008-09-30 2010-04-01 Microsoft Corporation Nested file system support
US7721062B1 (en) 2003-11-10 2010-05-18 Netapp, Inc. Method for detecting leaked buffer writes across file system consistency points
US20100145909A1 (en) * 2008-12-10 2010-06-10 Commvault Systems, Inc. Systems and methods for managing replicated database data
US20100174683A1 (en) * 2009-01-08 2010-07-08 Bryan Wayne Freeman Individual object restore
US7757056B1 (en) 2005-03-16 2010-07-13 Netapp, Inc. System and method for efficiently calculating storage required to split a clone volume
US7783611B1 (en) 2003-11-10 2010-08-24 Netapp, Inc. System and method for managing file metadata during consistency points
US7818299B1 (en) 2002-03-19 2010-10-19 Netapp, Inc. System and method for determining changes in two snapshots and for transmitting changes to a destination snapshot
US7827350B1 (en) 2007-04-27 2010-11-02 Netapp, Inc. Method and system for promoting a snapshot in a distributed file system
US20110022811A1 (en) * 2008-10-02 2011-01-27 Hitachi Software Engineering Co., Ltd. Information backup/restoration processing apparatus and information backup/restoration processing system
US7917481B1 (en) 2005-10-31 2011-03-29 Symantec Operating Corporation File-system-independent malicious content detection
US20110113078A1 (en) * 2006-05-23 2011-05-12 Microsoft Corporation Extending Cluster Allocations In An Extensible File System
US7962455B2 (en) 2005-12-19 2011-06-14 Commvault Systems, Inc. Pathname translation in a data replication system
US20110161299A1 (en) * 2009-12-31 2011-06-30 Anand Prahlad Systems and methods for performing data management operations using snapshots
US20110161295A1 (en) * 2009-12-31 2011-06-30 David Ngo Systems and methods for analyzing snapshots
US7996636B1 (en) 2007-11-06 2011-08-09 Netapp, Inc. Uniquely identifying block context signatures in a storage volume hierarchy
US20110212549A1 (en) * 2005-02-11 2011-09-01 Chen Kong C Apparatus and method for predetermined component placement to a target platform
US8024294B2 (en) 2005-12-19 2011-09-20 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US20110307657A1 (en) * 2010-06-14 2011-12-15 Veeam Software International Ltd. Selective Processing of File System Objects for Image Level Backups
US8086572B2 (en) 2004-03-30 2011-12-27 International Business Machines Corporation Method, system, and program for restoring data to a file
US20120005163A1 (en) * 2005-11-04 2012-01-05 Oracle America, Inc. Block-based incremental backup
US8121983B2 (en) 2005-12-19 2012-02-21 Commvault Systems, Inc. Systems and methods for monitoring application data in a data replication system
US8219821B2 (en) 2007-03-27 2012-07-10 Netapp, Inc. System and method for signature based data container recognition
JP2012133769A (en) * 2010-12-17 2012-07-12 Internatl Business Mach Corp <Ibm> Computer program, system and method for restoring deduplicated data objects from sequential backup devices
US8260748B1 (en) * 2007-03-27 2012-09-04 Symantec Corporation Method and apparatus for capturing data from a backup image
US8271830B2 (en) 2005-12-19 2012-09-18 Commvault Systems, Inc. Rolling cache configuration for a data replication system
US8290808B2 (en) 2007-03-09 2012-10-16 Commvault Systems, Inc. System and method for automating customer-validated statement of work for a data storage environment
US8332689B2 (en) 2010-07-19 2012-12-11 Veeam Software International Ltd. Systems, methods, and computer program products for instant recovery of image level backups
US20130007389A1 (en) * 2011-07-01 2013-01-03 Futurewei Technologies, Inc. System and Method for Making Snapshots of Storage Devices
US8352422B2 (en) 2010-03-30 2013-01-08 Commvault Systems, Inc. Data restore systems and methods in a replication environment
CN102880522A (en) * 2012-09-21 2013-01-16 中国人民解放军国防科学技术大学 Hardware fault-oriented method and device for correcting faults in key files of system
US8489656B2 (en) 2010-05-28 2013-07-16 Commvault Systems, Inc. Systems and methods for performing data replication
US8504515B2 (en) 2010-03-30 2013-08-06 Commvault Systems, Inc. Stubbing systems and methods in a data replication environment
US8504517B2 (en) 2010-03-29 2013-08-06 Commvault Systems, Inc. Systems and methods for selective data replication
US8565545B1 (en) * 2011-04-07 2013-10-22 Symantec Corporation Systems and methods for restoring images
US8583594B2 (en) 2003-11-13 2013-11-12 Commvault Systems, Inc. System and method for performing integrated storage operations
US8615495B1 (en) * 2008-08-13 2013-12-24 Symantec Corporation Techniques for providing a differential backup from a storage image
US8655850B2 (en) 2005-12-19 2014-02-18 Commvault Systems, Inc. Systems and methods for resynchronizing information
US8719767B2 (en) 2011-03-31 2014-05-06 Commvault Systems, Inc. Utilizing snapshots to provide builds to developer computing devices
US8725698B2 (en) 2010-03-30 2014-05-13 Commvault Systems, Inc. Stub file prioritization in a data replication system
US8725986B1 (en) 2008-04-18 2014-05-13 Netapp, Inc. System and method for volume block number to disk block number mapping
US8726242B2 (en) 2006-07-27 2014-05-13 Commvault Systems, Inc. Systems and methods for continuous data replication
US8793223B1 (en) 2009-02-09 2014-07-29 Netapp, Inc. Online data consistency checking in a network storage system with optional committal of remedial changes
US8793221B2 (en) 2005-12-19 2014-07-29 Commvault Systems, Inc. Systems and methods for performing data replication
US8935281B1 (en) * 2005-10-31 2015-01-13 Symantec Operating Corporation Optimized content search of files
US8990161B1 (en) * 2008-09-30 2015-03-24 Emc Corporation System and method for single segment backup
US9009114B1 (en) * 2005-10-31 2015-04-14 Symantec Operating Corporation Version mapped incremental backups
US9031908B1 (en) 2009-03-31 2015-05-12 Symantec Corporation Method and apparatus for simultaneous comparison of multiple backup sets maintained in a computer system
US9092500B2 (en) 2009-09-03 2015-07-28 Commvault Systems, Inc. Utilizing snapshots for access to databases and other applications
US9152507B1 (en) * 2014-09-05 2015-10-06 Storagecraft Technology Corporation Pruning unwanted file content from an image backup
US9182969B1 (en) * 2002-04-03 2015-11-10 Symantec Corporation Using disassociated images for computer and storage resource management
US9208817B1 (en) 2015-03-10 2015-12-08 Alibaba Group Holding Limited System and method for determination and reallocation of pending sectors caused by media fatigue
US9262435B2 (en) 2013-01-11 2016-02-16 Commvault Systems, Inc. Location-based data synchronization management
US9298715B2 (en) 2012-03-07 2016-03-29 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US20160134698A1 (en) * 1999-11-11 2016-05-12 Intellectual Ventures Ii Llc Flexible remote data mirroring
US9342537B2 (en) 2012-04-23 2016-05-17 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US9448731B2 (en) 2014-11-14 2016-09-20 Commvault Systems, Inc. Unified snapshot storage management
US9471578B2 (en) 2012-03-07 2016-10-18 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9495251B2 (en) 2014-01-24 2016-11-15 Commvault Systems, Inc. Snapshot readiness checking and reporting
US9495382B2 (en) 2008-12-10 2016-11-15 Commvault Systems, Inc. Systems and methods for performing discrete data replication
US9569311B2 (en) 2012-10-01 2017-02-14 Hitachi, Ltd. Computer system for backing up data
US9619335B1 (en) 2016-03-11 2017-04-11 Storagecraft Technology Corporation Filtering a directory enumeration of a directory to exclude files with missing file content from an image backup
US9632874B2 (en) 2014-01-24 2017-04-25 Commvault Systems, Inc. Database application backup in single snapshot for multiple applications
US9639426B2 (en) 2014-01-24 2017-05-02 Commvault Systems, Inc. Single snapshot for multiple applications
US9648105B2 (en) 2014-11-14 2017-05-09 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9690666B1 (en) * 2012-07-02 2017-06-27 Veritas Technologies Llc Incremental backup operations in a transactional file system
US9753812B2 (en) 2014-01-24 2017-09-05 Commvault Systems, Inc. Generating mapping information for single snapshot for multiple applications
US9774672B2 (en) 2014-09-03 2017-09-26 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US9858156B2 (en) 2012-06-13 2018-01-02 Commvault Systems, Inc. Dedicated client-side signature generator in a networked storage system
US9886346B2 (en) 2013-01-11 2018-02-06 Commvault Systems, Inc. Single snapshot for multiple agents
US9898478B2 (en) 2010-12-14 2018-02-20 Commvault Systems, Inc. Distributed deduplicated storage system
US9898225B2 (en) 2010-09-30 2018-02-20 Commvault Systems, Inc. Content aligned block-based deduplication
US9934238B2 (en) 2014-10-29 2018-04-03 Commvault Systems, Inc. Accessing a file system using tiered deduplication
CN108268380A (en) * 2016-12-30 2018-07-10 北京兆易创新科技股份有限公司 A kind of method and apparatus for reading and writing data
US10042716B2 (en) 2014-09-03 2018-08-07 Commvault Systems, Inc. Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent
US10061663B2 (en) 2015-12-30 2018-08-28 Commvault Systems, Inc. Rebuilding deduplication data in a distributed deduplication data storage system
US10126973B2 (en) 2010-09-30 2018-11-13 Commvault Systems, Inc. Systems and methods for retaining and using data block signatures in data protection operations
US10191816B2 (en) 2010-12-14 2019-01-29 Commvault Systems, Inc. Client-side repository in a networked deduplicated storage system
US10229133B2 (en) 2013-01-11 2019-03-12 Commvault Systems, Inc. High availability distributed deduplicated storage system
US10311150B2 (en) 2015-04-10 2019-06-04 Commvault Systems, Inc. Using a Unix-based file system to manage and serve clones to windows-based computing clients
US10339106B2 (en) 2015-04-09 2019-07-02 Commvault Systems, Inc. Highly reusable deduplication database after disaster recovery
US10380072B2 (en) 2014-03-17 2019-08-13 Commvault Systems, Inc. Managing deletions from a deduplication database
US10474641B2 (en) 2004-12-17 2019-11-12 Microsoft Technology Licensing, Llc Extensible file system
US10481824B2 (en) 2015-05-26 2019-11-19 Commvault Systems, Inc. Replication using deduplicated secondary copy data
US10503753B2 (en) 2016-03-10 2019-12-10 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US10540327B2 (en) 2009-07-08 2020-01-21 Commvault Systems, Inc. Synchronized data deduplication
US10614032B2 (en) 2004-12-17 2020-04-07 Microsoft Technology Licensing, Llc Quick filename lookup using name hash
US10732885B2 (en) 2018-02-14 2020-08-04 Commvault Systems, Inc. Block-level live browsing and private writable snapshots using an ISCSI server
US10754729B2 (en) * 2018-03-12 2020-08-25 Commvault Systems, Inc. Recovery point objective (RPO) driven backup scheduling in a data storage management system
US10860443B2 (en) 2018-12-10 2020-12-08 Commvault Systems, Inc. Evaluation and reporting of recovery readiness in a data storage management system
US11010258B2 (en) 2018-11-27 2021-05-18 Commvault Systems, Inc. Generating backup copies through interoperability between components of a data storage management system and appliances for data storage and deduplication
US11016859B2 (en) 2008-06-24 2021-05-25 Commvault Systems, Inc. De-duplication systems and methods for application-specific data
CN112988473A (en) * 2021-05-10 2021-06-18 南京云信达科技有限公司 Backup data real-time recovery method and system
US11042318B2 (en) 2019-07-29 2021-06-22 Commvault Systems, Inc. Block-level data replication
US11249858B2 (en) 2014-08-06 2022-02-15 Commvault Systems, Inc. Point-in-time backups of a production application made accessible over fibre channel and/or ISCSI as data sources to a remote application by representing the backups as pseudo-disks operating apart from the production application and its host
US11294768B2 (en) 2017-06-14 2022-04-05 Commvault Systems, Inc. Live browsing of backed up data residing on cloned disks
US11314424B2 (en) 2015-07-22 2022-04-26 Commvault Systems, Inc. Restore for block-level backups
US11321181B2 (en) 2008-06-18 2022-05-03 Commvault Systems, Inc. Data protection scheduling, such as providing a flexible backup window in a data protection system
US11321195B2 (en) 2017-02-27 2022-05-03 Commvault Systems, Inc. Hypervisor-independent reference copies of virtual machine payload data based on block-level pseudo-mount
US11416341B2 (en) * 2014-08-06 2022-08-16 Commvault Systems, Inc. Systems and methods to reduce application downtime during a restore operation using a pseudo-storage device
US11436038B2 (en) 2016-03-09 2022-09-06 Commvault Systems, Inc. Hypervisor-independent block-level live browse for access to backed up virtual machine (VM) data and hypervisor-free file-level recovery (block-level pseudo-mount)
US11442896B2 (en) 2019-12-04 2022-09-13 Commvault Systems, Inc. Systems and methods for optimizing restoration of deduplicated data stored in cloud-based storage resources
US11463264B2 (en) 2019-05-08 2022-10-04 Commvault Systems, Inc. Use of data block signatures for monitoring in an information management system
US11507533B2 (en) * 2018-02-05 2022-11-22 Huawei Technologies Co., Ltd. Data query method and apparatus
US11640339B2 (en) 2020-11-23 2023-05-02 International Business Machines Corporation Creating a backup data set
US11687424B2 (en) 2020-05-28 2023-06-27 Commvault Systems, Inc. Automated media agent state management
US11698727B2 (en) 2018-12-14 2023-07-11 Commvault Systems, Inc. Performing secondary copy operations based on deduplication performance
US11809285B2 (en) 2022-02-09 2023-11-07 Commvault Systems, Inc. Protecting a management database of a data storage management system to meet a recovery point objective (RPO)
US11829251B2 (en) 2019-04-10 2023-11-28 Commvault Systems, Inc. Restore using deduplicated secondary copy data
US11853104B2 (en) 2019-06-27 2023-12-26 Netapp, Inc. Virtual machine backup from computing environment to storage environment

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6119208A (en) * 1997-04-18 2000-09-12 Storage Technology Corporation MVS device backup system for a data processor using a data storage subsystem snapshot copy capability
US6167494A (en) * 1998-04-28 2000-12-26 International Business Machines Corporation Method and system for recovering from operating system failure
US6154852A (en) * 1998-06-10 2000-11-28 International Business Machines Corporation Method and apparatus for data backup and recovery
US6654881B2 (en) 1998-06-12 2003-11-25 Microsoft Corporation Logical volume mount manager
WO2000077641A1 (en) * 1999-06-15 2000-12-21 Microsoft Corporation System and method for generating a backup copy of a storage medium
US6553387B1 (en) * 1999-11-29 2003-04-22 Microsoft Corporation Logical volume configuration data management determines whether to expose the logical volume on-line, off-line request based on comparison of volume epoch numbers on each extents of the volume identifiers
US6490651B1 (en) * 2000-03-14 2002-12-03 Maxtor Corporation Host-based virtual disk drive for improving the performance of a hard disk drive's input/output
US6708227B1 (en) * 2000-04-24 2004-03-16 Microsoft Corporation Method and system for providing common coordination and administration of multiple snapshot providers
EP1229433A1 (en) * 2001-01-31 2002-08-07 Hewlett-Packard Company File sort for backup
EP1229434A3 (en) * 2001-01-31 2009-09-09 Hewlett-Packard Company, A Delaware Corporation File sort for backup
US7310654B2 (en) * 2002-01-31 2007-12-18 Mirapoint, Inc. Method and system for providing image incremental and disaster recovery
US7461131B2 (en) 2003-03-07 2008-12-02 International Business Machines Corporation Use of virtual targets for preparing and servicing requests for server-free data transfer operations
FR2850182B1 (en) * 2003-06-10 2006-02-24 Garnier Jean METHOD FOR MANAGING A DIGITAL STORAGE UNIT
US7756833B2 (en) * 2004-09-22 2010-07-13 Microsoft Corporation Method and system for synthetic backup and restore
EP1878017A1 (en) * 2004-12-21 2008-01-16 Koninklijke Philips Electronics N.V. Method and apparatus for error correction of optical disc data
JP2006251989A (en) * 2005-03-09 2006-09-21 Kwok-Yan Leung Data protection device compatible with network by operation system
WO2006095875A1 (en) * 2005-03-10 2006-09-14 Nippon Telegraph And Telephone Corporation Network system, method for controlling access to storage device, management server, storage device, log-in control method, network boot system, and unit storage unit access method
US7823007B2 (en) 2006-02-17 2010-10-26 International Business Machines Corporation Apparatus, system, and method for switching a volume address association in a point-in-time copy relationship
CN102193841B (en) * 2010-03-04 2013-07-31 阿里巴巴集团控股有限公司 Backup method and device of Subversion configuration database
KR101372047B1 (en) * 2013-02-17 2014-03-07 (주)인정보 System and method for preventing data withdrawal by checking disk sectors
US10157103B2 (en) * 2015-10-20 2018-12-18 Veeam Software Ag Efficient processing of file system objects for image level backups
CN111831472B (en) * 2019-04-18 2023-12-22 阿里云计算有限公司 Snapshot creation method and device and electronic equipment
CN112379846B (en) * 2020-12-01 2022-04-29 厦门市美亚柏科信息股份有限公司 Method and system for rapidly reading disk file

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4333162A (en) * 1980-08-04 1982-06-01 National Semiconductor Corporation Bubble memory with conductor programmable transparent error map
US4346454A (en) * 1980-08-22 1982-08-24 National Semiconductor Corporation Bubble memory with on chip error map storage on permalloy disk elements
US4685055A (en) * 1985-07-01 1987-08-04 Thomas Richard B Method and system for controlling use of protected software
US4972316A (en) * 1987-03-30 1990-11-20 International Business Machines Corporation Method of handling disk sector errors in DASD cache
US5088081A (en) * 1990-03-28 1992-02-11 Prime Computer, Inc. Method and apparatus for improved disk access
US5261088A (en) * 1990-04-26 1993-11-09 International Business Machines Corporation Managing locality in space reuse in a shadow written B-tree via interior node free space list
US5504857A (en) * 1990-06-08 1996-04-02 International Business Machines Highly available fault tolerant relocation of storage with atomicity
US5331616A (en) * 1992-04-17 1994-07-19 Sony Corporation Information recording and reproducing apparatus with self-diagnosis information storage mechanism
EP0566967A2 (en) * 1992-04-20 1993-10-27 International Business Machines Corporation Method and system for time zero backup session security
WO1995013580A1 (en) * 1993-11-09 1995-05-18 Arcada Software Data backup and restore system for a computer network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Restoring Data from DASD Volumes Having Hardware Errors," IBM Technical Disclosure Bulletin, vol. 31, No. 7, Dec. 1988, pp. 313-317.
"Restoring Data from DASD Volumes Having Hardware Errors," IBM Technical Disclosure Bulletin, vol. 31, No. 7, Dec. 1988, pp. 313-317. *

Cited By (564)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6119212A (en) * 1997-04-23 2000-09-12 Advanced Micro Devices, Inc. Root size decrease on a UNIX based computer system
US6269417B1 (en) 1997-05-13 2001-07-31 Micron Technology, Inc. Method for determining and displaying the physical slot number of an expansion bus device
US6182180B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Apparatus for interfacing buses
US6598173B1 (en) 1997-05-13 2003-07-22 Micron Technology, Inc. Method of remote access and control of environmental conditions
US6523131B1 (en) 1997-05-13 2003-02-18 Micron Technology, Inc. Method for communicating a software-generated pulse waveform between two servers in a network
US7669064B2 (en) 1997-05-13 2010-02-23 Micron Technology, Inc. Diagnostic and managing distributed processor system
US6499073B1 (en) 1997-05-13 2002-12-24 Micron Electronics, Inc. System using programmable processor for selectively enabling or disabling power to adapter in response to respective request signals
US6122758A (en) 1997-05-13 2000-09-19 Micron Electronics, Inc. System for mapping environmental resources to memory for program access
US6697963B1 (en) 1997-05-13 2004-02-24 Micron Technology, Inc. Method of updating a system environmental setting
US6134668A (en) 1997-05-13 2000-10-17 Micron Electronics, Inc. Method of selective independent powering of portion of computer system through remote interface from remote interface power supply
US6134673A (en) 1997-05-13 2000-10-17 Micron Electronics, Inc. Method for clustering software applications
US6701453B2 (en) 1997-05-13 2004-03-02 Micron Technology, Inc. System for clustering software applications
US6138250A (en) 1997-05-13 2000-10-24 Micron Electronics, Inc. System for reading system log
US6145098A (en) 1997-05-13 2000-11-07 Micron Electronics, Inc. System for displaying system status
US6484226B2 (en) 1997-05-13 2002-11-19 Micron Technology, Inc. System and method for the add or swap of an adapter on an operating computer
US6163849A (en) 1997-05-13 2000-12-19 Micron Electronics, Inc. Method of powering up or powering down a server to a maintenance state
US6163853A (en) 1997-05-13 2000-12-19 Micron Electronics, Inc. Method for communicating a software-generated pulse waveform between two servers in a network
US6170028B1 (en) 1997-05-13 2001-01-02 Micron Electronics, Inc. Method for hot swapping a programmable network adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6170067B1 (en) 1997-05-13 2001-01-02 Micron Technology, Inc. System for automatically reporting a system failure in a server
US6173346B1 (en) 1997-05-13 2001-01-09 Micron Electronics, Inc. Method for hot swapping a programmable storage adapter using a programmable processor for selectively enabling or disabling power to adapter slot in response to respective request signals
US6179486B1 (en) 1997-05-13 2001-01-30 Micron Electronics, Inc. Method for hot add of a mass storage adapter on a system including a dynamically loaded adapter driver
US6742069B2 (en) 1997-05-13 2004-05-25 Micron Technology, Inc. Method of providing an interface to a plurality of peripheral devices using bus adapter chips
US8468372B2 (en) 1997-05-13 2013-06-18 Round Rock Research, Llc Diagnostic and managing distributed processor system
US6189109B1 (en) 1997-05-13 2001-02-13 Micron Electronics, Inc. Method of remote access and control of environmental conditions
US6192434B1 (en) 1997-05-13 2001-02-20 Micron Electronics, Inc System for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6195717B1 (en) 1997-05-13 2001-02-27 Micron Electronics, Inc. Method of expanding bus loading capacity
US6418492B1 (en) 1997-05-13 2002-07-09 Micron Electronics Method for computer implemented hot-swap and hot-add
US6202111B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. Method for the hot add of a network adapter on a system including a statically loaded adapter driver
US6202160B1 (en) 1997-05-13 2001-03-13 Micron Electronics, Inc. System for independent powering of a computer system
US6341322B1 (en) 1997-05-13 2002-01-22 Micron Electronics, Inc. Method for interfacing two buses
US6332202B1 (en) 1997-05-13 2001-12-18 Micron Technology, Inc. Method of remote access and control of environmental conditions
US6272648B1 (en) 1997-05-13 2001-08-07 Micron Electronics, Inc. System for communicating a software-generated pulse waveform between two servers in a network
US6330690B1 (en) 1997-05-13 2001-12-11 Micron Electronics, Inc. Method of resetting a server
US6243838B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Method for automatically reporting a system failure in a server
US6243773B1 (en) 1997-05-13 2001-06-05 Micron Electronics, Inc. Configuration management system for hot adding and hot replacing devices
US6247079B1 (en) 1997-05-13 2001-06-12 Micron Electronics, Inc Apparatus for computer implemented hot-swap and hot-add
US6247080B1 (en) 1997-05-13 2001-06-12 Micron Electronics, Inc. Method for the hot add of devices
US6249834B1 (en) 1997-05-13 2001-06-19 Micron Technology, Inc. System for expanding PCI bus loading capacity
US6249828B1 (en) 1997-05-13 2001-06-19 Micron Electronics, Inc. Method for the hot swap of a mass storage adapter on a system including a statically loaded adapter driver
US6249885B1 (en) 1997-05-13 2001-06-19 Karl S. Johnson Method for managing environmental conditions of a distributed processor system
US6282673B1 (en) 1997-05-13 2001-08-28 Micron Technology, Inc. Method of recording information system events
US6253334B1 (en) 1997-05-13 2001-06-26 Micron Electronics, Inc. Three bus server architecture with a legacy PCI bus and mirrored I/O PCI buses
US6304929B1 (en) 1997-05-13 2001-10-16 Micron Electronics, Inc. Method for hot swapping a programmable adapter by using a programmable processor to selectively disabling and enabling power thereto upon receiving respective control signals
US6292905B1 (en) 1997-05-13 2001-09-18 Micron Technology, Inc. Method for providing a fault tolerant network using distributed server processes to remap clustered network resources to other servers during server failure
US6266721B1 (en) 1997-05-13 2001-07-24 Micron Electronics, Inc. System architecture for remote access and control of environmental management
US6269412B1 (en) 1997-05-13 2001-07-31 Micron Technology, Inc. Apparatus for recording information system events
US6604207B2 (en) 1997-05-13 2003-08-05 Micron Technology, Inc. System architecture for remote access and control of environmental management
US6219734B1 (en) 1997-05-13 2001-04-17 Micron Electronics, Inc. Method for the hot add of a mass storage adapter on a system including a statically loaded adapter driver
US6073255A (en) 1997-05-13 2000-06-06 Micron Electronics, Inc. Method of reading system log
US6324608B1 (en) 1997-05-13 2001-11-27 Micron Electronics Method for hot swapping of network components
US20020056031A1 (en) * 1997-07-18 2002-05-09 Storactive, Inc. Systems and methods for electronic data storage management
US6253300B1 (en) 1997-08-20 2001-06-26 Powerquest Corporation Computer partition manipulation during imaging
US6138179A (en) 1997-10-01 2000-10-24 Micron Electronics, Inc. System for automatically partitioning and formatting a primary hard disk for installing software in which selection of extended partition size is not related to size of hard disk
US6263387B1 (en) 1997-10-01 2001-07-17 Micron Electronics, Inc. System for automatically configuring a server after hot add of a device
US6088816A (en) 1997-10-01 2000-07-11 Micron Electronics, Inc. Method of displaying system status
US6199173B1 (en) 1997-10-01 2001-03-06 Micron Electronics, Inc. Method for mapping environmental resources to memory for program access
US6154835A (en) 1997-10-01 2000-11-28 Micron Electronics, Inc. Method for automatically configuring and formatting a computer system and installing software
US6212585B1 (en) 1997-10-01 2001-04-03 Micron Electronics, Inc. Method of automatically configuring a server after hot add of a device
US6065053A (en) 1997-10-01 2000-05-16 Micron Electronics, Inc. System for resetting a server
US6108697A (en) * 1997-10-06 2000-08-22 Powerquest Corporation One-to-many disk imaging transfer over a network
US6041334A (en) * 1997-10-29 2000-03-21 International Business Machines Corporation Storage management system with file aggregation supporting multiple aggregated file counterparts
US6101585A (en) * 1997-11-04 2000-08-08 Adaptec, Inc. Mechanism for incremental backup of on-line files
US6349356B2 (en) * 1997-12-10 2002-02-19 International Business Machines Corporation Host-available device block map for optimized file retrieval from serpentine tape drives
US6901493B1 (en) * 1998-02-24 2005-05-31 Adaptec, Inc. Method for protecting data of a computer system
US6330653B1 (en) 1998-05-01 2001-12-11 Powerquest Corporation Manipulation of virtual and live computer storage device partitions
US20020059172A1 (en) * 1998-06-19 2002-05-16 Mark Muhlestein Backup and restore for heterogeneous file server environment
US6665689B2 (en) * 1998-06-19 2003-12-16 Network Appliance, Inc. Backup and restore for heterogeneous file server environment
US6279011B1 (en) * 1998-06-19 2001-08-21 Network Appliance, Inc. Backup and restore for heterogeneous file server environment
US6223234B1 (en) 1998-07-17 2001-04-24 Micron Electronics, Inc. Apparatus for the hot swap and add of input/output platforms and devices
US6205503B1 (en) 1998-07-17 2001-03-20 Mallikarjunan Mahalingam Method for the hot swap and add of input/output platforms and devices
US6374266B1 (en) * 1998-07-28 2002-04-16 Ralph Shnelvar Method and apparatus for storing information in a data processing system
US6604118B2 (en) 1998-07-31 2003-08-05 Network Appliance, Inc. File system image transfer
US6574591B1 (en) 1998-07-31 2003-06-03 Network Appliance, Inc. File systems image transfer between dissimilar file systems
US6701450B1 (en) * 1998-08-07 2004-03-02 Stephen Gold System backup and recovery
US6266784B1 (en) * 1998-09-15 2001-07-24 International Business Machines Corporation Direct storage of recovery plan file on remote server for disaster recovery and storage management thereof
US6665779B1 (en) * 1998-12-24 2003-12-16 Roxio, Inc. Image backup method for backing up disk partitions of a storage device
US6542975B1 (en) * 1998-12-24 2003-04-01 Roxio, Inc. Method and system for backing up data over a plurality of volumes
US6785219B1 (en) * 1999-03-10 2004-08-31 Matsushita Electric Industrial Co., Ltd. Information recording medium, information recording/reproducing method, and information recording/reproducing device
US6453383B1 (en) 1999-03-15 2002-09-17 Powerquest Corporation Manipulation of computer volume segments
WO2000055735A1 (en) * 1999-03-15 2000-09-21 Powerquest Corporation Manipulation of computer volume segments
US6851073B1 (en) * 1999-07-26 2005-02-01 Microsoft Corporation Extensible system recovery architecture
US6535998B1 (en) * 1999-07-26 2003-03-18 Microsoft Corporation System recovery by restoring hardware state on non-identical systems
US6820214B1 (en) * 1999-07-26 2004-11-16 Microsoft Corporation Automated system recovery via backup and restoration of system state
US6185666B1 (en) 1999-09-11 2001-02-06 Powerquest Corporation Merging computer partitions
US20160134698A1 (en) * 1999-11-11 2016-05-12 Intellectual Ventures Ii Llc Flexible remote data mirroring
US10003647B2 (en) * 1999-11-11 2018-06-19 Intellectual Ventures Ii Llc Flexible remote data mirroring
US6931523B1 (en) * 1999-12-09 2005-08-16 Gateway Inc. System and method for re-storing stored known-good computer configuration via a non-interactive user input device without re-booting the system
US6341341B1 (en) 1999-12-16 2002-01-22 Adaptec, Inc. System and method for disk control with snapshot feature including read-write snapshot half
US6510491B1 (en) 1999-12-16 2003-01-21 Adaptec, Inc. System and method for accomplishing data storage migration between raid levels
US6460054B1 (en) * 1999-12-16 2002-10-01 Adaptec, Inc. System and method for data storage archive bit update after snapshot backup
US6560615B1 (en) * 1999-12-17 2003-05-06 Novell, Inc. Method and apparatus for implementing a highly efficient, robust modified files list (MFL) for a storage system volume
US6615365B1 (en) * 2000-03-11 2003-09-02 Powerquest Corporation Storing a computer disk image within an imaged partition
US6643741B1 (en) * 2000-04-19 2003-11-04 International Business Machines Corporation Method and apparatus for efficient cache management and avoiding unnecessary cache traffic
US6473655B1 (en) * 2000-05-02 2002-10-29 International Business Machines Corporation Data processing system and method for creating a virtual partition within an existing partition in a hard disk drive
US20060190680A1 (en) * 2000-08-04 2006-08-24 Delbosc Jean-Marc Virtual storage system
US7660954B2 (en) * 2000-08-04 2010-02-09 Emc Corporation Techniques for saving data
US6728922B1 (en) 2000-08-18 2004-04-27 Network Appliance, Inc. Dynamic data space
US20080028011A1 (en) * 2000-08-18 2008-01-31 Network Appliance, Inc. Space allocation in a write anywhere file system
US7072916B1 (en) 2000-08-18 2006-07-04 Network Appliance, Inc. Instant snapshot
US6636879B1 (en) 2000-08-18 2003-10-21 Network Appliance, Inc. Space allocation in a write anywhere file system
US7930326B2 (en) 2000-08-18 2011-04-19 Network Appliance, Inc. Space allocation in a write anywhere file system
US6640233B1 (en) * 2000-08-18 2003-10-28 Network Appliance, Inc. Reserving file system blocks
US6732125B1 (en) * 2000-09-08 2004-05-04 Storage Technology Corporation Self archiving log structured volume with intrinsic data protection
US20040107226A1 (en) * 2000-09-08 2004-06-03 Storage Technology Corporation Self archiving log structured volume with intrinsic data protection
US6915315B2 (en) 2000-09-08 2005-07-05 Storage Technology Corporation Self archiving log structured volume with intrinsic data protection
US6654912B1 (en) 2000-10-04 2003-11-25 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes
US20040153736A1 (en) * 2000-10-04 2004-08-05 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes
US7096379B2 (en) 2000-10-04 2006-08-22 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes
US6973553B1 (en) * 2000-10-20 2005-12-06 International Business Machines Corporation Method and apparatus for using extended disk sector formatting to assist in backup and hierarchical storage management
US6574705B1 (en) 2000-11-16 2003-06-03 International Business Machines Corporation Data processing system and method including a logical volume manager for storing logical volume data
US7917461B2 (en) 2000-12-18 2011-03-29 Netapp, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US20070208757A1 (en) * 2000-12-18 2007-09-06 Kazar Michael L Mechanism for handling file level and block level remote file accesses using the same server
US20050033748A1 (en) * 2000-12-18 2005-02-10 Kazar Michael L. Mechanism for handling file level and block level remote file accesses using the same server
US6868417B2 (en) * 2000-12-18 2005-03-15 Spinnaker Networks, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US8352518B2 (en) 2000-12-18 2013-01-08 Netapp, Inc. Mechanism for handling file level and block level remote file accesses using the same server
US6804690B1 (en) * 2000-12-27 2004-10-12 Emc Corporation Method for physical backup in data logical order
US7558840B1 (en) * 2001-01-25 2009-07-07 Emc Corporation Data backup system having a flexible restore architecture
US6990547B2 (en) 2001-01-29 2006-01-24 Adaptec, Inc. Replacing file system processors by hot swapping
US7356730B2 (en) 2001-01-29 2008-04-08 Adaptec, Inc. Dynamic redistribution of parity groups
US20020156891A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. Enhancing file system performance
US20020194523A1 (en) * 2001-01-29 2002-12-19 Ulrich Thomas R. Replacing file system processors by hot swapping
US7054927B2 (en) 2001-01-29 2006-05-30 Adaptec, Inc. File system metadata describing server directory information
US20020166079A1 (en) * 2001-01-29 2002-11-07 Ulrich Thomas R. Dynamic data recovery
US6745286B2 (en) 2001-01-29 2004-06-01 Snap Appliance, Inc. Interface architecture
US8214590B2 (en) 2001-01-29 2012-07-03 Overland Storage, Inc. Systems and methods for storing parity groups
US6754773B2 (en) 2001-01-29 2004-06-22 Snap Appliance, Inc. Data engine with metadata processor
US6990667B2 (en) 2001-01-29 2006-01-24 Adaptec, Inc. Server-independent object positioning for load balancing drives and servers
US8943513B2 (en) 2001-01-29 2015-01-27 Overland Storage, Inc. Systems and methods for load balancing drives and servers by pushing a copy of a frequently accessed file to another disk drive
US6775792B2 (en) 2001-01-29 2004-08-10 Snap Appliance, Inc. Discrete mapping of parity blocks
US20060031287A1 (en) * 2001-01-29 2006-02-09 Ulrich Thomas R Systems and methods for load balancing drives and servers
US20020178162A1 (en) * 2001-01-29 2002-11-28 Ulrich Thomas R. Integrated distributed file system with variable parity groups
US20050144514A1 (en) * 2001-01-29 2005-06-30 Ulrich Thomas R. Dynamic redistribution of parity groups
US10079878B2 (en) 2001-01-29 2018-09-18 Overland Storage, Inc. Systems and methods for load balancing drives and servers by pushing a copy of a frequently accessed file to another disk drive
US8782661B2 (en) 2001-01-29 2014-07-15 Overland Storage, Inc. Systems and methods for load balancing drives and servers
US20020156974A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. Redundant dynamically distributed file system
US20020156840A1 (en) * 2001-01-29 2002-10-24 Ulrich Thomas R. File system metadata
US20020138559A1 (en) * 2001-01-29 2002-09-26 Ulrich Thomas R. Dynamically distributed file system
US7917695B2 (en) 2001-01-29 2011-03-29 Overland Storage, Inc. Systems and methods for storing parity groups
WO2002061737A3 (en) * 2001-01-29 2003-07-31 Snap Appliance Inc Dynamically distributed file system
US20020124137A1 (en) * 2001-01-29 2002-09-05 Ulrich Thomas R. Enhancing disk array performance via variable parity based load balancing
US6871295B2 (en) 2001-01-29 2005-03-22 Adaptec, Inc. Dynamic data recovery
WO2002061737A2 (en) * 2001-01-29 2002-08-08 Snap Appliance Inc. Dynamically distributed file system
US20080126704A1 (en) * 2001-01-29 2008-05-29 Adaptec, Inc. Systems and methods for storing parity groups
US20020174295A1 (en) * 2001-01-29 2002-11-21 Ulrich Thomas R. Enhanced file system failure tolerance
US6728735B1 (en) 2001-03-12 2004-04-27 Network Appliance, Inc. Restartable dump that produces a consistent filesystem on tapes
US8204864B1 (en) 2001-03-12 2012-06-19 Network Appliance, Inc. Restartable dump that produces a consistent filesystem on tapes
US20020169934A1 (en) * 2001-03-23 2002-11-14 Oliver Krapp Methods and systems for eliminating data redundancies
US6889297B2 (en) * 2001-03-23 2005-05-03 Sun Microsystems, Inc. Methods and systems for eliminating data redundancies
US6668264B1 (en) 2001-04-03 2003-12-23 Network Appliance, Inc. Resynchronization of a target volume with a source volume
US6915316B1 (en) * 2001-04-03 2005-07-05 Network Appliance, Inc. Resynchronization of a target volume with a source volume
US20020194528A1 (en) * 2001-05-22 2002-12-19 Nigel Hart Method, disaster recovery record, back-up apparatus and RAID array controller for use in restoring a configuration of a RAID device
US20040230863A1 (en) * 2001-06-19 2004-11-18 Christoffer Buchhorn Copying procedures including verification in data networks
US7721142B2 (en) 2001-06-19 2010-05-18 Asensus Copying procedures including verification in data networks
US20030172158A1 (en) * 2001-06-28 2003-09-11 Pillai Ananthan K. Information replication system mounting partial database replications
US7076685B2 (en) * 2001-06-28 2006-07-11 Emc Corporation Information replication system mounting partial database replications
US20060200698A1 (en) * 2001-06-28 2006-09-07 Pillai Ananthan K Information replication system mounting partial database replications
US20050172093A1 (en) * 2001-07-06 2005-08-04 Computer Associates Think, Inc. Systems and methods of information backup
US9002910B2 (en) 2001-07-06 2015-04-07 Ca, Inc. Systems and methods of information backup
US7552214B2 (en) 2001-07-06 2009-06-23 Computer Associates Think, Inc. Systems and methods of information backup
US20050055444A1 (en) * 2001-07-06 2005-03-10 Krishnan Venkatasubramanian Systems and methods of information backup
US8370450B2 (en) 2001-07-06 2013-02-05 Ca, Inc. Systems and methods for information backup
US20050038836A1 (en) * 2001-07-06 2005-02-17 Jianxin Wang Systems and methods of information backup
US7734594B2 (en) * 2001-07-06 2010-06-08 Computer Associates Think, Inc. Systems and methods of information backup
US20030033051A1 (en) * 2001-08-09 2003-02-13 John Wilkes Self-disentangling data storage technique
US7761449B2 (en) * 2001-08-09 2010-07-20 Hewlett-Packard Development Company, L.P. Self-disentangling data storage technique
US20080183775A1 (en) * 2001-09-28 2008-07-31 Anand Prahlad System and method for generating and managing quick recovery volumes
US8655846B2 (en) 2001-09-28 2014-02-18 Commvault Systems, Inc. System and method for generating and managing quick recovery volumes
US8055625B2 (en) 2001-09-28 2011-11-08 Commvault Systems, Inc. System and method for generating and managing quick recovery volumes
US8442944B2 (en) 2001-09-28 2013-05-14 Commvault Systems, Inc. System and method for generating and managing quick recovery volumes
US20030110157A1 (en) * 2001-10-02 2003-06-12 Nobuhiro Maki Exclusive access control apparatus and method
US7243229B2 (en) * 2001-10-02 2007-07-10 Hitachi, Ltd. Exclusive access control apparatus and method
US7725588B2 (en) * 2001-11-02 2010-05-25 Nec Corporation Switching method and switch device
US20030097454A1 (en) * 2001-11-02 2003-05-22 Nec Corporation Switching method and switch device
US20050076063A1 (en) * 2001-11-08 2005-04-07 Fujitsu Limited File system for enabling the restoration of a defective file
US7246139B2 (en) * 2001-11-08 2007-07-17 Fujitsu Limited File system for enabling the restoration of a defective file
US6898669B2 (en) * 2001-12-18 2005-05-24 Kabushiki Kaisha Toshiba Disk array apparatus and data backup method used therein
US20030126327A1 (en) * 2001-12-28 2003-07-03 Pesola Troy Raymond Volume translation apparatus and method
US7340645B1 (en) 2001-12-28 2008-03-04 Storage Technology Corporation Data management with virtual recovery mapping and backward moves
US7007152B2 (en) 2001-12-28 2006-02-28 Storage Technology Corporation Volume translation apparatus and method
US6938180B1 (en) * 2001-12-31 2005-08-30 Emc Corporation Logical restores of physically backed up data
US20030126247A1 (en) * 2002-01-02 2003-07-03 Exanet Ltd. Apparatus and method for file backup using multiple backup devices
US6732244B2 (en) 2002-01-22 2004-05-04 International Business Machines Corporation Instant virtual copy technique with expedited creation of backup dataset inventory from source dataset inventory
US6684308B2 (en) * 2002-01-31 2004-01-27 Mirapoint, Inc. Method and system for providing direct access recovery using seekable tape device
US20030145180A1 (en) * 2002-01-31 2003-07-31 Mcneil Daniel D. Method and system for providing direct access recovery using seekable tape device
US20030177324A1 (en) * 2002-03-14 2003-09-18 International Business Machines Corporation Method, system, and program for maintaining backup copies of files in a backup storage device
US6880051B2 (en) 2002-03-14 2005-04-12 International Business Machines Corporation Method, system, and program for maintaining backup copies of files in a backup storage device
US7818299B1 (en) 2002-03-19 2010-10-19 Netapp, Inc. System and method for determining changes in two snapshots and for transmitting changes to a destination snapshot
US20050114297A1 (en) * 2002-03-22 2005-05-26 Edwards John K. System and method for performing an on-line check of a file system
US7499959B2 (en) * 2002-03-22 2009-03-03 Network Appliance, Inc. System and method for performing an on-line check of a file system
US20040002999A1 (en) * 2002-03-25 2004-01-01 David Leroy Rand Creating a backup volume using a data profile of a host volume
US7185031B2 (en) * 2002-03-25 2007-02-27 Quantum Corporation Creating a backup volume using a data profile of a host volume
US9182969B1 (en) * 2002-04-03 2015-11-10 Symantec Corporation Using disassociated images for computer and storage resource management
US6848037B2 (en) * 2002-04-08 2005-01-25 International Business Machines Corporation Data processing arrangement and method
US6857053B2 (en) * 2002-04-10 2005-02-15 International Business Machines Corporation Method, system, and program for backing up objects by creating groups of objects
US20030196052A1 (en) * 2002-04-10 2003-10-16 International Business Machines Corporation Method, system, and program for grouping objects
US7203865B2 (en) 2002-04-23 2007-04-10 Gateway Inc. Application level and BIOS level disaster recovery
US20030200482A1 (en) * 2002-04-23 2003-10-23 Gateway, Inc. Application level and BIOS level disaster recovery
US6785789B1 (en) 2002-05-10 2004-08-31 Veritas Operating Corporation Method and apparatus for creating a virtual data copy
US7818532B2 (en) 2002-06-28 2010-10-19 Microsoft Corporation Method and system for creating and restoring an image file
US20070112820A1 (en) * 2002-06-28 2007-05-17 Witt Wesley A Transporting Image Files
US20080016304A1 (en) * 2002-06-28 2008-01-17 Microsoft Corporation Method and System For Creating and Restoring An Image File
US7877567B2 (en) 2002-06-28 2011-01-25 Microsoft Corporation Transporting image files
WO2004010242A2 (en) * 2002-07-23 2004-01-29 Object Interactive Technologies Limited Software tool to detect and restore damaged or lost software components
WO2004010242A3 (en) * 2002-07-23 2008-02-21 Object Interactive Technologie Software tool to detect and restore damaged or lost software components
US20040019878A1 (en) * 2002-07-23 2004-01-29 Sreekrishna Kotnur Software tool to detect and restore damaged or lost software components
US20040030668A1 (en) * 2002-08-09 2004-02-12 Brian Pawlowski Multi-protocol storage appliance that provides integrated support for file and block access protocols
US7873700B2 (en) 2002-08-09 2011-01-18 Netapp, Inc. Multi-protocol storage appliance that provides integrated support for file and block access protocols
US6912631B1 (en) 2002-09-25 2005-06-28 Veritas Operating Corporation Method and apparatus for restoring a corrupted data volume
US7293146B1 (en) 2002-09-25 2007-11-06 Symantec Corporation Method and apparatus for restoring a corrupted data volume
US6938135B1 (en) 2002-10-04 2005-08-30 Veritas Operating Corporation Incremental backup of a data volume
US20110131187A1 (en) * 2002-10-07 2011-06-02 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US7568080B2 (en) 2002-10-07 2009-07-28 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US8140794B2 (en) 2002-10-07 2012-03-20 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US7873806B2 (en) 2002-10-07 2011-01-18 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US20090307449A1 (en) * 2002-10-07 2009-12-10 Anand Prahlad Snapshot storage and management system with indexing and user interface
US20040250033A1 (en) * 2002-10-07 2004-12-09 Anand Prahlad System and method for managing stored data
US8898411B2 (en) 2002-10-07 2014-11-25 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US8433872B2 (en) 2002-10-07 2013-04-30 Commvault Systems, Inc. Snapshot storage and management system with indexing and user interface
US7689861B1 (en) 2002-10-09 2010-03-30 Xpoint Technologies, Inc. Data processing recovery system and method spanning multiple operating system
US20070046791A1 (en) * 2002-10-09 2007-03-01 Xpoint Technologies, Inc. Method and system for deploying a software image
US8336044B2 (en) 2002-10-09 2012-12-18 Rpx Corporation Method and system for deploying a software image
US7925622B2 (en) 2002-10-10 2011-04-12 Netapp, Inc. System and method for file system snapshot of a virtual logical disk
US20080147755A1 (en) * 2002-10-10 2008-06-19 Chapman Dennis E System and method for file system snapshot of a virtual logical disk
US7174420B2 (en) * 2002-10-22 2007-02-06 Microsoft Corporation Transaction-safe FAT file system
US8156165B2 (en) 2002-10-22 2012-04-10 Microsoft Corporation Transaction-safe FAT file system
US7363540B2 (en) 2002-10-22 2008-04-22 Microsoft Corporation Transaction-safe FAT file system improvements
US20080177939A1 (en) * 2002-10-22 2008-07-24 Microsoft Corporation Transaction-safe fat file system improvements
US20040078704A1 (en) * 2002-10-22 2004-04-22 Malueg Michael D. Transaction-safe FAT file system
US8738845B2 (en) 2002-10-22 2014-05-27 Microsoft Corporation Transaction-safe fat file system improvements
US8024507B2 (en) 2002-10-22 2011-09-20 Microsoft Corporation Transaction-safe FAT file system improvements
CN1316779C (en) * 2002-12-05 2007-05-16 华为技术有限公司 A data disaster recovery solution method producing no interlinked data reproduction
US6880053B2 (en) 2002-12-19 2005-04-12 Veritas Operating Corporation Instant refresh of a data volume copy
US6907507B1 (en) 2002-12-19 2005-06-14 Veritas Operating Corporation Tracking in-progress writes through use of multi-column bitmaps
US20040123031A1 (en) * 2002-12-19 2004-06-24 Veritas Software Corporation Instant refresh of a data volume copy
US7337288B2 (en) 2002-12-19 2008-02-26 Symantec Operating Corporation Instant refresh of a data volume copy
US7089385B1 (en) 2002-12-19 2006-08-08 Veritas Operating Corporation Tracking in-progress writes through use of multi-column bitmaps
US6996687B1 (en) 2002-12-20 2006-02-07 Veritas Operating Corporation Method of optimizing the space and improving the write performance of volumes with multiple virtual copies
US6910111B1 (en) 2002-12-20 2005-06-21 Veritas Operating Corporation Volume restoration using an accumulator map
US6978354B1 (en) 2002-12-20 2005-12-20 Veritas Operating Corporation Method for creating a virtual data copy of a volume being restored
US7120823B2 (en) * 2003-04-17 2006-10-10 International Business Machines Corporation Method and apparatus for recovering logical partition configuration data
US20040210792A1 (en) * 2003-04-17 2004-10-21 International Business Machines Corporation Method and apparatus for recovering logical partition configuration data
US7117385B2 (en) 2003-04-21 2006-10-03 International Business Machines Corporation Method and apparatus for recovery of partitions in a logical partitioned data processing system
US20040210793A1 (en) * 2003-04-21 2004-10-21 International Business Machines Corporation Method and apparatus for recovery of partitions in a logical partitioned data processing system
US20040236984A1 (en) * 2003-05-20 2004-11-25 Yasuo Yamasaki Data backup method in a network storage system
US7039778B2 (en) * 2003-05-20 2006-05-02 Hitachi, Ltd. Data backup method in a network storage system
US20040255183A1 (en) * 2003-05-30 2004-12-16 Toshinari Takahashi Data management method and apparatus and program
US7523276B1 (en) * 2003-06-30 2009-04-21 Veritas Software Corporation Synchronization of selected data from snapshots stored on different storage volumes
US7103737B1 (en) 2003-07-01 2006-09-05 Veritas Operating Corporation Flexible hierarchy of relationships and operations in data volumes
US7664793B1 (en) 2003-07-01 2010-02-16 Symantec Operating Corporation Transforming unrelated data volumes into related data volumes
US6938136B2 (en) 2003-07-14 2005-08-30 International Business Machines Corporation Method, system, and program for performing an input/output operation with respect to a logical storage device
US20050015415A1 (en) * 2003-07-14 2005-01-20 International Business Machines Corporation Method, system, and program for performing an input/output operation with respect to a logical storage device
US7024527B1 (en) * 2003-07-18 2006-04-04 Veritas Operating Corporation Data restore mechanism
US7318135B1 (en) * 2003-07-22 2008-01-08 Acronis Inc. System and method for using file system snapshots for online data backup
US7085895B2 (en) 2003-09-05 2006-08-01 International Business Machines Corporation Apparatus, system, and method flushing data from a cache to secondary storage
US20050055512A1 (en) * 2003-09-05 2005-03-10 Kishi Gregory Tad Apparatus, system, and method flushing data from a cache to secondary storage
US7783611B1 (en) 2003-11-10 2010-08-24 Netapp, Inc. System and method for managing file metadata during consistency points
US7979402B1 (en) 2003-11-10 2011-07-12 Netapp, Inc. System and method for managing file data during consistency points
US7739250B1 (en) 2003-11-10 2010-06-15 Netapp, Inc. System and method for managing file data during consistency points
US7401093B1 (en) 2003-11-10 2008-07-15 Network Appliance, Inc. System and method for managing file data during consistency points
US7721062B1 (en) 2003-11-10 2010-05-18 Netapp, Inc. Method for detecting leaked buffer writes across file system consistency points
US8645320B2 (en) 2003-11-13 2014-02-04 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US20050193026A1 (en) * 2003-11-13 2005-09-01 Anand Prahlad System and method for performing an image level snapshot and for restoring partial volume data
US9619341B2 (en) 2003-11-13 2017-04-11 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US20070185940A1 (en) * 2003-11-13 2007-08-09 Anand Prahlad System and method for performing an image level snapshot and for restoring partial volume data
US8195623B2 (en) 2003-11-13 2012-06-05 Commvault Systems, Inc. System and method for performing a snapshot and for restoring data
US8583594B2 (en) 2003-11-13 2013-11-12 Commvault Systems, Inc. System and method for performing integrated storage operations
US7840533B2 (en) 2003-11-13 2010-11-23 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US7539707B2 (en) 2003-11-13 2009-05-26 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US8190565B2 (en) 2003-11-13 2012-05-29 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US20090240748A1 (en) * 2003-11-13 2009-09-24 Anand Prahlad System and method for performing an image level snapshot and for restoring partial volume data
US9208160B2 (en) 2003-11-13 2015-12-08 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US9405631B2 (en) 2003-11-13 2016-08-02 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US8886595B2 (en) 2003-11-13 2014-11-11 Commvault Systems, Inc. System and method for performing an image level snapshot and for restoring partial volume data
US7606841B1 (en) 2003-12-29 2009-10-20 Symantec Operating Corporation Coordinated dirty block tracking
US7039661B1 (en) 2003-12-29 2006-05-02 Veritas Operating Corporation Coordinated dirty block tracking
US7657717B1 (en) 2004-02-02 2010-02-02 Symantec Operating Corporation Coherently sharing any form of instant snapshots separately from base volumes
US8086572B2 (en) 2004-03-30 2011-12-27 International Business Machines Corporation Method, system, and program for restoring data to a file
US7409494B2 (en) 2004-04-30 2008-08-05 Network Appliance, Inc. Extension of write anywhere file system layout
US8533201B2 (en) 2004-04-30 2013-09-10 Netapp, Inc. Extension of write anywhere file layout write allocation
US7970770B2 (en) 2004-04-30 2011-06-28 Netapp, Inc. Extension of write anywhere file layout write allocation
US8990539B2 (en) 2004-04-30 2015-03-24 Netapp, Inc. Extension of write anywhere file system layout
US8583892B2 (en) 2004-04-30 2013-11-12 Netapp, Inc. Extension of write anywhere file system layout
US7409511B2 (en) 2004-04-30 2008-08-05 Network Appliance, Inc. Cloning technique for efficiently creating a copy of a volume in a storage system
US9430493B2 (en) 2004-04-30 2016-08-30 Netapp, Inc. Extension of write anywhere file layout write allocation
US8903830B2 (en) 2004-04-30 2014-12-02 Netapp, Inc. Extension of write anywhere file layout write allocation
US20080155220A1 (en) * 2004-04-30 2008-06-26 Network Appliance, Inc. Extension of write anywhere file layout write allocation
US20050246397A1 (en) * 2004-04-30 2005-11-03 Edwards John K Cloning technique for efficiently creating a copy of a volume in a storage system
US20110225364A1 (en) * 2004-04-30 2011-09-15 Edwards John K Extension of write anywhere file layout write allocation
US8099576B1 (en) 2004-04-30 2012-01-17 Netapp, Inc. Extension of write anywhere file system layout
US20050246401A1 (en) * 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file system layout
US20050246382A1 (en) * 2004-04-30 2005-11-03 Edwards John K Extension of write anywhere file layout write allocation
CN1329838C (en) * 2004-05-13 2007-08-01 国际商业机器公司 Method and apparatus to eliminate interpartition covert storage channel and partition analysis
US7254682B1 (en) 2004-07-28 2007-08-07 Symantec Corporation Selective file and folder snapshot image creation
US20060026432A1 (en) * 2004-07-30 2006-02-02 Weirauch Charles R Drive tracking system for removable media
TWI384359B (en) * 2004-07-30 2013-02-01 Hewlett Packard Development Co Method of recording on removable storage medium and storage drive adapted to receive removable storage medium
US10402277B2 (en) 2004-11-15 2019-09-03 Commvault Systems, Inc. Using a snapshot as a data source
US8959299B2 (en) * 2004-11-15 2015-02-17 Commvault Systems, Inc. Using a snapshot as a data source
US20100070726A1 (en) * 2004-11-15 2010-03-18 David Ngo Using a snapshot as a data source
US10303650B2 (en) 2004-12-17 2019-05-28 Microsoft Technology Licensing, Llc Contiguous file allocation in an extensible file system
US9575972B2 (en) 2004-12-17 2017-02-21 Microsoft Technology Licensing, Llc Contiguous file allocation in an extensible file system
US20090164539A1 (en) * 2004-12-17 2009-06-25 Microsoft Corporation Contiguous file allocation in an extensible file system
US10474641B2 (en) 2004-12-17 2019-11-12 Microsoft Technology Licensing, Llc Extensible file system
US10614032B2 (en) 2004-12-17 2020-04-07 Microsoft Technology Licensing, Llc Quick filename lookup using name hash
US8606830B2 (en) 2004-12-17 2013-12-10 Microsoft Corporation Contiguous file allocation in an extensible file system
US20110212549A1 (en) * 2005-02-11 2011-09-01 Chen Kong C Apparatus and method for predetermined component placement to a target platform
US9152503B1 (en) 2005-03-16 2015-10-06 Netapp, Inc. System and method for efficiently calculating storage required to split a clone volume
US7757056B1 (en) 2005-03-16 2010-07-13 Netapp, Inc. System and method for efficiently calculating storage required to split a clone volume
US20060224642A1 (en) * 2005-04-01 2006-10-05 Microsoft Corporation Production server to data protection server mapping
US7483926B2 (en) * 2005-04-01 2009-01-27 Microsoft Corporation Production server to data protection server mapping
US20070022138A1 (en) * 2005-07-22 2007-01-25 Pranoop Erasani Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US7653682B2 (en) 2005-07-22 2010-01-26 Netapp, Inc. Client failure fencing mechanism for fencing network file system data in a host-cluster environment
US20070027933A1 (en) * 2005-07-28 2007-02-01 Advanced Micro Devices, Inc. Resilient system partition for personal internet communicator
US7991850B2 (en) * 2005-07-28 2011-08-02 Advanced Micro Devices, Inc. Resilient system partition for personal internet communicator
US8914665B2 (en) 2005-10-31 2014-12-16 Hewlett-Packard Development Company, L.P. Reading or storing boot data in auxiliary memory of a tape cartridge
US20070101113A1 (en) * 2005-10-31 2007-05-03 Evans Rhys W Data back-up and recovery
US9009114B1 (en) * 2005-10-31 2015-04-14 Symantec Operating Corporation Version mapped incremental backups
US8935281B1 (en) * 2005-10-31 2015-01-13 Symantec Operating Corporation Optimized content search of files
US9158781B1 (en) 2005-10-31 2015-10-13 Symantec Operating Corporation Version mapped incremental backups with version creation condition
US7917481B1 (en) 2005-10-31 2011-03-29 Symantec Operating Corporation File-system-independent malicious content detection
US20120005163A1 (en) * 2005-11-04 2012-01-05 Oracle America, Inc. Block-based incremental backup
US20090015735A1 (en) * 2005-11-10 2009-01-15 Michael David Simmonds Display source
US9971657B2 (en) 2005-12-19 2018-05-15 Commvault Systems, Inc. Systems and methods for performing data replication
US9639294B2 (en) 2005-12-19 2017-05-02 Commvault Systems, Inc. Systems and methods for performing data replication
US8793221B2 (en) 2005-12-19 2014-07-29 Commvault Systems, Inc. Systems and methods for performing data replication
US20070186068A1 (en) * 2005-12-19 2007-08-09 Agrawal Vijay H Network redirector systems and methods for performing data replication
US9298382B2 (en) 2005-12-19 2016-03-29 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US7962709B2 (en) 2005-12-19 2011-06-14 Commvault Systems, Inc. Network redirector systems and methods for performing data replication
US8725694B2 (en) 2005-12-19 2014-05-13 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US8271830B2 (en) 2005-12-19 2012-09-18 Commvault Systems, Inc. Rolling cache configuration for a data replication system
US8935210B2 (en) 2005-12-19 2015-01-13 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US8655850B2 (en) 2005-12-19 2014-02-18 Commvault Systems, Inc. Systems and methods for resynchronizing information
US8656218B2 (en) 2005-12-19 2014-02-18 Commvault Systems, Inc. Memory configuration for data replication system including identification of a subsequent log entry by a destination computer
US8463751B2 (en) 2005-12-19 2013-06-11 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US9208210B2 (en) 2005-12-19 2015-12-08 Commvault Systems, Inc. Rolling cache configuration for a data replication system
US8121983B2 (en) 2005-12-19 2012-02-21 Commvault Systems, Inc. Systems and methods for monitoring application data in a data replication system
US8024294B2 (en) 2005-12-19 2011-09-20 Commvault Systems, Inc. Systems and methods for performing replication copy storage operations
US9002799B2 (en) 2005-12-19 2015-04-07 Commvault Systems, Inc. Systems and methods for resynchronizing information
US7962455B2 (en) 2005-12-19 2011-06-14 Commvault Systems, Inc. Pathname translation in a data replication system
US9020898B2 (en) 2005-12-19 2015-04-28 Commvault Systems, Inc. Systems and methods for performing data replication
US7590660B1 (en) 2006-03-21 2009-09-15 Network Appliance, Inc. Method and system for efficient database cloning
US9344112B2 (en) 2006-04-28 2016-05-17 Ling Zheng Sampling based elimination of duplicate data
US20070255758A1 (en) * 2006-04-28 2007-11-01 Ling Zheng System and method for sampling based elimination of duplicate data
US8165221B2 (en) 2006-04-28 2012-04-24 Netapp, Inc. System and method for sampling based elimination of duplicate data
US9122695B2 (en) 2006-05-23 2015-09-01 Microsoft Technology Licensing, Llc Extending cluster allocations in an extensible file system
US20110113078A1 (en) * 2006-05-23 2011-05-12 Microsoft Corporation Extending Cluster Allocations In An Extensible File System
US8805780B2 (en) 2006-05-23 2014-08-12 Microsoft Corporation Extending cluster allocations in an extensible file system
US8452729B2 (en) 2006-05-23 2013-05-28 Microsoft Corporation Extending cluster allocations in an extensible file system
US10585868B2 (en) 2006-05-23 2020-03-10 Microsoft Technology Licensing, Llc Extending cluster allocations in an extensible file system
US8725772B2 (en) 2006-05-23 2014-05-13 Microsoft Corporation Extending cluster allocations in an extensible file system
US8364732B2 (en) 2006-05-23 2013-01-29 Microsoft Corporation Extending cluster allocations in an extensible file system
US8433677B2 (en) 2006-05-23 2013-04-30 Microsoft Corporation Extending cluster allocations in an extensible file system
US9558223B2 (en) 2006-05-23 2017-01-31 Microsoft Technology Licensing, Llc Extending cluster allocations in an extensible file system
US20070276885A1 (en) * 2006-05-29 2007-11-29 Microsoft Corporation Creating frequent application-consistent backups efficiently
US7613750B2 (en) * 2006-05-29 2009-11-03 Microsoft Corporation Creating frequent application-consistent backups efficiently
US7917916B2 (en) * 2006-06-20 2011-03-29 Lenovo (Singapore) Pte. Ltd IT administrator initiated remote hardware independent imaging technology
US20070294465A1 (en) * 2006-06-20 2007-12-20 Lenovo (Singapore) Pte. Ltd. IT administrator initiated remote hardware independent imaging technology
US20080005141A1 (en) * 2006-06-29 2008-01-03 Ling Zheng System and method for retrieving and using block fingerprints for data deduplication
US8412682B2 (en) * 2006-06-29 2013-04-02 Netapp, Inc. System and method for retrieving and using block fingerprints for data deduplication
US7921077B2 (en) 2006-06-29 2011-04-05 Netapp, Inc. System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US8296260B2 (en) 2006-06-29 2012-10-23 Netapp, Inc. System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US20080005201A1 (en) * 2006-06-29 2008-01-03 Daniel Ting System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US20110035357A1 (en) * 2006-06-29 2011-02-10 Daniel Ting System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US9003374B2 (en) 2006-07-27 2015-04-07 Commvault Systems, Inc. Systems and methods for continuous data replication
US8726242B2 (en) 2006-07-27 2014-05-13 Commvault Systems, Inc. Systems and methods for continuous data replication
US8301673B2 (en) 2006-12-29 2012-10-30 Netapp, Inc. System and method for performing distributed consistency verification of a clustered file system
US20080189343A1 (en) * 2006-12-29 2008-08-07 Robert Wyckoff Hyer System and method for performing distributed consistency verification of a clustered file system
US20100049776A1 (en) * 2007-01-16 2010-02-25 Microsoft Corporation FAT directory structure for use in transaction safe file system
US8024383B2 (en) 2007-01-16 2011-09-20 Microsoft Corporation FAT directory structure for use in transaction safe file system
US9141630B2 (en) 2007-01-16 2015-09-22 Microsoft Technology Licensing, Llc Fat directory structure for use in transaction safe file system
US20080172426A1 (en) * 2007-01-16 2008-07-17 Microsoft Corporation Storage system format for transaction safe file system
US20080172425A1 (en) * 2007-01-16 2008-07-17 Microsoft Corporation FAT directory structure for use in transaction safe file system
US7747664B2 (en) 2007-01-16 2010-06-29 Microsoft Corporation Storage system format for transaction safe file system
US8499013B2 (en) 2007-01-16 2013-07-30 Microsoft Corporation FAT directory structure for use in transaction safe file system
US20100217788A1 (en) * 2007-01-16 2010-08-26 Microsoft Corporation Storage system format for transaction safe file system
US9239761B2 (en) 2007-01-16 2016-01-19 Microsoft Technology Licensing, Llc Storage system format for transaction safe file system
US8001165B2 (en) 2007-01-16 2011-08-16 Microsoft Corporation Storage system format for transaction safe file system
US7613738B2 (en) 2007-01-16 2009-11-03 Microsoft Corporation FAT directory structure for use in transaction safe file system
US8799051B2 (en) 2007-03-09 2014-08-05 Commvault Systems, Inc. System and method for automating customer-validated statement of work for a data storage environment
US8428995B2 (en) 2007-03-09 2013-04-23 Commvault Systems, Inc. System and method for automating customer-validated statement of work for a data storage environment
US8290808B2 (en) 2007-03-09 2012-10-16 Commvault Systems, Inc. System and method for automating customer-validated statement of work for a data storage environment
US8260748B1 (en) * 2007-03-27 2012-09-04 Symantec Corporation Method and apparatus for capturing data from a backup image
US8219821B2 (en) 2007-03-27 2012-07-10 Netapp, Inc. System and method for signature based data container recognition
US7882304B2 (en) 2007-04-27 2011-02-01 Netapp, Inc. System and method for efficient updates of sequential block storage
US8219749B2 (en) 2007-04-27 2012-07-10 Netapp, Inc. System and method for efficient updates of sequential block storage
US7827350B1 (en) 2007-04-27 2010-11-02 Netapp, Inc. Method and system for promoting a snapshot in a distributed file system
US20080270690A1 (en) * 2007-04-27 2008-10-30 English Robert M System and method for efficient updates of sequential block storage
US20090034377A1 (en) * 2007-04-27 2009-02-05 English Robert M System and method for efficient updates of sequential block storage
US8762345B2 (en) 2007-05-31 2014-06-24 Netapp, Inc. System and method for accelerating anchor point detection
US20080301134A1 (en) * 2007-05-31 2008-12-04 Miller Steven C System and method for accelerating anchor point detection
US9069787B2 (en) 2007-05-31 2015-06-30 Netapp, Inc. System and method for accelerating anchor point detection
US7996636B1 (en) 2007-11-06 2011-08-09 Netapp, Inc. Uniquely identifying block context signatures in a storage volume hierarchy
US8725986B1 (en) 2008-04-18 2014-05-13 Netapp, Inc. System and method for volume block number to disk block number mapping
US9280457B2 (en) 2008-04-18 2016-03-08 Netapp, Inc. System and method for volume block number to disk block number mapping
US11321181B2 (en) 2008-06-18 2022-05-03 Commvault Systems, Inc. Data protection scheduling, such as providing a flexible backup window in a data protection system
US11016859B2 (en) 2008-06-24 2021-05-25 Commvault Systems, Inc. De-duplication systems and methods for application-specific data
US8615495B1 (en) * 2008-08-13 2013-12-24 Symantec Corporation Techniques for providing a differential backup from a storage image
US20100057755A1 (en) * 2008-08-29 2010-03-04 Red Hat Corporation File system with flexible inode structures
US10997035B2 (en) 2008-09-16 2021-05-04 Commvault Systems, Inc. Using a snapshot as a data source
US20100082714A1 (en) * 2008-09-30 2010-04-01 Microsoft Corporation Nested file system support
US8990161B1 (en) * 2008-09-30 2015-03-24 EMC Corporation System and method for single segment backup
US8234316B2 (en) * 2008-09-30 2012-07-31 Microsoft Corporation Nested file system support
US20110022811A1 (en) * 2008-10-02 2011-01-27 Hitachi Software Engineering Co., Ltd. Information backup/restoration processing apparatus and information backup/restoration processing system
CN102150124A (en) * 2008-10-02 2011-08-10 日立系统解决方案有限公司 Information backup/restoration processing apparatus and information backup/restoration processing system
US9396244B2 (en) 2008-12-10 2016-07-19 Commvault Systems, Inc. Systems and methods for managing replicated database data
US8204859B2 (en) 2008-12-10 2012-06-19 Commvault Systems, Inc. Systems and methods for managing replicated database data
US9495382B2 (en) 2008-12-10 2016-11-15 Commvault Systems, Inc. Systems and methods for performing discrete data replication
US9047357B2 (en) 2008-12-10 2015-06-02 Commvault Systems, Inc. Systems and methods for managing replicated database data in dirty and clean shutdown states
US20100145909A1 (en) * 2008-12-10 2010-06-10 Commvault Systems, Inc. Systems and methods for managing replicated database data
US8666942B2 (en) 2008-12-10 2014-03-04 Commvault Systems, Inc. Systems and methods for managing snapshots of replicated databases
US20100174683A1 (en) * 2009-01-08 2010-07-08 Bryan Wayne Freeman Individual object restore
US8285680B2 (en) * 2009-01-08 2012-10-09 International Business Machines Corporation Individual object restore
US9170883B2 (en) 2009-02-09 2015-10-27 Netapp, Inc. Online data consistency checking in a network storage system with optional committal of remedial changes
US8793223B1 (en) 2009-02-09 2014-07-29 Netapp, Inc. Online data consistency checking in a network storage system with optional committal of remedial changes
US9031908B1 (en) 2009-03-31 2015-05-12 Symantec Corporation Method and apparatus for simultaneous comparison of multiple backup sets maintained in a computer system
US10540327B2 (en) 2009-07-08 2020-01-21 Commvault Systems, Inc. Synchronized data deduplication
US11288235B2 (en) 2009-07-08 2022-03-29 Commvault Systems, Inc. Synchronized data deduplication
US9092500B2 (en) 2009-09-03 2015-07-28 Commvault Systems, Inc. Utilizing snapshots for access to databases and other applications
US10831608B2 (en) 2009-09-14 2020-11-10 Commvault Systems, Inc. Systems and methods for performing data management operations using snapshots
US9268602B2 (en) 2009-09-14 2016-02-23 Commvault Systems, Inc. Systems and methods for performing data management operations using snapshots
US8595191B2 (en) 2009-12-31 2013-11-26 Commvault Systems, Inc. Systems and methods for performing data management operations using snapshots
US20110161295A1 (en) * 2009-12-31 2011-06-30 David Ngo Systems and methods for analyzing snapshots
US9298559B2 (en) 2009-12-31 2016-03-29 Commvault Systems, Inc. Systems and methods for analyzing snapshots
US10379957B2 (en) 2009-12-31 2019-08-13 Commvault Systems, Inc. Systems and methods for analyzing snapshots
US20110161299A1 (en) * 2009-12-31 2011-06-30 Anand Prahlad Systems and methods for performing data management operations using snapshots
US8433682B2 (en) 2009-12-31 2013-04-30 Commvault Systems, Inc. Systems and methods for analyzing snapshots
US8868494B2 (en) 2010-03-29 2014-10-21 Commvault Systems, Inc. Systems and methods for selective data replication
US8504517B2 (en) 2010-03-29 2013-08-06 Commvault Systems, Inc. Systems and methods for selective data replication
US9002785B2 (en) 2010-03-30 2015-04-07 Commvault Systems, Inc. Stubbing systems and methods in a data replication environment
US8504515B2 (en) 2010-03-30 2013-08-06 Commvault Systems, Inc. Stubbing systems and methods in a data replication environment
US8352422B2 (en) 2010-03-30 2013-01-08 Commvault Systems, Inc. Data restore systems and methods in a replication environment
US8725698B2 (en) 2010-03-30 2014-05-13 Commvault Systems, Inc. Stub file prioritization in a data replication system
US9483511B2 (en) 2010-03-30 2016-11-01 Commvault Systems, Inc. Stubbing systems and methods in a data replication environment
US8572038B2 (en) 2010-05-28 2013-10-29 Commvault Systems, Inc. Systems and methods for performing data replication
US8489656B2 (en) 2010-05-28 2013-07-16 Commvault Systems, Inc. Systems and methods for performing data replication
US8589347B2 (en) 2010-05-28 2013-11-19 Commvault Systems, Inc. Systems and methods for performing data replication
US8745105B2 (en) 2010-05-28 2014-06-03 Commvault Systems, Inc. Systems and methods for performing data replication
US20220156155A1 (en) * 2010-06-14 2022-05-19 Veeam Software Ag Selective processing of file system objects for image level backups
US20110307657A1 (en) * 2010-06-14 2011-12-15 Veeam Software International Ltd. Selective Processing of File System Objects for Image Level Backups
US9507670B2 (en) * 2010-06-14 2016-11-29 Veeam Software Ag Selective processing of file system objects for image level backups
US11789823B2 (en) * 2010-06-14 2023-10-17 Veeam Software Ag Selective processing of file system objects for image level backups
US20170075766A1 (en) * 2010-06-14 2017-03-16 Veeam Software Ag Selective processing of file system objects for image level backups
US20190332489A1 (en) * 2010-06-14 2019-10-31 Veeam Software Ag Selective Processing of File System Objects for Image Level Backups
US11068349B2 (en) * 2010-06-14 2021-07-20 Veeam Software Ag Selective processing of file system objects for image level backups
US8332689B2 (en) 2010-07-19 2012-12-11 Veeam Software International Ltd. Systems, methods, and computer program products for instant recovery of image level backups
US9104624B2 (en) 2010-07-19 2015-08-11 Veeam Software Ag Systems, methods, and computer program products for instant recovery of image level backups
US8566640B2 (en) 2010-07-19 2013-10-22 Veeam Software Ag Systems, methods, and computer program products for instant recovery of image level backups
US10126973B2 (en) 2010-09-30 2018-11-13 Commvault Systems, Inc. Systems and methods for retaining and using data block signatures in data protection operations
US9898225B2 (en) 2010-09-30 2018-02-20 Commvault Systems, Inc. Content aligned block-based deduplication
US10191816B2 (en) 2010-12-14 2019-01-29 Commvault Systems, Inc. Client-side repository in a networked deduplicated storage system
US11422976B2 (en) 2010-12-14 2022-08-23 Commvault Systems, Inc. Distributed deduplicated storage system
US10740295B2 (en) 2010-12-14 2020-08-11 Commvault Systems, Inc. Distributed deduplicated storage system
US11169888B2 (en) 2010-12-14 2021-11-09 Commvault Systems, Inc. Client-side repository in a networked deduplicated storage system
US9898478B2 (en) 2010-12-14 2018-02-20 Commvault Systems, Inc. Distributed deduplicated storage system
JP2012133769A (en) * 2010-12-17 2012-07-12 Internatl Business Mach Corp <Ibm> Computer program, system and method for restoring deduplicated data objects from sequential backup devices
US8719767B2 (en) 2011-03-31 2014-05-06 Commvault Systems, Inc. Utilizing snapshots to provide builds to developer computing devices
US8565545B1 (en) * 2011-04-07 2013-10-22 Symantec Corporation Systems and methods for restoring images
US9335931B2 (en) * 2011-07-01 2016-05-10 Futurewei Technologies, Inc. System and method for making snapshots of storage devices
US20130007389A1 (en) * 2011-07-01 2013-01-03 Futurewei Technologies, Inc. System and Method for Making Snapshots of Storage Devices
US9298715B2 (en) 2012-03-07 2016-03-29 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9471578B2 (en) 2012-03-07 2016-10-18 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9898371B2 (en) 2012-03-07 2018-02-20 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US9928146B2 (en) 2012-03-07 2018-03-27 Commvault Systems, Inc. Data storage system utilizing proxy device for storage operations
US10698632B2 (en) 2012-04-23 2020-06-30 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US11269543B2 (en) 2012-04-23 2022-03-08 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US9928002B2 (en) 2012-04-23 2018-03-27 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US9342537B2 (en) 2012-04-23 2016-05-17 Commvault Systems, Inc. Integrated snapshot interface for a data storage system
US9858156B2 (en) 2012-06-13 2018-01-02 Commvault Systems, Inc. Dedicated client-side signature generator in a networked storage system
US10387269B2 (en) 2012-06-13 2019-08-20 Commvault Systems, Inc. Dedicated client-side signature generator in a networked storage system
US10956275B2 (en) 2012-06-13 2021-03-23 Commvault Systems, Inc. Collaborative restore in a networked storage system
US10176053B2 (en) 2012-06-13 2019-01-08 Commvault Systems, Inc. Collaborative restore in a networked storage system
US9690666B1 (en) * 2012-07-02 2017-06-27 Veritas Technologies Llc Incremental backup operations in a transactional file system
CN102880522B (en) * 2012-09-21 2014-12-31 中国人民解放军国防科学技术大学 Hardware fault-oriented method and device for correcting faults in key system files
CN102880522A (en) * 2012-09-21 2013-01-16 中国人民解放军国防科学技术大学 Hardware fault-oriented method and device for correcting faults in key system files
US9569311B2 (en) 2012-10-01 2017-02-14 Hitachi, Ltd. Computer system for backing up data
US9336226B2 (en) 2013-01-11 2016-05-10 Commvault Systems, Inc. Criteria-based data synchronization management
US11157450B2 (en) 2013-01-11 2021-10-26 Commvault Systems, Inc. High availability distributed deduplicated storage system
US11847026B2 (en) 2013-01-11 2023-12-19 Commvault Systems, Inc. Single snapshot for multiple agents
US9430491B2 (en) 2013-01-11 2016-08-30 Commvault Systems, Inc. Request-based data synchronization management
US10229133B2 (en) 2013-01-11 2019-03-12 Commvault Systems, Inc. High availability distributed deduplicated storage system
US10853176B2 (en) 2013-01-11 2020-12-01 Commvault Systems, Inc. Single snapshot for multiple agents
US9886346B2 (en) 2013-01-11 2018-02-06 Commvault Systems, Inc. Single snapshot for multiple agents
US9262435B2 (en) 2013-01-11 2016-02-16 Commvault Systems, Inc. Location-based data synchronization management
US9639426B2 (en) 2014-01-24 2017-05-02 Commvault Systems, Inc. Single snapshot for multiple applications
US9753812B2 (en) 2014-01-24 2017-09-05 Commvault Systems, Inc. Generating mapping information for single snapshot for multiple applications
US10671484B2 (en) 2014-01-24 2020-06-02 Commvault Systems, Inc. Single snapshot for multiple applications
US9632874B2 (en) 2014-01-24 2017-04-25 Commvault Systems, Inc. Database application backup in single snapshot for multiple applications
US9495251B2 (en) 2014-01-24 2016-11-15 Commvault Systems, Inc. Snapshot readiness checking and reporting
US10223365B2 (en) 2014-01-24 2019-03-05 Commvault Systems, Inc. Snapshot readiness checking and reporting
US10942894B2 (en) 2014-01-24 2021-03-09 Commvault Systems, Inc. Operation readiness checking and reporting
US9892123B2 (en) 2014-01-24 2018-02-13 Commvault Systems, Inc. Snapshot readiness checking and reporting
US10572444B2 (en) 2014-01-24 2020-02-25 Commvault Systems, Inc. Operation readiness checking and reporting
US11119984B2 (en) 2014-03-17 2021-09-14 Commvault Systems, Inc. Managing deletions from a deduplication database
US10380072B2 (en) 2014-03-17 2019-08-13 Commvault Systems, Inc. Managing deletions from a deduplication database
US11188504B2 (en) 2014-03-17 2021-11-30 Commvault Systems, Inc. Managing deletions from a deduplication database
US10445293B2 (en) 2014-03-17 2019-10-15 Commvault Systems, Inc. Managing deletions from a deduplication database
US11249858B2 (en) 2014-08-06 2022-02-15 Commvault Systems, Inc. Point-in-time backups of a production application made accessible over fibre channel and/or ISCSI as data sources to a remote application by representing the backups as pseudo-disks operating apart from the production application and its host
US11416341B2 (en) * 2014-08-06 2022-08-16 Commvault Systems, Inc. Systems and methods to reduce application downtime during a restore operation using a pseudo-storage device
US10044803B2 (en) 2014-09-03 2018-08-07 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US10042716B2 (en) 2014-09-03 2018-08-07 Commvault Systems, Inc. Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent
US11245759B2 (en) 2014-09-03 2022-02-08 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US9774672B2 (en) 2014-09-03 2017-09-26 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US10891197B2 (en) 2014-09-03 2021-01-12 Commvault Systems, Inc. Consolidated processing of storage-array commands using a forwarder media agent in conjunction with a snapshot-control media agent
US10419536B2 (en) 2014-09-03 2019-09-17 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US10798166B2 (en) 2014-09-03 2020-10-06 Commvault Systems, Inc. Consolidated processing of storage-array commands by a snapshot-control media agent
US9152507B1 (en) * 2014-09-05 2015-10-06 Storagecraft Technology Corporation Pruning unwanted file content from an image backup
US9934238B2 (en) 2014-10-29 2018-04-03 Commvault Systems, Inc. Accessing a file system using tiered deduplication
US11921675B2 (en) 2014-10-29 2024-03-05 Commvault Systems, Inc. Accessing a file system using tiered deduplication
US11113246B2 (en) 2014-10-29 2021-09-07 Commvault Systems, Inc. Accessing a file system using tiered deduplication
US10474638B2 (en) 2014-10-29 2019-11-12 Commvault Systems, Inc. Accessing a file system using tiered deduplication
US10628266B2 (en) 2014-11-14 2020-04-21 Commvault Systems, Inc. Unified snapshot storage management
US9648105B2 (en) 2014-11-14 2017-05-09 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9996428B2 (en) 2014-11-14 2018-06-12 Commvault Systems, Inc. Unified snapshot storage management
US9448731B2 (en) 2014-11-14 2016-09-20 Commvault Systems, Inc. Unified snapshot storage management
US11507470B2 (en) 2014-11-14 2022-11-22 Commvault Systems, Inc. Unified snapshot storage management
US10521308B2 (en) 2014-11-14 2019-12-31 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9921920B2 (en) 2014-11-14 2018-03-20 Commvault Systems, Inc. Unified snapshot storage management, using an enhanced storage manager and enhanced media agents
US9208817B1 (en) 2015-03-10 2015-12-08 Alibaba Group Holding Limited System and method for determination and reallocation of pending sectors caused by media fatigue
US10067707B2 (en) 2015-03-10 2018-09-04 Alibaba Group Holding Limited System and method for determination and reallocation of pending sectors caused by media fatigue
US11301420B2 (en) 2015-04-09 2022-04-12 Commvault Systems, Inc. Highly reusable deduplication database after disaster recovery
US10339106B2 (en) 2015-04-09 2019-07-02 Commvault Systems, Inc. Highly reusable deduplication database after disaster recovery
US10311150B2 (en) 2015-04-10 2019-06-04 Commvault Systems, Inc. Using a Unix-based file system to manage and serve clones to windows-based computing clients
US11232065B2 (en) 2015-04-10 2022-01-25 Commvault Systems, Inc. Using a Unix-based file system to manage and serve clones to windows-based computing clients
US10481826B2 (en) 2015-05-26 2019-11-19 Commvault Systems, Inc. Replication using deduplicated secondary copy data
US10481824B2 (en) 2015-05-26 2019-11-19 Commvault Systems, Inc. Replication using deduplicated secondary copy data
US10481825B2 (en) 2015-05-26 2019-11-19 Commvault Systems, Inc. Replication using deduplicated secondary copy data
US11314424B2 (en) 2015-07-22 2022-04-26 Commvault Systems, Inc. Restore for block-level backups
US11733877B2 (en) 2015-07-22 2023-08-22 Commvault Systems, Inc. Restore for block-level backups
US10310953B2 (en) 2015-12-30 2019-06-04 Commvault Systems, Inc. System for redirecting requests after a secondary storage computing device failure
US10592357B2 (en) 2015-12-30 2020-03-17 Commvault Systems, Inc. Distributed file system in a distributed deduplication data storage system
US10877856B2 (en) 2015-12-30 2020-12-29 Commvault Systems, Inc. System for redirecting requests after a secondary storage computing device failure
US10061663B2 (en) 2015-12-30 2018-08-28 Commvault Systems, Inc. Rebuilding deduplication data in a distributed deduplication data storage system
US10956286B2 (en) 2015-12-30 2021-03-23 Commvault Systems, Inc. Deduplication replication in a distributed deduplication data storage system
US10255143B2 (en) 2015-12-30 2019-04-09 Commvault Systems, Inc. Deduplication replication in a distributed deduplication data storage system
US11436038B2 (en) 2016-03-09 2022-09-06 Commvault Systems, Inc. Hypervisor-independent block-level live browse for access to backed up virtual machine (VM) data and hypervisor-free file-level recovery (block- level pseudo-mount)
US11238064B2 (en) 2016-03-10 2022-02-01 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US10503753B2 (en) 2016-03-10 2019-12-10 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US11836156B2 (en) 2016-03-10 2023-12-05 Commvault Systems, Inc. Snapshot replication operations based on incremental block change tracking
US9619335B1 (en) 2016-03-11 2017-04-11 Storagecraft Technology Corporation Filtering a directory enumeration of a directory to exclude files with missing file content from an image backup
CN108268380A (en) * 2016-12-30 2018-07-10 北京兆易创新科技股份有限公司 Method and apparatus for reading and writing data
US11321195B2 (en) 2017-02-27 2022-05-03 Commvault Systems, Inc. Hypervisor-independent reference copies of virtual machine payload data based on block-level pseudo-mount
US11294768B2 (en) 2017-06-14 2022-04-05 Commvault Systems, Inc. Live browsing of backed up data residing on cloned disks
US11507533B2 (en) * 2018-02-05 2022-11-22 Huawei Technologies Co., Ltd. Data query method and apparatus
US10740022B2 (en) 2018-02-14 2020-08-11 Commvault Systems, Inc. Block-level live browsing and private writable backup copies using an ISCSI server
US10732885B2 (en) 2018-02-14 2020-08-04 Commvault Systems, Inc. Block-level live browsing and private writable snapshots using an ISCSI server
US11422732B2 (en) 2018-02-14 2022-08-23 Commvault Systems, Inc. Live browsing and private writable environments based on snapshots and/or backup copies provided by an ISCSI server
US10754729B2 (en) * 2018-03-12 2020-08-25 Commvault Systems, Inc. Recovery point objective (RPO) driven backup scheduling in a data storage management system
US10761942B2 (en) * 2018-03-12 2020-09-01 Commvault Systems, Inc. Recovery point objective (RPO) driven backup scheduling in a data storage management system using an enhanced data agent
US11237915B2 (en) 2018-03-12 2022-02-01 Commvault Systems, Inc. Recovery Point Objective (RPO) driven backup scheduling in a data storage management system
US11010258B2 (en) 2018-11-27 2021-05-18 Commvault Systems, Inc. Generating backup copies through interoperability between components of a data storage management system and appliances for data storage and deduplication
US11681587B2 (en) 2018-11-27 2023-06-20 Commvault Systems, Inc. Generating copies through interoperability between a data storage management system and appliances for data storage and deduplication
US10860443B2 (en) 2018-12-10 2020-12-08 Commvault Systems, Inc. Evaluation and reporting of recovery readiness in a data storage management system
US11573866B2 (en) 2018-12-10 2023-02-07 Commvault Systems, Inc. Evaluation and reporting of recovery readiness in a data storage management system
US11698727B2 (en) 2018-12-14 2023-07-11 Commvault Systems, Inc. Performing secondary copy operations based on deduplication performance
US11829251B2 (en) 2019-04-10 2023-11-28 Commvault Systems, Inc. Restore using deduplicated secondary copy data
US11463264B2 (en) 2019-05-08 2022-10-04 Commvault Systems, Inc. Use of data block signatures for monitoring in an information management system
US11853104B2 (en) 2019-06-27 2023-12-26 Netapp, Inc. Virtual machine backup from computing environment to storage environment
US11709615B2 (en) 2019-07-29 2023-07-25 Commvault Systems, Inc. Block-level data replication
US11042318B2 (en) 2019-07-29 2021-06-22 Commvault Systems, Inc. Block-level data replication
US11442896B2 (en) 2019-12-04 2022-09-13 Commvault Systems, Inc. Systems and methods for optimizing restoration of deduplicated data stored in cloud-based storage resources
US11687424B2 (en) 2020-05-28 2023-06-27 Commvault Systems, Inc. Automated media agent state management
US11640339B2 (en) 2020-11-23 2023-05-02 International Business Machines Corporation Creating a backup data set
CN112988473B (en) * 2021-05-10 2021-11-23 南京云信达科技有限公司 Backup data real-time recovery method and system
CN112988473A (en) * 2021-05-10 2021-06-18 南京云信达科技有限公司 Backup data real-time recovery method and system
US11809285B2 (en) 2022-02-09 2023-11-07 Commvault Systems, Inc. Protecting a management database of a data storage management system to meet a recovery point objective (RPO)

Also Published As

Publication number Publication date
EP0767431A1 (en) 1997-04-09
JPH1055298A (en) 1998-02-24

Similar Documents

Publication Publication Date Title
US5907672A (en) System for backing up computer disk volumes with error remapping of flawed memory addresses
US20200278792A1 (en) Systems and methods for performing storage operations using network attached storage
US10241873B2 (en) Headstart restore of first volume to a second volume
US8051044B1 (en) Method and system for continuous data protection
US7953948B1 (en) System and method for data protection on a storage medium
US8117410B2 (en) Tracking block-level changes using snapshots
US7603533B1 (en) System and method for data protection on a storage medium
EP0415346B1 (en) Method and system for dynamic volume tracking in an installable file system
US7937612B1 (en) System and method for on-the-fly migration of server from backup
US8037032B2 (en) Managing backups using virtual machines
US8074035B1 (en) System and method for using multivolume snapshots for online data backup
US7047380B2 (en) System and method for using file system snapshots for online data backup
US7779221B1 (en) System and method for online data migration
US6453383B1 (en) Manipulation of computer volume segments
US5497483A (en) Method and system for track transfer control during concurrent copy operations in a data processing storage subsystem
US6829688B2 (en) File system backup in a logical volume management data storage environment
US6701450B1 (en) System backup and recovery
US20100076934A1 (en) Storing Block-Level Tracking Information in the File System on the Same Block Device
JPH0683677A (en) Method and system for increment time-zero backup copy of data
US7921093B2 (en) Information processing apparatus and information processing method

Legal Events

Date Code Title Description
AS Assignment

Owner name: STAC ELECTRONICS, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATZE, JOHN E.G.;WHITING, DOUGLAS L.;REEL/FRAME:007716/0965

Effective date: 19951002

AS Assignment

Owner name: STAC, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:STAC ELECTRONICS, INC.;REEL/FRAME:008553/0260

Effective date: 19951003

REMI Maintenance fee reminder mailed
REIN Reinstatement after maintenance fee payment confirmed
FP Lapsed due to failure to pay maintenance fee

Effective date: 20030525

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES FILED (ORIGINAL EVENT CODE: PMFP); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PETITION RELATED TO MAINTENANCE FEES GRANTED (ORIGINAL EVENT CODE: PMFG); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
AS Assignment

Owner name: ALTIRIS, INC., UTAH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PREVIO, INC.;REEL/FRAME:015311/0975

Effective date: 20020924

Owner name: PREVIO, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:STAC SOFTWARE, INC.;REEL/FRAME:015312/0025

Effective date: 20000424

PRDP Patent reinstated due to the acceptance of a late maintenance fee

Effective date: 20040617

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: STAC, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:STAC ELECTRONICS;REEL/FRAME:015603/0778

Effective date: 19951003

Owner name: STAC SOFTWARE, INC., CALIFORNIA

Free format text: MERGER;ASSIGNOR:STAC, INC.;REEL/FRAME:015603/0782

Effective date: 19981208

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: SYMANTEC CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALTIRIS, INC.;REEL/FRAME:019781/0651

Effective date: 20070905

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: VERITAS US IP HOLDINGS LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMANTEC CORPORATION;REEL/FRAME:037697/0412

Effective date: 20160129

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY INTEREST;ASSIGNOR:VERITAS US IP HOLDINGS LLC;REEL/FRAME:037891/0001

Effective date: 20160129

Owner name: WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CONNECTICUT

Free format text: SECURITY INTEREST;ASSIGNOR:VERITAS US IP HOLDINGS LLC;REEL/FRAME:037891/0726

Effective date: 20160129

AS Assignment

Owner name: VERITAS TECHNOLOGIES LLC, CALIFORNIA

Free format text: MERGER AND CHANGE OF NAME;ASSIGNORS:VERITAS US IP HOLDINGS LLC;VERITAS TECHNOLOGIES LLC;REEL/FRAME:038455/0752

Effective date: 20160329

AS Assignment

Owner name: VERITAS US IP HOLDINGS, LLC, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY IN PATENTS AT R/F 037891/0726;ASSIGNOR:WILMINGTON TRUST, NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:054535/0814

Effective date: 20201127