US20040073677A1 - Computer system using a storage area network and method of handling data in the computer system


Info

Publication number
US20040073677A1
US20040073677A1 (application US10/663,687)
Authority
US
United States
Prior art keywords: data, backup, servers, storage, storages
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/663,687
Inventor
Shigeo Honma
Hiroshi Morishima
Tokuhiro Tsukiyama
Hiroyuki Matsushima
Takashi Oeda
Yoji Tomono
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Application filed by Hitachi Ltd
Priority to US10/663,687
Publication of US20040073677A1
Legal status: Abandoned


Classifications

    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0604: Improving or facilitating administration, e.g. storage management
    • G06F 3/0629: Configuration or reconfiguration of storage systems
    • H04L 67/1097: Protocols for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H04L 9/40: Network security protocols
    • H04L 69/329: Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]

Definitions

  • FIG. 1 is a schematic diagram illustrating the basic overall configuration of an integrated storage system relating to a preferred embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating the overall configuration of a storage system according to a prior art.
  • FIG. 3 is a diagram describing the primary functions of an integrated storage system relating to a preferred embodiment of the present invention.
  • FIG. 4 is a diagram illustrating the basic system configuration about the non-disruptive backup in accordance with a preferred embodiment of the present invention.
  • FIG. 5 a and FIG. 5 b are diagrams describing functions and actions of the non-disruptive backup in accordance with a preferred embodiment of the present invention.
  • FIG. 6 is a diagram illustrating a system configuration in which mirroring software is used about the non-disruptive backup in accordance with a preferred embodiment of the present invention.
  • FIG. 7 is a diagram illustrating the preparations done in advance in a backup system and an example of system construction.
  • FIG. 8 is a diagram illustrating examples of various system configurations for backup by sharing tape units, relating to a preferred embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a configuration for tape unit-shared backup in which multiple servers share one tape library.
  • FIG. 10 is a diagram illustrating a system configuration for asynchronous remote copying in disaster recovery, relating to a preferred embodiment of the present invention.
  • FIG. 11 is a diagram illustrating a system configuration for high-speed DB replication between servers in data sharing, relating to a preferred embodiment of the present invention.
  • FIG. 12 is a diagram illustrating error monitoring and backup operation in integrated system operation and management, relating to a preferred embodiment of the present invention.
  • FIG. 13 is a diagram illustrating centralized management of the storage performance in integrated system operation and management, relating to a preferred embodiment of the present invention.
  • FIG. 14 is a diagram illustrating storage management, in particular, the LUN manager and LUN security in integrated system operation and management, relating to a preferred embodiment of the present invention.
  • FIG. 15 is a diagram illustrating storage management, in particular, hierarchical control in a subsystem in integrated system operation and management, relating to a preferred embodiment of the present invention.
  • FIG. 16 is a diagram illustrating switch management; in particular, setting of zonings in integrated system operation and management, relating to a preferred embodiment of the present invention.
  • FIG. 17 is a diagram illustrating outline of a system configuration of an Internet data center in which an integrated storage system is used, relating to a preferred embodiment of the present invention.
  • FIG. 18 is a diagram illustrating storage integration in an Internet data center in accordance with a preferred embodiment of the present invention.
  • FIG. 19 is a diagram illustrating a system configuration for non-disruptive backup in an Internet data center in accordance with a preferred embodiment of the present invention.
  • FIG. 20 is a diagram illustrating a system configuration for ensuring security in an Internet data center in accordance with a preferred embodiment of the present invention.
  • FIG. 21 is a diagram illustrating an example of system configurations of a large-scale computer system in which individual computer systems of multiple enterprises are connected mutually.
  • FIG. 1 is a schematic diagram illustrating the basic overall configuration of said computer system relating to a preferred embodiment of the present invention.
  • the computer system in which the SAN is used consists of a main site and a remote site, and these sites are connected via a Wide Area Network (WAN).
  • client computers and various servers, for example, a main frame (MF) as a server for large-scale computers, a UNIX server as a server for medium-scale computers, and a PC server as a server for small-scale computers, are connected via a LAN.
  • a dedicated terminal in which operation and management software on integrated storage system has been installed is connected with the LAN, and the whole of the integrated storage system is operated, managed, and monitored by using the terminal.
  • This operation and management software can also be installed in any of the client terminals instead of the dedicated terminal, in which case the relevant client terminal is used for operation and management of the integrated storage system.
  • storages such as a RAID, a tape library, and a DVD-RAM library/library array are connected with the servers such as the main frame (MF) server, the UNIX server, and the PC server via a Storage Area Network (SAN) consisting of network switches such as a fiber channel switch (FC-Switch) and a fiber channel hub (FC-Hub) not shown in the figure.
  • Since the servers and the storages are connected through channel switches in the SAN, they can be added, detached, and changed freely. Therefore, storages can be added and detached as needed to suit the storage capacity and the kind and purpose (access speed, cost, etc.) of the data to be stored.
  • The servers, likewise, can access these storages without restriction via the channel switches.
  • Since the main site is connected with the remote site via a WAN, data can be shared between the sites, and a great amount of data can be shared worldwide.
  • storages for backup data at the remote site are not limited to the same type of storage as at the main site, for example, not limited to copying from a RAID on the main side to a RAID on the remote side, and hence cost reduction and simplified management may be achieved by copying from a RAID on the main side to a DVD-RAM or tape library, etc., on the remote side.
  • the operation and management software on a terminal for managing a SAN manages the copy source, copy destination, etc., of these data.
  • In the conventional system, clients are connected with an application-specific server, for example, a main frame, a UNIX server, or a PC server, individually through communication lines such as a LAN, and the individual servers are also connected via a LAN.
  • Storages are connected with their respective servers. Therefore, data stored in the storages could be accessed only through their respective servers.
  • In the present invention, by contrast, data stored in the storages connected with individual servers are managed in an integrated manner via a SAN.
  • individuals of multiple servers are connected to various storages (such as a RAID disk drive, a tape library, and a DVD-RAM library/library array) via fiber channel switches (FC-Switches) of which the SAN is comprised.
  • data stored in individual storages can be accessed directly from individual servers without passing through the LAN; for example, access to a great amount of data is simplified.
  • Since the storages for data are consolidated into an integrated storage system, management of data and equipment is simplified.
  • the computer system must be an information system that is intended primarily for making any information about the data to be handled available at any time, for anyone, and from anywhere.
  • The integrated storage system relating to a preferred embodiment of the present invention has three basic functions: first, data protection, which provides backup as a measure against disk drive failures and disaster recovery as a measure against disasters such as earthquake and fire; second, data sharing, which covers data exchange among main frames, UNIX servers, and PC servers, and the sharing of many types and forms of information such as databases (DBs), documents, drawings, and multimedia contents; and last, storage management (storage resource management), which provides unified management of the storages that each server previously operated and managed separately, together with environment set-up and storage operation/management by standardized operations.
  • A data center in which a SAN-applied computer system, consisting of a large capacity of storages and a group of various servers, is connected to the Internet and equipped with data storage service functions, namely an Internet data center (abbreviated to "iDC"), is constructed, and a method for handling a mass of data at that iDC is one of the features of the present invention.
  • The data protection functions are intended for backup of DBs during online operation, reduction in management cost by sharing storage resources, improvement in system availability by means of disaster recovery, and assurance of data security. They make it possible to back up data without stopping a job (non-disruptive backup) for the 24-hour-per-day, 365-day-per-year operation that is expected to increase in the years ahead, to share a tape library at the time of backup (tape unit-shared backup), which also reduces cost, and to restore the system rapidly in the event of a disaster by ensuring data security in copying remotely over a long distance (remote copying).
  • the details of the data protection are three techniques of the non-disruptive backup, the tape unit-shared backup, and the asynchronous remote copying as described above.
  • Non-disruptive backup enables applications to run even during the backup operation by backing up from a replica of the data, and prevents application servers from being affected by using dedicated backup servers.
  • FIG. 4, FIG. 5 a , and FIG. 5 b illustrate a configuration for, and a function of the non-disruptive backup in detail.
  • In outline, this function backs up DBs without affecting online jobs, via the SAN and without passing through the LAN, by collaboration between internal functions in the storages and the database management system (DBMS) in the application servers.
  • FIG. 4 illustrates a series of a flow of the non-disruptive backup.
  • First, copying from the volumes to be backed up (primary volumes) to secondary volumes, each with a capacity equal to or larger than that of its primary volume, is executed in a storage unit to make a copy of the primary volumes.
  • the status of the database management system (DBMS) in an application server is changed to a backup-allowable state to prevent online jobs from being affected, and then the backup server makes a backup copy of data in the secondary volumes to tape units.
  • FIG. 5 a and FIG. 5 b illustrate an outline of the processing by the volume copy function that is an internal function of a storage unit, in a process of the non-disruptive backup illustrated in FIG. 4.
  • For the volume duplication scheme illustrated in FIG. 5 a, two logical volumes, the volume to be backed up, namely Logical Volume A (Logical VOLA), and a replica for backup, namely Logical Volume B (Logical VOLB), are prepared in advance and duplication is directed.
  • the backup server instructs the storage unit to perform pair split by using a means for controlling disk drives.
  • After the split, when the application server writes data, the storage unit writes the data to Logical VOLA only, and not to Logical VOLB.
  • the backup software on the backup server reads data from the secondary volume, Logical VOLB, and makes a backup copy of the data to a backup device such as a tape unit.
  • For the volume duplication scheme illustrated in FIG. 5 a, a duplicated volume must be prepared before the time when backup is performed. Therefore, volume duplication must be started earlier than the backup time by at least the time taken to duplicate a volume.
  • a function of a storage unit illustrated in FIG. 5 b solves this problem.
  • Logical VOLB to which a copy of Logical VOLA is made must be prepared in the same way as for FIG. 5 a .
  • the backup server instructs the storage unit to perform pair split by using a means for controlling disk drives in the same way as for the case of FIG. 5 a .
  • At this time, however, data in Logical VOLA need not yet have been copied to Logical VOLB.
  • the backup software on the backup server starts reading data from the secondary volume, Logical VOLB.
  • For areas not yet copied, the disk drive reads out data from Logical VOLA and hands it over to the backup server, or copies the data from Logical VOLA to Logical VOLB once and then hands it over to the backup server.
  • Data may be written from the application server into a certain area of Logical VOLA during the backup processing. Since data in Logical VOLA is being copied to Logical VOLB sequentially in the storage unit, if such an area were simply overwritten and later copied, data written after the split would also end up in Logical VOLB. To prevent this, the storage unit first reads the data of Logical VOLA currently present in the area for which a write demand is made and writes that data out to Logical VOLB; after that, the storage unit writes into Logical VOLA the data which the application server demanded to write. As a result of this processing, exactly the data present in Logical VOLA at the time of the split instruction is copied to Logical VOLB.
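  • The copy-on-write behavior at pair split described above can be sketched as follows. This is a minimal illustration of the technique only, not the storage unit's actual firmware; the volume representation and area granularity are hypothetical.

      # Minimal sketch of copy-on-write after a pair split (FIG. 5 b); illustrative only.
      class SplitPair:
          def __init__(self, vol_a):
              self.vol_a = dict(vol_a)       # Logical VOLA: live primary volume
              self.vol_b = {}                # Logical VOLB: replica, filled lazily
              self.pending = set(vol_a)      # areas not yet copied to VOLB

          def write(self, area, data):
              # Before overwriting VOLA, save the at-split data into VOLB so that
              # VOLB keeps the image as of the split instruction.
              if area in self.pending:
                  self.vol_b[area] = self.vol_a[area]
                  self.pending.discard(area)
              self.vol_a[area] = data        # then apply the application's write

          def read_backup(self, area):
              # The backup server reads VOLB; uncopied areas are served from VOLA.
              if area in self.pending:
                  self.vol_b[area] = self.vol_a[area]
                  self.pending.discard(area)
              return self.vol_b[area]

      pair = SplitPair({0: b"old0", 1: b"old1"})
      pair.write(0, b"new0")                 # application write after the split
      assert pair.read_backup(0) == b"old0"  # backup still sees the at-split data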
  • FIG. 7 illustrates an example of installing a system constructed for the non-disruptive backup illustrated in FIGS. 4, 5 a , and 5 b .
  • the application server is equipped with DBMS and a means for controlling disk drives
  • the backup server is equipped with backup software and a means for controlling disk drives.
  • the means for controlling disk drives is installed, its configuration is set up, and its operation is checked.
  • A DBMS script performs the steps of logging in, setting the backup mode, terminating the backup mode, and logging out.
  • A script for the means for controlling disk drives performs pair split, pair event wait, and resynchronization.
  • the primary and secondary volumes created with the mirroring software are mirror split according to an instruction from the collaborating tool in the application server, and while backup is performed by using one volume (secondary volume), jobs are enabled to continue by using the other volume (primary volume). Then, after the backup terminates, resynchronization is performed.
  • the duplicated writing to the primary and secondary volumes is performed with the mirroring software in the application server, accessing a DB is stopped with the collaborating tool (software) in the application server, and accessing the DB is restarted after mirror split is directed.
  • Then the collaborating tool (software) in the backup server starts backup copying of the data in the secondary volume to a backup device such as a tape unit connected with the backup server.
  • The collaborating tool in the application server, notified of completion of the backup by the collaborating tool (software) in the backup server, directs mirror resynchronization and performs duplicated writing again.
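  • As a rough sketch of this collaboration, the sequence might look as follows. The objects and method names are hypothetical stand-ins for the DBMS script, the disk-control/mirroring script, and the backup software; they are not actual product APIs.

      # Hypothetical sketch of the collaborating tools' sequence (FIGS. 6 and 7).
      class _Log:
          """Stand-in collaborating tool that just records the calls it receives."""
          def __init__(self, name):
              self.name = name
          def __getattr__(self, op):
              return lambda *args: print(f"{self.name}: {op}", *args)

      def non_disruptive_backup(db, mirror, tape, secondary="VOLB"):
          db.set_backup_mode()     # DBMS script: bring the DB to a backup-allowable state
          mirror.pair_split()      # disk-control script: split the primary/secondary pair
          db.end_backup_mode()     # online jobs continue on the primary volume
          tape.backup(secondary)   # backup software copies the secondary volume to tape
          mirror.resync()          # mirror resynchronization; duplicated writing resumes

      non_disruptive_backup(_Log("DBMS"), _Log("mirror"), _Log("backup server"))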
  • FIG. 8 and FIG. 9 illustrate the details of a configuration and function of the tape unit-shared backup.
  • In outline, this function is intended to reduce the management cost of data scattered among many servers and to reduce the load on the LAN, with the result that high-speed backup is achieved.
  • By sharing a single tape library among multiple servers, effective use can be made of the expensive library (compared with the case where a backup tape unit is installed for each disk drive), and backup data can be output directly to a tape unit via the SAN without passing through the LAN, resulting in high-speed backup.
  • The left one of FIG. 8 illustrates conventional tape unit backup.
  • Backup data is copied from each disk drive of the individual servers via the LAN, through the backup server, to a tape unit; hence data passes over the LAN for every backup, putting a load on the LAN. Further, a load is also put on the backup server for every backup.
  • In LAN-free backup, illustrated in the middle one of FIG. 8, the backup processing can be speeded up by copying data from a disk drive to a tape unit via the SAN, so backup is achieved through servers but without passing through the LAN.
  • In addition, a single type of server can be used, and hence the load on servers is reduced.
  • Since server-less backup, illustrated in the right one of FIG. 8, enables data to be copied directly from disk drives to a tape unit, the backup processing can be speeded up and the load on servers can be reduced as well.
  • In the preferred embodiment of the present invention as illustrated in the right one of FIG. 8, the following capabilities are required:
  • disk drives must be equipped with a capability of writing into tape units
  • tape units must be equipped with a capability of reading data from disk drives
  • FC switches must be equipped with a capability of writing from disk drives into tape units
  • FC-SCSI multiplexers (described later in the explanation of FIG. 9) must be equipped with a capability of writing from disk drives into tape units if tape units are connected to the FC-SCSI multiplexers.
  • FIG. 9 illustrates another example of configurations for tape unit-shared backup.
  • the configuration shown in FIG. 9 corresponds to LAN-free backup shown in the middle one of FIG. 8.
  • Server C is different in function from Servers A and B: in addition to the backup agent necessary to actually perform a backup operation, it has a backup manager installed for managing the backup as a whole, and it is equipped with functions such as assigning a backup drive.
  • The backup drive, for example, has three drives, and the backup manager assigns Drive 1 to Server A. When a backup demand is made from Server A, the backup drive is controlled so that a tape cartridge for storing is loaded onto Drive 1.
  • Alternatively, drives may be assigned to servers dynamically, in such a way that the backup manager manages the condition of drive usage, selects unused drives, and assigns a proper one of them.
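  • A minimal sketch of such dynamic drive assignment, assuming a backup manager that simply tracks free drives (all names below are illustrative):

      # Sketch of the backup manager's drive assignment (FIG. 9); hypothetical names.
      class BackupManager:
          def __init__(self, drives):
              self.free = set(drives)            # unused drives
              self.assigned = {}                 # server -> drive

          def request_drive(self, server):
              # Select any unused drive; the real manager would also load the
              # proper tape cartridge onto the selected drive.
              if not self.free:
                  raise RuntimeError("all tape drives are in use")
              drive = self.free.pop()
              self.assigned[server] = drive
              return drive

          def release_drive(self, server):
              self.free.add(self.assigned.pop(server))

      mgr = BackupManager(["Drive 1", "Drive 2", "Drive 3"])
      print(mgr.request_drive("Server A"))       # some free drive is assigned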
  • a set of an FC-SCSI multiplexer and a backup drive corresponds to a tape library shown in FIG. 8.
  • FIG. 10 illustrates a system configuration for asynchronous remote copying.
  • A main site and a remote site are located far enough away from each other not to suffer from the same disaster at the same time, and are connected through communication lines.
  • When data at the main site is updated, completion of the update is reported to a server without waiting for the update to be reflected at the remote site, that is, asynchronously.
  • Updated data is copied sequentially, at a proper timing, from the main site to the remote site; however, if data is not transferred in the same order in which it was updated at the main site, the updated data is sorted into time sequence by a system at the remote site and then copied with the sequence of updates guaranteed (for example, if update data for a receipt and a payment of money are stored in reverse order, improper dealings can result when processing the balance).
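  • The ordering guarantee can be sketched as follows, assuming each update carries a sequence number: the remote site buffers out-of-order arrivals and applies them strictly in update order. This is a minimal illustration under that assumption, not the patent's actual transfer protocol.

      # Sketch of order-preserving apply at the remote site.
      import heapq

      class RemoteApplier:
          def __init__(self, volume):
              self.volume = volume        # stand-in for the remote copy destination
              self.next_seq = 0
              self.buffer = []            # min-heap of (sequence number, update)

          def receive(self, seq, update):
              heapq.heappush(self.buffer, (seq, update))
              # Apply every buffered update whose turn has come, in update order.
              while self.buffer and self.buffer[0][0] == self.next_seq:
                  _, upd = heapq.heappop(self.buffer)
                  self.volume.append(upd)
                  self.next_seq += 1

      vol = []
      r = RemoteApplier(vol)
      r.receive(1, "payment")                # arrives first, but is held back
      r.receive(0, "receipt")
      assert vol == ["receipt", "payment"]   # sequence of updates guaranteed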
  • Intermediate files can be placed on a virtual volume that is created temporarily on semiconductor memory, namely cache memory, outside the magnetic disk drives. With cache memory, data can be transferred at a higher speed.
  • UNIX servers or PC servers can construct a data warehouse easily: by installing in the servers or their attached units software which can perform, easily and quickly on a GUI basis, the series of processing from extracting data from a variety of source DBs such as a backbone DB, through converting and consolidating the data, up to loading the data, the time taken to transfer data when constructing a data warehouse can be shortened.
  • Integrated system operation and management covers system maintenance work such as backing up data at each site periodically against a system crash, system setting modification work when volumes are added, and data handling such as moving data in some volumes to other volumes when performance drops due to load congestion in a particular volume.
  • monitoring the condition of the load is also important management work.
  • Conventionally, one maintenance terminal is installed for each storage unit, and the individual storages must be managed from their respective terminals.
  • In the present invention, all storage units can be managed from a single terminal.
  • FIG. 12 illustrates an example of backup operation and failure monitoring in a large-scale office system.
  • The data handled includes data used commonly within each department and data used commonly by all departments.
  • a backup device such as a tape unit is installed in individual departments.
  • Multiple large-scale storages for storing large-size data and a backup device such as a tape library are installed at a computer center, and each device at the center, each system on the individual floors, and an enterprise general system are connected mutually via a Storage Area Network.
  • A centralized monitoring console monitors all devices on the individual floors, in the enterprise general system, and at the computer center, and all device failure reports are collected at the centralized monitoring console. Service personnel can easily identify in which device a failure has occurred by looking at the console. When data is destroyed by a failure, the data can be recovered (restored) from a backup device; this restore processing can also be initiated from the centralized monitoring console.
  • Since service personnel may leave the terminal unattended in some cases, the centralized monitoring console also has a function of sending mail to a cellular phone, etc., of the service personnel to notify them in such a case.
  • the centralized monitoring console also directs how to operate backup and manages the backup.
  • The appropriate frequency and destination of backing up vary with the kind of data. For example, data that hardly needs backing up (for example, data updated very rarely) and data accessed by only a particular department or person do not need to be backed up frequently. Moreover, even if one attempts to make a backup copy of all data in the same time zone, there is a limit to the number of backup devices.
  • Therefore, the centralized monitoring console rearranges the frequency, the time zone, or the destination of backing up for each piece of data or each volume depending on the needs of users, and automatically performs the backup processing individually.
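  • For example, such per-data backup policies might be represented as follows. The policy fields and values are illustrative assumptions, not a schema defined in the patent.

      # Sketch of per-volume backup policies at the centralized monitoring console.
      POLICIES = {
          "dept-shared-data": {"frequency_days": 1,  "time_zone": "23:00", "dest": "tape library"},
          "rarely-updated":   {"frequency_days": 30, "time_zone": "03:00", "dest": "DVD-RAM"},
      }

      def due_for_backup(volume, days_since_last_backup):
          # Back up only when the volume's own frequency requirement is reached.
          return days_since_last_backup >= POLICIES[volume]["frequency_days"]

      print(due_for_backup("rarely-updated", 7))    # False: backed up only monthly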
  • FIG. 14 illustrates a diagrammatic view of the processing of setting up volumes.
  • Multiple disk drives are grouped into one or multiple apparent logical devices (LDEVs).
  • The storage unit has multiple ports to connect to hosts or fiber channel switches, and which ports are allowed to access individual LDEVs can be set and changed for the storage unit.
  • A host address (LUN: logical unit number) is assigned to individual LDEVs and is made open to hosts.
  • In addition, the type of hosts that can access individual LDEVs is set. Since all hosts are connected to all storages via the storage area network, there is a risk that a host which is normally not allowed to access a storage gains invalid access to it; the hosts that may access individual LDEVs can therefore be registered in the storage to prevent invalid access.
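  • A minimal sketch of this access check, assuming a per-LDEV table of registered host WWNs (the WWN values below are made up):

      # Sketch of LUN security: only hosts whose WWN is registered for an LDEV
      # may access it. Illustrative only.
      ALLOWED = {
          "LDEV0": {"50:06:0e:80:00:c3:a1:01"},                             # host A only
          "LDEV1": {"50:06:0e:80:00:c3:a1:01", "50:06:0e:80:00:c3:a1:02"},  # hosts A and B
      }

      def check_access(ldev, host_wwn):
          # Reject I/O from any host not registered for this logical device.
          if host_wwn not in ALLOWED.get(ldev, set()):
              raise PermissionError(f"{host_wwn} may not access {ldev}")

      check_access("LDEV1", "50:06:0e:80:00:c3:a1:02")   # allowed; no exception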
  • FIG. 13 illustrates an example of monitoring the performance of storages.
  • the centralized monitoring console can watch the condition of the load of each volume.
  • The load condition includes the number of I/O operations received per second, the ratio of read to write operations, the cache hit rate, etc.
  • A load is very seldom put on all volumes evenly; volumes with an extremely high load, or volumes with nearly no load, may be present.
  • FIG. 15 illustrates an example of a case where a storage unit has the functions of reallocating volumes.
  • Some storage units have volumes of small capacity but comparatively high speed, and other storage units have volumes of large capacity but low performance. In such a situation, it is better to move data with a low access frequency to large-capacity volumes, and data with a high access frequency to high-speed volumes.
  • Disk drives obtain the usage rate of logical devices as statistical information, and send the information to a centralized monitoring console.
  • Based on that information, the centralized monitoring console predicts how the usage rate of the logical devices will change when a logical device is moved, and presents the prediction to service personnel.
  • Based on the prediction, service personnel can draw up a reallocation plan more easily than in the case of the previous figure.
  • Service personnel can then instruct whether or not to actually move the logical devices, or can set detailed conditions in advance under which volumes are moved automatically when they reach a certain state.
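  • The prediction step can be sketched as follows, assuming the statistical information is a per-LDEV usage rate grouped by parity group; the group names, rates, and threshold are illustrative.

      # Sketch of predicting usage rates when a logical device is moved (FIG. 15).
      usage = {                              # parity group -> {LDEV: usage rate}
          "PG1": {"ldev0": 0.55, "ldev1": 0.35},
          "PG2": {"ldev2": 0.10},
      }

      def predict_move(src_pg, ldev, dst_pg):
          """Predicted total usage of both groups after moving `ldev`."""
          moved = usage[src_pg][ldev]
          return (sum(usage[src_pg].values()) - moved,
                  sum(usage[dst_pg].values()) + moved)

      src_after, dst_after = predict_move("PG1", "ldev1", "PG2")
      if max(src_after, dst_after) < 0.8:    # example condition for an automatic move
          print(f"move ldev1: PG1 -> {src_after:.2f}, PG2 -> {dst_after:.2f}")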
  • As a part of integrated system operation and management, FC switch management enables various settings of the FC switches to be made and the status of zoning, etc., to be managed. Concretely, it includes the displaying of a fabric topology, the setting of FC switches' zoning, and the setting/displaying of various parameters in the FC switches, and these items can be watched on the centralized monitoring console.
  • FIG. 16 illustrates an example of a configuration of a fabric switch (FC switch) lying between servers and storages, with the switch divided into three zonings.
  • To perform a backup (FIG. 4), which volume in a storage is to be backed up must first be determined.
  • a server manages data which an application stores in a storage in units of files.
  • a storage manages data in units of volumes.
  • When backup is started, if the SAN management unit (the terminal shown in FIG. 1 in which operation and management software has been installed) is asked by a server to back up a file, the SAN management unit obtains from the servers information to identify the file, information about a backup device (its address on the SAN, etc.), a backup time, etc. Further, the SAN management unit obtains from the storages information to identify the volume in which the relevant files are stored. Next, using the two kinds of information obtained, the SAN management unit instructs the storage in which the relevant files are stored to create a replica (secondary volume) of the volume to be backed up.
  • the SAN management unit instructs a storage which has a volume in which the relevant files have been stored to assign another volume (secondary volume) for creating a replica of the relevant volume (primary volume) and to create the replica.
  • Consideration must be given so that a volume of at least the same capacity as the primary volume is assigned as the secondary volume, and the SAN management unit must grasp what capacity and configuration of volumes the individual storages have.
  • When the storage reports that creation of the replica has terminated, the SAN management unit instructs the storage to split the pair of volumes, and instructs the backup server to make a backup copy of the data from the secondary volume to a backup device while the primary volume remains occupied with normal processing from the servers.
  • the backup server reads data in the secondary volume via the SAN, and transfers the read data to the backup device.
  • When the transfer terminates, this is reported to the SAN management unit by the backup server, and the SAN management unit then reports termination of the backup to the application that asked for the backup.
  • a time at which to split a pair of volumes is the backup time described above.
  • a destination on the SAN to which to transfer backup data is said address of the backup device on the SAN.
  • The SAN management unit plays the central role in controlling reception of a backup demand, creation and splitting of a replica, the backup processing, and the reporting of backup termination. However, software in an application server and software in a backup server can instead exchange control information directly via a LAN, and thereby realize the backup system without making use of a SAN management unit (FIG. 6).
  • In this case, compared with the case where a SAN management unit is used, the pieces of software in the two servers must collaborate with each other; however, the SAN management unit described above is not required, and hence this scheme is considered suitable for a comparatively small-scale system.
  • In the backup system described above, data is backed up by transferring it to a backup device through a backup server; however, backup can also be controlled so that data is transferred directly from the secondary volume in a storage to a backup device via the SAN (direct backup), without passing through a backup server.
  • This backup is achieved by instructing the storage to transfer the data in the secondary volume to a backup device after the SAN management unit recognizes that a replica has been created and split. The instruction includes the address of the backup device on the SAN, etc.
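  • Putting the steps above together, the SAN management unit's control flow might be sketched as follows. Every method name is a hypothetical stand-in; the patent does not define a concrete API.

      # Sketch of the SAN management unit's backup orchestration (FIGS. 4 and 6).
      def handle_backup_request(mgmt, file_id, device_addr, backup_time):
          volume = mgmt.find_volume(file_id)       # file -> volume, asked of the storages
          replica = mgmt.create_replica(volume)    # assign and copy a secondary volume
          mgmt.wait_until(backup_time)             # the backup time obtained from servers
          mgmt.split_pair(volume, replica)         # primary stays in normal service
          if mgmt.supports_direct_backup():
              # direct backup: the storage sends the secondary volume straight
              # to the backup device, without passing through a backup server
              mgmt.storage_transfer(replica, device_addr)
          else:
              mgmt.backup_server_copy(replica, device_addr)
          mgmt.report_termination(file_id)         # reported back to the application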
  • cluster servers are connected to storages through a fabric switch.
  • The fabric switch is divided logically, that is, it is treated as multiple switches. Therefore, if the storage-side output destinations of the switch in Zoning 1 and of the switches in Zoning 2 or Zoning 3 have been separated, cluster servers belonging to the switch in Zoning 1 cannot gain access to the switch in Zoning 2 or Zoning 3, and invalid access from cluster servers belonging to the switch in Zoning 1 to the storage-side output destinations of the switches in Zoning 2 or Zoning 3 can be prevented.
  • Such setting up of zonings in the switch is enabled by connecting the fabric switch and a SAN management unit (not shown in the figure) through a LAN, etc., and setting up said zonings in the fabric switch according to an instruction from the SAN management unit.
  • Zonings can also be set up in the fabric switch by using a dedicated console, etc.; however, the control information for zoning would then have to be set at the location of said dedicated console each time cluster servers and storages are added, changed, or detached, resulting in inefficient operation.
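  • As a sketch, the zoning control information held by the SAN management unit might look like the following; the port numbers and zone names are illustrative assumptions.

      # Sketch of zoning data pushed from the SAN management unit to the fabric switch.
      ZONES = {
          "Zoning 1": {"server_ports": {0, 1}, "storage_ports": {8}},
          "Zoning 2": {"server_ports": {2, 3}, "storage_ports": {9}},
          "Zoning 3": {"server_ports": {4, 5}, "storage_ports": {10}},
      }

      def may_connect(src_port, dst_port):
          # Traffic is allowed only within a zone, so servers in Zoning 1 cannot
          # reach storage-side ports belonging to Zoning 2 or Zoning 3.
          return any(src_port in z["server_ports"] and dst_port in z["storage_ports"]
                     for z in ZONES.values())

      assert may_connect(0, 8)        # Zoning 1 server to Zoning 1 storage
      assert not may_connect(0, 9)    # blocked: would cross into Zoning 2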
  • When providing the various data processing functions, the SAN management unit basically obtains from the servers and storages the information about the files and volumes to be processed, an operation timing, a destination to which to move data, etc., and, based on these pieces of information, instructs the required devices to process the files and volumes (replica creation, data copying, splitting of a replica, backup copying, remote copying, etc.) according to the operation timing. The individual devices perform their processing according to the instructions from the SAN management unit and return the result of the processing. On an as-needed basis, the SAN management unit can return the result to the client that requested the processing.
  • A preferred embodiment of the present invention is considered to be composed of the following steps. Step 1: a SAN management unit (a terminal in which operation and management software has been installed, as shown in FIG. 1) accepts a request for processing data in an integrated storage system from applications which run on the individual application servers (this step can be replaced with a step at which the SAN management unit creates a demand for data of its own accord according to a schedule made out separately in advance). Step 2: it obtains the information necessary for processing the relevant data (information to identify the data to be processed, an operation time, a destination to which to move data, etc.). Step 3: it determines the order in which to start the various kinds of functional software (software to execute replica creation, data copying, separation of a replica, backup copying, remote copying, etc.) which reside on storages, network switches, and servers, based on said obtained information, and makes out a schedule, such as the start timing at which to execute the functional software (this step is considered to be a step for making the individual pieces of functional software collaborate).
  • Since the SAN management unit has functions for making multiple pieces of functional software collaborate and for operating them, it can easily realize complex functions that the individual pieces of functional software cannot achieve by themselves, and it enables more accurate data processing in an integrated storage system.
  • Complex functions could also be achieved by creating a single piece of large software without making multiple pieces of functional software collaborate; however, this leads to a situation in which separate pieces of software must be developed for each kind of data processing, resulting in an inflexible system.
  • FIG. 17 illustrates an example of a configuration of an Internet data center (abbreviated to "iDC"), the number of which has been expanding recently.
  • The Internet data center is entrusted with the servers of Internet service providers (ISPs) and the WWW servers of individual enterprises (this service is called "housing"), and provides network management and server operation and management. Further, it provides value-added services such as web design, construction of electronic commerce (EC) systems, and the addition of high-degree security.
  • That is, the Internet data center provides comprehensive solutions to the problems of enterprises that want to do Internet business, such as shortages of system staff and skills, and the preparation of server installation places and networks.
  • FIG. 18 illustrates a schematic configuration diagram of an Internet data center to which a large-scale storage area network (SAN) is applied.
  • Multiple server computers exist for each enterprise; storages such as disk drives and tape units are consolidated into a few units (one or two to three), and the servers and the disk drives/tape units are connected mutually through fiber channel switches.
  • Whereas individual storage units must be connected to individual server computers in an environment in which a SAN does not exist, storage units can be shared by all computers through a SAN, and hence can be consolidated and managed.
  • Moreover, storage units can be added while a host computer is online (in operation), so the addition does not affect jobs.
  • FIG. 19 illustrates a schematic configuration diagram of an example of non-disruptive backup under a SAN environment at an Internet data center.
  • individual server computers, storages, and backup libraries of multiple enterprises are connected mutually via a storage area network.
  • a management host exists on the SAN to manage storage devices and to operate backup.
  • The data in each server computer, for example, Web contents on a WWW server and data used by an application server, have been consolidated and stored in the storages on the SAN.
  • The demands for backup are considered to vary depending on the circumstances of each host computer. For example, there are cases where it is desirable that a backup copy of data be taken every day at a time when the load of access to a host computer drops, that is, during a time zone such as midnight in which the number of accesses to the disk drives decreases; or, in the case of a host computer that is very busy with the processing of update-type transactions, it may be desirable that the host computer determine a backup start time flexibly according to the time and circumstances, such as a time when the flow of transactions breaks.
  • the management host accepts those demands from individual host computers and manages backup processing properly.
  • interruption of processing on the host computer must be avoided and non-disruptive backup is mandatory. Described below briefly is an example of backup processing.
  • The management host makes out a schedule of backup beginning and ending times for the individual server computers. For example, a backup operation for the WWW server of Company A begins at midnight, one for the application server of Company B at one in the morning, one for the application server of Company A at half past one in the morning, one for the WWW server of Company B at three in the morning, and so on. The time taken to perform the backup processing depends on the amount of data that the individual servers keep; hence the management host manages what amount of data the individual server computers keep in the storages, calculates the time taken for backup based on that amount, and makes out the schedule. In addition, if a tape library has multiple tape drives, multiple backup jobs can be executed concurrently.
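  • The schedule computation might be sketched as follows: the duration of each job is estimated from the amount of data a server keeps, and the jobs are laid end to end on one tape drive. The throughput figure is an illustrative assumption, not a value from the patent.

      # Sketch of the management host's backup schedule computation.
      from datetime import datetime, timedelta

      TAPE_MB_PER_SEC = 30                   # assumed tape drive throughput

      def make_schedule(jobs, start):
          """jobs: list of (server, data in MB); returns (server, begin, end) tuples."""
          schedule, t = [], start
          for server, data_mb in jobs:
              duration = timedelta(seconds=data_mb / TAPE_MB_PER_SEC)
              schedule.append((server, t, t + duration))
              t += duration                  # the next job begins when this one ends
          return schedule

      jobs = [("Company A WWW", 90_000), ("Company B App", 54_000)]
      for server, begin, end in make_schedule(jobs, datetime(2003, 1, 1, 0, 0)):
          print(server, begin.time(), "->", end.time())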
  • First, the management host creates a replica of the data, present in the disk drives, of the WWW server of Company A.
  • The management host finds a free disk (logical volume) in a disk drive, assigns it as a volume for the replica of the WWW server of Company A, and instructs the disk drive to create the replica.
  • The flow of the processing of creating a replica is as illustrated in detail in FIG. 5 a and FIG. 5 b.
  • a tape cartridge is mounted onto a tape drive in a tape library.
  • the copying of backup data begins from the replica volume to the tape library.
  • The management host or the server computer of Company A can perform the data backup processing; however, if the direct backup function, by which data is transferred directly from a disk drive to a tape library, is supported (it suffices that at least one of a disk drive, a tape library, and an FC switch supports it), this function can be used for the backup processing instead.
  • FIG. 20 illustrates an environment in which server computers and storages of multiple enterprises coexist on a SAN at an Internet data center.
  • First, zonings of an FC switch are set so that the server computers of individual enterprises can gain access only to a particular path to the storage units.
  • Next, the LUs that the server computers of individual enterprises use are assigned to individual paths in the disk drives. For example, if Company B uses two logical units, LU 1 and LU 2, LUs 1 and 2 are assigned to the middle path, and if Company C uses LU 0, LU 0 is assigned to the right path.
  • In FIG. 20, Company B secures the path to access LU 1 and LU 2; however, there may be a requirement that only some particular one of Company B's servers be permitted to gain access to LU 1. In that case, access limitation is done in units of LUs.
  • the WWN of a particular server of Company B is registered in a disk drive, and it can be set so that only a server whose WWN has been registered can gain access to LU 1 .
  • zonings, path assignment, and access limitation in units of LUs are set on the centralized monitoring console.
  • the topology of an FC switch is checked on the monitoring console, zonings are set based on the topology, further as many LUs as necessary are mapped on individual paths, and LUs that individual companies can use are registered.
  • the centralized monitoring console obtains the WWNs of host computers that are permitted to access, sets them in a disk drive, and limits access in units of LUs.
  • FIG. 21 illustrates an example of a large-scale computer system in which computer systems of multiple enterprises are connected mutually. Host computers among enterprises are connected through the Internet, and mutual utilization of data is achieved. In addition, by introducing storage area networks, storages in individual enterprises are organized so that they are also connected through a public switched network or leased lines.
  • Enterprises A and B individually have a backbone database with which transaction processing such as account processing is performed, and an information-system database with which analysis processing is performed offline using data in the backbone database.
  • the data of the backbone databases of Enterprise A and Enterprise B are integrated to create a data mart for various jobs.
  • Alternatively, a large-scale data warehouse is constructed once, and then small-scale data marts for various applications may be created individually from the data warehouse.
  • In the case where there is no environment in which storages are connected mutually via a storage area network, data must be moved through a host computer and a network when integrating databases.
  • Many of the databases which enterprises want to share have a large capacity, and hence it takes a large amount of time to transfer the data.
  • a replica of Enterprise B's data is created by using a remote copying function in storages.
  • The replica volume is split at a frequency of once a day or once a week, etc., and a replication server reads the data in the split replica volume to create various data marts.
  • The replication servers exist separately from the various types of information-system DBMS which make use of the data marts. Since the storages are combined mutually via a storage area network, a replica of a database can be created, by using the remote copying function in the storages, without putting any load on a host.
  • The replication servers that create the data marts and the information-system DBMS can be realized on separate host computers individually, and hence the processing of creating data marts does not affect the jobs of the backbone DB or of the information-system DB.
  • As described above, an integrated storage system can be constructed by reinforcing the collaboration of the components and functions of a storage system in which a SAN is used, and all the various functions illustrated in FIG. 3 can be achieved.

Abstract

In order to construct an integrated storage system by reinforcing collaboration of components or functions of a storage system in which a storage area network (SAN) is used, a computer system comprises multiple client computers, multiple various servers, multiple various storages which keep data, a local area network (LAN) which connects the computers and the servers, and a storage area network (SAN) which lies between the servers and the storages. The SAN forms a switched circuit network capable of connecting any server and any storage through fiber channel (FC) switches, and the computer system further comprises a terminal equipped with operation and management software which performs storage management, including management of logical volumes in the various storages, data arrangement, and error monitoring, management of the setup of the FC switches, and backup operation for data in the storages.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to storage systems for storing data, and in particular to techniques for the protection of handled data, data sharing, storage resource management, and data handling. [0001]
  • At present, the environment in which information processing is performed has been changing drastically as a result of the development of the Internet and intranets and the expansion of such applications as data warehousing, electronic commerce, and information services, and this change has resulted in a rapid increase in the amount of handled data. [0002]
  • For example, while the performance of CPUs has improved 100 times over the last five years, the input and output performance of disk drives has been held to an improvement of about 10 times. That is, the limit of the input and output performance, compared with the rapid increase in traffic, has begun to give rise to apprehension. In addition, as applications such as enterprise resource planning (ERP), which processes a mass of data, and data warehousing have come into wide use, and as the information to be processed (documents, drawings, visual contents, etc.) has been diversified and communicated as multimedia, the demands of enterprises for total disk capacity have increased two times a year on average. Further, as the storage capacities used in enterprises and elsewhere have increased and the use of storages has been diversified, the running cost of storages has also increased. Furthermore, backbone data in main frames has come to be shared and utilized by individual departments. [0003]
  • Described below, using FIG. 2, is the situation of the information processing environment resulting from the increase in the amount of handled data. As shown in FIG. 2, relations between servers and storages are established in such a way that, for example, a main frame (MF) as a server for a large-scale computer, a UNIX server as a server for a medium-scale computer, and a PC server as a server for a small-scale computer are connected with their respective exclusive storages, for example, RAIDs (Redundant Arrays of Inexpensive Disks) and magnetic tapes (MTs), and client computers give instructions to their respective servers via a LAN and perform data processing by using the exclusive storage of the relevant server. [0004]
  • Recently, a Storage Area Network (SAN) environment has been proposed in which a SAN is constructed between the various servers and storages described above, and individual servers are allowed to access any of the storages. Here, a SAN means a network that connects multiple servers and multiple storages through fiber channels and is used only for input to and output from storages; a SAN realizes the sharing of various storages, high-speed data processing between servers and storages, and long-distance connection. [0005]
  • SUMMARY OF THE INVENTION
  • As described above, SANs are being introduced into environments in which information processing is performed, in order to improve input and output performance, to expand total disk capacity, to reduce the running cost of storages, and to expand data sharing. The SAN, as shown in FIG. 2, is a new type of network that connects multiple servers and multiple storages through a high-speed network (for example, fiber channels). In this environment, storages which were connected with their respective servers and controlled by those servers are given independence from the servers, and at first a SAN used only for storages is constructed. In addition, all users that have an access right are enabled to share storage information on the SAN network. [0006]
  • In addition, connecting multiple storages makes it possible to improve the input and output performance of the storages very significantly. That is, drastic improvement in the input and output performance of the storages (improvement in performance), flexible setup and expansion of a storage environment independently of server environments (improvement in scalability), unified storage operation (improvement in the storage management function), disaster measures through drastic expansion of the connection distance (improvement in the data protection capability), and other merits are achieved. [0007]
  • However, existing proposals of SANs have not always disclosed clearly the concrete configurations or embodiments needed to realize these SAN networks. [0008]
  • An object of the present invention is, in order to ensure the various merits and usability obtained by employing a SAN, to provide an integrated storage system in which collaboration over the entire storage system is reinforced by devising concrete functions of a storage system and corresponding concrete configurations; a further object is to provide a method for handling data more usefully at an Internet data center (abbreviated to “iDC”), which connects storages to the Internet and keeps and makes use of a large volume of data, by applying an integrated storage system to the iDC. [0009]
  • In order to solve the issues described above, the present invention employs mainly the following configuration of a computer system and the following management method. [0010]
  • A computer system that is provided with multiple client computers, multiple various servers, multiple storages storing data, local area networks (LANs) connecting said computers and said servers, and a storage area network (SAN) lying between said servers and said storages, wherein said SAN forms a circuit switched network by means of fiber channel switches (FC switches) to make a mutual connection between any of said servers and any of said storages, and said SAN is equipped with terminals in which management and operation software has been installed to perform storage management, including management of logical volumes in said various storages, data arrangement, and error monitoring, as well as the management of the setup of said FC switches and the backup operation for data in said storages. [0011]
  • In addition, the management method is a method for managing a system comprising servers, storages storing data of said servers, and a network connecting said servers and said storages, and the method works in such a way that it obtains information to identify the data to be processed, obtains a specification of processing for the data denoted by said information, gives said specification of processing to the storages keeping the data denoted by said information, and receives the result of processing the data denoted by said information from said storages. [0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram illustrating the basic overall configuration of an integrated storage system relating to a preferred embodiment of the present invention. [0013]
  • FIG. 2 is a schematic diagram illustrating the overall configuration of a storage system according to a prior art. [0014]
  • FIG. 3 is a diagram describing the primary functions of an integrated storage system relating to a preferred embodiment of the present invention. [0015]
  • FIG. 4 is a diagram illustrating the basic system configuration for the non-disruptive backup in accordance with a preferred embodiment of the present invention. [0016]
  • FIG. 5a and FIG. 5b are diagrams describing functions and actions of the non-disruptive backup in accordance with a preferred embodiment of the present invention. [0017]
  • FIG. 6 is a diagram illustrating a system configuration in which mirroring software is used for the non-disruptive backup in accordance with a preferred embodiment of the present invention. [0018]
  • FIG. 7 is a diagram illustrating the preparations done in advance in a backup system and an example of system construction. [0019]
  • FIG. 8 is a diagram illustrating examples of various system configurations for backup by sharing tape units, relating to a preferred embodiment of the present invention. [0020]
  • FIG. 9 is a diagram illustrating a configuration for tape unit-shared backup in which multiple servers share one tape library. [0021]
  • FIG. 10 is a diagram illustrating a system configuration for asynchronous remote copying in disaster recovery, relating to a preferred embodiment of the present invention. [0022]
  • FIG. 11 is a diagram illustrating a system configuration for high-speed DB replication between servers in data sharing, relating to a preferred embodiment of the present invention. [0023]
  • FIG. 12 is a diagram illustrating error monitoring and backup operation in integrated system operation and management, relating to a preferred embodiment of the present invention. [0024]
  • FIG. 13 is a diagram illustrating centralized management of the storage performance in integrated system operation and management, relating to a preferred embodiment of the present invention. [0025]
  • FIG. 14 is a diagram illustrating storage management, in particular, the LUN manager and LUN security in integrated system operation and management, relating to a preferred embodiment of the present invention. [0026]
  • FIG. 15 is a diagram illustrating storage management, in particular, hierarchical control in a subsystem in integrated system operation and management, relating to a preferred embodiment of the present invention. [0027]
  • FIG. 16 is a diagram illustrating switch management; in particular, setting of zonings in integrated system operation and management, relating to a preferred embodiment of the present invention. [0028]
  • FIG. 17 is a diagram illustrating outline of a system configuration of an Internet data center in which an integrated storage system is used, relating to a preferred embodiment of the present invention. [0029]
  • FIG. 18 is a diagram illustrating storage integration in an Internet data center in accordance with a preferred embodiment of the present invention. [0030]
  • FIG. 19 is a diagram illustrating a system configuration for non-disruptive backup in an Internet data center in accordance with a preferred embodiment of the present invention. [0031]
  • FIG. 20 is a diagram illustrating a system configuration for ensuring security in an Internet data center in accordance with a preferred embodiment of the present invention. [0032]
  • FIG. 21 is a diagram illustrating an example of system configurations of a large-scale computer system in which individual computer systems of multiple enterprises are connected mutually.[0033]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • The following describes a computer system in which a storage area network (SAN) is used and a method by which data is handled, referring to the drawings. FIG. 1 is a schematic diagram illustrating the basic overall configuration of said computer system relating to a preferred embodiment of the present invention. [0034]
  • In FIG. 1, the computer system in which the SAN is used consists of a main site and a remote site, and these sites are connected via a Wide Area Network (WAN). At the main site, multiple client computers and various servers, for example, a main frame (MF) as a server for large-scale computing, a UNIX server as a server for medium-scale computing, and a PC server as a server for small-scale computing, are connected via a LAN. In addition, a dedicated terminal in which operation and management software for the integrated storage system has been installed is connected to the LAN, and the whole of the integrated storage system is operated, managed, and monitored by using this terminal. This operation and management software can also be installed in any of the client terminals instead of the dedicated terminal, in which case the relevant client terminal is used for operation and management of the integrated storage system. [0035]
  • Further, storages such as a RAID, a tape library, and a DVD-RAM library/library array are connected with the servers, such as the main frame (MF) server, the UNIX server, and the PC server, via a Storage Area Network (SAN) consisting of network switches such as a fiber channel switch (FC-Switch) and a fiber channel hub (FC-Hub), not shown in the figure. In addition, the main site is connected with the remote site, which consists of the same components as the main site, via a wide area communication network such as a WAN. [0036]
  • Here, since the servers and the storages are connected through channel switches in the SAN, the servers and storages so connected can be added, detached, and changed freely. Therefore, firstly, storages can be added and detached freely to suit the storage capacity and the kind and purpose (access speed, cost, etc.) of the data to be stored. The server side is also enabled to access these storages without any restriction via the channel switches. [0037]
  • In addition, since the main site is connected with the remote site via a WAN, data can be shared between the sites, and a great amount of data can be shared worldwide. In addition, if a copy of the data at each site is retained at the other site, even when either site fails due to a disaster, etc., jobs can continue to run using the data at the other site. In this case, the storages for backup data at the remote site are not limited to the same type of storage as at the main site; for example, copying is not limited to copying from a RAID on the main side to a RAID on the remote side, and hence cost reduction and simplified management may be achieved by copying from a RAID on the main side to a DVD-RAM or tape library, etc., on the remote side. In this case, the operation and management software on a terminal for managing the SAN manages the copy source, copy destination, etc., of these data. [0038]
  • In addition, in the prior art shown in FIG. 2, clients are connected with application-specific servers, for example, a main frame, a UNIX server, and a PC server, individually through communication lines such as a LAN, and the individual servers are also connected to each other via a LAN. Storages are connected with their respective servers. Therefore, data stored in the storages can be accessed only through their respective servers. [0039]
  • On the other hand, in the preferred embodiment of the present invention, data stored in the storages connected with individual servers are managed in an integrated manner via a SAN. First, the individual servers are connected to the various storages (such as a RAID disk drive, a tape library, and a DVD-RAM library/library array) via the fiber channel switches (FC-Switches) of which the SAN is composed. Thereby, data stored in the individual storages can be accessed directly from the individual servers without passing through a LAN. For example, access to a great amount of data is simplified. In addition, since the storages for data are consolidated into an integrated storage system, management of data and equipment is simplified. [0040]
  • In addition, according to the prior art, in order to make backup and remote copies, etc., of data against a disaster, individual storages corresponding to each server must be installed and the data must be copied via a LAN. In the preferred embodiment of the present invention, however, an integrated storage system consisting of a SAN and various storages is introduced, and hence data can be backed up, including remotely, more efficiently. [0041]
  • As outlined above for a computer system to which a SAN is applied, such a computer system must be an information system that is intended primarily for making any information about the data to be handled available at any time, for anyone, and from anywhere. [0042]
  • The integrated storage system relating to a preferred embodiment of the present invention, as disclosed in FIG. 3, has three basic functions: first, data protection, which provides backup as a measure against disk drive failures and disaster recovery as a measure against a disaster such as an earthquake or fire; second, data sharing, which covers data exchange and sharing among main frames, UNIX servers, and PC servers and handles many types and forms of information such as databases (DBs), documents, drawings, and multimedia contents; and last, storage management (storage resource management), which provides unified management of storages that each server formerly operated and managed separately, together with environment setup and storage operation/management through standardized operations. [0043]
  • Concretely described below are the details of the individual basic functions according to the present invention. These functions are realized by installing a program (software) which describes these functions, together with the necessary data, in the memory of devices such as a storage, a switch, a server (computer), and a management unit (realized by a computer, etc.), and executing the program on the central processing unit (CPU) in each of these devices. In addition, a data center in which a SAN-applied computer system, consisting of a large capacity of storages and a group of various servers, is connected to the Internet and equipped with data storage service functions, namely an Internet data center (abbreviated to “iDC”), is constructed, and an inventive device relating to a method for processing a mass of data at that iDC is one of the features of the present invention. [0044]
  • First, the data protection is described. The functions of data protection are intended for backup of DBs during online operation, reduction in management cost by sharing storage resources, improvement in system availability by means of disaster recovery, and assurance of data security. They thereby make it possible to back up data without stopping a job (non-disruptive backup) for the 24-hour-per-day, 365-day-per-year operation that is expected to increase in the years ahead, to share a tape library at the time of backup (tape unit-shared backup), resulting in cost reduction as well, and further to restore the system rapidly in the event of a disaster by ensuring data security through remote copying over a long distance (remote copying). Concretely, the details of the data protection are the three techniques of non-disruptive backup, tape unit-shared backup, and asynchronous remote copying, as described above. [0045]
  • First, the functions and actions of the non-disruptive backup enable applications to run even during backup operation by performing the backup from a replica of the data, and they prevent application servers from being affected by using dedicated backup servers. [0046]
  • FIG. 4, FIG. 5a, and FIG. 5b illustrate a configuration for, and functions of, the non-disruptive backup in detail. In outline, this function backs up DBs without affecting online jobs, via the SAN and without passing through a LAN, by collaboration between internal functions in the storages and the database management system (DBMS) in the application servers. [0047]
  • FIG. 4 illustrates the flow of the non-disruptive backup. First, using said internal functions in the storages, copying from the volumes to be backed up (primary volumes) to secondary volumes with a capacity equal to or larger than that of the primary volumes is executed within a storage unit to make a copy of the primary volumes. Next, during execution of applications, the status of the database management system (DBMS) in an application server is changed to a backup-allowable state to prevent online jobs from being affected, and then the backup server makes a backup copy of the data in the secondary volumes to tape units. [0048]
  • FIG. 5a and FIG. 5b illustrate an outline of the processing by the volume copy function, an internal function of a storage unit, in the process of the non-disruptive backup illustrated in FIG. 4. According to a prior backup technique, not shown in the figure, the jobs which a server performs against a database (DB) are first stopped, a backup copy of the DB is made to other storages, and after the relevant backup processing is complete, said online jobs against the DB are restarted. According to the prior art, therefore, online jobs against a DB must be stopped during backup operation. [0049]
  • In contrast to this, in one example of a preferred embodiment of the present invention as illustrated in FIG. 5a, a replica for backup, namely Logical Volume B (Logical VOLB), is secured in a storage unit and a copy is made in advance. That is, when data in Logical Volume A (Logical VOLA) is to be backed up, the data in Logical VOLA is copied to Logical VOLB beforehand. Concretely, if Logical VOLA is a backup target, the two logical volumes Logical VOLA and Logical VOLB are prepared in advance and duplication is directed. [0050]
  • While data in Logical VOLA is being copied to Logical VOLB sequentially in the storage unit, when data is written to the storage unit from an online job (JOBA in the figure) concurrently with the copying, the duplicated writing of the data from the job is automatically performed on both Logical VOLA and Logical VOLB in the storage unit. After completion of the sequential copying from Logical VOLA to Logical VOLB, if data is written from JOBA, duplicated writing is likewise performed to keep the data of Logical VOLA and Logical VOLB identical. [0051]
  • When performing backup, the backup server instructs the storage unit to perform a pair split by using a means for controlling disk drives. After the split instruction, even though data is written from JOBA, the storage unit writes the data to Logical VOLA only, and not to Logical VOLB. Thereby, the data present in Logical VOLA at the moment the split instruction is given is left in Logical VOLB as it is. After the split instruction, the backup software on the backup server reads data from the secondary volume, Logical VOLB, and makes a backup copy of the data to a backup device such as a tape unit. [0052]
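  • (The following minimal Python sketch models the duplicated-write and pair-split behavior just described; it is an illustration only, and every class and method name is a hypothetical stand-in for internal storage-unit functions, not an actual product interface.)

```python
# Minimal model of the duplicated-write / pair-split behavior of FIG. 5a.
# All names are hypothetical illustrations, not a real storage interface.

class MirroredPair:
    def __init__(self, num_blocks):
        self.vol_a = [None] * num_blocks   # primary volume (Logical VOLA)
        self.vol_b = [None] * num_blocks   # secondary volume (Logical VOLB)
        self.paired = True                 # duplicated writing is active

    def write(self, block, data):
        """A write from an online job (JOBA)."""
        self.vol_a[block] = data
        if self.paired:
            # While the pair is synchronized, every write is duplicated
            # so VOLA and VOLB stay identical.
            self.vol_b[block] = data

    def split(self):
        """Pair split issued by the backup server: VOLB is frozen with
        the data present at the moment of the split instruction."""
        self.paired = False

    def read_secondary(self, block):
        """The backup server reads the frozen secondary volume."""
        return self.vol_b[block]


pair = MirroredPair(num_blocks=8)
pair.write(0, "before-split")   # written to both VOLA and VOLB
pair.split()
pair.write(0, "after-split")    # written to VOLA only
assert pair.read_secondary(0) == "before-split"
```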
  • However, with the volume duplication scheme illustrated in FIG. 5a, a duplicated volume must be prepared before the time at which backup is performed. Therefore, in order to perform backup, volume duplication must be started earlier than the backup time by at least the duplication time, taking into consideration the time taken to duplicate a volume. A function of the storage unit illustrated in FIG. 5b solves this problem. [0053]
  • In the case of FIG. 5b, a Logical VOLB to which a copy of Logical VOLA is made must be prepared in the same way as for FIG. 5a. Before starting backup, the backup server instructs the storage unit to perform a pair split by using a means for controlling disk drives, in the same way as in the case of FIG. 5a. At this time, however, the data in Logical VOLA need not yet have been copied to Logical VOLB. After the split instruction, the backup software on the backup server starts reading data from the secondary volume, Logical VOLB. While data in Logical VOLA is being copied to Logical VOLB sequentially in the storage unit, if no data is present in Logical VOLB when the backup server attempts to read from it, the disk drive either reads the data out of Logical VOLA and hands it over to the backup server, or first copies the data from Logical VOLA to Logical VOLB and then hands it over to the backup server. As a result of this processing, although no data is present in Logical VOLB at the time of splitting, from the view of the backup server a copy of the data in Logical VOLA appears to be present in Logical VOLB. [0054]
  • However, data may be written from the application server into a certain area of Logical VOLA during the backup processing. Since data in Logical VOLA is being copied to Logical VOLB sequentially in the storage unit, if such data from the application server were carried into Logical VOLB by the copying, data written after the split would also end up in Logical VOLB. To prevent this, the storage unit first reads the data currently present in the area of Logical VOLA for which the write demand is made and writes that data out into Logical VOLB. After that, the storage unit writes into Logical VOLA the data which the application server demanded to write. As a result of this processing, only the data present in Logical VOLA at the time of the split instruction is copied to Logical VOLB. With this method, the data in the primary volume (Logical VOLA) does not need to have been copied to the secondary volume (Logical VOLB) when the backup processing starts; that is, a mode of system operation in which a copy of the volumes must be prepared in advance is not required, resulting in improved system operability. [0055]
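  • (The split-first behavior of FIG. 5b can be sketched in the same hypothetical style: the secondary volume starts empty, backup reads fall through to the primary, and a host write first preserves the old block, so the backup server always sees the split-time image.)

```python
# Minimal model of the split-first scheme of FIG. 5b: VOLB need not be
# populated before the split; blocks are copied on demand. All names
# are hypothetical illustrations, not a real storage interface.

class CopyOnSplitPair:
    def __init__(self, vol_a):
        self.vol_a = vol_a                  # primary (Logical VOLA)
        self.vol_b = [None] * len(vol_a)    # secondary, initially empty
        self.copied = [False] * len(vol_a)  # which blocks reached VOLB

    def backup_read(self, block):
        """Backup server reads VOLB; if the block has not been copied
        yet, the storage unit fetches it from VOLA, copying it into
        VOLB on the way."""
        if not self.copied[block]:
            self.vol_b[block] = self.vol_a[block]
            self.copied[block] = True
        return self.vol_b[block]

    def host_write(self, block, data):
        """A write from the application server after the split: the old
        VOLA data is saved into VOLB first, so VOLB keeps the image as
        of the split instruction."""
        if not self.copied[block]:
            self.vol_b[block] = self.vol_a[block]
            self.copied[block] = True
        self.vol_a[block] = data


pair = CopyOnSplitPair(vol_a=["a0", "a1", "a2"])
pair.host_write(1, "new")            # old "a1" preserved in VOLB first
assert pair.backup_read(1) == "a1"   # backup still sees the split image
```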
  • FIG. 7 illustrates an example of installing a system constructed for the non-disruptive backup illustrated in FIGS. 4, 5a, and 5b. The application server is equipped with a DBMS and a means for controlling disk drives, and the backup server is equipped with backup software and a means for controlling disk drives. As an advance preparation, the means for controlling disk drives is installed, its configuration is set up, and its operation is checked. After that, when constructing a non-disruptive backup system, first a DBMS script (Logging in, Setting the backup mode, Terminating the backup mode, and Logging out) is created, a script (Pair split, Pair event wait, and Resynchronization) for the means for controlling disk drives in the application server is created, collaborative operation with the backup software is checked, and parameters for allocation of logical units and for the means for controlling disk drives are set; the resulting sequence is sketched below. [0056]
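  • (Assuming hypothetical helper names for the DBMS script steps and the disk-control commands listed above, the constructed sequence might be wired together as in the following sketch; none of these names refer to a real product API.)

```python
# Hypothetical sketch of the non-disruptive backup sequence of FIG. 7.
# Each helper stands in for a DBMS script step or a disk-control
# command named in the text; the names themselves are invented.

def run_non_disruptive_backup(dbms, disk_ctl, backup_sw):
    session = dbms.login()                    # DBMS script: Logging in
    dbms.set_backup_mode(session)             # allow backup during online jobs
    disk_ctl.pair_split()                     # freeze the secondary volume
    disk_ctl.pair_event_wait("split-done")    # Pair event wait
    backup_sw.copy_secondary_to_tape()        # backup server reads VOLB
    disk_ctl.resynchronize()                  # re-pair VOLA and VOLB
    dbms.terminate_backup_mode(session)       # DBMS script: back to normal
    dbms.logout(session)                      # DBMS script: Logging out
```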
  • In addition, in the case of another example of a non-disruptive backup configuration, illustrated in FIG. 6, the primary and secondary volumes created with the mirroring software are mirror-split according to an instruction from the collaborating tool in the application server, and while backup is performed using one volume (the secondary volume), jobs can continue using the other volume (the primary volume). Then, after the backup terminates, resynchronization is performed. Concretely, duplicated writing to the primary and secondary volumes is performed with the mirroring software in the application server, access to the DB is stopped with the collaborating tool (software) in the application server, and access to the DB is restarted after the mirror split is directed. Next, the backup copying of data from the secondary volume to a backup device such as a tape unit connected with the backup server is started by use of the collaborating tool (software) in the backup server. After that, the collaborating tool in the application server, upon being notified of the completion of the backup by the collaborating tool (software) in the backup server, directs mirror resynchronization and resumes duplicated writing. [0057]
  • Next, FIG. 8 and FIG. 9 illustrate the details of a configuration and function of the tape unit-shared backup. In outline, this function is intended to reduce the cost of managing data that are scattered among many servers and to reduce the load on a LAN, with the result that high-speed backup is achieved. Further, by enabling a tape library to be shared among many servers, effective use can be made of the expensive library (compared with the case where a backup tape unit is installed for each disk drive), and by sharing a single tape library among multiple servers, backup data can be output directly to a tape unit via the SAN without passing through a LAN, so that high-speed backup is achieved. [0058]
  • The left part of FIG. 8 illustrates conventional tape unit backup. Backup data is copied from each disk drive of the individual servers, via a LAN and through the backup server, to a tape unit; hence data passes over the LAN for every backup, putting a load on the LAN. Further, a load is also put on the backup server for every backup. [0059]
  • In accordance with a preferred embodiment of the present invention, in the case of the LAN-free backup illustrated in the middle part of FIG. 8, the backup processing can be speeded up by copying data from a disk drive to a tape unit via the SAN, so that backup is achieved without passing through a LAN. When performing backup, a single backup server suffices, and hence the load on the servers is reduced. In accordance with another preferred embodiment of the present invention, since the server-less backup illustrated in the right part of FIG. 8 copies data directly from the disk drives to a tape unit, the backup processing can be speeded up and the load on the servers reduced still further. For the embodiment illustrated in the right part of FIG. 8, the disk drives must be equipped with a capability of writing into tape units, the tape units must be equipped with a capability of reading data from disk drives, the FC switches must be equipped with a capability of writing from disk drives into tape units, or, if tape units are connected to FC-SCSI multiplexers (described later in the explanation of FIG. 9), the FC-SCSI multiplexers must be equipped with a capability of writing from disk drives into tape units. [0060]
  • FIG. 9 illustrates another example of a configuration for tape unit-shared backup. The configuration shown in FIG. 9 corresponds to the LAN-free backup shown in the middle part of FIG. 8. In this configuration example, two or more nodes share a tape library concurrently and the individual servers perform their own backups. In FIG. 9, Server C differs in function from Servers A and B: in addition to a backup agent necessary to actually perform a backup operation, it has a backup manager installed for managing the backup as a whole, and it is equipped with functions such as assigning a backup drive. Here, the backup unit, for example, has three drives, and Drive 1 is assigned to Server A. When a backup demand is made from Server A, the backup unit is controlled so that a tape cartridge for storage is loaded onto Drive 1. In addition, drives may be assigned to servers in such a way that the backup manager manages the state of drive usage, selects unused drives, and assigns a suitable one of them. In the structure shown in FIG. 9, a set consisting of an FC-SCSI multiplexer and a backup drive corresponds to a tape library shown in FIG. 8. [0061]
  • The concrete operation of the tape unit-shared backup shown in FIG. 9 is described below. First, the agent on Server A demands that the backup manager mount a tape cartridge. Next, the manager, receiving the demand, mounts a tape cartridge onto an available drive of the tape library. The manager then informs the agent on Server A of the completion of mounting and the name of the drive onto which the tape cartridge has been mounted. The agent on Server A then performs the backup proper: concretely, Server A reads data from a storage and writes the data onto the mounted tape cartridge through an FC switch and an FC-SCSI multiplexer. Finally, after the backup is complete, the agent on Server A demands that the manager demount the tape cartridge. The manager instructs the tape cartridge to be demounted, and all the processing terminates. [0062]
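  • (A minimal sketch of this agent/manager exchange follows; the drive names, message shapes, and functions are invented for illustration and do not describe a real backup product.)

```python
# Hypothetical sketch of the agent/manager exchange for tape unit-shared
# backup (FIG. 9). All names and message formats are illustrative only.

class BackupManager:
    """Runs on Server C; owns the drive-assignment table."""
    def __init__(self, drives):
        self.free_drives = set(drives)

    def mount(self, server):
        drive = self.free_drives.pop()         # pick any unused drive
        print(f"manager: mounted cartridge for {server} on {drive}")
        return drive                           # reply: completion + drive name

    def demount(self, drive):
        print(f"manager: demounted cartridge from {drive}")
        self.free_drives.add(drive)


def agent_backup(server, manager, read_data, write_tape):
    drive = manager.mount(server)              # demand mount, learn the drive
    write_tape(drive, read_data())             # copy storage -> tape via SAN
    manager.demount(drive)                     # demand demount when done


manager = BackupManager(drives=["Drive1", "Drive2", "Drive3"])
agent_backup(
    "ServerA", manager,
    read_data=lambda: b"volume image",
    write_tape=lambda d, data: print(f"agent: wrote {len(data)} bytes to {d}"),
)
```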
  • Next, the following describes a configuration for, and a function of, asynchronous remote copying in disaster recovery as a data protection measure. This is intended for assurance of data security by remote copying over a long distance, for quick restoration of a system in the event of a disaster such as an earthquake, for duplication of a database to a remote site without affecting the performance of the main site, and for continuation of jobs at the remote site in the event of a disaster. [0063]
  • FIG. 10 illustrates a system configuration for asynchronous remote copying. A main site and a remote site are located far enough from each other that both will not suffer from the same disaster, and they are connected through communication lines. When information is updated at the main site and the update is complete, completion of the update is reported to a server without waiting for the information to be reflected at the remote site, that is, asynchronously. Updated data is then copied sequentially, at a suitable timing, from the main site to the remote site; if the data is not transferred in the same order in which it was updated at the main site, the updated data is sorted into time sequence by a system at the remote site and then applied with the order of updates guaranteed (for example, if updates recording a receipt and a payment of money were applied in reverse order, the processing of the resulting balance could produce improper dealings). [0064]
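  • (The ordering guarantee can be illustrated with a small sketch in which each update carries a sequence number assigned at the main site and the remote site applies only a contiguous prefix; the sequence-number mechanism shown is an assumption made for illustration.)

```python
# Minimal sketch of asynchronous remote copying with update ordering
# preserved (FIG. 10). Sequence numbers are an illustrative assumption.

import heapq

class RemoteSite:
    """Applies updates strictly in the order they occurred at the main
    site, even if they arrive out of order over the WAN."""
    def __init__(self):
        self.pending = []      # min-heap keyed by sequence number
        self.next_seq = 0
        self.volume = {}

    def receive(self, seq, block, data):
        heapq.heappush(self.pending, (seq, block, data))
        # Apply only the contiguous prefix: a receipt must never be
        # applied after the payment that followed it, and vice versa.
        while self.pending and self.pending[0][0] == self.next_seq:
            _, blk, d = heapq.heappop(self.pending)
            self.volume[blk] = d
            self.next_seq += 1


remote = RemoteSite()
remote.receive(1, "acct", "payment")   # arrives first, but is held back
remote.receive(0, "acct", "receipt")   # now 0 and then 1 are applied
assert remote.volume["acct"] == "payment" and remote.next_seq == 2
```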
  • Next, the following describes a configuration for, and a function of, high-speed replication between servers for data sharing. As shown in FIG. 11, when loading data between a DB on a main frame (a backbone database with high reliability ensured) and a DB on UNIX/NT servers (for example, a database for which ease of data handling is considered more important than reliability, as when performing statistical processing of data, and onto which the source data necessary for the statistical processing is therefore loaded from the main frame DB), intermediate files are set up as files of the main frame DB, and the data is first moved from the backbone DB to the intermediate files (because the specifications of the data loader of a UNIX server do not allow it to read data directly from the backbone DB). Since the data in the intermediate files is converted to a level that the data loader of a UNIX server can read, a replica of the data is made in the DB on the UNIX server through pipes to prepare a DB for the required processing. In this case, the data replication from the backbone DB to the DB on the UNIX server is done without passing through a LAN, and hence high-speed replication between servers can be achieved. Here, the intermediate files can be a virtual volume that is created temporarily in semiconductor memory, namely cache memory, outside the magnetic disk drives. With cache memory, data can be transferred at a higher speed. [0065]
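  • (As a loose illustration of this path, the following sketch stands an exporter, a comma-separated intermediate form, and a loader in for the backbone DB unload, the intermediate files, and the UNIX data loader; all names and formats are invented.)

```python
# Loose illustration of the intermediate-file path of FIG. 11. The
# exporter, the comma-separated intermediate form, and the loader are
# invented stand-ins for the backbone DB unload, the intermediate
# files, and the UNIX server's data loader.

def export_backbone_db(rows):
    """Convert backbone-DB records into a level the UNIX-side data
    loader can read (here: simple comma-separated lines)."""
    for row in rows:
        yield ",".join(str(v) for v in row) + "\n"

def unix_data_loader(lines):
    """Stand-in for the data loader consuming the piped stream."""
    return [line.rstrip("\n").split(",") for line in lines]

# The generator acts as the pipe: records stream from the exporter to
# the loader without touching a LAN.
loaded = unix_data_loader(export_backbone_db([(1, "tokyo"), (2, "osaka")]))
assert loaded == [["1", "tokyo"], ["2", "osaka"]]
```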
  • Furthermore, so that UNIX servers or PC servers can construct a data warehouse easily, installing on the UNIX servers or their attached units software which can perform easily and quickly, on a GUI basis, the series of processing from extracting data from a variety of source DBs such as the backbone DB, through converting and consolidating the data, up to loading the data, shortens the time taken to transfer data when constructing a data warehouse. [0066]
  • Next, the following describes a configuration for, and a function of, integrated operation and management of systems including storages. For computer systems that are large in scale and required to run continuously 24 hours per day, system management, in particular storage management, is considered important. [0067]
  • A typical function of storage management is monitoring for device failures, in particular which part of a device fails. In addition, system maintenance work such as backing up data at each site periodically against a system crash, system setting modification work when volumes are added, and data handling such as moving data from some volumes to other volumes when performance drops due to load congestion in a particular volume are all required. Monitoring the condition of the load is accordingly also important management work. In a conventional system, one maintenance terminal is installed for each storage unit, and the individual storages must be managed from their respective terminals. [0068]
  • With the means of storage integrated operation and management relating to a preferred embodiment of the present invention, all storage units can be managed from a single terminal. [0069]
  • FIG. 12 illustrates an example of backup operation and failure monitoring in a large-scale office system. In an ordinary office environment, there are data used commonly within each department and data used commonly by all departments. In this example, multiple client computers and multiple server computers exist on each of floor A, floor B, and floor C, and a mail server and a World Wide Web (WWW) server, used commonly by all departments as an enterprise-wide system, provide their services to each department. [0070]
  • For small-scale data used by a single department, the individual departments can in many cases make backup copies of their own data, so a backup device such as a tape unit is installed in each department. In addition, multiple large-scale storages to store large volumes of data, together with a backup device such as a tape library, are installed at a computer center, and each device at the center, each system on the individual floors, and the enterprise-wide system are connected mutually via a Storage Area Network. [0071]
  • A centralized monitoring console monitors all devices on the individual floors, in the enterprise-wide system, and at the computer center, and all device failure reports are collected at the centralized monitoring console. Service personnel can easily identify in which device a failure has occurred by looking at the console. When data is destroyed by a failure, the data can be recovered (restored) from a backup device. This restore processing can also be initiated from the centralized monitoring console. [0072]
  • In addition, since service personnel leave the terminal unattended in some cases, the centralized monitoring console has a function whereby, in such a case, a mail is sent from the console to a cellular phone, etc., of the service personnel to notify them. [0073]
  • The centralized monitoring console also directs how backup is to be operated and manages the backup. The required frequency and destination of backup vary with the kind of data. For example, data that hardly needs to be backed up (for example, data updated very rarely) and data accessed by only a particular department or person do not need to be backed up frequently. Moreover, even if one attempted to make a backup copy of all data in the same time zone, there is a limit to the number of backup devices. The centralized monitoring console therefore arranges the frequency, time zone, and destination of backup for each kind of data or volume depending on the needs of users, and automatically performs the individual backup processing. [0074]
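  • (Such per-volume policies might be represented as in the sketch below; the field names and thresholds are assumptions, not part of the described system.)

```python
# Illustrative sketch of per-volume backup policies as the centralized
# monitoring console might hold them. All field names are invented.

from dataclasses import dataclass

@dataclass
class BackupPolicy:
    volume: str
    frequency_days: int    # rarely-updated data gets a large interval
    time_zone: str         # run in an off-peak window
    destination: str       # which backup device to use

policies = [
    BackupPolicy("dept-A-docs", frequency_days=7, time_zone="02:00-04:00",
                 destination="floor-A-tape"),
    BackupPolicy("shared-db", frequency_days=1, time_zone="00:00-02:00",
                 destination="center-library"),
]

def due_today(policy, days_since_last):
    """Frequently-updated volumes come due daily; rarely-updated ones
    only after their longer interval elapses."""
    return days_since_last >= policy.frequency_days

assert due_today(policies[0], days_since_last=7)
assert not due_today(policies[0], days_since_last=3)
```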
  • FIG. 14 illustrates a diagrammatic view of the processing of setting up volumes. In the case of a large-scale storage unit, multiple disk drives are grouped into one or more apparent logical devices (LDEVs). In addition, the storage unit has multiple ports for connecting to hosts or fiber channel switches, and which ports are allowed to access each LDEV can be set and changed on the storage unit. When a host references an LDEV, the LDEV is recognized uniquely by the port identifier and logical unit number (LUN) of the storage unit. Hereafter, this pair of a port identifier and an LUN is called the host address. In the storage unit, a host address is assigned to each LDEV and is made visible to hosts. [0075]
  • From the centralized monitoring console, a host address is assigned to each LDEV, and the set of hosts that can access each LDEV is configured. Since all hosts are connected to all storages via the storage area network, there is a risk that a host which is not normally allowed to access a storage gains invalid access to it; therefore, the hosts that may access each LDEV can be registered in the storage to prevent invalid access. [0076]
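  • (A minimal sketch of this host-address assignment and access registration follows; the data structures are illustrative stand-ins for the settings held inside a storage unit.)

```python
# Minimal sketch of host-address assignment and access registration
# (FIG. 14). A "host address" is the pair (port identifier, LUN); the
# structures below are illustrative, not a real storage interface.

class StorageUnit:
    def __init__(self):
        self.host_address = {}   # (port, lun) -> LDEV name
        self.allowed = {}        # LDEV name -> set of permitted hosts

    def assign(self, port, lun, ldev):
        self.host_address[(port, lun)] = ldev

    def permit(self, ldev, host):
        self.allowed.setdefault(ldev, set()).add(host)

    def access(self, host, port, lun):
        """Reject hosts that were not registered for the LDEV, even
        though the SAN physically connects every host to every port."""
        ldev = self.host_address[(port, lun)]
        if host not in self.allowed.get(ldev, ()):
            raise PermissionError(f"{host} may not access {ldev}")
        return ldev


unit = StorageUnit()
unit.assign(port="CL1-A", lun=0, ldev="LDEV00")   # invented identifiers
unit.permit("LDEV00", host="hostA")
assert unit.access("hostA", "CL1-A", 0) == "LDEV00"
```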
  • FIG. 13 illustrates an example of monitoring the performance of storages. The centralized monitoring console can watch the load condition of each volume: concretely, the number of I/O operations received per second, the ratio of read to write operations, the cache hit rate, etc. Generally, a load is very seldom spread over all volumes evenly; there may be volumes with an extremely high load and volumes with nearly no load. Since the condition in which a one-sided load is put on particular volumes can be monitored all at once on the centralized monitoring console, the load can be reallocated by moving part of the data on heavily loaded volumes to lightly loaded volumes, and an operation plan can thus be drawn up easily so as to prevent the performance of the overall system from dropping. [0077]
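  • (The kind of rebalancing hint an operator might derive from these statistics can be sketched as follows; the volumes, rates, and thresholds are invented for illustration.)

```python
# Illustrative sketch of the load view of FIG. 13: per-volume I/O rates
# are collected and a rebalancing hint is derived. Numbers are invented.

volumes = {
    "vol1": {"iops": 950, "read_ratio": 0.8, "cache_hit": 0.4},
    "vol2": {"iops": 12,  "read_ratio": 0.5, "cache_hit": 0.9},
    "vol3": {"iops": 30,  "read_ratio": 0.6, "cache_hit": 0.7},
}

def rebalance_hint(vols, hot=500, cold=50):
    """Suggest moving part of the data from each heavily loaded volume
    to some lightly loaded volume, as the console operator would."""
    hot_vols = [v for v, s in vols.items() if s["iops"] >= hot]
    cold_vols = [v for v, s in vols.items() if s["iops"] <= cold]
    return [(src, cold_vols[0]) for src in hot_vols if cold_vols]

assert rebalance_hint(volumes) == [("vol1", "vol2")]
```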
  • In addition, FIG. 15 illustrates an example of a case where a storage unit has functions for reallocating volumes. Some storage units have volumes of small capacity but comparatively high speed, and other storage units have volumes of large capacity but low performance. In such a situation, it is better to move data which has a low access frequency to the large-capacity volumes, and data which has a high access frequency to the high-speed volumes. In the disk drives involved in this case, individual logical devices (LDEVs) can be moved to other areas. [0078]
  • In addition, the reallocation of volumes is invisible to hosts both during and after movement of the logical devices, and the volumes can be handled in the same way as before the movement. The disk drives obtain the usage rates of the logical devices as statistical information and send the information to the centralized monitoring console. The centralized monitoring console predicts, based on this information, how the usage rate of the logical devices would change if a logical device were moved, and presents the prediction to service personnel. Service personnel can draw up a reallocation plan based on the prediction more easily than in the case of the previous figure. In addition, from the centralized monitoring console, service personnel can direct whether or not to actually move the logical devices, or can set detailed conditions in advance under which volumes are moved automatically when they reach a certain state. [0079]
  • In addition, there is FC switch management as a part of integrated system operation and management; FC switch management makes it possible to apply various settings to FC switches and to manage the status of zoning, etc. Concretely, it includes management such as displaying the fabric topology, setting the FC switches' zoning, and setting/displaying various parameters in the FC switches, and these items can be watched on the centralized monitoring console. FIG. 16 illustrates an example of a configuration of a fabric switch (FC) lying between servers and storages, with the switch divided into three zonings. [0080]
  • Next, for the whole configuration of a computer system relating to a preferred embodiment of the present invention described above, the following describes a concrete example of how a terminal in which the operation and management software illustrated in FIG. 1 has been installed, namely a management terminal, manages and controls the whole computer system. [0081]
  • To perform a backup (FIG. 4), the volume in a storage that is to be backed up must be determined. Usually, a server manages the data which an application stores in a storage in units of files. A storage, on the other hand, manages data in units of volumes. [0082]
  • Therefore, when a backup is started, if the SAN management unit (the terminal shown in FIG. 1, in which operation and management software has been installed) is asked by a server to back up a file, the SAN management unit obtains the information identifying the file, information about the backup device (its address on the SAN, etc.), the backup time, and so on from the servers. Further, the SAN management unit obtains from the storages the information identifying the volume in which the relevant files are stored. Next, using these two kinds of information, the SAN management unit instructs the storage in which the relevant files are stored to create a replica (secondary volume) of the volume to be backed up. Concretely, the SAN management unit instructs the storage which holds the volume containing the relevant files (the primary volume) to assign another volume (the secondary volume) for the replica and to create the replica. In assigning the secondary volume, care must be taken that a volume of at least the same capacity as the primary volume is assigned, and the SAN management unit must therefore grasp what capacity and configuration of volumes each storage has. When the creation of the secondary volume terminates, the SAN management unit, receiving this termination report, instructs the storage to split the pair of volumes, and instructs the backup server to make a backup copy of the data from the secondary volume to the backup device while the primary volume remains occupied with the normal processing from the servers. The backup server reads the data in the secondary volume via the SAN and transfers the read data to the backup device. When the backup processing terminates, this is reported to the SAN management unit by the backup server, and the SAN management unit then reports the termination of the backup to the application that requested it. Note that the time at which the pair of volumes is split is the backup time described above, and that the destination on the SAN to which the backup data is transferred is said address of the backup device on the SAN. Here, while the communication of control information between the SAN management unit and the storages can be performed from the SAN management unit, through a LAN, a server, and the SAN, to a storage as illustrated in FIG. 1, if the SAN management unit and the storages are connected directly via a LAN (not shown in the figure), said control information can instead be communicated through that connection. [0083]
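  • (Strung together, the flow driven by the SAN management unit might look like the following sketch; every object and method is a hypothetical placeholder for the control-information exchanges described above, not an actual interface of the system.)

```python
# Hypothetical end-to-end sketch of the backup flow driven by the SAN
# management unit. All objects, methods, and fields are invented
# stand-ins for the exchanges described in the text.

def managed_backup(mgmt, server, storage, backup_server, file_name):
    # Obtain file identity, backup device address, and backup time.
    req = server.backup_request(file_name)
    # Map the file to the volume that holds it.
    primary = storage.volume_of(req.file_name)
    # Have the storage assign a secondary volume of at least the
    # primary's capacity, and create the replica.
    secondary = storage.assign_secondary(min_capacity=primary.capacity)
    storage.create_replica(primary, secondary)
    # Split the pair at the requested backup time.
    storage.pair_split(primary, secondary, at=req.backup_time)
    # The backup server copies the secondary volume to the backup
    # device while the primary stays in normal use.
    backup_server.copy(secondary, device=req.device_address)
    # Report completion back to the requesting application.
    mgmt.report_done(req)
```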
  • In the above description, the SAN management unit plays the central role in controlling the reception of a backup demand, the creation and splitting of a replica, the backup processing, and the reporting of backup termination. However, software in an application server and software in a backup server can exchange control information directly via a LAN, and thereby realize the backup system without making use of a SAN management unit (FIG. 6). In this case, compared with the case where a SAN management unit is used, the software in the two servers must collaborate directly, but the SAN management unit described above is not required, and hence this scheme is considered suitable for a comparatively small-scale system. [0084]
  • In the backup system described above, data is backed up by transferring it to a backup device through a backup server; however, the backup can also be controlled so that data is transferred directly from the secondary volume in a storage to a backup device via the SAN (direct backup), without passing through a backup server. In the case where a SAN management unit is used, this backup is achieved by instructing a storage to transfer the data in the secondary volume to a backup device after the SAN management unit recognizes that a replica has been created and split. This instruction includes the address of the backup device on the SAN, etc. [0085]
  • In addition, in the backup system described above, applications play the primary role in specifying the backup file and the volume; however, for files and volumes which are updated frequently and require backup every day or every several hours, the load on applications can be reduced by specifying periodic backup to the management unit and the backup software in advance. [0086]
  • Next, the following describes an example of the functions of a SAN management unit in the tape unit-shared backup (FIG. 8). In the case of the LAN-free backup, the data backup for each individual server is almost the same in operation as the backup described above. The difference is that, since data associated with multiple servers must be backed up, conflicts in the backup processing among these servers must be arbitrated, and functions for arbitrating these conflicts are therefore required of the SAN management unit. For example, the SAN management unit is required to have functions for preventing access congestion in a tape library, such as instructing the multiple servers to back up according to a schedule made out in advance. [0087]
  • The following describes an example of controlling the zoning function illustrated in FIG. 16 as an example of the operations of a SAN management unit. In FIG. 16, cluster servers are connected to storages through a fabric switch. Here, the fabric switch is divided logically, that is, treated as multiple switches. Therefore, if the storage-side output destinations of the switch in Zoning 1 have been separated from those of the switch in Zoning 2 or Zoning 3, cluster servers belonging to the switch in Zoning 1 cannot gain access to the switch in Zoning 2 or Zoning 3, and invalid access from cluster servers belonging to the switch in Zoning 1 to the storage-side output destinations of the switch in Zoning 2 or Zoning 3 can be prevented. [0088]
  • Such setup of zonings in the switch is enabled by connecting the fabric switch and a SAN management unit through a LAN, etc. (neither shown in the figure), and setting up said zonings in the fabric switch according to instructions from the SAN management unit. In the case where a SAN management unit is not used, zonings can be set up in the fabric switch by using a dedicated console, etc.; however, control information for zoning must then be set at the location of said dedicated console each time cluster servers or storages are added, changed, or detached, resulting in inefficient operation. By using a SAN management unit and setting up zonings from it through communication, operability is improved. [0089]
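  • (A sketch of zoning setup driven from a SAN management unit follows; the switch interface shown is invented, and real fabric switches expose their own configuration protocols.)

```python
# Illustrative sketch of setting fabric-switch zonings from the SAN
# management unit (FIG. 16). The switch API shown here is invented.

class FabricSwitch:
    def __init__(self):
        self.zones = {}   # zone name -> set of member ports

    def set_zone(self, name, members):
        self.zones[name] = set(members)

    def can_reach(self, src_port, dst_port):
        """Two ports communicate only if some zone contains both."""
        return any(src_port in z and dst_port in z
                   for z in self.zones.values())


switch = FabricSwitch()
# Configured remotely by the SAN management unit over the LAN:
switch.set_zone("Zoning1", {"clusterA", "storage1"})
switch.set_zone("Zoning2", {"clusterB", "storage2"})
assert switch.can_reach("clusterA", "storage1")
assert not switch.can_reach("clusterA", "storage2")   # invalid access blocked
```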
  • A few examples of the operation of a SAN management unit have been described above. In general, when providing the various data processing functions, the SAN management unit obtains from the servers and storages the information about the files and volumes to be processed, an operation timing, a destination to which data is to be moved, etc., and, based on these pieces of information, instructs the required devices to process the files and volumes (replica creation, data copying, splitting of a replica, backup copying, remote copying, etc.) according to the operation timing. The individual devices perform their processing according to the instructions from the SAN management unit and return the results. On an as-needed basis, the SAN management unit can be made to return the result to the client that requested the processing. [0090]
  • To put it in order, a preferred embodiment of the present invention is considered to be composed of the following steps: step 1, a SAN management unit (the terminal in which operation and management software has been installed, as shown in FIG. 1) accepts a request for processing data in the integrated storage system from an application running on an application server (this step can be replaced by a step in which the SAN management unit creates a processing demand of its own accord according to a schedule made out separately in advance); step 2, it obtains the information necessary for processing the relevant data (information identifying the data to be processed, an operation time, a destination to which data is to be moved, etc.); step 3, it determines the order in which to start the various kinds of functional software (software executing replica creation, data copying, separation of a replica, backup copying, remote copying, etc.) residing on storages, network switches, and servers, based on said obtained information, and makes out a schedule, such as the start timing at which each piece of functional software is to execute (this step can be considered the step that makes the individual pieces of functional software collaborate); step 4, it actually starts the individual pieces of functional software according to the schedule; step 5, it obtains the results of execution from the functional software on the individual devices (the results at step 4 may affect the outcome of step 3, namely the schedule); step 6, it reports the result of step 5 to the application that requested the data processing. Note that this process is divided into these steps for convenience; two of the steps can be combined, or any step can be subdivided into several sub-steps. [0091]
  • As described above, since the SAN management unit has functions for making multiple pieces of functional software collaborate and for operating them, it can easily realize complex functions that the individual pieces of functional software cannot achieve by themselves, and it enables more accurate data processing in an integrated storage system. Complex functions could instead be achieved by creating a single piece of large software rather than by making multiple pieces of functional software collaborate; however, this leads to a situation in which separate software must be developed for each kind of data processing, resulting in an inflexible system. [0092]
  • Next, the following describes, using a concrete example, how storage systems and storage area network techniques are used in a large-scale computer system. FIG. 17 illustrates an example of the configuration of an Internet data center (abbreviated to “iDC”), a kind of system whose numbers have been expanding recently. An Internet data center is entrusted with the WWW servers of Internet service providers (ISPs) and of individual enterprises (this arrangement is called “housing”), and provides network management and server operation and management. Further, it provides value-added services such as web design, construction of electronic commerce (EC) systems, and addition of high-grade security. The Internet data center provides comprehensive solutions to the problems of enterprises which want to do Internet business, such as shortages of system staff and skills, and the preparation of server installation places and networks. [0093]
  • Since high-priced equipment such as high-speed network lines is shared in an Internet data center, the center can, from the provider's standpoint, provide services to many enterprises at low cost. In addition, the users and enterprises which utilize an Internet data center are released from burdensome work such as backup and maintenance and can do business at a lower cost than by running a system alone. However, since an iDC runs many Internet environments and many pieces of application software that individual enterprises use, high-speed Internet backbone lines and many high-performance servers must be installed, and these facilities must have high reliability and high security. In such environments, high-speed and highly functional storage systems are indispensable. [0094]
  • The following describes an example of applying storage area network techniques to a large-scale system such as an Internet data center. [0095]
  • FIG. 18 illustrates a schematic configuration diagram of an Internet data center to which a large-scale storage area network (SAN) is applied. Multiple server computers exist for each enterprise; storages such as disk drives and tape units are consolidated into a few units, perhaps one to three; and the servers and the disk drives/tape units are connected mutually through fiber channel switches. Although individual storage units would have to be connected to individual server computers in an environment without a SAN, through a SAN the storage units can be shared by all computers and can hence be consolidated and managed. In addition, storage units can be added while a host computer is online (in operation), so the addition does not affect jobs. [0096]
  • In addition, from the point of view of backup, storage consolidation through a SAN plays an effective role. FIG. 19 illustrates a schematic configuration diagram of an example of non-disruptive backup under a SAN environment at an Internet data center. In this figure, the individual server computers, storages, and backup libraries of multiple enterprises are connected mutually via a storage area network. A management host exists on the SAN to manage the storage devices and to operate the backup. The data of each server computer, for example the Web contents on a WWW server and the data used by an application server, have been consolidated and stored in storages on the SAN. [0097]
  • The demands for backup are considered to vary with the circumstances of each host computer. For example, in some cases it is desirable that a backup copy of data be taken every day at a time when the access load on a host computer drops, that is, during a time zone such as midnight when the number of accesses to the disk drives decreases; in other cases, for a host computer which is very busy processing update-type transactions, it is desirable that the host computer choose a backup start time freely according to the time and circumstances, such as a moment when the flow of transactions breaks. The management host accepts these demands from the individual host computers and manages the backup processing properly. In addition, since 24-hour-per-day continuous operation is important at an Internet data center, interruption of processing on the host computers must be avoided, and non-disruptive backup is mandatory. An example of backup processing is briefly described below. [0098]
  • For example, if the individual server computers each want to make a backup copy at some time once a day, the management host makes out a schedule of the backup beginning and ending times for the individual server computers: for example, a backup operation for a WWW server of Company A begins at midnight, a backup operation for an application server of Company B at one in the morning, a backup operation for an application server of Company A at half past one in the morning, a backup operation for a WWW server of Company B at three in the morning, and so on. The time taken to perform the backup processing depends on the amount of data each server keeps, etc.; hence the management host keeps track of how much data each server computer keeps in the storages, calculates the time taken for backup based on the amount of data, and makes out the schedule, as sketched below. In addition, if a tape library has multiple tape drives, multiple backup jobs can be executed concurrently. [0099]
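  • (The schedule computation might be sketched as follows; the tape throughput and data amounts are invented assumptions, chosen so that the resulting start times match the example above.)

```python
# Illustrative sketch of the management host's schedule computation:
# the backup duration is estimated from each server's data amount, and
# jobs are packed back-to-back from midnight. Sizes and the drive
# throughput are invented for illustration.

from datetime import datetime, timedelta

servers = [                       # (name, data amount in GB)
    ("CompanyA-WWW", 60),
    ("CompanyB-App", 30),
    ("CompanyA-App", 90),
    ("CompanyB-WWW", 45),
]
TAPE_RATE_GB_PER_HOUR = 60        # assumed throughput of one tape drive

def make_schedule(servers, start):
    schedule, t = [], start
    for name, gigabytes in servers:
        schedule.append((name, t))
        # The next job begins when this one ends.
        t += timedelta(hours=gigabytes / TAPE_RATE_GB_PER_HOUR)
    return schedule

for name, begin in make_schedule(servers, datetime(2000, 1, 1, 0, 0)):
    print(f"{begin:%H:%M}  backup of {name} begins")
# Prints 00:00, 01:00, 01:30, and 03:00, matching the example above.
```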
  • Taking as an example a case where the backup operation for Company A begins at midnight, the following describes the flow of processing. When midnight comes, the management host creates a replica of the data of Company A's WWW server present in the disk drives. For that, the management host finds a free disk (logical volume) in a disk drive, assigns it as the volume for the replica of Company A's WWW server, and instructs the disk drive to create the replica. The flow of the replica-creation processing is as illustrated in detail in FIG. 5a and FIG. 5b. [0100]
  • Following this, a tape cartridge is mounted onto a tape drive in a tape library. After that, the copying of backup data from the replica volume to the tape library begins. The server computer of Company A can perform the data backup processing itself; however, if the direct backup function, by which data is transferred directly from a disk drive to the tape library under the direction of the management host, is supported (it suffices if at least one of the disk drive, the tape library, and the FC switch supports it), this function can be used for the backup processing instead. [0101]
  • In that case, a backup copy of the data is made automatically, without the server computer even being aware that backup processing is taking place. When the backup processing is complete, the tape cartridge is demounted from the tape drive, the replica volume in the disk drive is taken out of use and returned to the pool of free volumes, and the next backup processing follows. A sketch of the whole job appears below. [0102]
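Putting the pieces together, a single backup job might look like the sketch below, which reuses the DiskSubsystem class from the previous sketch; the TapeLibrary methods are likewise assumed placeholders, and the direct copy from the replica volume to the tape bypasses the server, as the paragraphs above describe.

    class TapeLibrary:
        def mount_cartridge(self):
            print("tape library: cartridge mounted on drive 0")
            return 0

        def copy_from_volume(self, volume, drive):
            print(f"tape library: backing up {volume} over the SAN, drive {drive}")

        def demount(self, drive):
            print(f"tape library: cartridge demounted from drive {drive}")

    def run_backup_job(subsystem, library, primary, owner):
        # 1. create the replica inside the disk drive (non-disruptive)
        replica = subsystem.create_replica(primary, owner)
        # 2. mount a cartridge and copy directly, without passing a server
        drive = library.mount_cartridge()
        try:
            library.copy_from_volume(replica, drive)
        finally:
            # 3. demount and return the replica volume to the free pool
            library.demount(drive)
            subsystem.volumes[replica] = None

    run_backup_job(DiskSubsystem({"LV0": "A-WWW primary", "LV1": None}),
                   TapeLibrary(), "LV0", owner="A-WWW replica")

Because the replica volume is freed at the end, the same small pool of spare volumes can serve the next job on the schedule.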
  • In this case, since the tape library is shared and interconnected via the SAN, a single tape library can cover the backup volumes of all the host computers, provided the schedule of tape library utilization is managed properly, for example by the management host. In addition, if the management host assigns volumes properly, a replica volume need be prepared only at the time backup processing is actually performed rather than being kept permanently for each individual volume; hence the number of tape library units, the number of volumes, and so on can be reduced. [0103]
  • Next, although sharing storage units through a SAN yields large cost reductions, there are, on the other hand, considerations to be addressed in an environment in which the servers of multiple enterprises coexist. One of them is security: every server computer can gain access to every storage unit on the SAN, so without countermeasures a server of Company C could look at the data of Company A on the same SAN. Examples of means of solving these problems are described next. [0104]
  • FIG. 20 illustrates an environment in which the server computers and storages of multiple enterprises coexist on a SAN at an Internet data center. In the environment of the figure, where storages are shared by Companies A, B, and C, zones are first set in the FC switch so that the server computers of each enterprise can reach the storage units only through a particular path. Next, the LUs that the server computers of each enterprise use are assigned to the individual paths in the disk drives. For example, if Company B uses the two logical units LU1 and LU2, LU1 and LU2 are assigned to the middle path, and if Company C uses LU0, LU0 is assigned to the right path. [0105]
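The zoning and path assignment just described reduce to a small data model. The sketch below is illustrative only, with every name an assumption; real FC switches and disk subsystems each have their own management interfaces.

    # One zone per enterprise: each company's servers see exactly one path.
    zones = {
        "zone_A": {"servers": ["A-WWW", "A-App"], "path": "path_left"},
        "zone_B": {"servers": ["B-WWW", "B-App"], "path": "path_middle"},
        "zone_C": {"servers": ["C-App"], "path": "path_right"},
    }

    # LUs mapped onto each path inside the disk drives.
    lu_map = {
        "path_left": ["LU3"],           # assumed LU for Company A
        "path_middle": ["LU1", "LU2"],  # Company B's two logical units
        "path_right": ["LU0"],          # Company C's logical unit
    }

    def reachable_lus(server):
        """A server reaches only the LUs mapped onto its zone's path."""
        for zone in zones.values():
            if server in zone["servers"]:
                return lu_map.get(zone["path"], [])
        return []

    print(reachable_lus("B-WWW"))  # ['LU1', 'LU2']
    print(reachable_lus("C-App"))  # ['LU0']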
  • Further, when multiple LUs lie on the same path and are therefore shared by multiple servers, the individual servers may not want to share them in some cases. For example, Company B secures the path for access to LU1 and LU2 in FIG. 20, but there may be a requirement that only one particular server of Company B be permitted to gain access to LU1. In that case, access is limited in units of LUNs: the WWN of the particular server of Company B is registered in the disk drive, and the drive can be set so that only a server whose WWN has been registered can gain access to LU1. [0106]
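The LU-level limitation amounts to a lookup table of registered WWNs held by the disk drive. The following sketch (WWN values and names assumed) shows the check an initiator would pass or fail.

    # LUs with a masking entry admit only the registered WWNs; LUs without
    # an entry remain open to every server on the same path.
    lun_masking = {
        "LU1": {"50:06:0e:80:00:c3:a1:01"},  # Company B's one permitted server
    }

    def can_access(wwn, lu):
        registered = lun_masking.get(lu)
        if registered is None:
            return True           # no entry: any server on the path may access
        return wwn in registered  # entry present: only registered WWNs pass

    assert can_access("50:06:0e:80:00:c3:a1:01", "LU1")
    assert not can_access("50:06:0e:80:00:c3:a1:02", "LU1")
    assert can_access("50:06:0e:80:00:c3:a1:02", "LU2")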
  • These zonings, path assignments, and LU-level access limitations are set from the centralized monitoring console. The topology of the FC switch is checked on the monitoring console, zones are set based on that topology, as many LUs as necessary are then mapped onto the individual paths, and the LUs that each company may use are registered. Furthermore, for LUs to which mutual access is not permitted within the same path, the centralized monitoring console obtains the WWNs of the host computers that are permitted access, sets them in the disk drive, and thereby limits access in units of LUs. [0107]
  • Next, an example is described of applying a computer system that uses an integrated storage system consisting of a SAN and various storages. In recent years, mergers and consolidations of enterprises have increased, giving rise to the need to integrate computer systems across enterprises. [0108]
  • FIG. 21 illustrates an example of a large-scale computer system in which the computer systems of multiple enterprises are interconnected. The host computers of the enterprises are connected through the Internet, enabling mutual utilization of data. In addition, by introducing storage area networks, the storages of the individual enterprises are organized so that they, too, are connected, through a public switched network or leased lines. [0109]
  • From the point of view of computer system operation, integration of data is important. Usually, the application databases used by the individual enterprises differ, so merely interconnecting the devices does not make direct mutual use of the data possible. Generally, therefore, the data from the multiple databases must be consolidated and integrated to construct a new database. [0110]
  • In FIG. 21, Enterprises A and B each have a backbone database on which transaction processing such as account processing is performed, and an information-system database on which analysis processing is performed offline using data from the backbone database. In this example, the data of the backbone databases of Enterprise A and Enterprise B are integrated to create data marts for various jobs. In some cases a large-scale data warehouse is constructed first, and small-scale data marts for the various applications are then created individually from the data warehouse. Where no environment exists in which the storages are interconnected via a storage area network, the data must be moved through a host computer and a network when the databases are integrated; since many of the databases that enterprises want to share have a large capacity, transferring the data takes a great deal of time. [0111]
  • In the example in FIG. 21, a replica of Enterprise B's data is created by using the remote copying function of the storages. The replica volume is split once at some frequency, such as once a day or once a week, and a replication server reads the data in the split replica volume to create the various data marts. The replication servers exist separately from the various information-system DBMSs that make use of the data marts. Because the storages are interconnected via a storage area network, a replica of a database can be created without putting any load on a host by using the remote copying function of the storages. In addition, the replication servers that create the data marts and the information-system DBMSs can be run on separate host computers, so the processing of creating data marts affects neither the jobs of the backbone DB nor those of the information-system DB. [0112]
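As a rough illustration of the replication-server cycle, the sketch below assumes the split replica is visible to the replication server as an ordinary SQLite file containing a transactions table; a real installation would mount the split volume and use its own DBMS, so every name here is a placeholder.

    import sqlite3

    def refresh_data_mart(replica_db_path):
        # The replica volume has already been split, so this read works on a
        # frozen, consistent copy and puts no load on the backbone DB host.
        src = sqlite3.connect(replica_db_path)
        mart = sqlite3.connect("sales_mart.db")  # one mart per application
        rows = src.execute(
            "SELECT region, SUM(amount) FROM transactions GROUP BY region")
        mart.execute("CREATE TABLE IF NOT EXISTS sales_by_region"
                     " (region TEXT PRIMARY KEY, total REAL)")
        mart.executemany(
            "INSERT OR REPLACE INTO sales_by_region VALUES (?, ?)", rows)
        mart.commit()
        src.close()
        mart.close()

Run once per split, for example nightly, with one call per data mart; the backbone DBMS and the information-system DBMSs never see this traffic.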
  • According to the present invention, an integrated storage system can be constructed by reinforcing the collaboration of the components and functions of a storage system in which a SAN is used, and all of the various functions illustrated in FIG. 3 can be achieved. [0113]
  • Further, by connecting such an integrated storage system to the Internet and applying it to an Internet data center that keeps a large volume of data and puts that data to use, Internet information services can be provided cost-efficiently, in both quantity and quality, and in a timely manner. [0114]

Claims (12)

1. A computer system which has plural client computers, plural various servers, plural various storages which keep data, a local area network (LAN) which connects said computers and said servers, and a storage area network (SAN) which lies between said servers and said storages,
wherein said SAN forms a switched circuit network which is capable of connecting any said servers and any said storages through fiber channel switches (FC switches),
said computer system comprising a terminal having operation and management software which performs storage management comprising management of logical volumes in said plural storages, data arrangement and error monitoring, management of setting up said FC switches, and a backup operation for data in said storages.
2. The computer system as claimed in claim 1, wherein said SAN is connected to a SAN in another computer system via a wide area network (WAN).
3. The computer system as claimed in claim 1, wherein when data in a primary volume in said storage is backed up to a backup device in a non-disruptive manner, a secondary volume corresponding to said primary volume is created in said storage by an internal function, a copy is made from said primary volume to said secondary volume, and said copy is transferred to said backup device via said SAN without passing through said LAN.
4. A computer system which has plural client computers, plural various servers, plural various storages which keep data, a local area network (LAN) which connects said computers and said servers, and a storage area network (SAN) which lies between said servers and said storages, wherein:
said SAN forms a switched circuit network which is capable of connecting any said servers and any said storages through fiber channel switches (FC switches), and
when data in said storage is backed up to a backup device in a non-disruptive manner, said storage has a function of receiving an instruction of a volume split from said server, a function of presenting the data in a primary volume as if it were kept in a secondary volume at the time of said instruction, and a function of backing up said data from said secondary volume to said backup device.
5. A method for managing a system having servers, a storage which keeps data of said servers, a network which connects said servers and said storage, and a backup device which is connected with said network and backs up said data, said method comprising:
a first step of obtaining information identifying data on which processing is to be executed;
a second step of obtaining a specification of processing for the data denoted by said information;
a third step of instructing said storage which keeps the data denoted by said information to execute said specification of processing; and
a fourth step of receiving, from said storage, a result of processing the data denoted by said information.
6. The method for managing said system as claimed in claim 5, wherein said specification of processing is to transfer said data from said storage to said backup device.
7. The method for managing said system as claimed in claim 5, wherein said specification of processing is to create a copy of the data denoted by said information, and to transfer said created copy data to said backup device.
8. The method for managing said system as claimed in claim 5, further comprising a fifth step of obtaining a timing at which said specification of processing is executed and a sixth step of controlling the execution timing of said third step according to said timing.
9. The method for managing said system as claimed in claim 5, wherein said server in said system is connected with an internet, and said data is sent out to said internet.
10. The method for managing said system as claimed in claim 6, wherein said server in said system is connected with an internet, and said data is sent out to said internet.
11. The method for managing said system as claimed in claim 7, wherein said server in said system is connected with an internet, and said data is sent out to said internet.
12. The method for managing said system as claimed in claim 8, wherein said server in said system is connected with an internet, and said data is sent out to said internet.
US10/663,687 2000-06-29 2003-09-17 Computer system using a storage area network and method of handling data in the computer system Abandoned US20040073677A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/663,687 US20040073677A1 (en) 2000-06-29 2003-09-17 Computer system using a storage area network and method of handling data in the computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/606,050 US6950871B1 (en) 2000-06-29 2000-06-29 Computer system having a storage area network and method of handling data in the computer system
US10/663,687 US20040073677A1 (en) 2000-06-29 2003-09-17 Computer system using a storage area network and method of handling data in the computer system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/606,050 Division US6950871B1 (en) 2000-06-29 2000-06-29 Computer system having a storage area network and method of handling data in the computer system

Publications (1)

Publication Number Publication Date
US20040073677A1 true US20040073677A1 (en) 2004-04-15

Family

ID=32070141

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/606,050 Expired - Fee Related US6950871B1 (en) 2000-06-29 2000-06-29 Computer system having a storage area network and method of handling data in the computer system
US10/662,473 Abandoned US20040073675A1 (en) 2000-06-29 2003-09-16 Computer system using a storage area network and method of handling data in the computer system
US10/662,527 Abandoned US20040073676A1 (en) 2000-06-29 2003-09-16 Computer system using a storage area network and method of handling data in the computer system
US10/663,687 Abandoned US20040073677A1 (en) 2000-06-29 2003-09-17 Computer system using a storage area network and method of handling data in the computer system

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/606,050 Expired - Fee Related US6950871B1 (en) 2000-06-29 2000-06-29 Computer system having a storage area network and method of handling data in the computer system
US10/662,473 Abandoned US20040073675A1 (en) 2000-06-29 2003-09-16 Computer system using a storage area network and method of handling data in the computer system
US10/662,527 Abandoned US20040073676A1 (en) 2000-06-29 2003-09-16 Computer system using a storage area network and method of handling data in the computer system

Country Status (1)

Country Link
US (4) US6950871B1 (en)

Families Citing this family (106)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6209002B1 (en) * 1999-02-17 2001-03-27 Emc Corporation Method and apparatus for cascading data through redundant data storage units
ATE312378T1 (en) * 2000-03-01 2005-12-15 Computer Ass Think Inc METHOD AND SYSTEM FOR UPDATE AN ARCHIVE OF A FILE
US6886020B1 (en) * 2000-08-17 2005-04-26 Emc Corporation Method and apparatus for storage system metrics management and archive
US7222176B1 (en) * 2000-08-28 2007-05-22 Datacore Software Corporation Apparatus and method for using storage domains for controlling data in storage area networks
US7277933B2 (en) * 2000-08-28 2007-10-02 Fujitsu Limited System for operating a plurality of apparatuses based on accumulated operating times thereof to equalize the respective operating times of the apparatuses
US7209899B2 (en) * 2000-10-31 2007-04-24 Fujitsu Limited Management device, network apparatus, and management method
US20020087880A1 (en) * 2000-12-29 2002-07-04 Storage Technology Corporation Secure gateway multiple automated data storage system sharing
US7593972B2 (en) * 2001-04-13 2009-09-22 Ge Medical Systems Information Technologies, Inc. Application service provider based redundant archive services for medical archives and/or imaging systems
US7231430B2 (en) * 2001-04-20 2007-06-12 Egenera, Inc. Reconfigurable, virtual processing system, cluster, network and method
US20030018696A1 (en) * 2001-05-10 2003-01-23 Sanchez Humberto A. Method for executing multi-system aware applications
US7110394B1 (en) * 2001-06-25 2006-09-19 Sanera Systems, Inc. Packet switching apparatus including cascade ports and method for switching packets
US7403987B1 (en) * 2001-06-29 2008-07-22 Symantec Operating Corporation Transactional SAN management
US20030055932A1 (en) * 2001-09-19 2003-03-20 Dell Products L.P. System and method for configuring a storage area network
US20030154271A1 (en) * 2001-10-05 2003-08-14 Baldwin Duane Mark Storage area network methods and apparatus with centralized management
US7287063B2 (en) * 2001-10-05 2007-10-23 International Business Machines Corporation Storage area network methods and apparatus using event notifications with data
US6880101B2 (en) * 2001-10-12 2005-04-12 Dell Products L.P. System and method for providing automatic data restoration after a storage device failure
DE10155090A1 (en) * 2001-11-09 2003-05-22 Siemens Ag Provision of information in an automation system
US7349961B2 (en) * 2001-12-07 2008-03-25 Hitachi, Ltd. Detecting configuration inconsistency in storage networks
US20030140128A1 (en) * 2002-01-18 2003-07-24 Dell Products L.P. System and method for validating a network
US20040215644A1 (en) * 2002-03-06 2004-10-28 Edwards Robert Clair Apparatus, method, and system for aggregated no query restore
US7228353B1 (en) * 2002-03-28 2007-06-05 Emc Corporation Generating and launching remote method invocation servers for individual client applications
US7404145B1 (en) * 2002-03-28 2008-07-22 Emc Corporation Generic mechanism for reporting on backups
US7350149B1 (en) 2002-03-28 2008-03-25 Emc Corporation Backup reporting framework graphical user interface
US6947939B2 (en) * 2002-05-08 2005-09-20 Hitachi, Ltd. System and methods to manage wide storage area network
JP2003345518A (en) * 2002-05-29 2003-12-05 Hitachi Ltd Method for setting disk array device, program, information processor, disk array device
AU2003278612A1 (en) * 2002-06-24 2004-01-23 Xymphonic Systems As Method for data-centric collaboration
US7263108B2 (en) * 2002-08-06 2007-08-28 Netxen, Inc. Dual-mode network storage systems and methods
US7152107B2 (en) * 2002-08-07 2006-12-19 Hewlett-Packard Development Company, L.P. Information sharing device
US20040039815A1 (en) * 2002-08-20 2004-02-26 Compaq Information Technologies Group, L.P. Dynamic provisioning system for a network of computers
US7401338B1 (en) 2002-09-27 2008-07-15 Symantec Operating Corporation System and method for an access layer application programming interface for managing heterogeneous components of a storage area network
JP4160817B2 (en) * 2002-11-05 2008-10-08 株式会社日立製作所 Disk subsystem, computer system, storage management method for managing the same, and management program
KR20040049667A (en) * 2002-12-06 2004-06-12 엘지전자 주식회사 Home network's system and its operating method for the same
US20040122938A1 (en) * 2002-12-19 2004-06-24 Messick Randall E. Method and apparatus for dynamically allocating storage array bandwidth
US20040199618A1 (en) * 2003-02-06 2004-10-07 Knight Gregory John Data replication solution
US7447714B1 (en) * 2003-02-25 2008-11-04 Storage Technology Corporation Management of multiple virtual data copies
US20050021524A1 (en) * 2003-05-14 2005-01-27 Oliver Jack K. System and method of managing backup media in a computing environment
JP4325849B2 (en) 2003-06-27 2009-09-02 株式会社日立製作所 Storage system, backup system, and backup method
US7246254B2 (en) * 2003-07-16 2007-07-17 International Business Machines Corporation System and method for automatically and dynamically optimizing application data resources to meet business objectives
JP4400126B2 (en) * 2003-08-08 2010-01-20 株式会社日立製作所 Centralized disk usage control method in virtual centralized network storage system
US7143112B2 (en) * 2003-09-10 2006-11-28 Hitachi, Ltd. Method and apparatus for data integration
US7496723B1 (en) * 2003-12-15 2009-02-24 Symantec Operating Corporation Server-free archival of backup data
US7395352B1 (en) * 2004-03-12 2008-07-01 Netapp, Inc. Managing data replication relationships
US8949395B2 (en) 2004-06-01 2015-02-03 Inmage Systems, Inc. Systems and methods of event driven recovery management
US8055745B2 (en) * 2004-06-01 2011-11-08 Inmage Systems, Inc. Methods and apparatus for accessing data from a primary data storage system for secondary storage
JP4518887B2 (en) * 2004-09-10 2010-08-04 株式会社日立製作所 Storage area network management system, management apparatus, volume allocation method, and computer software
GB0426309D0 (en) * 2004-11-30 2004-12-29 Ibm Method and system for error strategy in a storage system
US7672979B1 (en) * 2005-04-22 2010-03-02 Symantec Operating Corporation Backup and restore techniques using inconsistent state indicators
US20070058620A1 (en) * 2005-08-31 2007-03-15 Mcdata Corporation Management of a switch fabric through functionality conservation
JP2007074115A (en) * 2005-09-05 2007-03-22 Hitachi Ltd Voice communication terminal, media server, and lock control method of voice communication
US8140750B2 (en) * 2005-09-29 2012-03-20 International Business Machines Corporation Monitoring performance of a storage area network
US9143841B2 (en) 2005-09-29 2015-09-22 Brocade Communications Systems, Inc. Federated management of intelligent service modules
JP4927408B2 (en) * 2006-01-25 2012-05-09 株式会社日立製作所 Storage system and data restoration method thereof
US7953866B2 (en) 2006-03-22 2011-05-31 Mcdata Corporation Protocols for connecting intelligent service modules in a storage area network
US7681130B1 (en) * 2006-03-31 2010-03-16 Emc Corporation Methods and apparatus for displaying network data
US20070258443A1 (en) * 2006-05-02 2007-11-08 Mcdata Corporation Switch hardware and architecture for a computer network
US7596729B2 (en) * 2006-06-30 2009-09-29 Micron Technology, Inc. Memory device testing system and method using compressed fail data
US9218213B2 (en) 2006-10-31 2015-12-22 International Business Machines Corporation Dynamic placement of heterogeneous workloads
US7769931B1 (en) * 2007-02-15 2010-08-03 Emc Corporation Methods and systems for improved virtual data storage management
US8375005B1 (en) 2007-03-31 2013-02-12 Emc Corporation Rapid restore
US8924352B1 (en) 2007-03-31 2014-12-30 Emc Corporation Automated priority backup and archive
US8463798B1 (en) 2007-03-31 2013-06-11 Emc Corporation Prioritized restore
US9405585B2 (en) * 2007-04-30 2016-08-02 International Business Machines Corporation Management of heterogeneous workloads
US8832495B2 (en) * 2007-05-11 2014-09-09 Kip Cr P1 Lp Method and system for non-intrusive monitoring of library components
MY154553A (en) * 2007-06-27 2015-06-30 Nippon Oil Corp Hydroisomerization catalyst, method of dewaxing hydrocarbon oil, process for producing base oil, and process for producing lube base oil
US8341121B1 (en) 2007-09-28 2012-12-25 Emc Corporation Imminent failure prioritized backup
US8583601B1 (en) 2007-09-28 2013-11-12 Emc Corporation Imminent failure backup
US8650241B2 (en) * 2008-02-01 2014-02-11 Kip Cr P1 Lp System and method for identifying failing drives or media in media library
US7974215B1 (en) 2008-02-04 2011-07-05 Crossroads Systems, Inc. System and method of network diagnosis
US9015005B1 (en) 2008-02-04 2015-04-21 Kip Cr P1 Lp Determining, displaying, and using tape drive session information
US8645328B2 (en) * 2008-02-04 2014-02-04 Kip Cr P1 Lp System and method for archive verification
US8015343B2 (en) 2008-08-08 2011-09-06 Amazon Technologies, Inc. Providing executing programs with reliable access to non-local block data storage
US8019732B2 (en) 2008-08-08 2011-09-13 Amazon Technologies, Inc. Managing access of multiple executing programs to non-local block data storage
US20100043006A1 (en) * 2008-08-13 2010-02-18 Egenera, Inc. Systems and methods for a configurable deployment platform with virtualization of processing resource specific persistent settings
US8713060B2 (en) 2009-03-31 2014-04-29 Amazon Technologies, Inc. Control service for relational data management
US8060792B2 (en) 2009-03-31 2011-11-15 Amazon Technologies, Inc. Monitoring and automated recovery of data instances
US9207984B2 (en) * 2009-03-31 2015-12-08 Amazon Technologies, Inc. Monitoring and automatic scaling of data volumes
US8332365B2 (en) 2009-03-31 2012-12-11 Amazon Technologies, Inc. Cloning and recovery of data volumes
US8307003B1 (en) 2009-03-31 2012-11-06 Amazon Technologies, Inc. Self-service control environment
US9705888B2 (en) * 2009-03-31 2017-07-11 Amazon Technologies, Inc. Managing security groups for data instances
US8935366B2 (en) * 2009-04-24 2015-01-13 Microsoft Corporation Hybrid distributed and cloud backup architecture
US8769055B2 (en) * 2009-04-24 2014-07-01 Microsoft Corporation Distributed backup and versioning
US8769049B2 (en) * 2009-04-24 2014-07-01 Microsoft Corporation Intelligent tiers of backup data
US8560639B2 (en) * 2009-04-24 2013-10-15 Microsoft Corporation Dynamic placement of replica data
US8095684B2 (en) * 2009-09-15 2012-01-10 Symantec Corporation Intelligent device and media server selection for optimized backup image duplication
US9866633B1 (en) 2009-09-25 2018-01-09 Kip Cr P1 Lp System and method for eliminating performance impact of information collection from media drives
US9135283B2 (en) 2009-10-07 2015-09-15 Amazon Technologies, Inc. Self-service configuration for data environment
US8074107B2 (en) 2009-10-26 2011-12-06 Amazon Technologies, Inc. Failover and recovery for replicated data instances
US8676753B2 (en) 2009-10-26 2014-03-18 Amazon Technologies, Inc. Monitoring of replicated data instances
US8335765B2 (en) 2009-10-26 2012-12-18 Amazon Technologies, Inc. Provisioning and managing replicated data instances
US8843787B1 (en) 2009-12-16 2014-09-23 Kip Cr P1 Lp System and method for archive verification according to policies
JP2012018556A (en) * 2010-07-08 2012-01-26 Hitachi Ltd Computer system and control method for system changeover of computer system
US8364852B1 (en) 2010-12-22 2013-01-29 Juniper Networks, Inc. Methods and apparatus to generate and update fibre channel firewall filter rules using address prefixes
US8958429B2 (en) 2010-12-22 2015-02-17 Juniper Networks, Inc. Methods and apparatus for redundancy associated with a fibre channel over ethernet network
US8495019B2 (en) 2011-03-08 2013-07-23 Ca, Inc. System and method for providing assured recovery and replication
CN102761579B (en) 2011-04-29 2015-12-09 国际商业机器公司 Storage are network is utilized to transmit the method and system of data
US20130124686A1 (en) * 2011-11-16 2013-05-16 Université d'Orléans System and a Method for Sharing Computing Resources Associated to Scientific Publications
WO2015042185A1 (en) * 2013-09-17 2015-03-26 Chadwell Craig Fabric attached storage
US9626367B1 (en) 2014-06-18 2017-04-18 Veritas Technologies Llc Managing a backup procedure
US9558078B2 (en) 2014-10-28 2017-01-31 Microsoft Technology Licensing, Llc Point in time database restore from storage snapshots
US10397029B2 (en) 2015-07-08 2019-08-27 Toshiba Memory Corporation Relay apparatus
US10430270B2 (en) 2017-12-04 2019-10-01 Bank Of America Corporation System for migrating data using dynamic feedback
CN108459567B (en) * 2018-02-11 2019-11-19 成都兴联宜科技有限公司 A kind of ERP system of concrete mixing plant
US11256595B2 (en) 2019-07-11 2022-02-22 Dell Products L.P. Predictive storage management system
US11199855B2 (en) 2019-08-20 2021-12-14 Lisandro Chacin System and method for lube cost control on heavy machinery
US11681445B2 (en) * 2021-09-30 2023-06-20 Pure Storage, Inc. Storage-aware optimization for serverless functions
CN113704026B (en) * 2021-10-28 2022-01-25 北京时代正邦科技股份有限公司 Distributed financial memory database security synchronization method, device and medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5548711A (en) * 1993-08-26 1996-08-20 Emc Corporation Method and apparatus for fault tolerant fast writes through buffer dumping
US5835953A (en) * 1994-10-13 1998-11-10 Vinca Corporation Backup system that takes a snapshot of the locations in a mass storage device that has been identified for updating prior to updating
US6148414A (en) * 1998-09-24 2000-11-14 Seek Systems, Inc. Methods and systems for implementing shared disk array management functions
US6199146B1 (en) * 1998-03-12 2001-03-06 International Business Machines Corporation Storage management system and method for increasing capacity utilization of nonvolatile storage devices using partially filled substitute storage devices for continuing write operations
US6389432B1 (en) * 1999-04-05 2002-05-14 Auspex Systems, Inc. Intelligent virtual volume access
US6397308B1 (en) * 1998-12-31 2002-05-28 Emc Corporation Apparatus and method for differential backup and restoration of data in a computer storage system
US6401178B1 (en) * 1999-12-23 2002-06-04 Emc Corporation Data processing method and apparatus for enabling independent access to replicated data
US6421723B1 (en) * 1999-06-11 2002-07-16 Dell Products L.P. Method and system for establishing a storage area network configuration
US6446141B1 (en) * 1999-03-25 2002-09-03 Dell Products, L.P. Storage server system including ranking of data source
US6460113B1 (en) * 2000-01-25 2002-10-01 Dell Products L.P. System and method for performing backup operations using a fibre channel fabric in a multi-computer environment
US6502162B2 (en) * 1998-06-29 2002-12-31 Emc Corporation Configuring vectors of logical storage units for data storage partitioning and sharing
US6526419B1 (en) * 2000-06-09 2003-02-25 International Business Machines Corporation Method, system, and program for remote copy in an open systems environment
US6535518B1 (en) * 2000-02-10 2003-03-18 Simpletech Inc. System for bypassing a server to achieve higher throughput between data network and data storage system
US6640278B1 (en) * 1999-03-25 2003-10-28 Dell Products L.P. Method for configuration and management of storage resources in a storage network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0858036A3 (en) * 1997-02-10 1999-12-22 Compaq Computer Corporation Fibre channel attached storage architecture
JP3228182B2 (en) * 1997-05-29 2001-11-12 株式会社日立製作所 Storage system and method for accessing storage system
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage

Cited By (155)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529834B1 (en) * 2000-06-02 2009-05-05 Hewlett-Packard Development Company, L.P. Method and system for cooperatively backing up data on computers in a network
US20050038836A1 (en) * 2001-07-06 2005-02-17 Jianxin Wang Systems and methods of information backup
US20050172093A1 (en) * 2001-07-06 2005-08-04 Computer Associates Think, Inc. Systems and methods of information backup
US20100132022A1 (en) * 2001-07-06 2010-05-27 Computer Associates Think, Inc. Systems and Methods for Information Backup
US9002910B2 (en) 2001-07-06 2015-04-07 Ca, Inc. Systems and methods of information backup
US20050055444A1 (en) * 2001-07-06 2005-03-10 Krishnan Venkatasubramanian Systems and methods of information backup
US7734594B2 (en) 2001-07-06 2010-06-08 Computer Associates Think, Inc. Systems and methods of information backup
US7552214B2 (en) * 2001-07-06 2009-06-23 Computer Associates Think, Inc. Systems and methods of information backup
US8370450B2 (en) 2001-07-06 2013-02-05 Ca, Inc. Systems and methods for information backup
US20110231852A1 (en) * 2001-11-23 2011-09-22 Parag Gokhale Method and system for scheduling media exports
US8924428B2 (en) 2001-11-23 2014-12-30 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library
US20060155951A1 (en) * 2002-04-08 2006-07-13 Hitachi, Ltd. Computer system, storage and storage utilization and monitoring method
US20030212781A1 (en) * 2002-05-08 2003-11-13 Hitachi, Ltd. Network topology management system, management apparatus, management method, management program, and storage media that records management program
US20090109875A1 (en) * 2002-05-08 2009-04-30 Hitachi, Ltd. Network Topology Management System, Management Apparatus, Management Method, Management Program, and Storage Media That Records Management Program
US7469281B2 (en) 2002-05-08 2008-12-23 Hitachi, Ltd. Network topology management system, management apparatus, management method, management program, and storage media that records management program
US7143096B2 (en) 2002-06-14 2006-11-28 Hitachi, Ltd. Information processing method and system
US20070043919A1 (en) * 2002-06-14 2007-02-22 Hitachi, Ltd. Information processing method and system
US20040010732A1 (en) * 2002-07-10 2004-01-15 Hitachi, Ltd. Backup method and storage control device using the same
US20040039756A1 (en) * 2002-08-20 2004-02-26 Veritas Software Corporation System and method for network-free file replication in a storage area network
US7120654B2 (en) * 2002-08-20 2006-10-10 Veritas Operating Corporation System and method for network-free file replication in a storage area network
US7941595B2 (en) * 2002-10-31 2011-05-10 Ring Technology Enterprises Of Texas, Llc Methods and systems for a memory section
US20040088477A1 (en) * 2002-10-31 2004-05-06 Bullen Melvin James Methods and systems for a memory section
US7707351B2 (en) 2002-10-31 2010-04-27 Ring Technology Enterprises Of Texas, Llc Methods and systems for an identifier-based memory section
US20080052454A1 (en) * 2002-10-31 2008-02-28 Ring Technology Enterprises, Llc. Methods and systems for a memory section
US7089347B2 (en) * 2003-03-31 2006-08-08 Hitachi, Ltd. Computer system for managing performances of storage apparatus and performance management method of the computer system
US20040193827A1 (en) * 2003-03-31 2004-09-30 Kazuhiko Mogi Computer system for managing performances of storage apparatus and performance management method of the computer system
US7694070B2 (en) 2003-03-31 2010-04-06 Hitachi, Ltd. Computer system for managing performances of storage apparatus and performance management method of the computer system
US20060242356A1 (en) * 2003-03-31 2006-10-26 Kazuhiko Mogi Computer system for managing performances of storage apparatus and performance management method of the computer system
US8209293B2 (en) 2003-04-03 2012-06-26 Commvault Systems, Inc. System and method for extended media retention
US9940043B2 (en) 2003-04-03 2018-04-10 Commvault Systems, Inc. Systems and methods for performing storage operations in a computer network
US10162712B2 (en) 2003-04-03 2018-12-25 Commvault Systems, Inc. System and method for extended media retention
US8463753B2 (en) 2003-04-03 2013-06-11 Commvault Systems, Inc. System and method for extended media retention
US20090313448A1 (en) * 2003-04-03 2009-12-17 Parag Gokhale System and method for extended media retention
US9201917B2 (en) 2003-04-03 2015-12-01 Commvault Systems, Inc. Systems and methods for performing storage operations in a computer network
US8478864B1 (en) * 2003-04-14 2013-07-02 Symantec Operating Corporation Topology for showing data protection activity
US20060187908A1 (en) * 2003-06-18 2006-08-24 Hitachi, Ltd. Network system and its switches
US7124169B2 (en) 2003-06-18 2006-10-17 Hitachi, Ltd. Network system and its switches
US20050008016A1 (en) * 2003-06-18 2005-01-13 Hitachi, Ltd. Network system and its switches
US7222168B2 (en) 2003-08-04 2007-05-22 Hitachi, Ltd. Computer system
US6912482B2 (en) * 2003-09-11 2005-06-28 Veritas Operating Corporation Data storage analysis mechanism
US7539835B2 (en) 2003-09-11 2009-05-26 Symantec Operating Corporation Data storage analysis mechanism
US20050060125A1 (en) * 2003-09-11 2005-03-17 Kaiser Scott Douglas Data storage analysis mechanism
US20050108190A1 (en) * 2003-11-17 2005-05-19 Conkel Dale W. Enterprise directory service diagnosis and repair
US7653792B2 (en) 2003-11-27 2010-01-26 Hitachi, Ltd. Disk array apparatus including controller that executes control to move data between storage areas based on a data protection level
US7152149B2 (en) 2003-11-27 2006-12-19 Hitachi, Ltd. Disk array apparatus and control method for disk array apparatus
US20060190694A1 (en) * 2003-11-27 2006-08-24 Akinobu Shimada Disk array apparatus and control method for disk array apparatus
US7930502B2 (en) 2003-11-27 2011-04-19 Hitachi, Ltd. Disk array apparatus and control method for disk array apparatus
US20100115199A1 (en) * 2003-11-27 2010-05-06 Akinobu Shimada Disk array apparatus and control method for disk array apparatus
US20060015544A1 (en) * 2004-07-13 2006-01-19 Hitachi, Ltd. Incorporation: Japan Data management system
US7206790B2 (en) * 2004-07-13 2007-04-17 Hitachi, Ltd. Data management system
US10191675B2 (en) 2004-11-05 2019-01-29 Commvault Systems, Inc. Methods and system of pooling secondary storage devices
US9507525B2 (en) 2004-11-05 2016-11-29 Commvault Systems, Inc. Methods and system of pooling storage devices
US7711978B1 (en) 2004-12-30 2010-05-04 Symantec Operating Corporation Proactive utilization of fabric events in a network virtualization environment
US20060179167A1 (en) * 2005-01-28 2006-08-10 Bomhoff Matthew D Apparatus, system, and method for performing storage device maintenance
US7401260B2 (en) 2005-01-28 2008-07-15 International Business Machines Corporation Apparatus, system, and method for performing storage device maintenance
US20080244101A1 (en) * 2005-01-28 2008-10-02 Matthew David Bomhoff Apparatus, system, and method for performing storage device maintenance
US7818612B2 (en) 2005-01-28 2010-10-19 International Business Machines Corporation Apparatus, system, and method for performing storage device maintenance
US8412935B2 (en) * 2005-03-10 2013-04-02 Nippon Telegraph And Telephone Corporation Administration of storage systems containing three groups of data-operational, backup, and standby
US8261364B2 (en) 2005-03-10 2012-09-04 Nippon Telegraph And Telephone Corporation Network system for accessing the storage units based on log-in request having password granted by administration server
US20090222896A1 (en) * 2005-03-10 2009-09-03 Nippon Telegraph And Telephone Corporation Network system, method for controlling access to storage device, management server, storage device, log-in control method, network boot system, and unit storage unit access method
US8185961B2 (en) 2005-03-10 2012-05-22 Nippon Telegraph And Telephone Corporation Network system, method for controlling access to storage device, management server, storage device, log-in control method, network boot system, and method of accessing individual storage unit
US8775782B2 (en) * 2005-03-10 2014-07-08 Nippon Telegraph And Telephone Corporation Network system, method of controlling access to storage device, administration server, storage device, log-in control method, network boot system, and method of accessing individual storage unit
US20110093936A1 (en) * 2005-03-10 2011-04-21 Nippon Telegraph And Telephone Corporation Network system, method of controlling access to storage device, administration server, storage device, log-in control method, network boot system, and method of accessing individual storage unit
US20110099614A1 (en) * 2005-03-10 2011-04-28 Nippon Telegraph And Telephone Corporation Network system, method of controlling access to storage device, administration server, storage device, log-in control method, network boot system, and method of accessing individual storage unit
US20110099358A1 (en) * 2005-03-10 2011-04-28 Nippon Telegraph And Telephone Corporation Network system, method of controlling access to storage device, administration server, storage device, log-in control method, network boot system, and method of accessing individual storage unit
GB2425217A (en) * 2005-04-15 2006-10-18 Hewlett Packard Development Co Controlling access to at least one storage device
GB2425217B (en) * 2005-04-15 2011-06-15 Hewlett Packard Development Co Controlling access to at least one storage device
US7539829B2 (en) 2005-04-15 2009-05-26 Hewlett-Packard Development Company, L.P. Methods and apparatuses for controlling access to at least one storage device in a tape library
US20060242374A1 (en) * 2005-04-15 2006-10-26 Slater Alastair M Controlling access to at least one storage device
US8117151B2 (en) * 2005-09-27 2012-02-14 Hitachi, Ltd. File system migration in storage system
US20100115008A1 (en) * 2005-09-27 2010-05-06 Yoji Nakatani File system migration in storage system
US20070168704A1 (en) * 2005-11-30 2007-07-19 Oracle International Corporation System and method of configuring a database system with replicated data and automatic failover and recovery
US7549079B2 (en) * 2005-11-30 2009-06-16 Oracle International Corporation System and method of configuring a database system with replicated data and automatic failover and recovery
US8230171B2 (en) 2005-12-19 2012-07-24 Commvault Systems, Inc. System and method for improved media identification in a storage device
US8463994B2 (en) 2005-12-19 2013-06-11 Commvault Systems, Inc. System and method for improved media identification in a storage device
US20070143552A1 (en) * 2005-12-21 2007-06-21 Cisco Technology, Inc. Anomaly detection for storage traffic in a data center
US7793138B2 (en) * 2005-12-21 2010-09-07 Cisco Technology, Inc. Anomaly detection for storage traffic in a data center
US20070214384A1 (en) * 2006-03-07 2007-09-13 Manabu Kitamura Method for backing up data in a clustered file system
US20080072000A1 (en) * 2006-09-15 2008-03-20 Nobuyuki Osaki Method and apparatus incorporating virtualization for data storage and protection
US7594072B2 (en) * 2006-09-15 2009-09-22 Hitachi, Ltd. Method and apparatus incorporating virtualization for data storage and protection
US8656068B2 (en) 2006-09-22 2014-02-18 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library, including removable media
US8539118B2 (en) 2006-09-22 2013-09-17 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library, including removable media
US8886853B2 (en) 2006-09-22 2014-11-11 Commvault Systems, Inc. Systems and methods for uniquely identifying removable media by its manufacturing defects wherein defects includes bad memory or redundant cells or both
US8756203B2 (en) 2006-12-22 2014-06-17 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library
US8484165B2 (en) 2006-12-22 2013-07-09 Commvault Systems, Inc. Systems and methods of media management, such as management of media to and from a media storage library
US20080243420A1 (en) * 2006-12-22 2008-10-02 Parag Gokhale Systems and methods of media management, such as management of media to and from a media storage library
US20110213755A1 (en) * 2006-12-22 2011-09-01 Srinivas Kavuri Systems and methods of hierarchical storage management, such as global management of storage operations
US8832031B2 (en) * 2006-12-22 2014-09-09 Commvault Systems, Inc. Systems and methods of hierarchical storage management, such as global management of storage operations
US7599967B2 (en) * 2007-03-20 2009-10-06 Oracle International Corporation No data loss system with reduced commit latency
US20080235294A1 (en) * 2007-03-20 2008-09-25 Oracle International Corporation No data loss system with reduced commit latency
US20090063765A1 (en) * 2007-08-30 2009-03-05 Rajiv Kottomtharayil Parallel access virtual tape library and drives
US8996823B2 (en) 2007-08-30 2015-03-31 Commvault Systems, Inc. Parallel access virtual tape library and drives
US8706976B2 (en) 2007-08-30 2014-04-22 Commvault Systems, Inc. Parallel access virtual tape library and drives
US8589352B2 (en) * 2008-03-31 2013-11-19 Fujitsu Limited Federated configuration management database, management data repository, and backup data management system
JP5348129B2 (en) * 2008-03-31 2013-11-20 富士通株式会社 Integrated configuration management device, heterogeneous configuration management device, backup data management system
US20110016092A1 (en) * 2008-03-31 2011-01-20 Fujitsu Limited Federated configuration management database, management data repository, and backup data management system
US10110667B2 (en) 2008-04-08 2018-10-23 Geminare Inc. System and method for providing data and application continuity in a computer system
US9674268B2 (en) * 2008-04-08 2017-06-06 Geminare Incorporated System and method for providing data and application continuity in a computer system
US20120198023A1 (en) * 2008-04-08 2012-08-02 Geist Joshua B System and method for providing data and application continuity in a computer system
US11070612B2 (en) 2008-04-08 2021-07-20 Geminare Inc. System and method for providing data and application continuity in a computer system
US11575736B2 (en) 2008-04-08 2023-02-07 Rps Canada Inc. System and method for providing data and application continuity in a computer system
US20110302280A1 (en) * 2008-07-02 2011-12-08 Hewlett-Packard Development Company Lp Performing Administrative Tasks Associated with a Network-Attached Storage System at a Client
US9891902B2 (en) 2008-07-02 2018-02-13 Hewlett-Packard Development Company, L.P. Performing administrative tasks associated with a network-attached storage system at a client
US9354853B2 (en) * 2008-07-02 2016-05-31 Hewlett-Packard Development Company, L.P. Performing administrative tasks associated with a network-attached storage system at a client
US10547678B2 (en) 2008-09-15 2020-01-28 Commvault Systems, Inc. Data transfer techniques within data storage devices, such as network attached storage performing data migration
US8255476B2 (en) * 2009-03-30 2012-08-28 International Business Machines Corporation Automated tape drive sharing in a heterogeneous server and application environment
US20100250698A1 (en) * 2009-03-30 2010-09-30 Nils Haustein Automated tape drive sharing in a heterogeneous server and application environment
US20120271888A1 (en) * 2009-03-30 2012-10-25 International Business Machines Corporation Automated tape drive sharing in a heterogeneous server and application environment
US8433772B2 (en) * 2009-03-30 2013-04-30 International Business Machines Corporation Automated tape drive sharing in a heterogeneous server and application environment
US9575847B2 (en) 2010-03-26 2017-02-21 Carbonite, Inc. Transfer of user data between logical data sites
US20130024426A1 (en) * 2010-03-26 2013-01-24 Flowers Jeffry C Transfer of user data between logical data sites
US8818956B2 (en) * 2010-03-26 2014-08-26 Carbonite, Inc. Transfer of user data between logical data sites
US9575845B2 (en) 2010-03-26 2017-02-21 Carbonite, Inc. Transfer of user data between logical data sites
US8903773B2 (en) 2010-03-31 2014-12-02 Novastor Corporation Computer file storage, backup, restore and retrieval
US10983870B2 (en) 2010-09-30 2021-04-20 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US9557929B2 (en) 2010-09-30 2017-01-31 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US10284437B2 (en) 2010-09-30 2019-05-07 Efolder, Inc. Cloud-based virtual machines and offices
US11640338B2 (en) 2010-09-30 2023-05-02 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US10275318B2 (en) 2010-09-30 2019-04-30 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US9244779B2 (en) 2010-09-30 2016-01-26 Commvault Systems, Inc. Data recovery operations, such as recovery from modified network data management protocol data
US20130238941A1 (en) * 2010-10-14 2013-09-12 Fujitsu Limited Storage control apparatus, method of setting reference time, and computer-readable storage medium storing reference time setting program
US9152519B2 (en) * 2010-10-14 2015-10-06 Fujitsu Limited Storage control apparatus, method of setting reference time, and computer-readable storage medium storing reference time setting program
US9229645B2 (en) * 2012-02-10 2016-01-05 Hitachi, Ltd. Storage management method and storage system in virtual volume having data arranged astride storage devices
US20140351545A1 (en) * 2012-02-10 2014-11-27 Hitachi, Ltd. Storage management method and storage system in virtual volume having data arranged astride storage devices
US10318542B2 (en) 2012-03-30 2019-06-11 Commvault Systems, Inc. Information management of mobile device data
US9529871B2 (en) 2012-03-30 2016-12-27 Commvault Systems, Inc. Information management of mobile device data
US9785647B1 (en) 2012-10-02 2017-10-10 Axcient, Inc. File system virtualization
US11169714B1 (en) 2012-11-07 2021-11-09 Efolder, Inc. Efficient file replication
US9852140B1 (en) 2012-11-07 2017-12-26 Axcient, Inc. Efficient file replication
US11243849B2 (en) 2012-12-27 2022-02-08 Commvault Systems, Inc. Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system
US9069799B2 (en) 2012-12-27 2015-06-30 Commvault Systems, Inc. Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system
US10303559B2 (en) 2012-12-27 2019-05-28 Commvault Systems, Inc. Restoration of centralized data storage manager, such as data storage manager in a hierarchical data storage system
US9998344B2 (en) * 2013-03-07 2018-06-12 Efolder, Inc. Protection status determinations for computing devices
US10003646B1 (en) 2013-03-07 2018-06-19 Efolder, Inc. Protection status determinations for computing devices
US20160162349A1 (en) * 2013-03-07 2016-06-09 Axcient, Inc. Protection Status Determinations for Computing Devices
US10599533B2 (en) 2013-05-07 2020-03-24 Efolder, Inc. Cloud storage using merkle trees
US9705730B1 (en) 2013-05-07 2017-07-11 Axcient, Inc. Cloud storage using Merkle trees
US9417903B2 (en) * 2013-06-21 2016-08-16 International Business Machines Corporation Storage management for a cluster of integrated computing systems comprising integrated resource infrastructure using storage resource agents and synchronized inter-system storage priority map
US20140380303A1 (en) * 2013-06-21 2014-12-25 International Business Machines Corporation Storage management for a cluster of integrated computing systems
US9465698B2 (en) * 2014-03-06 2016-10-11 Software Ag Systems and/or methods for data recovery in distributed, scalable multi-tenant environments
US20150254142A1 (en) * 2014-03-06 2015-09-10 Software Ag Systems and/or methods for data recovery in distributed, scalable multi-tenant environments
US10218650B2 (en) 2014-04-01 2019-02-26 Ricoh Company, Ltd. Information processing system
US9928144B2 (en) 2015-03-30 2018-03-27 Commvault Systems, Inc. Storage management of data using an open-archive architecture, including streamlined access to primary data originally stored on network-attached storage and archived to secondary storage
US11500730B2 (en) 2015-03-30 2022-11-15 Commvault Systems, Inc. Storage management of data using an open-archive architecture, including streamlined access to primary data originally stored on network-attached storage and archived to secondary storage
US10733058B2 (en) 2015-03-30 2020-08-04 Commvault Systems, Inc. Storage management of data using an open-archive architecture, including streamlined access to primary data originally stored on network-attached storage and archived to secondary storage
US10318157B2 (en) 2015-09-02 2019-06-11 Commvault Systems, Inc. Migrating data to disk without interrupting running operations
US11157171B2 (en) 2015-09-02 2021-10-26 Commvault Systems, Inc. Migrating data to disk without interrupting running operations
US10101913B2 (en) 2015-09-02 2018-10-16 Commvault Systems, Inc. Migrating data to disk without interrupting running backup operations
US10747436B2 (en) 2015-09-02 2020-08-18 Commvault Systems, Inc. Migrating data to disk without interrupting running operations
US11474697B2 (en) * 2016-05-16 2022-10-18 International Business Machines Corporation Opportunistic data analytics using memory bandwidth in disaggregated computing systems
US11275619B2 (en) 2016-05-16 2022-03-15 International Business Machines Corporation Opportunistic data analytics using memory bandwidth in disaggregated computing systems
US11575747B2 (en) 2017-12-12 2023-02-07 Commvault Systems, Inc. Enhanced network attached storage (NAS) services interfacing to cloud storage
US10742735B2 (en) 2017-12-12 2020-08-11 Commvault Systems, Inc. Enhanced network attached storage (NAS) services interfacing to cloud storage
US11593223B1 (en) 2021-09-02 2023-02-28 Commvault Systems, Inc. Using resource pool administrative entities in a data storage management system to provide shared infrastructure to tenants
US11928031B2 (en) 2021-09-02 2024-03-12 Commvault Systems, Inc. Using resource pool administrative entities to provide shared infrastructure to tenants

Also Published As

Publication number Publication date
US20040073675A1 (en) 2004-04-15
US20040073676A1 (en) 2004-04-15
US6950871B1 (en) 2005-09-27

Similar Documents

Publication Publication Date Title
US6950871B1 (en) Computer system having a storage area network and method of handling data in the computer system
JP2002007304A (en) Computer system using storage area network and data handling method therefor
US7188187B2 (en) File transfer method and system
US7433903B1 (en) Method for reading audit data from a remote mirrored disk for application to remote database backup copy
US8504741B2 (en) Systems and methods for performing multi-path storage operations
US7406473B1 (en) Distributed file system using disk servers, lock servers and file servers
US7594072B2 (en) Method and apparatus incorporating virtualization for data storage and protection
US7039777B2 (en) Method and apparatus for managing replication volumes
CN100489796C (en) Methods and system for implementing shared disk array management functions
EP1415425B1 (en) Systems and methods of information backup
US6804690B1 (en) Method for physical backup in data logical order
US6640278B1 (en) Method for configuration and management of storage resources in a storage network
EP2159680B1 (en) Secure virtual tape management system with balanced storage and multi-mirror options
US20040225659A1 (en) Storage foundry
US20040153481A1 (en) Method and system for effective utilization of data storage capacity
EP1357465A2 (en) Storage system having virtualized resource
US20080301201A1 (en) Storage System and Method of Managing Data Using Same
US20080301132A1 (en) Data back up method and its programs for permitting a user to obtain information relating to storage areas of the storage systems and select one or more storage areas which satisfy a user condition based on the information
US20060095664A1 (en) Systems and methods for presenting managed data
EP1887470A2 (en) Backup system and method
US20080320051A1 (en) File-sharing system and method of using file-sharing system to generate single logical directory structure
JP2002297427A (en) Method, device, system, program and storage medium for data backup
Dell
Naegel, "Challenges and Solutions in Allocating Data in a SAN Environment"
CN111143287A (en) SAN shared file storage and archiving method and system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION