EP1357463A2 - Storage system - Google Patents

Storage system

Info

Publication number
EP1357463A2
EP1357463A2 (application number EP02018182A)
Authority
EP
European Patent Office
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP02018182A
Other languages
German (de)
French (fr)
Other versions
EP1357463A3 (en)
Inventor
Naoto Matsunami (Hitachi Ltd. Int. Prop. Gp.)
Manabu Kitamura (Hitachi Ltd. Int. Prop. Gp.)
Koji Sonoda (Hitachi Ltd. Int. Prop. Gp.)
Shizuo Yokohata (Hitachi Ltd. Int. Prop. Gp.)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of EP1357463A2 publication Critical patent/EP1357463A2/en
Publication of EP1357463A3 publication Critical patent/EP1357463A3/en

Classifications

    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/0605 Improving or facilitating administration by facilitating the interaction with a user or administrator
    • G06F3/0631 Configuration or reconfiguration of storage systems by allocating resources to storage systems
    • G06F3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06F11/2089 Redundant storage control functionality
    • G06F12/0866 Addressing of a memory level requiring associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • Y10S707/99932 Access augmentation or optimizing
    • Y10S707/99933 Query processing, i.e. searching
    • Y10S707/99934 Query formulation, input preparation, or translation
    • Y10S707/99942 Manipulating data structure, e.g. compression, compaction, compilation
    • Y10S707/99943 Generating database or data structure, e.g. via user interface

Definitions

  • the present invention relates to a storage system used in a computer system.
  • Interfaces to connect a storage system to a computer are mainly classified into two types.
  • a block input/output (I/O) interface is used to conduct I/O operations between a storage system and a computer using a unit of blocks as a data control unit in the storage system.
  • the block I/O interface includes Fibre Channel, the Small Computer System Interface (SCSI), and the like.
  • JP-A-10-333839 describes an example of a storage system connected to a storage area network using a Fibre Channel.
  • a file I/O interface is used to conduct I/O operations between a storage system and a computer using a unit of files recognized by an application program executed by the computer.
  • the file I/O interface uses, in many cases, a protocol of a network file system employed by a file server of the prior art.
  • In a storage area network (SAN), the computer is connected to the storage system via a high-speed network constructed exclusively for the storage system, separated from the network used to exchange messages between computers. Therefore, in the SAN, higher-speed data communication can be executed compared with the NAS, which is connected to the computer via the LAN. Since the SAN adopts the block I/O interface, the overhead of protocol processing is lower than in the NAS, and hence a high-speed response can be achieved.
  • since the SAN requires a network system dedicated to the storage system, installation of the SAN incurs a higher cost. Therefore, the SAN is primarily used in a backbone system of an enterprise, in many cases to construct a database.
  • the NAS uses a standardized network file system and hence the NAS user can manage data in the unit of general files. Therefore, data can be easily managed and files can be easily shared among a plurality of computers.
  • the NAS communicates data with computers via the LAN that is also used for communications between the computers, so there is a risk that the load on the LAN is increased. Moreover, since the protocol processing of the network file system has high overhead, the response time becomes longer compared with the SAN. Therefore, the NAS is primarily used in a file management or control system of an enterprise, in many cases to manage web contents, data files of computer-aided design, and the like.
  • the NAS and the SAN are complementary to each other and are applied to mutually different fields. Therefore, the NAS and the SAN are respectively used in appropriate fields.
  • there has been proposed a storage pool in which each resource (to be referred to as a device hereinbelow) of the storage system can be easily set and easily allocated to each computer.
  • JP-A-2001-142648 describes an example of the storage pool.
  • In a system including both the storage system having the NAS functions and the storage system having the SAN functions, the system user must manage the storage capacity of each storage system separately. Specifically, for each storage system, the user must set an environment configuration and must manage its storage capacity. Consequently, there arises a problem that the management of the storage systems becomes complex and difficult.
  • Another object of the present invention is to provide a storage system which facilitates the management of the storage capacity of the overall system and which allows efficient use of the storage capacity.
  • a storage system including a plurality of slots in which interface controllers of a plurality of types are installed, a plurality of disk devices, and a device for allocating storage areas of the disk devices to the slots.
  • the interface controllers of a plurality of types may include a block I/O interface controller having functions of a SAN and a file I/O interface controller having functions of a NAS.
  • the interface controllers have the same shape or form.
  • the storage system may include a device for allocating a storage area allocated to an interface controller to an interface controller of another type.
  • Fig. 1 shows an embodiment of a storage system according to the present invention.
  • the storage system 1 is connected via an LAN 20 and an LAN 21 to NAS clients 400.
  • each of the LAN 20 and 21 is an internet protocol (IP) network (to be simply referred to as a network hereinbelow).
  • IP internet protocol
  • the storage system 1 is also connected via an SAN 30 to SAN clients 500.
  • the SAN 30 includes a Fibre Channel.
  • the storage system 1 is connected to a management terminal 18.
  • the terminal 18 includes input/output devices such as a display to present various kinds of information to the user and a keyboard and a mouse for the user to input information.
  • the terminal 18 can communicate via a communication network 19 with the storage system 1 and is operated by the user to conduct various setting operations of the storage system 1.
  • the storage system 1 is a storage system including the SAN which includes the Fibre Channel and which uses the block I/O interface and the NAS which includes the IP networks and which uses the file I/O interface. That is, the storage system 1 includes different type interfaces.
  • any combination of the interface for SAN and the interface for NAS is available for the storage system 1.
  • the system 1 may be configured with the interface only for SAN or NAS.
  • the storage system 1 includes a disk controller (to be abbreviated as DKC hereinbelow) 11 and storage devices (to be referred to as disks hereinbelow) 1700.
  • the disk controller 11 includes network channel adapters (CHN) 1100, Fibre Channel adapters (CHF) 1110, disk adapters (DKA) 1200, a shared memory (SM) 13, a cache memory (CM) 14, and a disk pool manager (DPM) 15.
  • the network channel adapter 1100 is an interface controller connected via the file I/O interface to NAS clients 400.
  • the Fibre Channel adapter 1110 is an interface controller connected via the block I/O interface to SAN clients 500.
  • the adapters 1100 and 1110 will be collectively referred to as channel adapters (CH) hereinbelow.
  • the disk adapter 1200 is connected to disks 1700.
  • the disk adapter 1200 controls data transfer between disks connected thereto and associated clients.
  • the shared memory 13 stores configuration control information to control the configuration of the storage system 1, control information of the cache memory 14, and the like. Data stored in the shared memory 13 is shared among all channel adapters and disk adapters.
  • the cache memory 14 temporarily stores data of the disk 1700.
  • the cache memory 14 is also shared among all channel adapters and disk adapters.
  • any channel adapter can access the cache memory 14 and all disks 1700.
  • Such a configuration can be constructed using, for example, a crossbar switch.
  • the disk pool manager (DPM) 15 manages all disks 1700 in a centralized fashion. Concretely, the manager 15 stores information to regard the overall storage capacity of the disks 1700 as one disk pool.
  • the channel adapters, the disk adapters, and disk pool manager 15 are mutually connected to each other via a management or control network 16.
  • the disk pool manager 15 is connected via the communication network 19 to the manager terminal 18.
  • Fig. 2 shows an appearance of the storage system 1.
  • a DKC cabinet 115 is disposed to store the CHN 1100, the CHF 1110, the DKA 1200, the SM 13, and the CM 14.
  • a disk unit cabinet (DKU) 180 is used to store disks 1700.
  • the shared memory 13 actually includes a plurality of controller boards 1300.
  • the cache memory 14 also includes a plurality of cache boards 1400. The user of the storage system 1 obtains desired storage capacity by adjusting the number of cache boards 1400 and disks 1700.
  • the DKC cabinet 115 includes a plurality of interface adapter slots 190.
  • Each slot 190 is used to install an adapter board on which a channel adapter 1100 or the like is mounted.
  • the interface adapter slots 190 have the same form, the adapter boards have the same size, and the connectors have the same form regardless of the interface types, to keep compatibility among interface adapters of various types. Therefore, any kind of adapter board can be installed in a slot 190 of the DKC cabinet 115 regardless of the interface type.
  • the user of the storage system 1 can install any combination of the numbers of adapter boards, CHN 1100 and CHF 1110 in the adapter slots 190 of the storage system 1. The user therefore can freely configure the interface of the storage system 1.
  • Fig. 3 shows a configuration of an adapter board with a channel adapter 1100.
  • a connector 11007 is used to establish connection to a connector of the DKC cabinet 115.
  • the adapters 1100 and 1110 have the same form as described above.
  • in the case of the network channel adapter 1100, the interface connector 2001 is associated with the LAN; in the case of the Fibre Channel adapter 1110, the interface connector 2001 corresponds to a Fibre Channel.
  • Fig. 4 shows an internal configuration of the network channel adapter 1100.
  • the adapter 1100 includes a processor 11001, an LAN controller 11002, a management network controller 11003, a memory 11004, an SM interface (I/F) controller 11005, and a CM I/F controller 11006.
  • the processor 11001 controls overall operation of the adapter 1100.
  • the LAN controller 11002 controls communication between the adapter and the LAN.
  • the management network controller 11003 controls communication between the adapter 1100 and the management network 16.
  • the memory 11004 connected to the processor 11001 stores programs to be executed by the processor 11001 and control data.
  • the SM I/F controller 11005 controls data transfer between the adapter 1100 and the shared memory 13.
  • the CM I/F controller 11006 controls data transfer between the adapter 1100 and the cache memory 14.
  • the memory 11004 stores an operating system program 110040, an LAN controller driver program 110041, a TCP/IP program 110042, a file system program 110043, a network file system program 110044, a disk volume control program 110045, a cache control program 110046, a disk pool information acquisition program 110048 and a disk pool information management table 110049.
  • the memory 11004 also includes a data buffer 110047.
  • the operating system program 110040 manages the programs stored in the memory 11004 and controls input and output operations.
  • the LAN controller driver program 110041 controls the LAN controller 11002.
  • the TCP/IP program 110042 controls the communication protocol, namely, TCP/IP on the LAN.
  • the file system program 110043 manages the files stored on the disks 1700.
  • the network file system program 110044 controls the protocol of the network file system to supply files from the disks 1700 to the NAS clients 400.
  • the disk volume control program 110045 controls accesses to disk volumes set to the disks 1700.
  • the cache control program 110046 manages data stored in the cache memory 14 and controls to judge a data hit/miss condition and the like in the cache memory 14.
  • the disk pool information acquisition program 110048 is executed when any one of the channel adapters 1100 acquires information of the disk pool stored in the shared memory 13.
  • the disk pool information management table 110049 stores disk pool information obtained as a result.
  • the data buffer 110047 adjusts a data transfer speed between the NAS clients 400 and the cache memory 14 and is used to store file data in the cache memory 14.
  • the processor 11001 may be a single processor or a set of a plurality of processors.
  • the processor 11001 may be configured as a symmetric multiprocessor for horizontal load balancing of control processing.
  • the processor 11001 may also be configured as an asymmetric multiprocessor in which a processor executes network file system protocol processing of the file I/O interface and another processor controls disk volumes.
  • the processor 11001 may also be configured as a combination of these configurations.
  • Fig. 5 shows an internal configuration of the Fibre Channel adapter 1110.
  • the adapter 1110 includes a processor 11101, a Fibre Channel controller 11102, a management network controller 11103, a memory 11104, an SM controller 11105, a CM controller 11106, and a data buffer 11107.
  • the processor 11101 controls overall operation of the adapter 1110.
  • the Fibre Channel controller 11102 controls communication between the adapter 1110 and the SAN.
  • the management network controller 11103 controls communication between the adapter 1110 and the management network 16.
  • the memory 11104 connected to the processor 11101 stores programs to be executed by the processor 11101 and control data.
  • the SM controller 11105 controls data transfer between the adapter 1110 and the SM 13.
  • the CM controller 11106 controls data transfer between the adapter 1110 and the CM 14.
  • the data buffer 11107 serves as a buffer memory to minimize the difference of data transfer speed between the SAN clients 500 and the cache memory 14.
  • the memory 11104 stores a Fibre Channel controller driver program 111041, an operating system program 111040, a disk volume control program 111045, a cache control program 110046, a disk pool information acquisition program 111048, and a disk pool information management table 111049.
  • the Fibre Channel controller driver program 111041 controls the Fibre Channel controller 11102.
  • the other programs are almost equal in function to those stored in the memory of the network channel adapter 1100 shown in Fig. 4, and hence description thereof will be omitted.
  • the processor 11101 may also be configured as a single processor or a multiprocessor.
  • Fig. 6 shows a layout of the shared memory 13.
  • the memory 13 includes a configuration managing information area 131.
  • the area 131 includes a channel adapter centralized management table 1310 and a disk pool information centralized management table 1311.
  • although the memory 13 stores information of all configurations of the storage system 1, description of the other information will be omitted in conjunction with the embodiment.
  • the channel adapter centralized management table 1310 stores management information of all channel adapters of the storage system 1. Details of the table 1310 will be described later.
  • the table 1311 stores management information of a disk pool 6 which is a virtual storage area configured by all disks 1700 of the storage system 1.
  • Fig. 7 shows an internal configuration of the disk pool manager 15.
  • the manager 15 includes a processor 151, a memory 152, a management network controller 153, and a communication controller 155.
  • the processor 151 controls overall operation of the disk pool manager 15.
  • the memory 152 stores control programs executed by the processor 151 and data used for control.
  • the management network controller 153 controls data transfer between the disk pool manager 15 and the other devices of the disk controller 11.
  • the communication controller 19 controls data transfer between the management terminal 18 and the disk pool manager 15.
  • the memory 152 stores a disk pool manager program 1521, a disk pool information centralized management table 1522, a channel adapter recognition and authentication manager program 1523, and a channel adapter centralized management table 1524.
  • the disk pool manager program 1521 is executed when the storage system 1 configures and manages the disk pool 6 using the disks 1700.
  • the channel adapter recognition and authentication manager program 1523 is executed when the storage system 1 senses a state of installation of the channel adapters 1100 and 1110 to confirm normal operation thereof.
  • the disk pool information centralized management table 1522 stores information for the disk pool manager program 1521 to manage the disk pool 6 thus constructed and information to manage allocation of storage areas of the disk pool to the respective channel adapters.
  • the contents of the table 1522 are basically the same as those of the table 1311 stored in the shared memory 13.
  • the channel adapter centralized management table 1524 stores information of channel adapters sensed and authenticated by the manager program 1523.
  • the contents of the table 1524 are basically the same as those of the channel adapter centralized management table 1310 stored in the shared memory 13.
  • Fig. 8 shows a configuration of the disk pool 6.
  • the disk pool 6 is a set of storage areas of a plurality of disks 1700 defined as one virtual storage area.
  • a plurality of disk pools 6 may be set in the storage system 1. Specifically, according to a difference in the characteristics of the disk devices, for example, performance (a rotating speed of 7200 rpm or 15000 rpm) or redundancy (RAID1 or RAID5) of the disk group, a disk pool may be set for each group of disks having the same characteristic. Alternatively, a disk pool may be set for each group of users (user group) using the storage system.
  • the disk pool may include an arbitrary number of disks 1700. By adjusting the number thereof according to necessity, the user can change the storage capacity of the disk pool 6.
  • Fig. 8 shows a specific example of a correspondence between one disk pool 6 and a plurality of disks 1700, concretely, RAID group (RG) 17000.
  • the RG 17000 is a redundant array of inexpensive disks (RAID) including a plurality of disks 1700.
  • the storage areas allocated to one disk pool 6 are divided into a plurality of logical devices (LDEV) 170.
  • a plurality of LDEV 170 are collectively defined as a volume, i.e., a logical volume (LV) 1750.
  • an RG 17000 (RG 0) including four disks 1700 is allocated to one disk pool 6.
  • the storage capacity of the RG 0 is subdivided into LDEV 0 to LDEV k each of which has a storage capacity of L.
  • a set including LDEV 0 to LDEV 2 configures one LV 0.
  • a set of LDEV 3 and LDEV 4 configures one LV 1 (1751). In this way, several LV are similarly constructed.
  • the LV0 and the LV1 are respectively allocated to the CHN 0 and the CHN 2.
  • the LDEV may correspond to one stripe size of the RAID.
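The RG/LDEV/LV relationship of Fig. 8 can be sketched in Python. The class names, the fixed 10 GB LDEV size standing in for the capacity "L", and the allocation dictionary below are illustrative assumptions, not structures defined by the patent:

```python
from dataclasses import dataclass

LDEV_CAPACITY_GB = 10  # assumed fixed LDEV size "L"

@dataclass
class RaidGroup:
    rg_number: int
    num_ldevs: int  # the RG capacity is subdivided into equal-sized LDEVs

@dataclass
class LogicalVolume:
    lv_number: int
    ldev_numbers: list  # ordered: the index is the CN (connection sequence)

    @property
    def capacity_gb(self):
        # an LV's capacity is the sum of its constituent LDEVs
        return len(self.ldev_numbers) * LDEV_CAPACITY_GB

# RG 0 (four disks) subdivided into LDEV 0..4
rg0 = RaidGroup(rg_number=0, num_ldevs=5)

# LV 0 = LDEV 0-2 and LV 1 = LDEV 3-4, as in Fig. 8
lv0 = LogicalVolume(lv_number=0, ldev_numbers=[0, 1, 2])
lv1 = LogicalVolume(lv_number=1, ldev_numbers=[3, 4])

# LV 0 is allocated to CHN 0, LV 1 to CHN 2
allocation = {"CHN0": lv0, "CHN2": lv1}
```

The point of the indirection is that channel adapters see only logical volumes, while the pool can regroup LDEVs freely underneath.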
  • Fig. 9 shows a layout of the disk pool information centralized management table 1311 stored in the shared memory 13.
  • the table 1311 is substantially equal in structure to the disk pool information centralized management table 1522 stored in the disk pool manager 15. Description will now be given of the disk pool information centralized management table 1311.
  • the table 1311 includes an entry 13110 for a number assigned to the disk pool 6, an entry 1311 for a number assigned to the RG 17000, an entry 1312 for a number assigned to the LDEV 170, an entry 1313 for information of the capacity of the LDEV 170, an entry 1314 for a number assigned to the logical volume (LV), a CN entry 1315 for a sequence number of the LDEV 170 within the LV, an entry 1316 for a number of the channel adapter to which the associated LV is allocated, an entry 1317 for S-N sharing information indicating the possibility of sharing of the disk pool 6, RG, and LV between SAN (CHF) and NAS (CHN) channel adapters, an entry 1318 for S-S sharing information indicating the possibility of sharing between SAN adapters (CHFs), an entry 1319 for N-N sharing information indicating the possibility of sharing between NAS adapters (CHNs), and an entry 1320 for user class sharing information.
  • the sharing information between a plurality of CHs (to be referred to as intra-CH sharing information hereinbelow) and the like will be described later.
  • the CN is a number indicating a sequence of connection when a plurality of LDEV 170 are connected to each other to create a logical volume.
  • the storage system 1 confirms the RG 17000 and LDEV 170 constituting the disk pool 6 according to the information registered to the table 1311. The storage system 1 also confirms, according to the information registered to the table 1311, to which one of the channel adapters the LDEV 170 is allocated and available storage capacity of the disk pool 6.
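The two confirmations above can be illustrated with a minimal sketch of the table. The field names and capacities are hypothetical; only the lookups (available capacity of a pool, LDEVs of an LV in CN order) mirror what the text describes:

```python
# One row per LDEV: pool number, RG number, LDEV number, capacity,
# LV number, CN sequence, and the channel adapter it is allocated to.
rows = [
    {"pool": 0, "rg": 0, "ldev": 0, "cap_gb": 10, "lv": 0, "cn": 0, "ch": "CHN0"},
    {"pool": 0, "rg": 0, "ldev": 1, "cap_gb": 10, "lv": 0, "cn": 1, "ch": "CHN0"},
    {"pool": 0, "rg": 0, "ldev": 2, "cap_gb": 10, "lv": 0, "cn": 2, "ch": "CHN0"},
    {"pool": 0, "rg": 0, "ldev": 3, "cap_gb": 10, "lv": None, "cn": None, "ch": None},
]

def free_capacity(rows, pool):
    # available capacity = LDEVs of the pool not yet allocated to any CH
    return sum(r["cap_gb"] for r in rows if r["pool"] == pool and r["ch"] is None)

def ldevs_of_lv(rows, lv):
    # the LDEVs of a logical volume, in CN (connection sequence) order
    return [r["ldev"] for r in
            sorted((r for r in rows if r["lv"] == lv), key=lambda r: r["cn"])]
```

With these rows, `free_capacity(rows, 0)` yields 10 and `ldevs_of_lv(rows, 0)` yields `[0, 1, 2]`.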
  • Fig. 10 shows a configuration of the channel adapter centralized management table 1310 stored in the shared memory 13.
  • the table 1310 is substantially equal in structure to the channel adapter centralized management table 1524 of the disk pool manager 15.
  • the table 1310 includes an entry 13100 for an address of a channel adapter in the management network 16, the channel adapter being installed in the storage system 1, an entry 13101 for an identifier number of the channel adapter, an entry 13102 for information of a type of the channel adapter, and an entry 13103 for information of a state of operation of the channel adapter.
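A row of the table 1310 can be sketched as a small record; the concrete addresses, identifiers, and state strings below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ChannelAdapterEntry:
    mgmt_address: str  # address of the adapter on the management network 16
    ca_id: int         # identifier number of the channel adapter
    ca_type: str       # "CHN" (NAS, file I/O) or "CHF" (SAN, block I/O)
    state: str         # operational state, e.g. "normal" or "fault"

table = [
    ChannelAdapterEntry("mgmt-net:01", 0, "CHN", "normal"),
    ChannelAdapterEntry("mgmt-net:02", 1, "CHF", "normal"),
]

def adapters_of_type(table, ca_type):
    # e.g. find all NAS-side adapters to pick an allocation target
    return [e.ca_id for e in table if e.ca_type == ca_type]
```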
  • the storage system mainly conducts two operations as below.
  • the volume allocation processing of (A) is processing in which the storage system 1 allocates a storage capacity of the disk pool 6 as a logical volume to the channel adapter.
  • the volume recognition processing of (B) is processing in which a channel adapter of the storage system 1 recognizes a logical volume allocated to the channel adapter.
  • Fig. 11 shows a flowchart of a procedure of the volume allocation processing.
  • the administrator of the storage system 1 starts the manager software on the management terminal 18 (step 6100).
  • the administrator inputs information indicating CHNi and information of the logical volume to be allocated to CHNi to the management terminal 18.
  • when the manager software is executed, the management terminal 18 displays, in an area of its display indicating available items, information including an icon for each LDEV 170.
  • the administrator moves the icon, with a mouse or the like, to an area indicating CHNi on the display screen.
  • the administrator can simultaneously move a plurality of LDEV on the screen to obtain the required storage capacity.
  • the LDEV are set in the storage system 1 beforehand, when the product is delivered to the user (step 6101).
  • the information of CHNi selected by the administrator and the information of LDEV 170 to be allocated to CHNi are sent from the terminal 18 to the disk pool manager 15.
  • the disk pool manager 15 receives the information (step 6150).
  • the disk pool manager 15 executes the disk pool manager program 1521.
  • the disk pool manager 15 determines a pool number associated with the specified LDEV 170 and processes the disk pool information centralized management table 1522 to change the item in the allocation destination CH information entry 1312 from "not allocated" to "CHNi" (step 6151).
  • the disk pool manager 15 allocates a new logical volume number LVj to the LDEV 170 and registers information items of the allocated logical volume to the LV number entry 1314 and the CN number entry 1315 of the table 1522 (step 6152).
  • the disk pool manager 15 then notifies CHNi via the management network 16 that the LDEV 170 has been added and has been defined as a logical volume LVj (step 6153).
  • the CHNi executes the disk pool information acquisition program 110048 to achieve processing as follows.
  • the channel adapter CHNi updates the information in the disk pool information management table 110049 of its own according to the information notified from the disk pool manager 15 (step 6161).
  • the adapter CHNi also updates the associated items in the disk pool information centralized management table 1311 stored in the shared memory 13 (step 6162).
  • the channel adapter CHNi executes the disk volume control program 110045 to register, in the disk pool information management table 110049, information to configure the allocated LDEV group as a logical volume LVj and information such as an LV number needed to use the allocated LDEV group as the volume LVj (step 6163).
  • the CHNi sends a termination report to the disk pool manager 15 (step 6164). Having received the report, the manager 15 sends the termination report to the management terminal 18 (step 6155). The volume allocation processing is then terminated.
  • a storage capacity may also be added to an existing volume LVp in similar processing. This processing differs from the processing described above in the following points.
  • the administrator instructs, via the screen of the terminal 18, that LDEV 170 be added to LVp. Having received the instruction to add LDEV 170, the disk pool manager 15 changes the disk pool information centralized management table 1522 to add LDEV 170 to LVp and sends information of the change to CHNi. Having received the information of the change, CHNi changes the disk pool information management table 110049 and the disk pool information centralized management table in the shared memory 13.
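The allocation steps above (6150 to 6164) can be sketched as follows; the class and method names are hypothetical, and error handling and the notification to CHNi are omitted.

```python
class DiskPoolManager:
    """Toy model of the disk pool manager 15 and its centralized table."""

    def __init__(self, table):
        self.table = table   # maps LDEV number -> row of management data
        self.next_lv = 1     # next logical volume number to hand out

    def allocate(self, chn, ldev_nos):
        """Steps 6151-6152: mark LDEVs allocated to chn, assign a new LV
        number, and record each LDEV's connection sequence (CN)."""
        lv = self.next_lv
        self.next_lv += 1
        for cn, ldev in enumerate(ldev_nos):
            row = self.table[ldev]
            # exclusive allocation: an LDEV must not already belong to a CH
            assert row["ch"] == "not allocated"
            row.update(ch=chn, lv=lv, cn=cn)
        return lv

table = {6: {"ch": "not allocated"}, 7: {"ch": "not allocated"}}
mgr = DiskPoolManager(table)
lv = mgr.allocate("CHN1", [6, 7])
print(lv, table[6]["ch"], table[7]["cn"])  # 1 CHN1 1
```

In the patent's flow, the manager would then notify CHNi over the management network 16 so that CHNi can update its local table 110049.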
  • Fig. 12 is a flowchart showing a procedure of the volume recognition processing executed by the channel adapter when the storage system 1 is initiated or when a channel adapter is additionally installed.
  • in response to an event that the storage system 1 is powered on or that a channel adapter is added to the storage system 1 (step 6200), each channel adapter initializes itself (step 6201).
  • after the initialization is finished, each channel adapter starts execution of the disk pool information acquisition program 110048 (step 6202). Each channel adapter then refers to the disk pool information centralized management table 1311 in the shared memory 13 to acquire information of a logical volume to be used (step 6203).
  • when the information of the logical volume is obtained, the channel adapter registers the information in its own disk pool information management table 110049 (step 6204). As a result of the processing, each channel adapter can recognize the logical volume allocated thereto (step 6205).
  • each channel adapter can use any logical volume of the storage system 1 according to the information registered to the disk pool information centralized management table 1311 containing information of all logical volumes of the storage system 1 in a centralized fashion. Particularly, since the logical volumes are exclusively allocated in the embodiment to the channel adapters in the disk pool information centralized management table 1311 to avoid duplication of the allocation, each channel adapter can use the logical volume in an exclusive fashion.
  • the recognition processing is applicable to any channel adapters regardless of the channel adapter types such as a network channel adapter 1100 and a Fibre Channel adapter 1110.
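The recognition procedure can be sketched as a simple filter over the centralized table; the names below are assumptions for illustration.

```python
# Toy version of the disk pool information centralized management table 1311
# held in the shared memory 13.
centralized_table = [
    {"lv": 1, "ch": "CHN1"},
    {"lv": 2, "ch": "CHF1"},
    {"lv": 3, "ch": "CHN1"},
]

def recognize_volumes(ch_name):
    """Steps 6203-6204: an adapter copies into its local table only the
    entries of logical volumes allocated to itself."""
    return [e for e in centralized_table if e["ch"] == ch_name]

local_table = recognize_volumes("CHN1")
print([e["lv"] for e in local_table])  # [1, 3]
```

Because allocation is exclusive in the centralized table, two adapters filtering this way can never claim the same logical volume, which is the exclusivity property the text describes.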
  • the disks are subdivided into logical devices LDEV 170 to define a set of LDEV 170 as a logical volume LV.
  • the disks 1700 are not divided into LDEV 170.
  • the entire storage capacity of one or more RG 17000 forms a disk pool 6 and a predetermined storage capacity thereof is allocated as a logical volume to the channel adapter.
  • it is possible to provide a storage system including an arbitrary combination of a plurality of Fibre Channel adapters having the SAN or block I/O interface and a plurality of network channel adapters having the NAS or file I/O interface. It is therefore possible to construct a storage system having high scalability and a high degree of freedom for its configuration.
  • the setting of the storage areas of the storage system and the management and operation of the storage system can be conducted in a centralized manner by a single manager terminal. Therefore, the management and operation of the system is facilitated and the system management cost is reduced.
  • the user can safely use the storage capacity of a storage system including the SAN and the NAS by exclusively controlling the storage areas. Therefore, by managing the storage capacity in a centralized fashion, an easily accessible environment can be provided, and the storage capacity can be optimally distributed. This resultantly reduces the system management and operation cost.
  • intra-CH sharing information items 1317 to 1319 shown in Fig. 9 are used.
  • the information items 1317 to 1319 are set for each of the disk pool 6, the RAID group 17000, and the logical volume.
  • the intra-CH sharing information mainly includes information of three attributes as follows.
  • attributes are hierarchically ordered as P > R > R/W.
  • the attribute cannot be changed to an attribute of a lower level. That is, when R is set to a logical volume, R can be changed to P. However, the attribute R cannot be changed to the attribute R/W.
  • a logical volume for which sharing has once been prohibited cannot be changed back into a sharable logical volume. For example, when a logical volume has been used on the assumption that a channel adapter uses it exclusively, changing its attribute to the read-only sharing attribute (R) or the like may open data to be kept secret to the public. Alternatively, the data may be changed, causing logical inconsistency in some cases.
  • when it is desired to set a lower attribute to a logical volume having a higher attribute, the user must first delete the logical volume to return the storage areas such as the LDEV allocated to the logical volume to the disk pool 6, and then re-create the logical volume. Attributes other than those described above may also be set.
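Under the ordering P > R > R/W described above, the change rule can be sketched as a small check; the function name and encoding are assumptions.

```python
# Attribute levels: P (sharing prohibited) is highest, R/W is lowest.
LEVEL = {"R/W": 0, "R": 1, "P": 2}

def can_change(current, new):
    """Allow an attribute change only if it does not lower the level,
    so a volume can become more restricted but never less restricted."""
    return LEVEL[new] >= LEVEL[current]

print(can_change("R", "P"), can_change("R", "R/W"))  # True False
```

To lower an attribute despite this rule, the text's procedure applies: delete the volume, return its LDEVs to the disk pool, and re-create it.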
  • the intra-CH sharing information is taken over in the sequence "disk pool 6" > "RG 17000" > "LV or LDEV 170". That is, sharing information of a higher level (the disk pool is at the highest level) takes precedence over sharing information of RG 17000 and LDEV 170 at a lower level, and sharing information of RG 17000 takes precedence over that of LDEV 170.
  • when the disk pool 6 has the sharing prohibition attribute (P), the associated items of a lower level such as RG 17000 also have the sharing prohibition attribute. Therefore, the storage areas of the disk pool 6 in the storage system 1 cannot be shared at all. Since the sharing prohibition attribute has the highest level, an attribute of a lower level cannot be individually assigned to the associated items of a lower level such as RG 17000.
  • when the disk pool 6 has the R/W attribute, the associated items of a lower level such as RG 17000 also have the R/W attribute. Therefore, the storage areas of the disk pool 6 in the storage system 1 can be shared. In this case, an attribute of a lower level can be individually assigned to the associated items of a lower level such as RG 17000.
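One way to sketch the take-over rule, assuming that the most restrictive attribute along the pool > RG > LDEV hierarchy governs, is:

```python
LEVEL = {"R/W": 0, "R": 1, "P": 2}

def effective_attribute(pool_attr, rg_attr, ldev_attr):
    """The most restrictive attribute along the hierarchy wins; a P at the
    pool level therefore dominates everything beneath it."""
    return max((pool_attr, rg_attr, ldev_attr), key=LEVEL.get)

print(effective_attribute("P", "R/W", "R/W"))  # P
print(effective_attribute("R/W", "R", "R/W"))  # R
```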
  • the first mode is called "NAS over SAN".
  • in "NAS over SAN", for a file system created in a logical volume allocated to a network channel adapter 1100, a computer outside the storage system 1 sends a file access request via the LAN 20 to the channel adapter 1100. The computer having issued the request then communicates data via the SAN 30.
  • the second mode is called "backup via SAN".
  • a backup operation for data stored in a logical volume that is allocated to the network channel adapter 1100 to create a file system is conducted via the SAN connected to the Fibre Channel adapter 1110.
  • a logical volume 8 including the disk pool 6 (1), RG 17000 (2), and LDEV 170 (6 to 8) is defined as a logical volume shared between a computer associated with the SAN and a computer associated with the NAS. This can be seen from the fact that a plurality of types of channel adapters are registered in the channel adapter number entry 1316.
  • the S-N sharing information entry 1317 corresponding to LV 8 contains information that disk pool 1 is R/W, RG2 is R/W, and LV 8 is R/W.
  • the attribute of any logical volume other than LV8 has been changed to the P attribute. That is, the sharing is prohibited for such logical volumes.
  • the R/W attribute is set to LDEV 170 not used, i.e., reserved.
  • the sharing attribute is set to the disk pool 6 when the disk pool 6 is defined.
  • the attribute is set to RG 17000 when RG 17000 is registered to the disk pool 6.
  • the attribute is set to LV and LDEV when the volume allocation processing is executed.
  • the user inputs specific setting information from the management terminal 18.
  • the SAN-SAN (S-S) sharing information and the NAS-NAS (N-N) sharing information are also respectively set to the entries 1318 and 1319 in a similar way.
  • attributes having a plurality of levels are set to share a logical volume.
  • an attribute having a plurality of levels is set to each user who accesses a logical volume. When a user desires to access the logical volume, the user is restricted according to the attribute level.
  • the logical volume is shared between network attached storages (NAS).
  • the user class sharing information entry 1320 in the disk pool information centralized management table 1311 (1522) has a subentry for each attribute (user class) associated with the user.
  • the numbers of users belonging to the respective user classes (1) to (5) are sequentially ordered as (1) to (5) in a descending order.
  • to each user class, an attribute similar to the sharing attribute described in the second embodiment, namely R/W, R, or P, is assigned.
  • the meaning of each attribute, levels thereof, operation to take over the attribute, and possibility to change the attribute are substantially equal to those described in the second embodiment, and hence description thereof will be omitted.
  • the other users, namely the users in the same domain or firm, cannot change the contents of the logical volume.
  • the user class sharing attribute is set to the disk pool 6 when the disk pool 6 is defined.
  • the attribute is set to RG 17000 when RG 17000 is registered to the disk pool 6.
  • the attribute is set to LV and LDEV when the volume allocation processing is executed.
  • the user instructs the setting of the attribute from the manager terminal 18.
  • for the management of users, the storage system 1 must have a user class definition information table containing, for each user, information such as the name of the group, among the user classes 1 to 5, to which the user belongs. For each logical volume, the user information is registered in the user class definition table.
  • since the NAS includes a unit to manage users for each file system, the user must set user management information in the network channel adapter 1100.
  • when the user information is set in the adapter 1100, the user class definition information table is generated.
  • the administrator selects an appropriate user class.
  • security can be set at a logical volume level corresponding to the user class. Even if a user of a user class not allowed to access the logical volume attempts to access the volume, the user is checked for authentication for each logical volume according to the above function, and hence the access request from the user not having the access permission is rejected.
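The per-user-class check can be sketched as follows; the class names, the table layout, and the interpretation of P as "no access" are assumptions for illustration.

```python
# Hypothetical per-volume attribute for each user class, and a mapping
# from users to their user class (drawn from the user class definition
# information table described in the text).
volume_acl = {"class1": "R/W", "class2": "R", "class3": "P"}
user_class = {"alice": "class1", "bob": "class2", "eve": "class3"}

def check_access(user, mode):
    """Reject a request unless the user's class grants the requested mode."""
    attr = volume_acl.get(user_class.get(user), "P")
    if attr == "P":
        return False            # access prohibited for this class
    if attr == "R":
        return mode == "read"   # read-only class
    return True                 # R/W class

print(check_access("bob", "read"), check_access("bob", "write"))  # True False
```

This mirrors the behavior described above: a request from a user of a class without permission for the logical volume is rejected at the volume level.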
  • the embodiment can be easily applied to the NAS.
  • by applying the user management function of this embodiment to the SAN, it is also similarly possible to implement the disk pool management and the volume allocation while guaranteeing the access security between the user classes.
  • the access security between user classes is guaranteed in one channel adapter.
  • the configuration to share a volume between channel adapters as described above is combined with the configuration to guarantee the access security between user classes. That is, the storage system 1 provides a service in which the system 1 allows only a user of an appropriate user class to access a volume while the volume is shared between multiple channel adapters.
  • the management and operation cost of the storage system can be reduced.

Abstract

A storage system (1) has NAS and SAN functions and a high degree of freedom to configure a system to reduce the management and operation cost. The storage system includes a plurality of interface slots (190) in which a plurality of interface controllers (1100, 1110) can be installed, a block I/O interface controller (1110) which has SAN functions and which can be installed in the slots, a file I/O interface controller (1100) which has NAS functions and which can be installed in the slots, a storage capacity pool including a plurality of disk devices (6) accessible from the interface controllers, and a storage capacity pool controller (15) to control the storage capacity pool.

Description

FIELD OF THE INVENTION
The present invention relates to a storage system used in a computer system.
BACKGROUND OF THE INVENTION
Interfaces to connect a storage system to a computer are mainly classified into two types.
First, a block input/output (I/O) interface is used to conduct I/O operations between a storage system and a computer using a unit of blocks as a data control unit in the storage system. The block I/O interface includes a Fibre Channel, a small computer systems interface (SCSI), and the like.
A network in which a plurality of storage systems having a block I/O interface are connected to a plurality of computers is called a storage area network (SAN). SAN includes a Fibre Channel in many cases. JP-A-10-333839 describes an example of a storage system connected to a storage area network using a Fibre Channel.
Second, a file I/O interface is used to conduct I/O operations between a storage system and a computer using a unit of files recognized by an application program executed by the computer. The file I/O interface uses, in many cases, a protocol of a network file system employed by a file server of the prior art.
Particularly, an apparatus in which functions of the file server customized for the storage system are combined with the storage system is called a network attached storage (NAS). The NAS is directly connected to a local area network (LAN) and the like.
In a storage area network (SAN), the computer is connected to the storage system via a high-speed network exclusively constructed for the storage system, the network being separated from a network used to exchange messages between computers. Therefore, higher-speed data communication can be executed in the SAN when compared with the NAS connected via the LAN to the computer. Since the SAN adopts the block I/O interface, overhead of protocol processing is lower when compared with the NAS, and hence a high-speed response can be achieved.
However, since the SAN requires a network system dedicated to the storage system, installation of the SAN requires a higher cost. Therefore, the SAN is primarily used in a backbone system of an enterprise to construct a database in many cases.
On the other hand, since the NAS can directly use an existing LAN, the installation cost thereof is lowered and the installation becomes easier. The NAS uses a standardized network file system and hence the NAS user can manage data in the unit of general files. Therefore, data can be easily managed and files can be easily shared among a plurality of computers.
However, the NAS communicates data with computers via the LAN, which is also used for communications between the computers. There is a risk that the load on the LAN increases. Moreover, the protocol processing used in the network file system has high overhead, and hence the response time becomes longer as compared with the SAN. Therefore, the NAS is primarily used in file management or control systems of enterprises, in many cases for applications that manage web contents, data files of computer-aided design, and the like.
As above, the NAS and the SAN are complementary to each other and are applied to mutually different fields. Therefore, the NAS and the SAN are respectively used in appropriate fields.
For each computer to use a necessary amount of storage capacity of the storage system according to necessity, there exists a technique called a storage pool in which each resource (to be referred to as a device hereinbelow) of the storage system can be easily set and can be easily allocated to each computer. JP-A-2001-142648 describes an example of the storage pool.
SUMMARY OF THE INVENTION
When it is desired to construct a system including an SAN and an NAS according to the prior art, an NAS and a storage system having functions of the SAN are required. In this case, the user of the constructed system must separately manage or control the storage systems of different types. This results in a problem of increase in the cost for the management and operation of the system.
In the system including both the storage system having the NAS functions and the storage system having the SAN functions, the system user must manage the storage capacity for each storage system. Specifically, for each storage, the user must set an environment configuration and must manage its storage capacity. Consequently, there arises a problem that the management of the storage system becomes complex and difficult.
Assume that the storage capacity of the storage system used by the SAN is insufficient in a system including the SAN and the NAS. In this situation, even if NAS has available or enough storage capacity, the user cannot freely use the storage capacity of the NAS because the respective storage systems are separately or differently managed. That is, there arises a problem that the user cannot efficiently use the total capacity of the overall system.
It is therefore an object of the present invention to provide a storage system capable of reducing the management and operation cost for storage systems of a plurality of types.
Another object of the present invention is to provide a storage system which facilitates the management of the storage capacity of the overall system and which allows efficient use of the storage capacity.
To solve the objects according to the present invention, there is provided a storage system including a plurality of slots in which interface controllers of a plurality of types are installed, a plurality of disk devices, and a device for allocating storage areas of the disk devices to the slots.
The interface controllers of a plurality of types may include a block I/O interface controller having functions of a SAN and a file I/O interface controller having functions of a NAS.
According to a favorable embodiment of the present invention, the interface controllers have the same shape or form.
According to a favorable embodiment of the present invention, the storage system may include a device for allocating a storage area allocated to an interface controller to an interface controller of another type.
Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
  • Fig. 1 is a diagram schematically showing a system configuration of an embodiment of the present invention;
  • Fig. 2 is a schematic diagram showing a system configuration of an embodiment of the present invention;
  • Fig. 3 is a perspective view of a channel adapter;
  • Fig. 4 is a block diagram showing a configuration of a network channel adapter (CHN);
  • Fig. 5 is a block diagram showing a configuration of a Fibre Channel adapter board (CHF);
  • Fig. 6 is a diagram showing a configuration of a shared memory (SM);
  • Fig. 7 is a block diagram showing a configuration of a disk pool controller;
  • Fig. 8 is a schematic diagram showing a configuration of a disk pool;
  • Fig. 9 is a diagram showing a layout of a disk pool information centralized control table;
  • Fig. 10 is a diagram showing a channel adapter centralized control table;
  • Fig. 11 is a flowchart showing operation of volume allocation processing; and
  • Fig. 12 is a flowchart showing operation of volume recognition processing.
  • DESCRIPTION OF THE EMBODIMENTS
    Fig. 1 shows an embodiment of a storage system according to the present invention.
    The storage system 1 is connected via an LAN 20 and an LAN 21 to NAS clients 400. In the embodiment, each of the LAN 20 and 21 is an internet protocol (IP) network (to be simply referred to as a network hereinbelow). The storage system 1 is also connected via an SAN 30 to SAN clients 500. In the embodiment, the SAN 30 includes a Fibre Channel.
    The storage system 1 is connected to a management terminal 18. The terminal 18 includes input/output devices such as a display to present various kinds of information to the user and a keyboard and a mouse for the user to input information. The terminal 18 can communicate via a communication network 19 with the storage system 1 and is operated by the user to conduct various setting operations of the storage system 1.
    As above, the storage system 1 is a storage system including the SAN which includes the Fibre Channel and which uses the block I/O interface and the NAS which includes the IP networks and which uses the file I/O interface. That is, the storage system 1 includes different type interfaces.
    Any combination of the interface for SAN and the interface for NAS is available for the storage system 1. As one combination, the system 1 may be configured with the interface only for SAN or NAS.
    The storage system 1 includes a disk controller (to be abbreviated as DKC hereinbelow) 11 and storage devices (to be referred to as disks hereinbelow) 1700.
    The disk controller 11 includes network channel adapters (CHN) 1100, Fibre Channel adapters (CHF) 1110, disk adapters (DKA) 1200, a shared memory (SM) 13, a cache memory (CM) 14, and a disk pool manager (DPM) 15.
    The network channel adapter 1100 is an interface controller connected via the file I/O interface to NAS clients 400. The Fibre Channel adapter 1110 is an interface controller connected via the block I/O interface to SAN clients 500. The adapters 1100 and 1110 will be collectively referred to as channel adapters (CH) hereinbelow.
    The disk adapter 1200 is connected to disks 1700. The disk adapter 1200 controls data transfer between disks connected thereto and associated clients. The shared memory 13 stores configuration control information to control the configuration of the storage system 1, control information of the cache memory 14, and the like. Data stored in the shared memory 13 is shared among all channel adapters and disk adapters. The cache memory 14 temporarily stores data of the disk 1700. The cache memory 14 is also shared among all channel adapters and disk adapters.
    In the storage system 1, any channel adapter can access the cache memory 14 and all disks 1700. Such a configuration can be constructed using, for example, a crossbar switch.
    The disk pool manager (DPM) 15 manages all disks 1700 in a centralized fashion. Concretely, the manager 15 stores information to regard the overall storage capacity of the disks 1700 as one disk pool. The channel adapters, the disk adapters, and disk pool manager 15 are mutually connected to each other via a management or control network 16. The disk pool manager 15 is connected via the communication network 19 to the manager terminal 18.
    Fig. 2 shows an appearance of the storage system 1.
    A DKC cabinet 115 is disposed to store the CHN 1100, the CHF 1110, the DKA 1200, the SM 13, and the CM 14. A disk unit cabinet (DKU) 180 is used to store disks 1700. The shared memory 13 actually includes a plurality of controller boards 1300. The cache memory 14 also includes a plurality of cache boards 1400. The user of the storage system 1 obtains desired storage capacity by adjusting the number of cache boards 1400 and disks 1700.
    The DKC cabinet 115 includes a plurality of interface adapter slots 190. Each slot 190 is used to install an adapter board on which a channel adapter 1100 or the like is mounted. In the embodiment, the interface adapter slots 190 have the same form, the adapter boards have the same size, and the connectors have the same form regardless of the interface types to keep compatibility of interface adapters of various types. Therefore, any kind of adapter board can be installed in the slots 190 of the DKC cabinet 115 regardless of the interface type.
    The user of the storage system 1 can install any combination of the numbers of adapter boards, CHN 1100 and CHF 1110 in the adapter slots 190 of the storage system 1. The user therefore can freely configure the interface of the storage system 1.
    Fig. 3 shows a configuration of an adapter board with a channel adapter 1100. A connector 11007 is used to establish connection to a connector of the DKC cabinet 115. In the embodiment, the adapters 1100 and 1110 have the same form as described above. For the adapter board of a network channel adapter, an interface connector 2001 is associated with the LAN. For the adapter board of a Fibre Channel adapter 1110, the interface connector 2001 corresponds to a Fibre Channel.
    Fig. 4 shows an internal configuration of the network channel adapter 1100. The adapter 1100 includes a processor 11001, an LAN controller 11002, a management network controller 11003, a memory 11004, an SM interface (I/F) controller 11005, and a CM I/F controller 11006.
    The processor 11001 controls overall operation of the adapter 1100. The LAN controller 11002 controls communication between the adapter and the LAN. The management network controller 11003 controls communication between the adapter 1100 and the management network 16. The memory 11004 connected to the processor 11001 stores programs to be executed by the processor 11001 and control data.
    The SM I/F controller 11005 controls data transfer between the adapter 1100 and the shared memory 13. The CM I/F controller 11006 controls data transfer between the adapter 1100 and the cache memory 14.
    The memory 11004 stores an operating system program 110040, an LAN controller driver program 110041, a TCP/IP program 110042, a file system program 110043, a network file system program 110044, a disk volume control program 110045, a cache control program 110046, a disk pool information acquisition program 110048 and a disk pool information management table 110049. The memory 11004 also includes a data buffer 110047.
    The operating system program 110040 manages the programs stored in the memory 11004 and controls input and output operations. The LAN controller driver program 110041 controls the LAN controller 11002. The TCP/IP program 110042 controls the communication protocol, namely, TCP/IP on the LAN. The file system program 110043 manages the files stored on the disks 1700.
    The network file system program 110044 controls the protocol of the network file system to supply files from the disks 1700 to the NAS clients 400. The disk volume control program 110045 controls accesses to disk volumes set to the disks 1700. The cache control program 110046 manages data stored in the cache memory 14 and controls to judge a data hit/miss condition and the like in the cache memory 14.
    The disk pool information acquisition program 110048 is executed when any one of the channel adapters 1100 acquires information of the disk pool stored in the shared memory 13. The disk pool information management table 110049 stores disk pool information obtained as a result. The data buffer 110047 adjusts a data transfer speed between the NAS clients 400 and the cache memory 14 and is used to store file data in the cache memory 14.
    The processor 11001 may be a single processor or a set of a plurality of processors. For example, the processor 11001 may be configured as a symmetric multiprocessor for horizontal load balancing of control processing. The processor 11001 may also be configured as an asymmetric multiprocessor in which a processor executes network file system protocol processing of the file I/O interface and another processor controls disk volumes. Alternatively, the processor 11001 may also be configured as a combination of these configurations.
    Fig. 5 shows an internal configuration of the Fibre Channel adapter 1110. The adapter 1110 includes a processor 11101, a Fibre Channel controller 11102, a management network controller 11103, a memory 11104, an SM controller 11005, a CM controller 11006, and a data buffer 11107.
    The processor 11101 controls overall operation of the adapter 1110. The Fibre Channel controller 11102 controls communication between the adapter 1110 and the SAN. The management network controller 11103 controls communication between the adapter 1110 and the management network 16. The memory 11104 connected to the processor 11101 stores programs to be executed by the processor 11101 and control data.
    The SM controller 11005 controls data transfer between the adapter 1110 and the SM 13. The CM controller 11006 controls data transfer between the adapter 1110 and the CM 14. The data buffer 11107 serves as a buffer memory to minimize the difference in data transfer speed between the SAN clients 500 and the cache memory 14.
    The memory 11104 stores a Fibre Channel controller driver program 111041, an operating system program 111040, a disk volume control program 111045, a cache control program 111046, a disk pool information acquisition program 111048, and a disk pool information management table 111049.
    The Fibre Channel controller driver program 111041 controls the Fibre Channel controller 11102. The other programs are almost identical in function to those stored in the memory of the network channel adapter 1100 shown in Fig. 4, and hence description thereof will be omitted.
    Like the processor 11001 of the network channel adapter 1100, the processor 11101 may also be configured as a single processor or a multiprocessor.
    Fig. 6 shows a layout of the shared memory 13. The memory 13 includes a configuration managing information area 131. The area 131 includes a channel adapter centralized management table 1310 and a disk pool information centralized management table 1311. Although the memory 13 stores information on all configurations of the storage system 1, only these two tables are described in conjunction with the embodiment.
    The channel adapter centralized management table 1310 stores management information of all channel adapters of the storage system 1. Details of the table 1310 will be described later. The table 1311 stores management information of a disk pool 6 which is a virtual storage area configured by all disks 1700 of the storage system 1.
    Fig. 7 shows an internal configuration of the disk pool manager 15. The manager 15 includes a processor 151, a memory 152, a management network controller 153, and a communication controller 155.
    The processor 151 controls overall operation of the disk pool manager 15. The memory 152 stores control programs executed by the processor 151 and data used for control. The management network controller 153 controls data transfer between the disk pool manager 15 and the other devices of the disk controller 11. The communication controller 155 controls data transfer between the management terminal 18 and the disk pool manager 15.
    The memory 152 stores a disk pool manager program 1521, a disk pool information centralized management table 1522, a channel adapter recognition and authentication manager program 1523, and a channel adapter centralized management table 1524.
    The disk pool manager program 1521 is executed when the storage system 1 configures and manages the disk pool 6 using the disks 1700. The channel adapter recognition and authentication manager program 1523 is executed when the storage system 1 senses a state of installation of the channel adapters 1100 and 1110 to confirm normal operation thereof.
    The disk pool information centralized management table 1522 stores information for the disk pool manager program 1521 to manage the disk pool 6 thus constructed and information to manage allocation of storage areas of the disk pool to the respective channel adapters. The contents of the table 1522 are basically the same as those of the table 1311 stored in the shared memory 13.
    The channel adapter centralized management table 1524 stores information of channel adapters sensed and authenticated by the manager program 1523. The contents of the table 1524 are basically the same as those of the channel adapter centralized management table 1310 stored in the shared memory 13.
    Fig. 8 shows a configuration of the disk pool 6.
    The disk pool 6 is a set of storage areas of a plurality of disks 1700 defined as one virtual storage area.
    A plurality of disk pools 6 may be set in the storage system 1. Specifically, according to differences in the characteristics of the disk devices, for example, performance (a rotating speed of 7200 rpm or 15000 rpm) or redundancy of the disk group (RAID1 or RAID5), a disk pool may be set for each group of disks having the same characteristic. Alternatively, a disk pool may be set for each group of users (user group) using the storage system.
    The disk pool may include an arbitrary number of disks 1700. By adjusting the number thereof according to necessity, the user can change the storage capacity of the disk pool 6.
    Fig. 8 shows a specific example of a correspondence between one disk pool 6 and a plurality of disks 1700, concretely, a RAID group (RG) 17000. The RG 17000 is a redundant array of inexpensive disks (RAID) including a plurality of disks 1700.
    The storage areas allocated to one disk pool 6 are divided into a plurality of logical devices (LDEV) 170. A plurality of LDEV 170 are collectively defined as a volume, i.e., a logical volume (LV) 1750.
    In Fig. 8, an RG 17000 (RG 0) including four disks 1700 is allocated to one disk pool 6. The storage capacity of the RG 0 is subdivided into LDEV 0 to LDEV k each of which has a storage capacity of L. A set including LDEV 0 to LDEV 2 configures one LV 0. Similarly, a set of LDEV 3 and LDEV 4 configures one LV 1 (1751). In this way, several LV are similarly constructed. In Fig. 8, the LV0 and the LV1 are respectively allocated to the CHN 0 and the CHN 2. The LDEV may correspond to one stripe size of the RAID.
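    The Fig. 8 layout can be modeled with a small data structure. The following sketch is illustrative only (the capacity value, class name, and method are assumptions, not taken from the patent): a disk pool is represented as a mapping from LV numbers to ordered lists of LDEV numbers, so that an LV's capacity is the sum of the capacities of its LDEVs.

```python
from dataclasses import dataclass, field

LDEV_CAPACITY = 10  # capacity L of each LDEV; the unit is illustrative

@dataclass
class DiskPool:
    # maps LV number -> list of LDEV numbers, in connection (CN) order
    volumes: dict = field(default_factory=dict)

    def lv_capacity(self, lv: int) -> int:
        # An LV's capacity is the sum of its constituent LDEV capacities.
        return len(self.volumes[lv]) * LDEV_CAPACITY

# Fig. 8: LV 0 = {LDEV 0, 1, 2}, LV 1 = {LDEV 3, 4}
pool = DiskPool(volumes={0: [0, 1, 2], 1: [3, 4]})
```

    Appending a further LDEV number to a volume's list corresponds to enlarging that volume, mirroring the capacity-addition processing described later.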
    Fig. 9 shows a layout of the disk pool information centralized management table 1311 stored in the shared memory 13. The table 1311 is substantially equal in structure to the disk pool information centralized management table 1522 stored in the disk pool manager 15. Description will now be given of the disk pool information centralized management table 1311.
    The table 1311 includes an entry 13110 for a number assigned to the disk pool 6, an entry 1311 for a number assigned to the RG 17000, an entry 1312 for a number assigned to the LDEV 170, an entry 1313 for information of the capacity of the LDEV 170, an entry 1314 for a number assigned to the logical volume (LV), a CN entry 1315 for a sequence number of the LDEV 170 within the LV, an entry 1316 for the number of the channel adapter to which an associated LV is assigned, an entry 1317 for S-N sharing information indicating the possibility of sharing of the disk pool 6, RG, and LV between channel adapters, namely, between SAN (CHF) and NAS (CHN), an entry 1318 for S-S sharing information indicating the possibility of sharing between SAN adapters (CHFs), an entry 1319 for N-N sharing information indicating the possibility of sharing between NAS adapters (CHNs), and an entry 1320 for user class sharing information. The sharing information between a plurality of CHs (to be referred to as intra-CH sharing information hereinbelow) and the like will be described later. The CN is a number indicating a sequence of connection when a plurality of LDEV 170 are connected to each other to create a logical volume.
    The storage system 1 confirms the RG 17000 and LDEV 170 constituting the disk pool 6 according to the information registered to the table 1311. The storage system 1 also confirms, according to the information registered to the table 1311, to which one of the channel adapters the LDEV 170 is allocated and available storage capacity of the disk pool 6.
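    As a minimal illustration of these two lookups, the table 1311 can be thought of as a list of per-LDEV records. The field names and values below are assumptions for illustration, not the patent's actual encoding.

```python
# Hypothetical per-LDEV rows of the disk pool information centralized
# management table 1311: pool number, RG number, LDEV number, capacity,
# LV number, CN, and the allocated channel adapter (None = not allocated).
table_1311 = [
    {"pool": 0, "rg": 0, "ldev": 0, "cap": 10, "lv": 0, "cn": 0, "ch": "CHN0"},
    {"pool": 0, "rg": 0, "ldev": 1, "cap": 10, "lv": 0, "cn": 1, "ch": "CHN0"},
    {"pool": 0, "rg": 0, "ldev": 2, "cap": 10, "lv": None, "cn": None, "ch": None},
]

def owner_of_lv(table, lv):
    # To which channel adapter is the given logical volume allocated?
    for row in table:
        if row["lv"] == lv:
            return row["ch"]
    return None

def free_capacity(table, pool):
    # Available capacity of a pool = capacity of LDEVs with no LV assigned.
    return sum(r["cap"] for r in table if r["pool"] == pool and r["lv"] is None)
```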
    Fig. 10 shows a configuration of the channel adapter centralized management table 1310 stored in the shared memory 13. The table 1310 is substantially equal in structure to the channel adapter centralized management table 1524 of the disk pool manager 15.
    The table 1310 includes an entry 13100 for an address of a channel adapter in the management network 16, the channel adapter being installed in the storage system 1, an entry 13101 for an identifier number of the channel adapter, an entry 13102 for information of a type of the channel adapter, and an entry 13103 for information of a state of operation of the channel adapter.
    Description will next be given of operation of the storage system in the embodiment of the present invention. The storage system mainly conducts two operations as below.
  • (A) Volume allocation processing
  • (B) Volume recognition processing
    The volume allocation processing of (A) is processing in which the storage system 1 allocates a storage capacity of the disk pool 6 as a logical volume to a channel adapter.
    The volume recognition processing of (B) is processing in which a channel adapter of the storage system 1 recognizes a logical volume allocated to the channel adapter.
    Fig. 11 shows a flowchart of a procedure of the volume allocation processing.
    When a new logical volume is allocated to a channel adapter (to be referred to as CHNi hereinbelow), the administrator of the storage system 1 starts manager software on the management terminal 18 (step 6100).
    The administrator inputs, to the management terminal 18, information indicating CHNi and information of the logical volume to be allocated to CHNi. Concretely, when the management terminal 18, in response to execution of the manager software, displays on its screen an icon for each LDEV 170 in an area indicating available items, the administrator uses a mouse or the like to move the icon to an area indicating CHNi. In this operation, the administrator can simultaneously move a plurality of LDEV to obtain the required storage capacity. In the embodiment, it is assumed that the LDEV are set in the storage system 1 beforehand, when the product is delivered to the user (step 6101).
    The information of CHNi selected by the administrator and the information of LDEV 170 to be allocated to CHNi are sent from the terminal 18 to the disk pool manager 15. The disk pool manager 15 receives the information (step 6150).
    Having received the information from the terminal 18, the disk pool manager 15 executes the disk pool manager program 1521.
    The disk pool manager 15 determines a pool number associated with the specified LDEV 170 and updates the disk pool information centralized management table 1522 to change the item in the allocation destination CH information entry 1316 from "not allocated" to "CHNi" (step 6151).
    Subsequently, the disk pool manager 15 allocates a new logical volume number LVj to the LDEV 170 and registers information items of the allocated logical volume to the LV number entry 1314 and the CN number entry 1315 of the table 1522 (step 6152).
    The disk pool manager 15 then notifies CHNi via the management network 16 that the LDEV 170 has been added and has been defined as a logical volume LVj (step 6153).
    Having received the notification (step 6160), the CHNi executes the disk pool information acquisition program 110048 to achieve processing as follows.
    The channel adapter CHNi updates the information in the disk pool information management table 110049 of its own according to the information notified from the disk pool manager 15 (step 6161). The adapter CHNi also controls the shared memory 13 to update associated items in the disk pool information centralized management table 1311 stored in the shared memory 13 (step 6162).
    Next, the channel adapter CHNi executes the disk volume control program 110045 to register, in the disk pool information management table 110049, information to configure the allocated LDEV group as a logical volume LVj and information, such as an LV number, needed to use the allocated LDEV group as the volume LVj (step 6163).
    Thereafter, the CHNi sends a termination report to the disk pool manager 15 (step 6164). Having received the report, the manager 15 sends the termination report to the management terminal 18 (step 6155). The volume allocation processing is then terminated.
    Although allocation of a new volume is described above, a storage capacity may also be added to an existing volume LVp by similar processing. This processing differs from the processing described above in the following points. The administrator instructs, via the screen of the terminal 18, that LDEV 170 be added to LVp. Having received the instruction to add LDEV 170, the disk pool manager 15 changes the disk pool information centralized management table 1522 to add LDEV 170 to LVp and sends information of the change to CHNi. Having received the information of the change, CHNi changes the disk pool information management table 110049 and the disk pool information centralized management table in the shared memory 13.
    The processing described above is similarly applicable to addition of a logical volume to a Fibre Channel adapter 1110.
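    The core of the allocation flow (steps 6151 to 6163) can be sketched as two operations on a centralized table: the disk pool manager marks unallocated LDEVs with the target adapter and a new LV number, and the notified adapter then acquires its rows into a local table. This is an illustrative sketch under assumed field names, not the patent's implementation.

```python
# Illustrative central table: two LDEVs not yet allocated.
central_table = [
    {"ldev": 5, "ch": None, "lv": None, "cn": None},
    {"ldev": 6, "ch": None, "lv": None, "cn": None},
]

def allocate_volume(table, ldevs, ch, lv):
    # Steps 6151-6152: allocate each specified LDEV to channel adapter
    # `ch` and register it as part of logical volume `lv`, recording its
    # connection sequence number CN.
    for cn, ldev in enumerate(ldevs):
        row = next(r for r in table if r["ldev"] == ldev and r["ch"] is None)
        row.update(ch=ch, lv=lv, cn=cn)

def acquire_local_table(table, ch):
    # Steps 6160-6163: the notified adapter registers, in its own disk
    # pool information management table, the rows allocated to it.
    return [r for r in table if r["ch"] == ch]

allocate_volume(central_table, [5, 6], "CHN1", 3)
local_table = acquire_local_table(central_table, "CHN1")
```

    Adding capacity to an existing volume corresponds to calling `allocate_volume` again with the existing LV number and further CN values.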
    Fig. 12 is a flowchart showing a procedure of the volume recognition processing executed by the channel adapter when the storage system 1 is initiated or when a channel adapter is additionally installed.
    In response to an event that the storage system 1 is powered on or that a channel adapter is added to the storage system 1 (step 6200), each channel adapter initializes itself (step 6201).
    After the initialization is finished, each channel adapter starts execution of the disk pool information acquisition program 110048 (step 6202). Each channel adapter then refers to the disk pool information centralized management table 1311 in the shared memory 13 to acquire information of a logical volume to be used (step 6203).
    When the information of the logical volume is obtained, the channel adapter registers the information in its own disk pool information management table 110049 (step 6204). As a result of the processing, each channel adapter can recognize the logical volume allocated thereto (step 6205).
    According to the control operation, each channel adapter can use any logical volume of the storage system 1 according to the information registered in the disk pool information centralized management table 1311, which contains information of all logical volumes of the storage system 1 in a centralized fashion. In particular, since in this embodiment the logical volumes are allocated to the channel adapters in the disk pool information centralized management table 1311 without duplication, each channel adapter can use its logical volumes in an exclusive fashion.
    The recognition processing is applicable to any channel adapters regardless of the channel adapter types such as a network channel adapter 1100 and a Fibre Channel adapter 1110.
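    The exclusivity property relied on above (no logical volume allocated to two channel adapters) can be checked mechanically. A minimal sketch with illustrative adapter names:

```python
def exclusive(allocations):
    # allocations: (LV number, channel adapter) pairs read from the
    # centralized table. Returns False if any LV appears under two
    # different adapters, i.e. the allocation is duplicated.
    owners = {}
    for lv, ch in allocations:
        if owners.setdefault(lv, ch) != ch:
            return False
    return True
```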
    In the description of the embodiment, the disks are subdivided into logical devices LDEV 170 to define a set of LDEV 170 as a logical volume LV. However, there may exist an embodiment in which the disks 1700 are not divided into LDEV 170. In this case, the entire storage capacity of one or more RG 17000 forms a disk pool 6 and a predetermined storage capacity thereof is allocated as a logical volume to the channel adapter.
    According to the embodiment, there can be implemented a storage system including an arbitrary combination of a plurality of Fibre Channel adapters having the SAN or the block I/O interface and a plurality of network channel adapters having the NAS or the file I/O interface. It is therefore possible to construct a storage system having high scalability and a high degree of freedom for its configuration.
    The setting of the storage areas of the storage system and the management and operation of the storage system can be conducted in a centralized manner by a single manager terminal. Therefore, the management and operation of the system is facilitated and the system management cost is reduced.
    The user can safely use the storage capacity of a storage system including the SAN and the NAS by exclusively controlling the storage areas. Therefore, by managing the storage capacity in a centralized fashion, an easily accessible environment can be provided, and the storage capacity can be optimally distributed. This resultantly reduces the system management and operation cost.
    Description will now be given of another embodiment of the present invention. In the embodiment described above, mutually different logical volumes are respectively allocated to the Fibre Channel adapters 1110 or the network channel adapters 1100. However, a logical volume is allocated to a plurality of channel adapters in this embodiment. Description will be given of a method of managing the disk pool and a method of sharing the logical volume.
    In the embodiment, intra-CH sharing information items 1317 to 1319 shown in Fig. 9 are used. The information items 1317 to 1319 are set for each of the disk pool 6, the RAID group 17000, and the logical volume.
    The intra-CH sharing information mainly includes information of three attributes as follows.
  • (1) Readable/writable sharing attribute (R/W)
  • (2) Read-only sharing attribute (R)
  • (3) Sharing prohibition attribute (P)
    These attributes are hierarchically ordered as P > R > R/W. Once an attribute is set to a logical volume, the attribute cannot be changed to an attribute of a lower level. That is, when R is set to a logical volume, R can be changed to P; however, the attribute R cannot be changed to the attribute R/W. Suppose that a logical volume for which sharing was once prohibited could be changed to a sharable logical volume. For example, in a case in which a logical volume is used on the assumption that a channel adapter exclusively uses the logical volume, if the attribute of the logical volume were changed to the read-only sharing attribute (R) or the like, data to be kept secret might be disclosed. Furthermore, if the data were changed, logical inconsistency could occur depending on the case.
    When it is desired to set a lower attribute to a logical volume having a higher attribute, the user must first delete the logical volume to return the storage areas, such as LDEV, allocated to the logical volume to the disk pool 6, and then re-configure the logical volume. Attributes other than those described above may also be set.
    The intra-CH sharing information is inherited in the order "disk pool 6" > "RG 17000" > "LV or LDEV 170". That is, sharing information at a higher level (the disk pool is the highest level) takes precedence over the sharing information of RG 17000 and LDEV at lower levels, and the sharing information of RG 17000 takes precedence over that of LDEV 170.
    When the disk pool 6 has the sharing prohibition attribute (P), the associated items at lower levels, such as RG 17000, also have the sharing prohibition attribute. Therefore, the storage areas of this disk pool 6 in the storage system 1 cannot be shared at all. Since the sharing prohibition attribute has the highest level, no attribute can be individually assigned to the associated items at lower levels such as RG 17000. When the disk pool 6 has the R/W attribute, the associated items at lower levels, such as RG 17000, also have the R/W attribute. Therefore, the storage areas of the disk pool 6 in the storage system 1 can be shared. In this case, an attribute can be individually assigned to the associated items at lower levels such as RG 17000.
    As above, by setting the possibility of sharing at each logical level of the storage areas of the storage system, such as the disk pool 6 and RG 17000, it is possible, at allocation of a logical volume, to completely prohibit or to allow sharing of the storage areas of the storage system 1. Therefore, volume allocation is facilitated and the strength of security can be controlled.
    Next, description will be given of the setting of a volume to be shared between channel adapters in several cases.
    Description will be given of two modes in a case in which a single logical volume is shared between a Fibre Channel adapter 1110 and a network channel adapter 1100, namely, between a SAN and a NAS.
    The first mode is called "NAS over SAN". In this mode, for a file system created in a logical volume allocated to a network channel adapter 1100, a computer outside the storage system 1 sends a file access request via the LAN 20 to the channel adapter 1100. The computer having issued the request then transfers the file data via the SAN 30.
    The second mode is called "backup via SAN". In this mode, data of a file system created in a logical volume allocated to the network channel adapter 1100 is backed up via the SAN connected to the Fibre Channel adapter 1110.
    These examples are associated with SAN-NAS sharing, specifically, the sharing of a logical volume between a computer connected to the SAN and a computer connected to the NAS. Therefore, the S-N sharing information 1317 of the intra-CH sharing information is used.
    In Fig. 9, a logical volume 8 including the disk pool 6 (1), RG 17000 (2), and LDEV 170 (6 to 8) is defined as a logical volume shared between a computer associated with the SAN and a computer associated with the NAS. This can be seen from the fact that a plurality of types of channel adapters are registered in the channel adapter number entry 1316.
    The S-N sharing information entry 1317 corresponding to LV 8 contains information indicating that disk pool 1 is R/W, RG 2 is R/W, and LV 8 is R/W. In the same RG 17000 of the same disk pool, the attribute of any logical volume other than LV 8 has been changed to the P attribute; that is, sharing is prohibited for such logical volumes. The R/W attribute is set for LDEV 170 that are not in use, i.e., reserved.
    The sharing attribute is set to the disk pool 6 when the disk pool 6 is defined. The attribute is set to RG 17000 when RG 17000 is registered to the disk pool 6. The attribute is set to LV and LDEV when the volume allocation processing is executed. The user inputs specific setting information from the management terminal 18.
    The SAN-SAN (S-S) sharing information and the NAS-NAS (N-N) sharing information are also respectively set to the entries 1318 and 1319 in a similar way.
    In the description of the embodiment above, attributes having a plurality of levels are set to share a logical volume. Next, description will be given of still another embodiment of the present invention. In addition to the above attributes, an attribute having a plurality of levels is set for each user who accesses a logical volume. When a user attempts to access the logical volume, the access is restricted according to the attribute level. In the description, the logical volume is shared between network attached storages (NAS).
    In Fig. 9, the user class sharing information entry 1320 in the disk pool information centralized management table 1311 (1522) has a subentry for each attribute (user class) associated with the user.
    Five user classes are as follows.
  • (1) Everyone: All users (E)
  • (2) Company: Users in the same enterprise (C)
  • (3) Domain: Users in the same domain (D)
  • (4) WorkGroup: Users in the same work group (W)
  • (5) UserGroup: Users in the same user group (U)
    In general, the numbers of users belonging to the respective user classes decrease in the order (1) to (5), class (1) being the largest.
    To each user class, an attribute similar to the intra-CH sharing attribute described in the second embodiment, namely, the attribute R/W, R, or P, is assigned. The meaning of each attribute, the levels thereof, the inheritance of the attribute, and the possibility of changing the attribute are substantially equal to those described in the second embodiment, and hence description thereof will be omitted.
    Description will be given of the sharing operation using logical volume (LV) 0 shown in Fig. 9. The user class sharing information entry 1320 has subentries of E = P, C = R, D = R, W = R/W, and U = R/W. That is, any user in the same user group or the same work group can conduct read and write operations on logical volume 0. The other users, namely, the users in the same domain or the same enterprise, can read but cannot change the contents of the logical volume; for all remaining users (Everyone), sharing is prohibited.
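    The LV 0 example can be expressed as a small access check. A sketch only: the mapping mirrors the subentries E = P, C = R, D = R, W = R/W, U = R/W, and `user_class` is assumed to be the requesting user's class as determined by the user management function described below.

```python
# User class sharing subentries for LV 0 in Fig. 9.
LV0_CLASS_SHARING = {"E": "P", "C": "R", "D": "R", "W": "R/W", "U": "R/W"}

def permitted(user_class, operation, sharing=LV0_CLASS_SHARING):
    # Apply the class's sharing attribute to the requested operation.
    attr = sharing[user_class]
    if attr == "P":
        return False            # sharing prohibited for this class
    if operation == "write":
        return attr == "R/W"    # writing requires the R/W attribute
    return True                 # reading is allowed under R and R/W
```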
    The user class sharing attribute is set to the disk pool 6 when the disk pool 6 is defined. The attribute is set to RG 17000 when RG 17000 is registered to the disk pool 6. The attribute is set to LV and LDEV when the volume allocation processing is executed. The user instructs the setting of the attribute from the manager terminal 18.
    For the management of users, the storage system 1 must have a user class definition information table containing, for each user, information indicating the name of the group, in each of the user classes (1) to (5), to which the user belongs. The user information is registered in the user class definition table for each logical volume.
    Specifically, since the NAS includes a unit to manage users for each file system, the user must set user management information to the network channel adapter 1100. When the user information is set to the adapter 1100, the user class definition information table is generated.
    According to the embodiment, when a logical volume is set to the storage system 1, the administrator selects an appropriate user class. As a result, security can be set at a logical volume level corresponding to the user class. Even if a user of a user class not allowed to access the logical volume attempts to access the volume, the user is checked for authentication for each logical volume according to the above function, and hence the access request from the user not having the access permission is rejected.
    By the security function, it is possible to guarantee access security in a system in which one storage system 1 of a large size is installed in a data center and a plurality of firms share one disk pool.
    Since the NAS inherently includes the user management function, the embodiment can be easily applied to the NAS. By installing the user management function of this embodiment in the SAN, it is also similarly possible to implement the disk pool management and the volume allocation while guaranteeing the access security between the user classes.
    In the description of the embodiment, the access security between user classes is guaranteed in one channel adapter. However, it is also possible to implement an embodiment in which the configuration to share a volume between channel adapters as described above is combined with the configuration to guarantee the access security between user classes. That is, the storage system 1 provides a service in which the system 1 allows only a user of an appropriate user class to access a volume while the volume is shared between multiple channel adapters.
    According to the present invention, the management and operation cost of the storage system can be reduced.
    According to the present invention, there can be provided a storage system of which the storage capacity is easily managed.
    According to the present invention, it is possible to increase the flexibility of the storage system.
    It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.

    Claims (16)

    1. A storage system (1), comprising:
      a plurality of slots (190) in which interface control devices of different types (1100, 1110) are installed;
      a plurality of disk devices (1700); and
      a control section (15) for controlling storage areas of the disk devices, wherein:
      the control section keeps information indicating a correspondence between the storage areas of the disk devices and the interface control devices of different types installed in the slots; and
      the interface control devices of different types use the storage areas according to the information indicating the correspondence.
    2. A storage system according to claim 1, wherein the interface control devices of different types include:
      a channel adapter board (1100) corresponding to a file I/O interface; and
      a channel adapter board (1110) corresponding to a block I/O interface.
    3. A storage system according to claim 2, wherein the interface control devices of different types have substantially the same form.
    4. A storage system according to claim 3, wherein the storage areas of the disk devices are subdivided into a plurality of groups according to a characteristic of each of the disk devices.
    5. A storage system according to claim 4, wherein:
      the control section includes information to establish a correspondence between one of the storage areas associated with one of the interface control devices to other one thereof; and
      the other one interface control device accesses the storage device according to the information.
    6. A storage system according to claim 5, wherein:
      a particular attribute is assigned to each of the storage areas of the disk devices; and
      the control section processes an access request received via the interface control device according to the attribute.
    7. A storage system according to claim 6, wherein the particular attribute includes an attribute indicating whether or not the interface control device corresponding to the storage area is allowed to write data in the storage area.
    8. A storage system according to claim 7, wherein the particular attribute includes an attribute indicating whether or not at least two of the interface control devices correspond to the storage area.
    9. A storage system according to claim 1, wherein the storage system transmits, in response to a file access request received by one of the interface control devices, data corresponding to the file access request via other one of the interface control devices.
    10. A storage system according to claim 8, further comprising a shared memory (13) for storing therein information indicating a correspondence between the interface control devices and the storage areas.
    11. A storage system according to claim 3, wherein the disk devices configure redundant arrays of inexpensive disks (RAID).
    12. A method of using storage areas in a storage system, comprising the steps of:
      installing interface control devices of different types (1100, 1110) in a plurality of slots (190);
      allocating storage areas of the storage system to the interface control devices; and
      using, by the interface control devices, the storage areas according to the allocation.
    13. A method according to claim 12, further comprising the steps of:
      assigning a particular attribute to each of the storage areas; and
      using, by the interface control devices, the storage areas according to the allocation and the attribute.
    14. A method according to claim 13, wherein the particular attribute includes an attribute indicating whether or not the interface control device corresponding to the storage area is allowed to write data in the storage area.
    15. A method according to claim 14, wherein the storage areas are subdivided into a plurality of groups.
    16. A method according to claim 12, further comprising the steps of:
      registering user class information based on which accesses to the storage areas are limited;
      registering user information indicating a user class to which each of users of the storage system belongs; and
      authenticating the access right of the users based on said user information and said user class information.
    EP02018182A 2002-04-26 2002-08-19 Storage system Pending EP1357463A3 (en)

    Applications Claiming Priority (2)

    Application Number Priority Date Filing Date Title
    JP2002125172 2002-04-26
    JP2002125172A JP2003316713A (en) 2002-04-26 2002-04-26 Storage device system

    Publications (2)

    Publication Number Publication Date
    EP1357463A2 true EP1357463A2 (en) 2003-10-29
    EP1357463A3 EP1357463A3 (en) 2008-03-19

    Family

    ID=28786803

    Family Applications (1)

    Application Number Title Priority Date Filing Date
    EP02018182A Pending EP1357463A3 (en) 2002-04-26 2002-08-19 Storage system

    Country Status (3)

    Country Link
    US (3) US6810462B2 (en)
    EP (1) EP1357463A3 (en)
    JP (1) JP2003316713A (en)

    Cited By (7)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    EP1582970A1 (en) * 2004-04-01 2005-10-05 Hitachi Ltd. Storage control system
    US7120742B2 (en) 2004-01-29 2006-10-10 Hitachi, Ltd. Storage system having a plurality of interfaces
    EP1746489A1 (en) * 2005-07-05 2007-01-24 Hitachi, Ltd. Storage control system
    WO2007019076A3 (en) * 2005-08-03 2007-05-03 Sandisk Corp Mass data storage system
    EP1901160A2 (en) * 2006-09-08 2008-03-19 Hitachi, Ltd. Storage system, storage system control method, and storage controller
    WO2008052880A1 (en) * 2006-10-30 2008-05-08 International Business Machines Corporation Blade server system
    US8209516B2 (en) 2005-12-21 2012-06-26 Sandisk Technologies Inc. Method and system for dual mode access for storage devices

    Families Citing this family (54)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US7162600B2 (en) 2005-03-29 2007-01-09 Hitachi, Ltd. Data copying method and apparatus in a thin provisioned system
    US7108975B2 (en) * 2001-09-21 2006-09-19 Regents Of The University Of Michigan Atlastin
    US7054925B2 (en) * 2001-11-21 2006-05-30 International Business Machines Corporation Efficient method for determining record based I/O on top of streaming protocols
    JP3964212B2 (en) * 2002-01-16 2007-08-22 株式会社日立製作所 Storage system
    JP2003345528A (en) * 2002-05-22 2003-12-05 Hitachi Ltd Storage system
    JP3966459B2 (en) 2002-05-23 2007-08-29 株式会社日立製作所 Storage device management method, system, and program
    US6928514B2 (en) * 2002-08-05 2005-08-09 Lsi Logic Corporation Method and apparatus for teaming storage controllers
    US7873700B2 (en) * 2002-08-09 2011-01-18 Netapp, Inc. Multi-protocol storage appliance that provides integrated support for file and block access protocols
    US6957294B1 (en) * 2002-11-15 2005-10-18 Unisys Corporation Disk volume virtualization block-level caching
    JP4252301B2 (en) * 2002-12-26 2009-04-08 株式会社日立製作所 Storage system and data backup method thereof
    JP2004220216A (en) * 2003-01-14 2004-08-05 Hitachi Ltd San/nas integrated storage device
    JP2004227097A (en) * 2003-01-20 2004-08-12 Hitachi Ltd Control method of storage device controller, and storage device controller
    JP4372427B2 (en) * 2003-01-20 2009-11-25 株式会社日立製作所 Storage device controller
    US20040199618A1 (en) * 2003-02-06 2004-10-07 Knight Gregory John Data replication solution
    JP4651913B2 (en) * 2003-02-17 2011-03-16 株式会社日立製作所 Storage system
    JP4322031B2 (en) 2003-03-27 2009-08-26 株式会社日立製作所 Storage device
    JP4060235B2 (en) 2003-05-22 2008-03-12 株式会社日立製作所 Disk array device and disk array device control method
    JP2004348464A (en) 2003-05-22 2004-12-09 Hitachi Ltd Storage device and communication signal shaping circuit
    US8650372B2 (en) * 2003-10-10 2014-02-11 Hewlett-Packard Development Company, L.P. Methods and systems for calculating required scratch media
    US8010757B2 (en) * 2003-10-10 2011-08-30 Hewlett-Packard Development Company, L.P. Media vaulting
    US20050081008A1 (en) * 2003-10-10 2005-04-14 Stephen Gold Loading of media
    US20050097132A1 (en) * 2003-10-29 2005-05-05 Hewlett-Packard Development Company, L.P. Hierarchical storage system
    JP4156499B2 (en) 2003-11-28 2008-09-24 株式会社日立製作所 Disk array device
    JP4497918B2 (en) 2003-12-25 2010-07-07 株式会社日立製作所 Storage system
    JP2005196673A (en) * 2004-01-09 2005-07-21 Hitachi Ltd Memory control system for storing operation information
    JP4634049B2 (en) 2004-02-04 2011-02-16 株式会社日立製作所 Error notification control in disk array system
    JP2005267111A (en) * 2004-03-17 2005-09-29 Hitachi Ltd Storage control system and method for controlling storage control system
    US20050210028A1 (en) * 2004-03-18 2005-09-22 Shoji Kodama Data write protection in a storage area network and network attached storage mixed environment
    JP2005293478A (en) 2004-04-05 2005-10-20 Hitachi Ltd Storage control system, channel controller equipped with the same system and data transferring device
    JP4528551B2 (en) * 2004-04-14 2010-08-18 株式会社日立製作所 Storage system
    US7200716B1 (en) * 2004-04-30 2007-04-03 Network Appliance, Inc. Method and apparatus to offload operations in a networked storage system
    JP4455153B2 (en) * 2004-05-14 2010-04-21 株式会社日立製作所 Storage device management method and system
    JP4575028B2 (en) * 2004-05-27 2010-11-04 株式会社日立製作所 Disk array device and control method thereof
    WO2006013641A1 (en) * 2004-08-04 2006-02-09 Hitachi, Ltd. Integrated circuit device and signal transmission system
    US7096338B2 (en) * 2004-08-30 2006-08-22 Hitachi, Ltd. Storage system and data relocation control device
    US7395396B2 (en) * 2004-08-30 2008-07-01 Hitachi, Ltd. Storage system and data relocation control device
    JP5038589B2 (en) * 2004-10-04 2012-10-03 株式会社日立製作所 Disk array device and load balancing method thereof
    US8066515B2 (en) * 2004-11-17 2011-11-29 Nvidia Corporation Multiple graphics adapter connection systems
    JP4819369B2 (en) 2005-02-15 2011-11-24 株式会社日立製作所 Storage system
    JP4699808B2 (en) * 2005-06-02 2011-06-15 株式会社日立製作所 Storage system and configuration change method
    JP4723921B2 (en) * 2005-06-13 2011-07-13 株式会社日立製作所 Storage control device and control method thereof
    US7769978B2 (en) 2005-12-21 2010-08-03 Sandisk Corporation Method and system for accessing non-volatile storage devices
    US7793068B2 (en) 2005-12-21 2010-09-07 Sandisk Corporation Dual mode access for non-volatile storage devices
    JP4885575B2 (en) * 2006-03-08 2012-02-29 株式会社日立製作所 Storage area allocation optimization method and management computer for realizing the method
    JP4802843B2 (en) 2006-04-24 2011-10-26 富士通株式会社 Logical volume duplicate allocation prevention apparatus, duplicate allocation prevention method, and duplicate allocation prevention program
    JP2008181416A (en) * 2007-01-25 2008-08-07 Hitachi Ltd Storage system and data management method
    US7809915B2 (en) * 2007-06-26 2010-10-05 International Business Machines Corporation Handling multi-rank pools and varying degrees of control in volume allocation on storage controllers
    KR101623119B1 (en) * 2010-02-01 2016-05-20 삼성전자주식회사 Error control method of solid state drive
    US8527481B2 (en) * 2010-03-29 2013-09-03 International Business Machines Corporation Methods and systems for obtaining and correcting an index record for a virtual storage access method keyed sequential data set
    WO2012127636A1 (en) * 2011-03-22 2012-09-27 富士通株式会社 Information processing system, shared memory apparatus, and method of storing memory data
    US20140229695A1 (en) * 2013-02-13 2014-08-14 Dell Products L.P. Systems and methods for backup in scale-out storage clusters
    JP6241178B2 (en) * 2013-09-27 2017-12-06 富士通株式会社 Storage control device, storage control method, and storage control program
    CN104182010A (en) * 2014-09-11 2014-12-03 浪潮电子信息产业股份有限公司 Rack based on data-switch data transmission
    US10817220B2 (en) 2019-01-31 2020-10-27 EMC IP Holding Company LLC Sharing processor cores in a multi-threading block i/o request processing data storage system

    Citations (4)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    EP0689143A1 (en) * 1994-06-20 1995-12-27 International Business Machines Corporation Data storage subsystem
    US6220873B1 (en) * 1999-08-10 2001-04-24 Stratos Lightwave, Inc. Modified contact traces for interface converter
    EP1100001A2 (en) * 1999-10-25 2001-05-16 Sun Microsystems, Inc. Storage system supporting file-level and block-level accesses
    US6343324B1 (en) * 1999-09-13 2002-01-29 International Business Machines Corporation Method and system for controlling access share storage devices in a network environment by configuring host-to-volume mapping data structures in the controller memory for granting and denying access to the devices

    Family Cites Families (26)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US4310883A (en) * 1978-02-13 1982-01-12 International Business Machines Corporation Method and apparatus for assigning data sets to virtual volumes in a mass store
    JPH0792775B2 (en) * 1989-12-11 1995-10-09 株式会社日立製作所 Space management method for external storage devices
    JP3448068B2 (en) * 1991-12-24 2003-09-16 富士通株式会社 Data processing system and storage management method
    US5689678A (en) * 1993-03-11 1997-11-18 Emc Corporation Distributed storage array system having a plurality of modular control units
    JPH0713705A (en) * 1993-06-16 1995-01-17 Hitachi Ltd Disk device
    JP3264465B2 (en) 1993-06-30 2002-03-11 株式会社日立製作所 Storage system
    JP3228182B2 (en) 1997-05-29 2001-11-12 株式会社日立製作所 Storage system and method for accessing storage system
    US6247077B1 (en) * 1998-02-06 2001-06-12 Ncr Corporation Highly-scalable parallel processing computer system architecture
    US7142650B1 (en) * 1998-06-12 2006-11-28 Mci Communication Corporation System and method for resource management
    US6295575B1 (en) * 1998-06-29 2001-09-25 Emc Corporation Configuring vectors of logical storage units for data storage partitioning and sharing
    US7165152B2 (en) * 1998-06-30 2007-01-16 Emc Corporation Method and apparatus for managing access to storage devices in a storage system with access control
    US6487561B1 (en) * 1998-12-31 2002-11-26 Emc Corporation Apparatus and methods for copying, backing up, and restoring data using a backup segment size larger than the storage block size
    US6553408B1 (en) * 1999-03-25 2003-04-22 Dell Products L.P. Virtual device architecture having memory for storing lists of driver modules
    JP3843713B2 (en) 1999-08-27 2006-11-08 株式会社日立製作所 Computer system and device allocation method
    US6854034B1 (en) 1999-08-27 2005-02-08 Hitachi, Ltd. Computer system and a method of assigning a storage device to a computer
    US6606629B1 (en) * 2000-05-17 2003-08-12 Lsi Logic Corporation Data structures containing sequence and revision number metadata used in mass storage data integrity-assuring technique
    US6925528B2 (en) * 2000-06-20 2005-08-02 Storage Technology Corporation Floating virtualization layers
    US6591335B1 (en) * 2000-09-29 2003-07-08 Emc Corporation Fault tolerant dual cache system
    US6968463B2 (en) * 2001-01-17 2005-11-22 Hewlett-Packard Development Company, L.P. System for controlling access to resources in a storage area network
    US20020178162A1 (en) * 2001-01-29 2002-11-28 Ulrich Thomas R. Integrated distributed file system with variable parity groups
    US6779063B2 (en) 2001-04-09 2004-08-17 Hitachi, Ltd. Direct access storage system having plural interfaces which permit receipt of block and file I/O requests
    JP3617632B2 (en) * 2001-07-19 2005-02-09 富士通株式会社 RAID control apparatus and control method thereof
    US8055555B2 (en) * 2001-09-25 2011-11-08 Emc Corporation Mediation device for scalable storage service
    US8046469B2 (en) * 2001-10-22 2011-10-25 Hewlett-Packard Development Company, L.P. System and method for interfacing with virtual storage
    US6782450B2 (en) * 2001-12-06 2004-08-24 Raidcore, Inc. File mode RAID subsystem
    US6973595B2 (en) * 2002-04-05 2005-12-06 International Business Machines Corporation Distributed fault detection for data storage networks

    Non-Patent Citations (1)

    * Cited by examiner, † Cited by third party
    Title
    SANDHU R S ET AL: "ACCESS CONTROL: PRINCIPLES AND PRACTICE" IEEE COMMUNICATIONS MAGAZINE, IEEE SERVICE CENTER,NEW YORK, NY, US, vol. 32, no. 9, 1 September 1994 (1994-09-01), pages 40-48, XP000476554 ISSN: 0163-6804 *

    Cited By (16)

    * Cited by examiner, † Cited by third party
    Publication number Priority date Publication date Assignee Title
    US7191287B2 (en) 2004-01-29 2007-03-13 Hitachi, Ltd. Storage system having a plurality of interfaces
    US7120742B2 (en) 2004-01-29 2006-10-10 Hitachi, Ltd. Storage system having a plurality of interfaces
    US7404038B2 (en) 2004-01-29 2008-07-22 Hitachi, Ltd. Storage system having a plurality of interfaces
    US7206901B2 (en) 2004-04-01 2007-04-17 Hitachi, Ltd. Storage control system
    EP1582970A1 (en) * 2004-04-01 2005-10-05 Hitachi Ltd. Storage control system
    US7549019B2 (en) 2004-04-01 2009-06-16 Hitachi, Ltd. Storage control system
    US9104315B2 (en) 2005-02-04 2015-08-11 Sandisk Technologies Inc. Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage
    US10055147B2 (en) 2005-02-04 2018-08-21 Sandisk Technologies Llc Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage
    US10126959B2 (en) 2005-02-04 2018-11-13 Sandisk Technologies Llc Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage
    EP1746489A1 (en) * 2005-07-05 2007-01-24 Hitachi, Ltd. Storage control system
    WO2007019076A3 (en) * 2005-08-03 2007-05-03 Sandisk Corp Mass data storage system
    US8209516B2 (en) 2005-12-21 2012-06-26 Sandisk Technologies Inc. Method and system for dual mode access for storage devices
    EP1901160A2 (en) * 2006-09-08 2008-03-19 Hitachi, Ltd. Storage system, storage system control method, and storage controller
    EP1901160A3 (en) * 2006-09-08 2010-06-02 Hitachi, Ltd. Storage system, storage system control method, and storage controller
    WO2008052880A1 (en) * 2006-10-30 2008-05-08 International Business Machines Corporation Blade server system
    US7765331B2 (en) 2006-10-30 2010-07-27 International Business Machines Corporation Integrated RAID controller and SAS switch

    Also Published As

    Publication number Publication date
    US20070094447A1 (en) 2007-04-26
    US20050033915A1 (en) 2005-02-10
    US20030204671A1 (en) 2003-10-30
    EP1357463A3 (en) 2008-03-19
    JP2003316713A (en) 2003-11-07
    US7231491B2 (en) 2007-06-12
    US6810462B2 (en) 2004-10-26
    US7444468B2 (en) 2008-10-28

    Similar Documents

    Publication Publication Date Title
    US6810462B2 (en) Storage system and method using interface control devices of different types
    US8583876B2 (en) Logical unit security for clustered storage area networks
    JP3837953B2 (en) Computer system
    US6742034B1 (en) Method for storage device masking in a storage area network and storage controller and storage subsystem for using such a method
    US6907498B2 (en) Computer system and a method of assigning a storage device to a computer
    US7065616B2 (en) System and method for policy based storage provisioning and management
    US7437462B2 (en) Method for zoning data storage network using SAS addressing
    US6295575B1 (en) Configuring vectors of logical storage units for data storage partitioning and sharing
    US20020029319A1 (en) Logical unit mapping in a storage area network (SAN) environment
    US20020095602A1 (en) System for controlling access to resources in a storage area network
    US20030236884A1 (en) Computer system and a method for storage area allocation
    US20020103913A1 (en) System and method for host based target device masking based on unique hardware addresses
    JP2004005381A (en) System for partitioning storage area network related to data library
    US20070079103A1 (en) Method for resource management in a logically partitioned storage system
    US20030200399A1 (en) System and method for controlling access to storage in a distributed information handling system
    US7366867B2 (en) Computer system and storage area allocation method
    US7082462B1 (en) Method and system of managing an access to a private logical unit of a storage system
    JP3897049B2 (en) Computer system
    JP2008140413A (en) Storage device system
    JP2005322254A (en) Computer system, computer used for this computer system, and storage device
    JP4564035B2 (en) Computer system, and computer and storage device used in the computer system
    JP4438785B2 (en) Computer system
    JP4723532B2 (en) Computer system, and computer and storage device used in the computer system

    Legal Events

    Date Code Title Description
    PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

    Free format text: ORIGINAL CODE: 0009012

    AK Designated contracting states

    Kind code of ref document: A2

    Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

    AX Request for extension of the european patent

    Extension state: AL LT LV MK RO SI

    17P Request for examination filed

    Effective date: 20060331

    PUAL Search report despatched

    Free format text: ORIGINAL CODE: 0009013

    RIC1 Information provided on ipc code assigned before grant

    Ipc: H04L 29/08 20060101ALI20071204BHEP

    Ipc: G06F 3/06 20060101AFI20030807BHEP

    AK Designated contracting states

    Kind code of ref document: A3

    Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

    AX Request for extension of the european patent

    Extension state: AL LT LV MK RO SI

    PUAF Information related to the publication of a search report (a3 document) modified or deleted

    Free format text: ORIGINAL CODE: 0009199SEPU

    PUAL Search report despatched

    Free format text: ORIGINAL CODE: 0009013

    D17D Deferred search report published (deleted)
    AK Designated contracting states

    Kind code of ref document: A3

    Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR

    AX Request for extension of the european patent

    Extension state: AL LT LV MK RO SI

    17Q First examination report despatched

    Effective date: 20080916

    AKX Designation fees paid

    Designated state(s): DE FR GB

    STAA Information on the status of an ep patent application or granted ep patent

    Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN