WO2004102949A1 - Method and system for remote and adaptive visualization of graphical image data - Google Patents


Info

Publication number
WO2004102949A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
screen image
image
connection
computer
Prior art date
Application number
PCT/DK2003/000312
Other languages
French (fr)
Inventor
Andrew Bruno Dobbs
Niels Husted Kjaer
Alexander Dimitrov Karaivanov
Morten Sylvest Olsen
Original Assignee
Medical Insight A/S
Priority date
Filing date
Publication date
Application filed by Medical Insight A/S filed Critical Medical Insight A/S
Priority to PCT/DK2003/000312 priority Critical patent/WO2004102949A1/en
Priority to EP03720290A priority patent/EP1634438A1/en
Priority to CA002566638A priority patent/CA2566638A1/en
Priority to CNB038266539A priority patent/CN100477717C/en
Priority to AU2003223929A priority patent/AU2003223929A1/en
Publication of WO2004102949A1 publication Critical patent/WO2004102949A1/en


Classifications

    All classifications fall under H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television):

    • H04N21/4143: Specialised client platforms, e.g. receiver embedded in a Personal Computer [PC]
    • H04N1/32771: Initiating a communication in response to a request, e.g. for a particular document
    • H04N1/32776: Initiating a communication in response to a request using an interactive, user-operated device, e.g. a computer terminal, mobile telephone
    • H04N1/333: Mode signalling or mode changing; Handshaking therefor
    • H04N1/33353: Mode signalling or mode changing according to the available bandwidth used for a single communication, e.g. the number of ISDN channels used
    • H04N1/33376: Mode signalling or mode changing according to characteristics or state of one of the communicating parties, e.g. available memory capacity
    • H04N21/2347: Processing of video elementary streams involving video stream encryption
    • H04N21/4405: Processing of video elementary streams involving video stream decryption
    • H04N21/47202: End-user interface for requesting content on demand, e.g. video on demand
    • H04N21/6125: Network physical structure; signal processing specially adapted to the downstream path of the transmission network, involving transmission via Internet
    • H04N21/8153: Monomedia components involving graphical data comprising still images, e.g. texture, background image
    • H04N7/17318: Direct or substantially direct transmission and handling of requests
    • H04N2201/33357: Compression mode (indexing scheme)

Definitions

  • The present invention relates to a method and system for remote visualization and data analysis of graphical data; in particular, the invention relates to remote visualization and data analysis of graphical medical data.
  • 3D-scanners such as: Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), and Single Photon Emission Computed Tomography (SPECT), as well as 2D-scanners, such as: Computed Radiography (CR) and Digital Radiography (DR) are available.
  • the CT scanner detects X-ray absorption in a specific volume element of the patient who is scanned
  • the MRI scanner uses magnetic fields to detect the presence of water in a specific volume element of the patient who is scanned.
  • Both these scanners provide slices of the body, which can be assembled to form a complete 3D image of the scanned section of the patient.
  • a common factor of most medical scanners is that the acquired data sets, especially with the 3D-scanners, are quite large, consisting of several hundreds of megabytes for each patient. Such large data sets require significant computing power in order to visualize the data, and especially to process and manipulate the data. Furthermore, transmitting such image data across common networks presents challenges regarding security and traffic congestion.
  • the image data generated with medical image scanners are generally managed and stored via electronic database systems under the broad category of Picture Archiving and Communications Systems (PACS systems) which implement the Digital Imaging and Communications in Medicine standard (DICOM standard).
  • the scanner is connected to a central server computer, or a cluster of server computers, which stores the patient data sets.
  • the data may then be accessed from a single or a few dedicated visualization workstations.
  • Such workstations are expensive and can therefore normally only be accessed in dedicated diagnostic suites, and not in clinicians' offices, hospital wards or operating theaters.
  • Another type of less expensive system exists in which a general client-server architecture is used.
  • the central server computer may be accessed from a variety of different client types, e.g. a thin client.
  • a visualization program is run on the central server, and the output of the program is via a network connection routed to a remote display of the client.
  • An example of such a client-server system is the OpenGL Vizserver™ system provided by Silicon Graphics, Inc. (http://www.sgi.com/software/vizserver/).
  • the system enables clients such as Silicon Graphics® Octane®, and PC based workstations to access the rendering capabilities of an SGI® Onyx® server.
  • Special software is required to be installed on the client side. This not only limits the type of client which may be used to access the server, but also adds maintenance requirements, as the Vizserver™ client software must be installed locally on each client workstation. Furthermore, the Vizserver™ server software does not attempt to re-use information from previously sent frames. It is therefore only feasible to run such a system if a dedicated high-speed data network is available. This is often not the case for many hospitals; furthermore, installation of such a network is an expensive task.
  • a system for adaptively transporting video over networks wherein the available bandwidth varies with time comprises a video/audio encoder/decoder that functions to compress, code, decode and decompress video streams that are transmitted over the network connection.
  • the system adjusts the compression ratio to accommodate a plurality of bandwidths.
  • Bandwidth adjustability is provided by offering a trade-off between video resolution, frame rate and individual frame quality.
  • the raw video source is split into frames where each frame comprises a multitude of levels of data representing varying degrees of quality.
  • a video client receives a number of levels for each frame depending upon the bandwidth, the higher the level received for each frame, the higher the quality of the frame.
  • the invention provides a method for transferring graphical data from a first device to an at least second device in a computer-network system, the method comprises the steps of:
  • the graphical data may be any type of graphical data but is preferably medical image data, e.g. data acquired in connection with a medical scanning of a patient.
  • the graphical data is stored on a first device that may be a central computer, or a central cluster of computers.
  • the first device may comprise any type of computer, or cluster of computers, with the necessary aggregate storage capacity to store large data sets which, e.g., arise from scanning of a large number of patients at a hospital.
  • the first device should furthermore be equipped with the necessary computing power to be able to handle the demanding tasks of analyzing and manipulating large 3D data sets, such as a 3D image of a human head, a chest, etc.
  • the at least second device can be any type of computer machine equipped with a screen for graphical visualization.
  • the term visualization should be interpreted to include both 2D visualization and 3D visualization.
  • the at least second device may, e.g., be a thin client, a wireless handheld device such as a personal digital assistant (PDA), a personal computer (PC), a tablet PC, a laptop computer or a workstation.
  • the at least second device machine may merely act as a graphical terminal of the first device.
  • the at least second device may be capable of receiving request actions from a user and transferring the requests to the first device, as well as receiving and showing screen images generated by the first device.
  • the screen of the at least second device can in many respects be looked upon as a screen connected to the first device.
  • An action is requested, e.g. by the user of the at least second device, or by a program call.
  • the action may, e.g., result in that a list of possible choices may be shown on the screen of the at least second device, or the action may, e.g., result in that an image related patient data may be shown on the screen of the at least second device.
  • the request may be based upon user instructions received from user interaction events such as keystrokes, mouse movements, mouse clicks, etc.
  • Upon receiving a request, the first device interprets the request as a request for a specific screen image.
  • the first device obtains the relevant patient data from a storage medium to which it is connected.
  • the storage medium may be any type of storage medium, such as a hard disk.
  • a screen image is generated as a result of the request.
  • the present bandwidth of the connection is estimated, and based on the estimated available bandwidth and the type of the request, the screen image is compressed using a corresponding compression method.
  • the first device forwards the compressed screen image to the at least second device.
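The flow just described, estimating bandwidth from a recent transfer and picking a compression method accordingly, can be sketched as below. The thresholds, method names, and function names are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical bandwidth-driven codec selection (illustrative only).
def estimate_bandwidth(bytes_sent: int, seconds: float) -> float:
    """Estimate link bandwidth in bytes/second from the last transfer."""
    return bytes_sent / seconds if seconds > 0 else float("inf")

def choose_compression(bandwidth_bps: float) -> str:
    """Pick a compression method for the next screen image."""
    if bandwidth_bps < 50_000:      # very slow link: strong lossy compression
        return "lossy-high-ratio"
    if bandwidth_bps < 500_000:     # moderate link: mild lossy compression
        return "lossy-low-ratio"
    return "lossless"               # fast link: keep full fidelity

# Example: 200 kB sent in 2 s gives 100 kB/s, a moderate link.
bw = estimate_bandwidth(200_000, 2.0)
method = choose_compression(bw)
```

In a real system the estimate would be smoothed over several transfers rather than taken from a single one.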
  • The first device may, however, also generate a non-requested screen image without receiving a request from the at least second device.
  • the non-requested screen image may be based upon relevant patient data, or the non-requested screen image may be unrelated to patient data or any request made by the user.
  • the non-requested screen image may be generated due to instructions present at the first device.
  • The generation of the screen image may further be conditioned upon the type of the at least second device. If, e.g., the at least second device is a PDA, it may be redundant to generate a high-resolution image, since the PDAs available today are limited in their resolution. The same image may therefore be generated with a lower screen resolution for a PDA than for a thin client.
  • the compression method may further be conditioned upon a type of the request.
  • Compression of a graphical image may involve a loss, i.e. the image resulting from a compression-decompression process is not identical to the original image; such methods are normally referred to as lossy compression methods. Lossy compression methods are usually faster to perform, and the images may be compressed at a higher rate.
  • the type of request may be taken into account in situations where it is important that the decompressed image is lossless, or in situations where a loss is unimportant.
  • the type of the request may be such as: show an image, rotate an image, zoom in on an image, move an image, etc.
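One plausible way to condition compression on the request type, as described above, is to let interactive manipulations tolerate lossy frames while still images for diagnostic reading are sent losslessly. The request names below are hypothetical, not taken from the patent text.

```python
# Illustrative request-type to fidelity mapping (assumed names).
LOSSY_OK = {"rotate", "zoom", "move"}

def fidelity_for_request(request_type: str) -> str:
    """Interactive manipulation may be lossy; a final view must not be."""
    return "lossy" if request_type in LOSSY_OK else "lossless"
```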
  • the compression method may further be conditioned upon a type of the at least second device. Especially the computing power of the at least second device may be taken into account. If, e.g., the at least second device is equipped with a computing power so that the task of decompression is estimated to be too time consuming, a different and less demanding compression method may be used.
  • the first device may comprise means for encrypting the screen image before it is sent to the at least second device.
  • the at least second device may possess means for decrypting the received screen images before a screen image is generated on the screen of the at least second device.
  • the system may include a feature where the user manually sets the level of encryption, or the system may automatically set an appropriate encryption level.
  • The time it takes to decrypt the received screen images may depend on the processing means of the at least second device; handheld devices especially may be limited in processing power. In certain cases demanding encryption routines may therefore be a limiting factor.
  • the encryption routine used for encrypting the data may therefore be dependent upon the type of the at least second device.
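A device-dependent encryption policy of the kind described above could look like the sketch below. The cipher names, key sizes, and device labels are purely illustrative assumptions; the patent does not prescribe any particular algorithm.

```python
# Hypothetical policy table: weaker clients get cheaper encryption.
ENCRYPTION_POLICY = {
    "pda":         {"cipher": "RC4", "key_bits": 40},    # limited CPU
    "thin_client": {"cipher": "AES", "key_bits": 128},
    "workstation": {"cipher": "AES", "key_bits": 256},
}

def encryption_for(device_type: str) -> dict:
    """Fall back to the strongest level for unknown device types."""
    return ENCRYPTION_POLICY.get(device_type, ENCRYPTION_POLICY["workstation"])
```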
  • the applications for data analysis, data manipulation and data visualization may be stored on the first device, and may be run from the first device.
  • the applications may also be stored on and may be run from a device that is connected to the first device via a computer network connection.
  • a multitude of applications may be accessible from the first device.
  • the application may include software which is adapted to manipulate both 3D graphical medical data such as data from: MRI, CT, US, PET, and SPECT, as well as 2D graphical medical data such as data from: CR and DR, as well as data from other devices that produce medical images.
  • the manipulation may be any standard manipulation of the data such as rotation, zooming in and out, cutting an area, or subset of the data, etc.
  • the manipulation may also be less standard manipulation, or it may be unique manipulation specially developed for the present system.
  • Different compression methods may be used.
  • the compression method may either be selected manually at session start or may be chosen automatically by the software.
  • the different compression methods are applied according to the required compression rate.
  • Compression methods may differ in compression time, compression rate as well as, which type of data they are most suitable for.
  • A variety of compression methods may be used, both standard methods as well as methods especially developed for the present system.
  • One such method is Gray Cell Compression (GCC).
  • the GCC method is especially well suited for compressing images where a large fraction of the image is gray scale.
  • the GCC method is therefore well suited for compression of medical images since many medical objects may often be imaged in gray scale.
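The patent does not spell out the GCC algorithm at this point. The following toy encoder is not the patented method; it merely illustrates why grayscale-heavy images compress well: a pixel whose red, green and blue components are equal can be stored as a single intensity byte.

```python
def encode_gray_cells(pixels):
    """Toy grayscale-aware encoder (NOT the patented GCC algorithm).

    Each pixel is an (r, g, b) tuple. A gray pixel (r == g == b) is
    stored as a marker byte plus one intensity byte; a colored pixel
    is stored as a marker byte plus three component bytes.
    """
    out = bytearray()
    for r, g, b in pixels:
        if r == g == b:
            out += bytes([0x00, r])        # gray: 2 bytes
        else:
            out += bytes([0x01, r, g, b])  # color: 4 bytes
    return bytes(out)

# Three gray pixels and one colored pixel: 3*2 + 1*4 = 10 bytes,
# versus 12 bytes of raw 24-bit RGB data.
encoded = encode_gray_cells([(10, 10, 10), (200, 200, 200), (0, 0, 0), (255, 0, 0)])
```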
  • a session manager at the first device site may create and maintain a session between the at least second device machine and the first device and upload control components to the at least second device.
  • the at least second device may be a computer without an operating system (OS), e.g. a thin client.
  • the at least second device may also be a computer with an OS, e.g. a PDA or a PC.
  • In the latter case an OS is already functioning on the at least second device, and it may be necessary only to upload a computer application to enable a session.
  • a session may, however, also be created and/or maintained without uploading a computer application from the first device to the at least second device. For example, it may suffice to allow the at least second device to receive screen images from the first device. It is not necessary to run a computer application on the at least second device in order to receive, view and/or even manipulate screen images on an at least second device.
  • a frame sizer may be present which sets the frame buffer resolution of the at least second device in accordance with the detected available bandwidth, and optionally also in accordance with specifications of the at least second device. That is, if the detected bandwidth is low, the frame buffer resolution may be set to a low value, and the screen image may be generated according to the frame buffer resolution. Setting the frame buffer to a low resolution is a fast way of compressing the data.
  • The graphical hardware of most computer systems possesses the functionality that, if a screen image with a lower resolution than the screen resolution is received, the screen image is automatically scaled up to fill the entire screen. The final screen output on the at least second device is naturally limited in resolution in this case.
  • the frame buffer resolution may be set to the screen resolution of the at least second device. In this case, more bandwidth is occupied, but full resolution is sustained.
  • The specifications of the at least second device may be taken into account if the at least second device is, e.g., a PDA, since the screen resolution of the PDAs available today is limited. It would be a waste of bandwidth to transfer an image with a resolution that is too high, only for it to be downsampled at the at least second device.
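The frame sizer behaviour described above, capping the frame-buffer resolution by both the measured bandwidth and the client's screen, can be sketched as follows. The bandwidth thresholds and resolution caps are illustrative assumptions.

```python
def frame_buffer_size(bandwidth_bps: float, screen_w: int, screen_h: int):
    """Pick a frame-buffer resolution from bandwidth and client screen."""
    if bandwidth_bps < 100_000:
        cap_w, cap_h = 320, 240      # low bandwidth: coarse frames
    elif bandwidth_bps < 1_000_000:
        cap_w, cap_h = 800, 600
    else:
        cap_w, cap_h = 4096, 4096    # effectively uncapped
    # Never exceed the client's own screen, e.g. a small PDA display.
    return min(screen_w, cap_w), min(screen_h, cap_h)
```

On a fast link the cap never binds and the client receives frames at its native screen resolution.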
  • An object subsampler which sets the visualization and rendering parameters in accordance with the detected available bandwidth, and optionally also in accordance with the specifications of the at least second device may be present.
  • The color depth of the generated screen image may be varied: 8-bit color may be used while the bandwidth is low, and 16, 24 or 32 bits may be used if the bandwidth permits it.
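One common way to realize such color-depth reduction, not specified by the patent and shown here only as an example, is packing 24-bit RGB into a single 8-bit value with 3 bits for red, 3 for green, and 2 for blue.

```python
def to_rgb332(r: int, g: int, b: int) -> int:
    """Pack a 24-bit RGB pixel into one 8-bit 3-3-2 value."""
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6)
```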
  • the computing power of the at least second device may be taken into account. The time it takes to decompress the received screen images may depend on the processing means of the at least second device machine, especially handheld devices may be limited in processing power. In certain cases it may therefore be faster not to compress, or only slightly compress, the screen images.
  • The sized, subsampled, compressed and possibly encrypted data is transferred by an I/O-manager at the first device side to an I/O-manager at the at least second device side, which also handles the transfer of the user interactions to the first device.
  • Often the requested screen image will only contain a small change from the screen image which is already present on the at least second device screen.
  • the screen image generated at the at least second device side is either based on a screen image received from the first device, on the content of a frame buffer at the at least second device side, or on a combination of the received screen image and the contents of the frame buffer. That is, the received screen image contains changes to the previously sent screen image, so that the displayed screen image is a superposition of the previously displayed screen image available through the at least second device's frame buffer, and the received image changes.
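The superposition scheme described above, sending only what changed and overlaying it on the client's frame buffer, can be sketched in a row-wise form. Representing frames as lists of row tuples is a simplification for illustration only.

```python
def diff_frame(prev, new):
    """Return {row_index: row} for rows that differ between frames."""
    return {i: row for i, (old, row) in enumerate(zip(prev, new)) if old != row}

def apply_diff(buffer, delta):
    """Superpose received changes onto the client frame buffer."""
    for i, row in delta.items():
        buffer[i] = row
    return buffer

prev = [(0, 0), (1, 1), (2, 2)]
new  = [(0, 0), (9, 9), (2, 2)]
delta = diff_frame(prev, new)          # only the middle row changed
frame = apply_diff(list(prev), delta)  # client reconstructs the new frame
```

Only the changed row crosses the network; the unchanged rows come from the client's own frame buffer.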
  • Most networks are shared resources, and the available bandwidth over a network connection at any particular instant varies with both time and location.
  • the present available bandwidth is estimated and the rate with which the data is transferred is varied accordingly.
  • When no new data is transferred, the at least second device refreshes the screen from its own frame buffer. The network connection therefore occupies variable amounts of bandwidth.
  • Many hospitals, clinics or other medical institutions already have a data network installed; furthermore, the medical clinician may sit at home or in a small medical office without access to a high-capacity network. It is therefore important that the at least second device and the first device may communicate via a number of common network connections, such as an Internet or Intranet connection.
  • the second device and the first device may communicate through any type of network, which utilizes the Internet protocol (IP) such as the Internet or other TCP/IP networks.
  • the second device and the first device may communicate both through dedicated and non-dedicated network connections.
  • the graphical data may be graphical medical data based on data that conforms to the Digital Imaging and Communications in Medicine standard (DICOM standard) implemented on Picture Archiving and Communications Systems (PACS systems). Most medical scanners support the DICOM standard, which is a standard handling compatibility between different systems.
  • Textual data may be presented in connection with the graphical data. Preferably the textual data is based on data which conforms to the Health Level Seven (HL7) standard or the Electronic Data Interchange for Administration, Commerce and Transport (EDIFACT) standard.
  • the interchange of graphical and/or medical data may be based on the International Health Exchange (IHE) framework for data interchange.
  • a system for transferring graphical data in a computer-network system comprises:
  • a second device equipped with means for registering a user input as well as visualization means for visualizing graphical data
  • a first device equipped with:
  • the first device may further comprise means for encrypting data to be sent via the computer connection between the first device and the at least second device, and the at least second device may comprise means for decrypting the received data.
  • the at least second device and the first device may communicate via a common network connection.
  • the first device may be a computer server system and the at least second device may, e.g., be a thin client, a workstation, a PC, a tablet PC, a laptop computer or a wireless handheld device.
  • the first device may be, or may be part of, a PACS system.
  • Fig. 1 shows a schematic view of a preferred embodiment of the present invention;
  • Fig. 2 shows a schematic flow diagram illustrating the functionality of the Adaptive Streaming Module (ASM);
  • Fig. 3 shows an example of a rotation of a data object and the corresponding bandwidth;
  • Fig. 4 illustrates the correspondence between the compression time, the compression method used, and the obtainable compression rate for lossless compression; and
  • Fig. 5 illustrates the correspondence between the compression quality, the compression method used, and the obtainable compression rate for lossy compression.
  • the present invention provides a method and system for transferring graphical data from a first device to an at least second device in a computer-network system.
  • the invention is in the following described with reference to a preferred embodiment where the graphical data is graphical medical data, and where the computer-network system is a client-server system.
  • a schematic view is presented in Fig. 1.
  • Medical image data is acquired by using a medical scanner 1 that is connected to a server computer 2.
  • a multitude of clients 3 may be connected to the server.
  • the server is part of a PACS system.
  • the acquired images 16 may automatically or manually be transferred to and stored on a server machine.
  • the server may be a separate computer, a cluster of computers or a computer system connected via a computer connection. Access to the images may be established at any time thereafter.
  • the applications 15 for data analysis and visualization are stored on and may be run from the server machine.
  • the server is equipped with the necessary computing power to be able to handle the demanding tasks of analyzing and manipulating large 3D data sets, such as 3D images of a human head, a chest, etc. All data and data applications 15 for visualization and analysis are stored, operated and processed on the server.
  • the client 3 can be any type of computer machine equipped with a screen for graphical visualization.
  • the client may, e.g., be a thin client 5, a wireless handheld device such as a personal digital assistant (PDA) 6, a personal computer (PC), a laptop computer, a workstation 7, etc.
  • An adaptive streaming module (ASM) 4 is used in order to ensure a continuous stream of data between the server and the client.
  • the ASM is capable of estimating the present available bandwidth and varying the rate with which the data is transferred accordingly.
  • the ASM 4 is a part of the server machine 2.
  • the client may comprise an ASM 5, 6, 7 or it may not comprise an ASM 17.
  • a client ASM is not necessary for the system to work.
  • the ASM comprises a session manager 8.
  • the session manager creates and maintains a session between the client machine and the server.
  • the session manager 8 uploads control components to the at least second device. For example, if the client is a thin client 5, an operating system (OS) is first uploaded, so that the thin client becomes capable of accepting and sending request actions, as well as receiving and showing screen images generated by the server.
  • if the client is a PDA 6 or a PC, an operating system is already functioning on the client, and in this case it may be necessary only to upload a computer program to enable a session.
  • the ASM further comprises a bandwidth manager 9 that continuously measures the available bandwidth.
  • a frame sizer 10 that sets the frame buffer resolution of the client.
  • An object subsampler 11 that sets the visualization and rendering parameters.
  • a compression encoder 12 that compresses an image.
  • An encrypter 13 that comprises means for encrypting the data before it is sent to the client 3.
  • the sized, subsampled, compressed and encrypted data is transferred by an I/O-manager 14.
  • a schematic flow diagram illustrating the functionality of the ASM-module 20 is shown in Fig. 2.
  • the user of the medical data may, e.g., be a surgeon who should plan an operation on the background of scanned 3D images.
  • the user first establishes a connection from a graphical interface 21, such as a thin client present in his or her office.
  • the user is presented with a list from which the user may request access to the relevant images that are to be presented on the computer screen 23.
  • the user of the medical data is a clinician on rounds at a ward in a hospital.
  • the clinician may carry with him a PDA, onto which he can first log on to the system, and subsequently access the relevant images of the patient.
  • the user of the client is requesting an action, such as a specific image of a patient.
  • the request 24 is sent to the server, which interprets the request in terms of a request for a specific screen image.
  • the server obtains the relevant image data 25 from a storage medium to which it is connected.
  • the present bandwidth 26 of the connection is estimated, and based on the detected available bandwidth and a multitude of other parameters, the screen image is compressed to a corresponding compression rate.
  • two other parameters may be used for generating the screen image.
  • the first parameter may be the color depth 27.
  • the second parameter may be the client type 28. If the requesting client machine is a thin client a 19-inch screen may be used as the graphical interface. In this case an image with 768 times 1024 pixels may be generated. But if the requesting machine is a PDA, a somewhat smaller image should be generated, e.g. an image with 300 times 400 pixels, since most PDA's are limited with respect to screen resolution.
  • the screen image is generated, compressed and encrypted 22.
  • the image is transferred to the client machine, where it is first decrypted and decompressed 29 before it is shown on the screen 23 used by the requesting user.
  • surgeon may use a multitude of 3D graphical routines, such as rotation, zooming, etc., for example to obtain insight into the location of the object to be operated on.
  • An example of a rotation and the corresponding bandwidth of a data object is given in Fig. 3.
  • the user has, by using the steps explained above in connection with Fig. 2, requested a 3D image of a cranium 30.
  • a certain amount of bandwidth 34 has been used, but once the image has been transferred, no, or very little, bandwidth is occupied 35.
  • the user now wants to rotate the image in order to obtain a different view 31, 32, 33.
  • the user may, e.g., click on the image and while maintaining the mouse button pressed move the mouse in the direction of the desired rotation.
  • the type of the request is thus a rotation of the object, and while the mouse button remains pressed, the software treats the request as a rotation. Compression of a graphical image is a tradeoff between resolution and rate.
  • Two types of compression methods are used: loss-less compression methods and lossy (loss-giving) compression methods. Different compression methods of both types are used, applied according to the required compression rate. Compression methods may differ in compression time and compression rate, as well as in the types of images for which they are most suited. The image compression is determined primarily by the available bandwidth, but the type of request is also important, especially with respect to whether a loss-less or a lossy method is used.
  • An example of the correspondence between the compression time and the compression rate is given in Fig. 4 for three standard loss-less compression methods: PackBits (run-length encoding), BZIP2 and Lempel-Ziv-Oberhumer (LZO).
  • the methods may be used separately or one after the other to obtain a higher compression rate.
  • the compression time is compared with the obtainable compression size 40, or the compression rate for the PackBits compression method 41, the BZIP2 method 42 and the LZO method 43.
  • the exact correspondence between compression time and rate depends upon the structure of the image being compressed. This is illustrated by a certain extension of the area occupied by each method.
  • the image quality is compared with the obtainable compression size 50 for a variety of compression methods, single or combined.
  • the Gray Cell Compression (GCC) method is an example of such a compression method.
  • GCC is a variant of the standard CCC technique. It uses the fact that cells containing gray-scale pixels have gray-scale average cell colors. This is exploited for a more efficient encoding of the two average cell colors: in case the average cell color is a gray-scale color, 1 bit is used to mark the color as a gray-scale color and 7 bits are used to represent the gray-scale value. In case the average cell color is a non-gray-scale color, 1 bit is used to mark the cell as a non-gray-scale color and 15 bits are used to represent the color itself.
  • the compression rate of the GCC method depends on how large a fraction of the image is gray-scale. In the worst case, none of the average colors will be gray-scale colors; in this case, the compression rate is 1:8. In the best case, all average colors are gray-scale colors, yielding a compression rate of 1:12.
  • the advantage of the GCC method is that images containing large gray-scale areas may be transferred at a lower bandwidth and a higher image quality when compared to the standard CCC method.

Abstract

The invention relates to a method and system for remote visualization and data analysis of graphical data, in particular graphical medical data. A user operates a client machine (21) such as a thin client, a PC, a PDA, etc., and the client machine is connected to a server machine (20) through a computer network. The server machine runs an adaptive streaming module (ASM) which handles the connection between the client and the server. All data and data applications are stored and run on the server. A user at the client side requests data to be shown on the screen of the client, and this request (24) is transferred to the server. At the server side the request is interpreted as a request for a particular screen image, and a data application generates the requested screen image and estimates a present available bandwidth (26) of a connection between the client and the server. Based on the estimated available bandwidth, the generated screen image is compressed using a corresponding compression method so that a compressed screen image is formed. The screen image may also be encrypted. The compressed (and possibly encrypted) screen image is forwarded (22) to the client, and shown on the screen of the client (23). The compression method depends foremost upon the available bandwidth; however, the type of client machine (28), the type of request, etc. may also be taken into account.

Description

Method and system for remote and adaptive visualization of graphical image data
Field of the invention
The present invention relates to a method and system for remote visualization and data analysis of graphical data, in particular the invention relates to remote visualization and data analysis of graphical medical data.
Background of the invention
In order to visualize a variety of internal features of the human body, e.g. the location of tumors, a variety of medical image scanners have been developed. Both volume scanners, i.e. 3D-scanners, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), and Single Photon Emission Computed Tomography (SPECT), as well as 2D-scanners, such as Computed Radiography (CR) and Digital Radiography (DR), are available. The scanners utilize different biophysical mechanisms in order to produce an image of the body. For example, the CT scanner detects X-ray absorption in a specific volume element of the patient who is scanned, whereas the MRI scanner uses magnetic fields to detect the presence of water in a specific volume element of the patient who is scanned. Both these scanners provide slices of the body, which can be assembled to form a complete 3D image of the scanned section of the patient. A common factor of most medical scanners is that the acquired data sets, especially with the 3D-scanners, are quite large, consisting of several hundred megabytes for each patient. Such large data sets require significant computing power in order to visualize the data, and especially to process and manipulate the data. Furthermore, transmitting such image data across common networks presents challenges regarding security and traffic congestion.
The image data generated with medical image scanners are generally managed and stored via electronic database systems under the broad category of Picture Archiving and Communications Systems (PACS systems) which implement the Digital Imaging and Communications in Medicine standard (DICOM standard). The scanner is connected to a central server computer, or a cluster of server computers, which stores the patient data sets. On traditional systems the data may then be accessed from a single or a few dedicated visualization workstations. Such workstations are expensive and can therefore normally only be accessed in dedicated diagnostic suites, and not in clinicians' offices, hospital wards or operating theaters. Another type of less expensive system exists in which a general client-server architecture is used. Here a high-capacity server with considerable computing power is still needed, but the central server computer may be accessed from a variety of different client types, e.g. a thin client. In such systems a visualization program is run on the central server, and the output of the program is via a network connection routed to a remote display of the client.
One example of a client-server system is the OpenGL Vizserver™ system provided by Silicon Graphics, Inc. (http://www.sgi.com/software/vizserver/). The system enables clients such as Silicon Graphics® Octane® and PC based workstations to access the rendering capabilities of an SGI® Onyx® server. In this solution, special software is required to be installed at the client side. This not only limits the type of client which may be used to access the server, but also adds additional maintenance requirements, as the Vizserver™ client software must be installed locally on each client workstation. Furthermore, the Vizserver™ server software does not attempt to re-use information from previously sent frames. It is therefore only feasible to run such a system if a dedicated high-speed data network is available. This is often not the case for many hospitals; furthermore, installation of such a network is an expensive task.
US patent 6,014,694 discloses a system for adaptively transporting video over networks wherein the available bandwidth varies with time. The system comprises a video/audio encoder/decoder that functions to compress, code, decode and decompress video streams that are transmitted over the network connection. Depending on the channel bandwidth, the system adjusts the compression ratio to accommodate a plurality of bandwidths. Bandwidth adjustability is provided by offering a trade-off between video resolution, frame rate and individual frame quality. The raw video source is split into frames where each frame comprises a multitude of levels of data representing varying degrees of quality. A video client receives a number of levels for each frame depending upon the bandwidth; the higher the level received for each frame, the higher the quality of the frame. Such a system will only work optimally if an already known data stream is to be sent a number of times, as is the case with video streaming. If the data stream is unique each time it is to be sent, the system generates a huge amount of redundant data for each session; furthermore, the splitting into frames is not possible before the request is received, thus computing power is occupied for generating redundant data.
Description of the invention
It is an object of the present invention to overcome the problems related to remote visualization and manipulation of large digital data sets. According to a first aspect the invention provides a method for transferring graphical data from a first device to an at least second device in a computer-network system, the method comprises the steps of:
- generating a request for a screen image,
- in the first device, upon receiving the request for the screen image:
- generating the requested screen image,
- estimating a present available bandwidth of a connection between the first and the at least second device,
- based on the estimated available bandwidth, compressing the generated screen image using a corresponding compression method so that a compressed screen image is formed, and
- forwarding the compressed screen image to the at least second device.
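The claimed steps can be sketched in code as follows. This is a minimal illustration only, assuming Python with a generic `render` callback and zlib compression; the function names, the bandwidth thresholds, and the mapping from bandwidth to compression level are assumptions, not part of the claimed method.

```python
# Sketch of the server-side handling of a screen-image request.
import zlib

def estimate_bandwidth(bytes_sent: int, seconds: float) -> float:
    """Crude bandwidth estimate in bytes/second from the last transfer."""
    return bytes_sent / seconds if seconds > 0 else float("inf")

def handle_request(request, render, send, last_transfer):
    # 1. Generate the requested screen image on the first device.
    image_bytes = render(request)
    # 2. Estimate the present available bandwidth of the connection.
    bw = estimate_bandwidth(*last_transfer)
    # 3. Choose a compression effort matching the bandwidth: scarce
    #    bandwidth -> spend more CPU for a higher compression rate.
    level = 9 if bw < 100_000 else (6 if bw < 1_000_000 else 1)
    compressed = zlib.compress(image_bytes, level)
    # 4. Forward the compressed screen image to the at least second device.
    send(compressed)
    return compressed
```

In a real system the `render` and `send` callbacks would be provided by the visualization application and the I/O-manager, respectively.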
The graphical data may be any type of graphical data but is preferably medical image data, e.g. data acquired in connection with a medical scanning of a patient. The graphical data is stored on a first device that may be a central computer, or a central cluster of computers. The first device may comprise any type of computer, or cluster of computers, with the necessary aggregate storage capacity to store large data sets which, e.g., arise from scanning of a large number of patients at a hospital. The first device should furthermore be equipped with the necessary computing power to be able to handle the demanding tasks of analyzing and manipulating large 3D data sets, such as a 3D image of a human head, a chest, etc.
The at least second device can be any type of computer machine equipped with a screen for graphical visualization. The term visualization should be interpreted to include both 2D visualization and 3D visualization. The at least second device may, e.g., be a thin client, a wireless handheld device such as a personal digital assistant (PDA), a personal computer (PC), a tablet PC, a laptop computer or a workstation. The at least second device machine may merely act as a graphical terminal of the first device. The at least second device may be capable of receiving request actions from a user and transferring the requests to the first device, as well as receiving and showing screen images generated by the first device. The screen of the at least second device can in many respects be looked upon as a screen connected to the first device.
An action is requested, e.g. by the user of the at least second device, or by a program call. The action may, e.g., result in that a list of possible choices may be shown on the screen of the at least second device, or the action may, e.g., result in that an image related patient data may be shown on the screen of the at least second device. The request may be based upon user instructions received from user interaction events such as keystrokes, mouse movements, mouse clicks, etc.
Upon receiving a request, the first device interprets the request in terms of a request for a specific screen image. The first device obtains the relevant patient data from a storage medium to which it is connected. The storage medium may be any type of storage medium, such as a hard disk. A screen image is generated as a result of the request. The present bandwidth of the connection is estimated, and based on the estimated available bandwidth and the type of the request, the screen image is compressed using a corresponding compression method. The first device forwards the compressed screen image to the at least second device.
The first device may, however, also without receiving a request from the at least second device generate a non-requested screen image. The non-requested screen image may be based upon relevant patient data, or the non-requested screen image may be unrelated to patient data or any request made by the user. The non-requested screen image may be generated due to instructions present at the first device.
The generation of the screen image may further be conditioned upon a type of the at least second device. If, e.g., the at least second device is a PDA it may be redundant to generate a high-resolution image, since the PDAs available today are limited in their resolution. Therefore the same image may be generated with a lower screen resolution for a PDA than for a thin client.
The compression method may further be conditioned upon a type of the request. Compression of a graphical image may involve a loss, i.e. the image resulting after a compression-decompression process is not identical to the image before the process; such methods are normally referred to as lossy compression methods. Compression methods that involve a loss are usually faster to perform, and the images may be compressed to a higher rate. The type of request may be taken into account in situations where it is important that the decompressed image is lossless, or in situations where a loss is unimportant. The type of the request may be such as: show an image, rotate an image, zoom in on an image, move an image, etc.
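One possible realization of this request-type conditioning is a simple lookup from request type to compression family; the request names and the default chosen here are illustrative assumptions:

```python
# Hypothetical mapping from request type to compression family: a still
# image a clinician will inspect is sent loss-less, while transient frames
# during a rotation or zoom tolerate a lossy method.
LOSSLESS_REQUESTS = {"show", "print"}
LOSSY_OK_REQUESTS = {"rotate", "zoom", "move"}

def choose_compression_family(request_type: str) -> str:
    if request_type in LOSSLESS_REQUESTS:
        return "lossless"       # e.g. PackBits, LZO or BZIP2
    if request_type in LOSSY_OK_REQUESTS:
        return "lossy"          # e.g. CCC or GCC
    return "lossless"           # unknown requests default to the safe choice
```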
The compression method may further be conditioned upon a type of the at least second device. Especially the computing power of the at least second device may be taken into account. If, e.g., the at least second device is equipped with a computing power so that the task of decompression is estimated to be too time consuming, a different and less demanding compression method may be used.
Since the system may be used for transferring delicate personal information across a data network, it may be important that the transferred data can be encrypted. Therefore, the first device may comprise means for encrypting the screen image before it is sent to the at least second device. Likewise, the at least second device may possess means for decrypting the received screen images before a screen image is generated on the screen of the at least second device. Furthermore, the system may include a feature where the user manually sets the level of encryption, or the system may automatically set an appropriate encryption level. The time it takes to decrypt the received screen images may depend on the processing means of the at least second device machine; handheld devices especially may be limited in processing power. In certain cases it may therefore be a limiting factor to use demanding encryption routines. The encryption routine used for encrypting the data may therefore be dependent upon the type of the at least second device.
In addition to the image data, the applications for data analysis, data manipulation and data visualization may be stored on the first device, and may be run from the first device. The applications may also be stored on and may be run from a device that is connected to the first device via a computer network connection. A multitude of applications may be accessible from the first device. The application may include software which is adapted to manipulate both 3D graphical medical data such as data from: MRI, CT, US, PET, and SPECT, as well as 2D graphical medical data such as data from: CR and DR, as well as data from other devices that produce medical images. The manipulation may be any standard manipulation of the data such as rotation, zooming in and out, cutting an area, or subset of the data, etc. The manipulation may also be less standard manipulation, or it may be unique manipulation specially developed for the present system.
In order to obtain a flexible system, different compression methods may be used. The compression method may either be selected manually at session start or may be chosen automatically by the software. The different compression methods are applied according to the required compression rate. Compression methods may differ in compression time and compression rate, as well as in the type of data for which they are most suitable. A variety of compression methods may be used, both standard methods and methods especially developed for the present system.
An example of a special compression method is the so-called Gray Cell Compression (GCC) method, where an RGB-color graphical image or a gray-scale graphical image is compressed. The compression method comprises the steps of:
- subdividing the graphical image into cells containing 4 x 4 pixels,
- determining an average cell color for each cell,
- in the case that the average cell color is a gray-scale color, using 1 bit to mark the cell as gray scaled and 7 bits to represent the gray-scale color, or
- in the case that the average cell color is not a gray-scale color, using 1 bit to mark the cell as non-gray scaled and 15 bits to represent the color.
The GCC method is especially well suited for compressing images where a large fraction of the image is gray scale. The GCC method is therefore well suited for compression of medical images since many medical objects may often be imaged in gray scale.
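The bit layout and the resulting 1:8 and 1:12 compression rates can be checked with a short sketch. The bitmask handling of plain CCC is omitted, and the 5-bits-per-channel packing of the 15-bit color is an assumption for illustration:

```python
# Illustrative encoding of one average cell color under GCC:
# gray-scale -> 1 marker bit + 7-bit gray value (8 bits total);
# non-gray   -> 1 marker bit + 15-bit color (16 bits total).

def encode_cell_color(r: int, g: int, b: int) -> tuple[int, int]:
    """Return (bit_count, code) for one average cell color (8-bit channels)."""
    if r == g == b:                     # gray-scale average color
        return 8, (0 << 7) | (r >> 1)   # marker 0 + 7-bit gray value
    # non-gray: marker 1 + assumed 5 bits per channel (15-bit color)
    return 16, (1 << 15) | ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)

# Worked rates for a 4 x 4 cell of 24-bit pixels (384 bits raw), encoded
# as a 16-bit pixel bitmask plus two average colors:
worst = 16 + 16 + 16   # both colors non-gray -> 48 bits, 384/48 = 8, i.e. 1:8
best = 16 + 8 + 8      # both colors gray     -> 32 bits, 384/32 = 12, i.e. 1:12
```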
Upon initiation of a session, a session manager at the first device site may create and maintain a session between the at least second device machine and the first device and upload control components to the at least second device. The at least second device may be a computer without an operating system (OS), e.g. a thin client. In this case an OS may be uploaded, so that the at least second device becomes capable of accepting and sending request actions, as well as receiving and showing screen images generated by the first device. However, the at least second device may also be a computer with an OS, e.g. a PDA or a PC. For these machines an OS is already functioning on the at least second device, and in this case it may be necessary only to upload a computer application to enable a session. A session may, however, also be created and/or maintained without uploading a computer application from the first device to the at least second device. For example, it may suffice to allow the at least second device to receive screen images from the first device. It is not necessary to run a computer application on the at least second device in order to receive, view and/or even manipulate screen images on an at least second device.
A frame sizer may be present which sets the frame buffer resolution of the at least second device in accordance with the detected available bandwidth, and optionally also in accordance with specifications of the at least second device. That is, if the detected bandwidth is low, the frame buffer resolution may be set to a low value, and the screen image may be generated according to the frame buffer resolution. Setting the frame buffer to a low resolution is a fast way of compressing the data. The graphical hardware of most computer systems possesses the functionality that if a screen image with a lower resolution than the screen resolution is received, the screen image will automatically be blown up to fill the entire screen. The final screen output on the at least second device is naturally limited in resolution in this case. In the case that the detected bandwidth is acceptable, the frame buffer resolution may be set to the screen resolution of the at least second device. In this case, more bandwidth is occupied, but full resolution is sustained. The specifications of the at least second device may be taken into account if the at least second device is, e.g., a PDA, since the screen resolution of the PDAs which are available today is limited. It would be a waste of bandwidth to transfer an image with a resolution that is too high, only for it to be down sampled at the at least second device.
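A frame sizer of this kind might be sketched as follows; the bandwidth thresholds and candidate resolutions are illustrative assumptions, not values taken from the invention:

```python
# Minimal frame-sizer sketch: pick a frame buffer resolution from the
# estimated bandwidth, capped by the client's own screen resolution.
RESOLUTIONS = [(320, 240), (640, 480), (1024, 768)]  # low -> high bandwidth

def frame_buffer_size(bandwidth_bps: float, client_max: tuple[int, int]):
    if bandwidth_bps < 256_000:
        w, h = RESOLUTIONS[0]
    elif bandwidth_bps < 2_000_000:
        w, h = RESOLUTIONS[1]
    else:
        w, h = RESOLUTIONS[2]
    # never exceed the client's screen (e.g. a PDA at 400 x 300)
    return min(w, client_max[0]), min(h, client_max[1])
```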
An object subsampler which sets the visualization and rendering parameters in accordance with the detected available bandwidth, and optionally also in accordance with the specifications of the at least second device may be present. The color depth of the generated screen image may be varied, 8 bit colors may be used while the bandwidth is low, and 16, 24 or 32 bits may be used if the bandwidth permits it. Also the computing power of the at least second device may be taken into account. The time it takes to decompress the received screen images may depend on the processing means of the at least second device machine, especially handheld devices may be limited in processing power. In certain cases it may therefore be faster not to compress, or only slightly compress, the screen images.
The sized, subsampled, compressed and possibly encrypted data is transferred by an I/O-manager at the first device side to an I/O-manager at the at least second device side, which also handles the transferring of the user-interactions to the first device.
In many instances the requested screen image will only contain a small change from the screen image which is already present on the at least second device screen. In this situation it may be advantageous that the screen image generated at the at least second device side is either based on a screen image received from the first device, on the content of a frame buffer at the at least second device side, or on a combination of the received screen image and the contents of the frame buffer. That is, the received screen image contains changes to the previously sent screen image, so that the displayed screen image is a superposition of the previously displayed screen image available through the at least second device's frame buffer, and the received image changes.
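The superposition scheme can be illustrated with a per-pixel delta; in practice the change set would more likely be computed per cell or region, and all names here are assumptions:

```python
# Sketch of frame-buffer superposition: the first device sends only the
# changed pixels, and the second device composites them onto its
# previously displayed frame.

def diff_frames(prev: list[int], new: list[int]) -> dict[int, int]:
    """First device side: changed pixels only, as {index: value}."""
    return {i: v for i, (p, v) in enumerate(zip(prev, new)) if p != v}

def apply_delta(frame_buffer: list[int], delta: dict[int, int]) -> None:
    """Second device side: superpose the received changes onto the buffer."""
    for i, v in delta.items():
        frame_buffer[i] = v
```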
Most networks are shared resources, and the available bandwidth over a network connection at any particular instant varies with both time and location. The present available bandwidth is estimated and the rate with which the data is transferred is varied accordingly. When no request actions are received, no screen frames are sent to the at least second device; in this case the at least second device refreshes the screen from its own frame buffer. Therefore, the network connection occupies variable amounts of bandwidth. Many hospitals, clinics or other medical institutions already have a data network installed; furthermore, the medical clinician may sit at home or at a small medical office without access to a high capacity network. It is therefore important that the at least second device and first device may communicate via a number of possible common network connections, such as an Internet connection or an Intranet connection, e.g. an Ethernet connection, either through a cable connection or through a wireless connection. Especially, the second device and the first device may communicate through any type of network which utilizes the Internet protocol (IP), such as the Internet or other TCP/IP networks. The second device and the first device may communicate both through dedicated and non-dedicated network connections.
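One way to realize such a continuous bandwidth measurement is an exponentially weighted moving average over the observed throughput of recent transfers; the class name and smoothing factor are assumptions:

```python
# Smoothed bandwidth estimate: each completed transfer contributes a
# throughput sample, blended with the running estimate so that short
# spikes on the shared network do not dominate.

class BandwidthEstimator:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha        # weight of the newest sample
        self.estimate = None      # bytes per second

    def record(self, bytes_sent: int, seconds: float) -> float:
        sample = bytes_sent / seconds
        if self.estimate is None:
            self.estimate = sample
        else:
            self.estimate = self.alpha * sample + (1 - self.alpha) * self.estimate
        return self.estimate
```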
The graphical data may be graphical medical data based on data that conforms to the Digital Imaging and Communications in Medicine standard (DICOM standard) implemented on Picture Archiving and Communications Systems (PACS systems). Most medical scanners support the DICOM standard, which is a standard handling compatibility between different systems. Textual data may be presented in connection with the graphical data. Preferably the textual data is based on data which conforms to the Health Level Seven (HL7) standard or the Electronic Data Interchange for Administration, Commerce and Transport (EDIFACT) standard. The interchange of graphical and/or medical data may be based on the International Health Exchange (IHE) framework for data interchange.
According to a second aspect of the invention, a system for transferring graphical data in a computer-network system is provided. The system comprises:
- at least a second device equipped with means for registering a user input as well as visualization means for visualizing graphical data,
- a first device equipped with:
- software adapted to generate screen images,
- means for estimating an available bandwidth of a connection between the first and the at least second devices,
- software adapted to compress a screen image using a multitude of compression methods so that a compressed screen image is formed, and
- means for forwarding the compressed screen image to the at least second device.
The first device may further comprise means for encrypting data to be sent via the computer connection between the first device and the at least second device, and the at least second device may comprise means for decrypting the received data. The at least second device and the first device may communicate via a common network connection. The first device may be a computer server system and the at least second device may, e.g., be a thin client, a workstation, a PC, a tablet PC, a laptop computer or a wireless handheld device. The first device may be, or may be part of, a PACS system.
Brief description of the drawings
Preferred embodiments of the invention will now be described in detail with reference to the drawings, in which:
Fig. 1 shows a schematic view of a preferred embodiment of the present invention;
Fig. 2 shows a schematic flow diagram illustrating the functionality of the Adaptive Streaming Module (ASM);
Fig. 3 shows an example of a rotation and the corresponding bandwidth of a data object;
Fig. 4 illustrates the correspondence between the compression time, the compression method used, and the obtainable compression rate for loss-less compression; and
Fig. 5 illustrates the correspondence between the compression quality, the compression method used, and the obtainable compression rate for lossy compression.
Detailed description of the invention
The present invention provides a method and system for transferring graphical data from a first device to an at least second device in a computer-network system. In the following, the invention is described with reference to a preferred embodiment where the graphical data is graphical medical data, and where the computer-network system is a client-server system. A schematic view is presented in Fig. 1.
Medical image data is acquired by using a medical scanner 1 that is connected to a server computer 2. A multitude of clients 3 may be connected to the server. The server is part of a PACS system. When a patient has undergone scanning, the acquired images 16 may automatically or manually be transferred to and stored on a server machine. Although reference is only made to a server or server machine, the server may be a separate computer, a cluster of computers or a computer system connected via a computer connection. Access to the images may be established at any time thereafter. In addition to the image data, the applications 15 for data analysis and visualization are stored on and may be run from the server machine. The server is equipped with the necessary computing power to handle the demanding tasks of analyzing and manipulating large 3D data sets, such as 3D images of a human head, a chest, etc. All data and data applications 15 for visualization and analysis are stored, operated and processed on the server.
The client 3 can be any type of computer machine equipped with a screen for graphical visualization. The client may, e.g., be a thin client 5, a wireless handheld device such as a personal digital assistant (PDA) 6, a personal computer (PC), a laptop computer, a workstation 7, etc.
An adaptive streaming module (ASM) 4 is used in order to ensure a continuous stream of data between the server and the client. The ASM is capable of estimating the present available bandwidth and varying the rate with which the data is transferred accordingly. The ASM 4 is a part of the server machine 2.
The client may comprise an ASM 5, 6, 7 or it may not comprise an ASM 17. A client ASM is not necessary for the system to work.
The ASM comprises a session manager 8. The session manager creates and maintains a session between the client machine and the server. The session manager 8 uploads control components to the at least second device. For example, if the client is a thin client 5, an operating system (OS) is first uploaded, so that the thin client becomes capable of accepting and sending request actions, as well as receiving and showing screen images generated by the server. In the case that the client is a PDA 6 or a PC, an operating system is already functioning on the client, and it may then be necessary only to upload a computer program to enable a session.
The ASM further comprises a bandwidth manager 9 that continuously measures the available bandwidth, a frame sizer 10 that sets the frame buffer resolution of the client, an object subsampler 11 that sets the visualization and rendering parameters, a compression encoder 12 that compresses an image, and an encrypter 13 that comprises means for encrypting the data before it is sent to the client 3. The sized, subsampled, compressed and encrypted data is transferred by an I/O-manager 14.
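The interplay of the frame sizer, compression encoder and bandwidth manager can be sketched as a simple pipeline. The sketch below is an illustrative assumption only: the function name, the per-frame bandwidth budget, the factor-20 compressibility heuristic and the use of zlib as a stand-in encoder are not taken from the description.

```python
import zlib

def asm_pipeline(width, height, bandwidth_bps, frame_interval_s=0.1):
    """Illustrative ASM pipeline: the frame sizer halves the frame
    buffer resolution until the raw frame fits a heuristic budget
    derived from the measured bandwidth, then the compression
    encoder (zlib as a stand-in) compresses the sized frame."""
    budget_bytes = bandwidth_bps * frame_interval_s / 8
    # Frame sizer: assume compression gains roughly a factor 20,
    # and never go below a minimal usable resolution.
    while width * height * 3 > budget_bytes * 20 and width > 160:
        width, height = width // 2, height // 2
    # Compression encoder on the (here: blank) sized frame buffer.
    payload = zlib.compress(bytes(width * height * 3))
    return width, height, len(payload)
```

With these assumed numbers, a 1024 x 768 frame would be sized down twice on a 1 Mbit/s link, whereas on a fast LAN it would pass through at full resolution.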
A schematic flow diagram illustrating the functionality of the ASM-module 20 is shown in Fig. 2. The user of the medical data may, e.g., be a surgeon who is to plan an operation on the basis of scanned 3D images. The user first establishes a connection from a graphical interface 21, such as a thin client present in his or her office. First the user logs on to the system in order to be identified. Then the user is presented with a list from which the user may request access to the relevant images that are to be presented on the computer screen 23. In another example, the user of the medical data is a clinician on rounds at a ward in a hospital. In order to facilitate a discussion, or to facilitate a patient's knowledge of his or her condition, the clinician may carry a PDA, on which he can first log on to the system and subsequently access the relevant images of the patient.
The user of the client requests an action, such as a specific image of a patient. The request 24 is sent to the server, which interprets the request in terms of a request for a specific screen image. The server obtains the relevant image data 25 from a storage medium to which it is connected. The present bandwidth 26 of the connection is estimated, and based on the detected available bandwidth and a multitude of other parameters, the screen image is compressed to a corresponding compression rate. As an example, two other parameters may be used for generating the screen image. The first parameter may be the color depth 27. If the user requests, e.g., an image of the veins in the brain, a 24-bit RGB color depth may be used, but if the user, e.g., requests an image of the cranium, an 8-bit color depth may be sufficient. The second parameter may be the client type 28. If the requesting client machine is a thin client, a 19-inch screen may be used as the graphical interface; in this case an image with 768 times 1024 pixels may be generated. But if the requesting machine is a PDA, a somewhat smaller image should be generated, e.g. an image with 300 times 400 pixels, since most PDAs are limited with respect to screen resolution.
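These two example parameters can be expressed as a small lookup. The function name, the dictionary keys and the rule that only vein images receive full RGB depth are assumptions for illustration; the numeric values are the ones given in the text.

```python
def generation_parameters(image_type, client_type):
    """Illustrative choice of the two example parameters: color
    depth from the requested image content and frame size from
    the client type."""
    # Parameter 1: color depth -- vascular images use 24-bit RGB,
    # bone-only content such as the cranium can do with 8 bits.
    color_depth = 24 if image_type == "veins" else 8
    # Parameter 2: frame size (width x height) by client type.
    frame_sizes = {"thin_client": (768, 1024), "pda": (300, 400)}
    width, height = frame_sizes[client_type]
    return color_depth, width, height
```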
The screen image is generated, compressed and encrypted 22. The image is transferred to the client machine, where it is first decrypted and decompressed 29 before it is shown on the screen 23 used by the requesting user.
The surgeon may use a multitude of 3D graphical routines, such as rotation, zooming, etc., for example to obtain insight into the location of the object to be operated on. An example of a rotation and the corresponding bandwidth of a data object is given in Fig. 3.
The user has, by using the steps explained above in connection with Fig. 2, requested a 3D image of a cranium 30. During the transfer of the image a certain amount of bandwidth 34 has been used, but once the image has been transferred, no, or very little, bandwidth is occupied 35. The user now wants to rotate the image in order to obtain a different view 31, 32, 33. The user may, e.g., click on the image and, while keeping the mouse button pressed, move the mouse in the direction of the desired rotation. The type of the request is thus a rotation of the object, and while the mouse button remains pressed, the software treats the request as a rotation. Compression of a graphical image is a tradeoff between resolution and rate: the lower the resolution that is required, the higher the compression rate that may be used. When rotating an object, only an indication of the image is necessary during rotation 31, 32, and not until the rotation has stopped is it necessary to transfer a high-quality image 33. The images 31 and 32 are transferred using the steps explained in connection with Fig. 2, but the compression rate of the images is higher, resulting in a lower required bandwidth. When the mouse button is released, the transferred image 33 is no longer treated as a rotation, and a lower compression is used.
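The press/drag/release behaviour can be sketched as a small event-to-quality mapping, mirroring Fig. 3: low-quality indication frames while the button is held, one high-quality frame on release. The quality numbers are illustrative assumptions, not values from the description.

```python
def frame_qualities(events):
    """Map a stream of mouse events to per-frame image quality.
    While the button is held, frames are sent as heavily
    compressed rotation indications (images 31, 32); on release,
    a single high-quality frame is sent (image 33)."""
    qualities = []
    dragging = False
    for event in events:
        if event == "press":
            dragging = True
        elif event == "release":
            dragging = False
            qualities.append(0.95)  # final high-quality view
        elif event == "move" and dragging:
            qualities.append(0.25)  # rotation indication only
    return qualities
```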
Two types of compression methods are used: loss-less compression methods and lossy compression methods. Different compression methods of both types are used, applied according to the required compression rate. Compression methods may differ in compression time and compression rate, as well as in which types of images they are best suited for. The image compression is determined primarily by the available bandwidth, but the type of request is also important, especially with respect to whether a loss-less or a lossy method is used. An example of the correspondence between the compression time and the compression rate is given in Fig. 4 for three standard loss-less compression methods: PackBits (or run-length encoding), BZIP2 and Lempel-Ziv-Oberhumer (LZO). In Fig. 5, an example is given of the correspondence between the image quality and the compression rate for lossy compression methods: for two standard compression methods, Color Cell Compression (CCC) and Extended Color Cell Compression (XCCC), as well as for a special compression method, the so-called Gray Cell Compression (GCC).
The methods may be used separately or one after the other to obtain a higher compression rate. For example, it is possible to combine a CCC compression with an LZO compression (CCC::LZO).
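Such chaining can be sketched in a few lines. Here a crude 16-level quantization stands in for the lossy CCC stage and zlib stands in for LZO, which is not part of the Python standard library; everything in the sketch is an illustrative assumption, not the patented encoder.

```python
import random
import zlib

def chained_compress(pixels: bytes) -> bytes:
    """A lossy stage followed by a loss-less stage, as in CCC::LZO."""
    # Lossy stage: quantize every byte to 16 levels, which makes
    # the data far more repetitive and hence more compressible.
    quantized = bytes((p // 16) * 16 for p in pixels)
    # Loss-less stage applied to the lossy stage's output.
    return zlib.compress(quantized)

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(4096))
chained = chained_compress(noisy)
lossless_only = zlib.compress(noisy)
```

On noisy data the chained result is markedly smaller than the loss-less stage alone, at the price of the loss introduced by quantization.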
In Fig. 4, the compression time is compared with the obtainable compression size 40, or compression rate, for the PackBits compression method 41, the BZIP2 method 42 and the LZO method 43. The exact correspondence between compression time and rate depends upon the structure of the image being compressed; this is illustrated by the extent of the area occupied by each method.
In Fig. 5, the image quality is compared with the obtainable compression size 50 for a variety of compression methods, single or combined.
In case the image contains large gray-scale areas, it may be beneficial to use a special compression method which exploits this information. The Gray Cell Compression (GCC) method is an example of such a compression method. GCC is a variant of the standard CCC technique. It uses the fact that cells containing gray-scale pixels have gray-scale average cell colors. This is exploited for a more efficient encoding of the two average cell colors: in case the average cell color is a gray-scale color, 1 bit is used to mark the color as a gray-scale color and 7 bits are used to represent the gray-scale value; in case the average cell color is a non-gray-scale color, 1 bit is used to mark the color as a non-gray-scale color and 15 bits are used to represent the color itself.
The compression rate of the GCC method depends on how large a fraction of the image is gray-scale. In the worst case, none of the average colors will be gray-scale colors; in this case, the compression rate is 1:8. In the best case, all average colors are gray-scale colors, yielding a compression rate of 1:12. The advantage of the GCC method is that images containing large gray-scale areas may be transferred at a lower bandwidth and a higher image quality when compared to the standard CCC method.
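The 1:8 and 1:12 rates follow from simple bit accounting, assuming the usual CCC cell layout of a 16-bit pixel mask plus two average colors per 4 x 4 cell of 24-bit pixels (a layout the text implies but does not state explicitly):

```python
def gcc_cell_bits(color_is_gray):
    """Bits needed for one 4x4 cell under GCC: a 16-bit pattern
    mask plus two average colors, each stored as 1 flag bit + 7
    bits if gray-scale, or 1 flag bit + 15 bits otherwise."""
    mask_bits = 16
    color_bits = sum(8 if gray else 16 for gray in color_is_gray)
    return mask_bits + color_bits

RAW_CELL_BITS = 16 * 24  # sixteen 24-bit RGB pixels per 4x4 cell

worst = RAW_CELL_BITS // gcc_cell_bits((False, False))  # 384 / 48
best = RAW_CELL_BITS // gcc_cell_bits((True, True))     # 384 / 32
```

Here `worst` evaluates to 8 and `best` to 12, matching the 1:8 and 1:12 rates stated above.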
Although the present invention has been described in connection with preferred embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims.

Claims
1. A method for transferring graphical data from a first device to an at least second device in a computer-network system, the method comprises the steps of:
- generating a request for a screen image,
- in the first device, upon receiving the request for the screen image:
- generating the requested screen image,
- estimating a present available bandwidth of a connection between the first and the at least second device,
- based on the estimated available bandwidth, compressing the generated screen image using a corresponding compression method so that a compressed screen image is formed, and
- forwarding the compressed screen image to the at least second device.
2. A method according to claim 1, wherein the first device without receiving the request from the at least second device is:
- generating a non-requested screen image,
- estimating the present available bandwidth of the connection between the first and the at least second device,
- based on the estimated available bandwidth, compressing the generated screen image using the corresponding compression method so that the compressed screen image is formed, and
- forwarding the compressed screen image to the at least second device.
3. A method according to any of the preceding claims, wherein the generation of the screen image is further conditioned upon a type of the at least second device.
4. A method according to any of the preceding claims, wherein the compression method used is further conditioned upon the type of the request.
5. A method according to any of the preceding claims, wherein the compression method used is further conditioned upon a type of the at least second device.
6. A method according to any of the preceding claims, wherein the graphical data that is transmitted between the first device and the at least second device is encrypted.
7. A method according to any of the preceding claims, wherein the graphical data is graphical medical data.
8. A method according to any of the preceding claims, wherein the graphical data and a multitude of applications for data analysis and visualization are stored/run on the first device, or on a device which is in computer-network connection with the first device.
9. A method according to any of the preceding claims, wherein different compression methods are applied according to the required compression rate.
10. A method according to any of the claims 1-8, wherein the compression method is either selected manually at session start or chosen automatically by the software.
11. A method according to any of the preceding claims, wherein control components are uploaded to the at least second device from the first device.
12. A method according to any of the preceding claims, wherein a frame sizer at the first device side sets a frame buffer resolution at the at least second device in accordance with the estimated available bandwidth, and optionally also in accordance with specifications of the at least second device.
13. A method according to any of the preceding claims, wherein an object subsampler sets the visualization and rendering parameters in accordance with the estimated available bandwidth, and optionally also in accordance with the specifications of the at least second device.
14. A method according to any of the preceding claims, wherein an I/O-manager at the first device side sends sized, subsampled, compressed and possibly encrypted frame buffer data to the at least second device, and wherein an I/O-manager at the at least second device side receives the graphical data.
15. A method according to any of the preceding claims, wherein the screen image generated at the at least second device side is either based on a screen image received from the first device, on the content of the frame buffer of the at least second device, or on a combination of the received screen image and the contents of the frame buffer.
16. A method according to any of the preceding claims, wherein the computer network connection occupies variable amounts of bandwidth, and wherein minimal bandwidth is occupied when data is not transferred from the first device to the at least second device.
17. A method according to any of the preceding claims, wherein the at least second device and the first device communicate via a common network connection, such as an Internet connection or an intranet connection, e.g. an Ethernet connection, either through a cable connection or through a wireless connection.
18. A method according to claim 17, wherein the connection protocol is a TCP/IP protocol.
19. A method according to any of the preceding claims, wherein the generation of the screen image is based on data which conforms to the DICOM, the HL7 or the EDIFACT
standards implemented on PACS systems.
20. A method according to any of the claims 1, 9 or 10, wherein an RGB-color graphical image or a gray-scale graphical image is compressed, said compression method comprises the steps of:
- subdividing the graphical image into cells containing 4 x 4 pixels,
- determining an average cell color for each cell,
- in the case that the average cell color is a gray-scale color, 1 bit is used to mark the cell as gray-scaled and 7 bits are used to represent the gray-scale color, or in the case that the average cell color is not a gray-scale color, 1 bit is used to mark the cell as non-gray-scaled and 15 bits are used to represent the color.
21. A computer program adapted to perform the method of claims 1 - 20, when said program is run on a computer-network system.
22. A computer readable data carrier loaded with a computer program according to claim 21.
23. A system for transferring graphical data between devices in a computer-network system, said system comprises:
- at least a second device equipped with means for registering a user input as well as visualization means for visualizing graphical data,
- a first device equipped with:
- software adapted to generate screen images,
- means for estimating an available bandwidth of a connection between the first and the at least second device,
- software adapted to compress a screen image using a multitude of compression methods so that a compressed screen image is formed, and
- means for forwarding the compressed screen image to the at least second device.
24. A system according to claim 23, wherein the first device further comprises means for encrypting data to be sent via the computer connection between the first device and the at least second device, and wherein the at least second device comprises means for decrypting the received data.
25. A system according to any of the claims 23 - 24, wherein the at least second device and the first device communicate via a common network connection.
26. A system according to claim 25, wherein the network connection is a non-dedicated network connection.
27. A system according to any of the claims 23 - 25, wherein the first device is a computer server system.
28. A system according to any of the claims 23 - 25, wherein the at least second device is a thin client, a workstation computer, a PC, a laptop computer, a tablet PC, a mobile phone or a wireless handheld device.
29. A system according to any of the claims 23 - 27, wherein the first device is, or is part of, a PACS system.
PCT/DK2003/000312 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data WO2004102949A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/DK2003/000312 WO2004102949A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data
EP03720290A EP1634438A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data
CA002566638A CA2566638A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data
CNB038266539A CN100477717C (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data
AU2003223929A AU2003223929A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/DK2003/000312 WO2004102949A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data

Publications (1)

Publication Number Publication Date
WO2004102949A1 true WO2004102949A1 (en) 2004-11-25

Family

ID=33442589

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DK2003/000312 WO2004102949A1 (en) 2003-05-13 2003-05-13 Method and system for remote and adaptive visualization of graphical image data

Country Status (5)

Country Link
EP (1) EP1634438A1 (en)
CN (1) CN100477717C (en)
AU (1) AU2003223929A1 (en)
CA (1) CA2566638A1 (en)
WO (1) WO2004102949A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004038668A1 (en) * 2004-08-09 2006-02-23 Siemens Ag Image data object transmitting method for use in medical area, involves classifying object-based data into safety and non-safety critical data, sending safety data after authorization and non-safety data to target device
WO2006081634A2 (en) * 2005-02-04 2006-08-10 Barco N.V. Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
EP1843570A1 (en) * 2006-04-06 2007-10-10 General Electric Company Adaptive selection of image streaming mode
WO2008058690A2 (en) * 2006-11-11 2008-05-22 Visus Technology Transfer Gmbh System for the representation of medical images
US8145001B2 (en) 2008-10-01 2012-03-27 International Business Machines Corporation Methods and systems for proxy medical image compression and transmission
FR2975556A1 (en) * 2011-05-19 2012-11-23 Keosys System for retrieval of processed e.g. radiotherapy data, in hospital, has reception block receiving processed medical data generated by processing block, and retrieval block retrieving processed medical data
CN103121324A (en) * 2013-02-06 2013-05-29 心医国际数字医疗系统(大连)有限公司 Medical image centralized printing system
US8548560B2 (en) 2008-06-04 2013-10-01 Koninklijke Philips N.V. Adaptive data rate control
CN103914998A (en) * 2014-04-21 2014-07-09 中国科学院苏州生物医学工程技术研究所 Remote medical consultation training system with medical image processing function
US10397627B2 (en) 2013-09-13 2019-08-27 Huawei Technologies Co.,Ltd. Desktop-cloud-based media control method and device
CN112534842A (en) * 2018-08-07 2021-03-19 昕诺飞控股有限公司 Compressive sensing system and method using edge nodes of a distributed computing network
US11361861B2 (en) 2016-09-16 2022-06-14 Siemens Healthcare Gmbh Controlling cloud-based image processing by assuring data confidentiality

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4935385B2 (en) * 2007-02-01 2012-05-23 ソニー株式会社 Content playback method and content playback system
CN101606863B (en) * 2008-06-19 2011-05-11 深圳市巨烽显示科技有限公司 Method and system for controlling medical display network
TWI364220B (en) 2008-08-15 2012-05-11 Acer Inc A video processing method and a video system
CN101686382B (en) * 2008-09-24 2012-05-30 宏碁股份有限公司 Video signal processing method and video signal system
WO2010096683A1 (en) 2009-02-20 2010-08-26 Citrix Systems, Inc. Systems and methods for intermediaries to compress data communicated via a remote display protocol
JP5067409B2 (en) * 2009-09-28 2012-11-07 カシオ計算機株式会社 Thin client system and program
CN102857791B (en) * 2012-09-14 2015-07-08 武汉善观科技有限公司 Method for processing and displaying image data in PACS system by mobile terminal
CN104423783B (en) * 2013-09-02 2019-03-29 联想(北京)有限公司 The method and electronic equipment of information transmission
US10820151B2 (en) * 2016-10-06 2020-10-27 Mars, Incorporated System and method for compressing high fidelity motion data for transmission over a limited bandwidth network
CN106485079B (en) * 2016-10-12 2019-06-07 南京巨鲨医疗科技有限公司 A kind of medical image cloud processing method
CN111833788B (en) * 2019-04-19 2023-08-04 北京小米移动软件有限公司 Screen dimming method and device, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1056273A2 (en) * 1999-05-25 2000-11-29 SeeItFirst, Inc. Method and system for providing high quality images from a digital video stream
WO2001099430A2 (en) * 2000-06-21 2001-12-27 Kyxpyx Technologies Inc. Audio/video coding and transmission method and system
GB2367219A (en) * 2000-09-20 2002-03-27 Vintage Global Streaming of media file data over a dynamically variable bandwidth channel
WO2002051148A1 (en) * 2000-12-18 2002-06-27 Agile Tv Corporation Method and processor engine architecture for the delivery of audio and video content over a broadband network
US20020140851A1 (en) * 2001-03-30 2002-10-03 Indra Laksono Adaptive bandwidth footprint matching for multiple compressed video streams in a fixed bandwidth network


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004038668A1 (en) * 2004-08-09 2006-02-23 Siemens Ag Image data object transmitting method for use in medical area, involves classifying object-based data into safety and non-safety critical data, sending safety data after authorization and non-safety data to target device
WO2006081634A2 (en) * 2005-02-04 2006-08-10 Barco N.V. Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
WO2006081634A3 (en) * 2005-02-04 2006-12-28 Barco Nv Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
CN101079716B (en) * 2006-04-06 2012-11-14 通用电气公司 Adaptive selection of image streaming mode
JP2007312365A (en) * 2006-04-06 2007-11-29 General Electric Co <Ge> Adaptive selection of image streaming mode
US7996495B2 (en) 2006-04-06 2011-08-09 General Electric Company Adaptive selection of image streaming mode
EP1843570A1 (en) * 2006-04-06 2007-10-10 General Electric Company Adaptive selection of image streaming mode
US8631096B2 (en) 2006-04-06 2014-01-14 General Electric Company Adaptive selection of image streaming mode
WO2008058690A2 (en) * 2006-11-11 2008-05-22 Visus Technology Transfer Gmbh System for the representation of medical images
WO2008058690A3 (en) * 2006-11-11 2009-06-11 Visus Technology Transfer Gmbh System for the representation of medical images
EP2083736B1 (en) 2006-11-11 2020-01-08 VISUS Health IT GmbH System for the representation of medical images
EP3637752A1 (en) * 2006-11-11 2020-04-15 VISUS Health IT GmbH System for reproducing medical images
US8548560B2 (en) 2008-06-04 2013-10-01 Koninklijke Philips N.V. Adaptive data rate control
US8145001B2 (en) 2008-10-01 2012-03-27 International Business Machines Corporation Methods and systems for proxy medical image compression and transmission
FR2975556A1 (en) * 2011-05-19 2012-11-23 Keosys System for retrieval of processed e.g. radiotherapy data, in hospital, has reception block receiving processed medical data generated by processing block, and retrieval block retrieving processed medical data
CN103121324A (en) * 2013-02-06 2013-05-29 心医国际数字医疗系统(大连)有限公司 Medical image centralized printing system
US10397627B2 (en) 2013-09-13 2019-08-27 Huawei Technologies Co.,Ltd. Desktop-cloud-based media control method and device
CN103914998A (en) * 2014-04-21 2014-07-09 中国科学院苏州生物医学工程技术研究所 Remote medical consultation training system with medical image processing function
US11361861B2 (en) 2016-09-16 2022-06-14 Siemens Healthcare Gmbh Controlling cloud-based image processing by assuring data confidentiality
CN112534842A (en) * 2018-08-07 2021-03-19 昕诺飞控股有限公司 Compressive sensing system and method using edge nodes of a distributed computing network

Also Published As

Publication number Publication date
CN100477717C (en) 2009-04-08
CA2566638A1 (en) 2004-11-25
EP1634438A1 (en) 2006-03-15
CN1839618A (en) 2006-09-27
AU2003223929A1 (en) 2004-12-03

Similar Documents

Publication Publication Date Title
WO2004102949A1 (en) Method and system for remote and adaptive visualization of graphical image data
US20040240752A1 (en) Method and system for remote and adaptive visualization of graphical image data
US8508539B2 (en) Method and system for real-time volume rendering on thin clients via render server
US6424996B1 (en) Medical network system and method for transfer of information
US7602950B2 (en) Medical system architecture for interactive transfer and progressive representation of compressed image data
US7280702B2 (en) Methods and apparatus for dynamic transfer of image data
EP1236082B1 (en) Methods and apparatus for resolution independent image collaboration
US8422770B2 (en) Method, apparatus and computer program product for displaying normalized medical images
USRE42952E1 (en) Teleradiology systems for rendering and visualizing remotely-located volume data sets
CN101334818B (en) Method and apparatus for efficient client-server visualization of multi-dimensional data
US20060122482A1 (en) Medical image acquisition system for receiving and transmitting medical images instantaneously and method of using the same
US7492970B2 (en) Reporting system in a networked environment
US8417043B2 (en) Method, apparatus and computer program product for normalizing and processing medical images
US20070223793A1 (en) Systems and methods for providing diagnostic imaging studies to remote users
US20170228918A1 (en) A system and method for rendering a video stream
WO2008022282A2 (en) Online volume rendering system and method
US20070225921A1 (en) Systems and methods for obtaining readings of diagnostic imaging studies
Pohjonen et al. Pervasive access to images and data—the use of computing grids and mobile/wireless devices across healthcare enterprises
CN1392505A (en) Medical information service system
US20030095712A1 (en) Method for determining a data-compression method
WO2005050519A1 (en) Large scale tomography image storage and transmission and system.
Stoian et al. Current trends in medical imaging acquisition and communication
Akhtar et al. Significance of Region of Interest applied on MRI image in Teleradiology-Telemedicine
PRZELASKOWSKI The JPEG2000 standard for medical image applications
Swarnakar et al. Multitier image streaming teleradiology system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 5437/DELNP/2005

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2003720290

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 20038266539

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2003720290

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2566638

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: JP