WO2002100112A1 - System and method for rapid video compression - Google Patents

System and method for rapid video compression

Info

Publication number
WO2002100112A1
Authority
WO
WIPO (PCT)
Prior art keywords
video data
compressed
digital video
analog
digital
Prior art date
Application number
PCT/IL2001/000509
Other languages
French (fr)
Inventor
Jacob Segal
Arie Kantor
Original Assignee
Seelive Ltd.
Priority date
Filing date
Publication date
Application filed by Seelive Ltd. filed Critical Seelive Ltd.
Priority to PCT/IL2001/000509 priority Critical patent/WO2002100112A1/en
Publication of WO2002100112A1 publication Critical patent/WO2002100112A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream

Definitions

  • the present invention is of a system and method for rapid video compression, and in particular, for such a system and method which is both rapid and which is compatible with a wide variety of computational devices.
  • Streaming video data to computational devices over a network can be difficult due to bandwidth restrictions. Indeed, even the transmission of video data over a network through other, non-streaming, mechanisms can be difficult because of the lack of bandwidth. Therefore, more efficient transmission of video data is clearly desirable.
  • Video data is typically transmitted in a compressed format. Compression increases the amount of video data which can be transmitted, but may also increase the amount of time required to process the video data, both before transmission and upon reception, before the video data can be played back. Efficient, rapid methods for video compression are thus particularly useful.
  • MPEG-2 (Moving Picture Experts Group-2)
  • MPEG-4 (Moving Picture Experts Group-4)
  • MPEG-2 is currently used for DVD and digital television set-top boxes.
  • video data and audio data are divided into two separate streams, each of which is compressed separately according to the most efficient compression method for that type of data.
  • the separate data streams must be recombined before being played back on an end-user device, such that the MPEG-2 standard includes mechanisms for ensuring that the video data and the audio data are synchronized.
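The timestamp-based recombination described above can be sketched in miniature. The snippet below is an editor's illustration with hypothetical packet records, not the actual MPEG-2 systems-layer syntax: each unit carries a presentation timestamp (PTS), and the player interleaves the two independently compressed streams by PTS so that audio and video stay aligned.

```python
# Hypothetical sketch of timestamp-based A/V synchronization: each
# packet record carries a presentation timestamp ("pts"); the player
# merges the two streams into playback order by timestamp.
from heapq import merge

video = [{"pts": t, "kind": "video"} for t in range(0, 900, 90)]  # 10 frames
audio = [{"pts": t, "kind": "audio"} for t in range(0, 900, 30)]  # 30 audio units

# Merge the two separately compressed streams into one playback order.
playback = list(merge(video, audio, key=lambda p: p["pts"]))

# Playback order is non-decreasing in timestamp, keeping A/V in sync.
assert all(playback[i]["pts"] <= playback[i + 1]["pts"]
           for i in range(len(playback) - 1))
```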
  • MPEG-2 has some advantages for controlling certain parameters and characteristics of the compressed audio and video data; however, end-user devices which are capable of handling this type of compressed data format are not currently as widespread as for MPEG-4.
  • MPEG-4 uses a system of "media objects", in which video data is divided into these objects for transmission to the end-user device.
  • These media objects may include non- moving (still) images, video objects (such as a moving figure) and audio objects, as well as other types of objects. These objects are then integrated for being played back to the end user by the end-user device.
  • MPEG-4 is widely available on end-user devices, particularly those devices which are operated with one of the Windows™ operating systems (Microsoft, USA), as certain versions of this operating system incorporate a media player which is capable of decompressing and playing back video data which has been compressed according to MPEG-4.
  • MPEG-4 has been designed to be suitable for compression of video data through the Internet/World Wide Web and for mobile computational devices, such as WAP (wireless application protocol) enabled cellular telephones for example.
  • currently end-user devices are restricted according to the type of video data compression which they can receive and/or generate, because of the limitations of each of the above methods of video data compression.
  • the background art does not teach or suggest a method or system for efficient video compression which is suitable for a wide range of end-user computational devices and which is able to perform multi-stage compression within a single device or other implementation.
  • the background art also does not teach or suggest such a method or system which uses the best qualities of more than one type of MPEG compression.
  • the present invention overcomes these deficiencies of the background art by providing a system and method which is useful for providing compressed video data to a wide range of end- user computational devices.
  • the present invention has been shown to achieve a greater degree of compression of the video data than other background art methods and devices, while still providing a solution which is compatible with a wide range of commonly available end-user devices.
  • the present invention is also adaptable to many different types of end-user computational devices.
  • the present invention achieves these goals by providing a system and method which are an integrated solution for capturing an analog video signal, processing this signal to form processed video data, and efficiently compressing the video data.
  • the compressed video data can then optionally be stored as a digital file and/or streamed directly as a digital signal over a network, such as the Internet for example.
  • the present invention compresses the video data in a plurality of stages, including at least one stage for processing the analog data in order to increase the efficiency of compression, and a second stage for actual compression of the resultant digital video data.
  • the first stage also includes compression of the video data after digitization of the analog signal.
  • the compressed video data from the first stage is decompressed before being recompressed in the second stage.
  • the method of compression of the video data for the first stage is performed according to MPEG-2, while the method of compression of the video data for the second stage is performed according to MPEG-4.
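The two-stage scheme can be illustrated with modern off-the-shelf tools. This is an editor's sketch, not the patent's implementation: it assumes the `ffmpeg` command-line tool is available and only builds the command lines for each stage (MPEG-2 encode, then decode plus MPEG-4 re-encode) without executing them.

```python
# Editor's sketch of the two-stage transcode described in the text:
# stage 1 encodes to MPEG-2 at a reduced picture size; stage 2 decodes
# that output and re-encodes it as MPEG-4. Assumes the ffmpeg CLI;
# commands are only constructed here, not run.
def stage1_cmd(src, out="stage1.mpg"):
    # Stage 1: compress to MPEG-2, 320x240 at 25 fps (PAL-style input).
    return ["ffmpeg", "-i", src, "-c:v", "mpeg2video",
            "-s", "320x240", "-r", "25", out]

def stage2_cmd(src="stage1.mpg", out="final.mp4"):
    # Stage 2: decode the MPEG-2 result and re-encode as MPEG-4.
    return ["ffmpeg", "-i", src, "-c:v", "mpeg4", out]

cmd1 = stage1_cmd("capture.avi")
cmd2 = stage2_cmd()
assert cmd1[0] == cmd2[0] == "ffmpeg"
```

The intermediate MPEG-2 pass mirrors the patent's first stage, where picture size and frame rate are fixed before the final MPEG-4 compression.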
  • a method for compressing video data comprising: processing an analog video signal according to at least one parameter for determining at least one characteristic of the resultant compressed video data, including converting the processed analog video signal into digital video data; and compressing the digital video data to form the compressed video data, wherein at least one characteristic of the resultant compressed video data is determined according to the at least one parameter.
  • a system for compressing video data comprising: (a) an analog output device for producing an analog video signal; (b) an integrated hardware device for capturing the analog video signal, processing the analog video signal to form digital video data, and for compressing the digital video data in two stages to form compressed video data; and (c) a computational device for controlling the integrated hardware device and for being in communication with the integrated hardware device.
  • The term "computational device" includes, but is not limited to, any type of wireless device and any type of computer.
  • Examples of such computational devices include, but are not limited to, handheld devices, including devices operating with the Palm OS®; hand-held computers; PDA (personal digital assistant) devices; cellular telephones; any type of WAP (wireless application protocol) enabled device; portable computers of any type; and wearable computers of any sort, which have an operating system.
  • An end-user device is any type of computational device which is used for playing back video data to a user.
  • the present invention could be implemented in software, firmware or hardware.
  • the present invention could be implemented as substantially any type of integrated circuit or other electronic device capable of performing the functions described herein.
  • the present invention can be described as a plurality of instructions being executed by a data processor.
  • FIG. 1 is a schematic block diagram showing an exemplary system according to the present invention for two-stage video compression
  • FIG. 2 shows exemplary architecture of a video card for insertion into a computational device according to the present invention
  • FIG. 3 is a flowchart of an exemplary method according to the present invention for operation with the system of Figure 1.
  • the present invention is of a system and method for capturing an analog video signal, processing this signal to form processed video data, and efficiently compressing the video data.
  • the compressed video data can then optionally be stored as a digital file and/or streamed directly as a digital signal over a network, such as the Internet for example.
  • the present invention compresses the video data in a plurality of stages, including at least one stage for processing the analog data in order to increase the efficiency of compression, and a second stage for actual compression of the resultant digital video data.
  • the first stage also includes compression of the video data after digitization of the analog signal.
  • the compressed video data from the first stage is decompressed before being recompressed in the second stage.
  • the method of compression of the video data for the first stage is performed according to MPEG-2, while the method of compression of the video data for the second stage is performed according to MPEG-4.
  • a system featuring a set of integrated hardware and software components, preferably including at least a hardware card with a PCI interface installed on a card slot which is compatible with PC (personal computer) hardware, and digital-video compression software which communicates with the hardware card.
  • Examples of card slots which are compatible with PC hardware include, but are not limited to, card slots structured according to the USB (universal serial bus) architecture standard, the IEEE 1394 ("FireWire") standard, and the ISA (industry standard architecture) standard. These components are preferably connected to a network, optionally an IP network such as a LAN, WAN and/or the Internet.
  • these hardware and software components are combined into a single device, such as a set-top-box for example, which most preferably can be used to perform any one or more of the following tasks: capture a live feed from a video camera and transmit live streaming video data in real time over a network; capture live feed from a video camera and store the resultant video data as a file; capture any data from a playback device and transmit such captured data; and capture any data from a playback device and store such data as a file.
  • the data which is captured from the playback device, and which forms the input to the present invention for processing and compression, is actually preferably an analog signal, which may be in various formats, including but not limited to, VHS, Super VHS, and BetaCam.
  • the video data is in a format which is suitable for storage and/or play back by a computational device.
  • an application layer is more preferably featured.
  • the application layer is laid over the above system, and interacts with the components of the system in order to provide various types of functionality, for example through software program modules. Examples of these different types of functionality include, but are not limited to, security and surveillance, entertainment, and UI (User Interface) with which the system operator (user) interacts in order to operate the system of the present invention.
  • the system operator is distinguished from the end user according to the type of interaction with the system; the former individual controls the various parameters for capturing and transmitting the video data, while the latter controls playing back the video data.
  • the system operator and the end user could optionally be the same individual, for example when video data is played back for the system operator.
  • the preferred components of the system of the present invention are more preferably used to perform a method according to the present invention for compressing video data.
  • The result of the method of the present invention is a significantly lower data-rate output for given input and quality parameters (picture size, color type and depth, etc.).
  • the captured file is significantly smaller in comparison to other processes with the same quality parameters; and the required streamed bandwidth is much lower than the output of background art streaming solutions, as shown by the test results below.
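To put the bandwidth claim in perspective, a back-of-the-envelope calculation is useful. The figures below are an editor's illustration with generic numbers, not the patent's own test results: they compare the raw data rate of an uncompressed 320 x 240, 24-bit, 25 fps signal with a plausible compressed streaming rate.

```python
# Raw data rate for an uncompressed 320x240, 24-bit, 25 fps signal,
# versus an assumed MPEG-4 streaming bit rate. Figures are illustrative.
width, height, bytes_per_pixel, fps = 320, 240, 3, 25

raw_bytes_per_sec = width * height * bytes_per_pixel * fps
raw_mbit_per_sec = raw_bytes_per_sec * 8 / 1_000_000

stream_kbit_per_sec = 300            # assumed compressed streaming rate
ratio = raw_mbit_per_sec * 1000 / stream_kbit_per_sec

assert raw_bytes_per_sec == 5_760_000   # 5.76 MB/s uncompressed
```

Even at these modest quality parameters, the uncompressed signal is two orders of magnitude larger than a typical compressed stream, which is why efficient compression is the central problem addressed here.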
  • FIG. 1 shows an exemplary system according to the present invention for performing video compression in a plurality of stages.
  • a system 10 preferably features at least an integrated device 12 according to the present invention.
  • Optionally, integrated device 12 may be implemented as a plurality of separate devices, although this implementation is less preferred.
  • Integrated device 12 is connected to a source of a video signal, whether analog or digital, which is preferably a video camera 14 as shown, but which could also be a VCR or other device for recording, playing back, storing and/or generating video signals.
  • Integrated device 12 then captures the video signals, and compresses these signals into compressed video data according to a process described in greater detail below, particularly with regard to the exemplary method of Figure 3.
  • The compressed video data is then preferably transmitted through a network 16, such as the Internet for example, to an end-user device 18.
  • End-user device 18 then decompresses and plays back the video data to the end user.
  • integrated device 12 features a hardware card 20 according to the present invention, which is shown in more detail with regard to Figure 2 below.
  • Hardware card 20 optionally and preferably communicates with an application layer 22, which is a unique software driver.
  • Integrated device 12 is optionally and more preferably implemented as a computer, such that application layer 22 functions as the 'translator' between hardware card 20 and an operating system installed on integrated device 12.
  • Integrated device 12 therefore also optionally and more preferably features a storage device 24, such as a flash memory device and/or a magnetic storage medium device or "hard drive", and a network card 26 for connecting to network 16.
  • Integrated device 12 may also optionally feature user input & output devices, including but not limited to a keyboard, mouse or other pointing device, and a monitor (not shown).
  • Application layer 22 also preferably features a final encoding unit 28, which is more preferably implemented as an external software component which reads the data from hardware card 20, converts such data to its final format and delivers the data for distribution through network 16.
  • application layer 22 is implemented as a software layer, which is installed on integrated device 12. Also optionally, application layer 22 may feature several different types of software functions, each of which performs a different task and each of which interacts with hardware card 20. Application layer 22 more preferably controls the functionality and performance that can potentially be utilized by using hardware card 20, and also uses hardware card 20 for capturing, storing and/or transmitting video data over network 16.
  • application layer 22 may optionally feature such functions as a User Interface, which provides control over the modifiable parameters in the video/audio capturing and transmitting process, and also provides control over other operational applications which use hardware card 20; a security application; a surveillance application; and an entertainment application.
  • the user interface is optionally and more preferably provided to the system operator (user) through a system operator station 30, which is a computational device connected to integrated device 12.
  • System operator station 30 enables the system operator to control the parameters of the compression process, and also enables the system operator to control the functions of integrated device 12.
  • System operator station 30 preferably features a system operator UI 32, for communicating with hardware card 20 and application layer 22 of integrated device 12; and a local playback device 34 for receiving video data for playback from storage device 24.
  • FIG 2 shows an exemplary architecture of the hardware card for capturing, compressing and/or transmitting video data of Figure 1 in more detail.
  • Hardware card 20 preferably features an input-device unit 36, which can optionally be attached to the remainder of hardware card 20 through a composite connection, S-Video connection, or any other analog/digital connection for connecting to integrated device 12 (not shown; see Figure 1).
  • the input format can be NTSC (National TV Standards Committee) or PAL (Phase Alternating Line), for example.
  • NTSC is a color TV standard that was developed in the U.S. and which is currently used in the U.S. for television broadcasts.
  • NTSC broadcasts 30 interlaced frames per second (60 half frames per second, or 60 "fields" per second in TV jargon) at 525 lines of resolution.
  • the signal is a composite of red, green and blue and includes an audio FM frequency and an MTS signal for stereo.
  • PAL is a color TV standard that was developed in Germany and is used throughout Europe. It broadcasts 25 interlaced frames per second (50 half frames per second) at 625 lines of resolution.
  • the color signals for PAL are maintained automatically, and the television set does not have a user-adjustable hue control.
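The frame, field and line figures quoted above for the two standards can be checked with simple arithmetic (editor's illustration):

```python
# Field rate and line-scan arithmetic for the two analog TV standards
# described in the text: NTSC (30 frames, 525 lines) and PAL
# (25 frames, 625 lines), each frame being two interlaced fields.
standards = {
    "NTSC": {"frames": 30, "lines": 525},
    "PAL":  {"frames": 25, "lines": 625},
}

for name, s in standards.items():
    s["fields"] = s["frames"] * 2                 # two fields per frame
    s["lines_per_sec"] = s["frames"] * s["lines"]  # total scan rate

assert standards["NTSC"]["fields"] == 60   # 60 "fields" per second
assert standards["PAL"]["fields"] == 50    # 50 half frames per second
```

Despite the different frame rates, the two standards scan a similar number of lines per second (15,750 for NTSC versus 15,625 for PAL), which is why capture hardware can often support both.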
  • Input-device unit 36 is the physical connection point to integrated device 12 as a whole.
  • Connection-plugs for input-device unit 36 are preferably industry standard connectors, and are optionally third party components which can be purchased "off the shelf". These connectors are standard input connectors and could optionally be implemented as RCA connectors, for example. RCA connectors are a standard plug and socket for a two-wire (signal and ground) coaxial cable that is widely used to connect audio and video components. Optionally and more preferably, there are four connection plugs 37 on hardware card 20, optionally according to one of the following combinations: 4 x Composite connections; or 3 x Composite connections + 1 x S-Video connection.
  • Composite connectors rely upon the recording and transmission of video which mixes the color information and synchronization signals together; particular RCA connectors are commonly used for composite video on VCRs, camcorders and other consumer appliances.
  • S-video (Super-video) is a format for recording and transmitting video by keeping luminance (Y) and color information (C) on separate channels.
  • S-video uses a special 4-pin mini-DIN connector. Connectors of this type are widely used on camcorders, VCRs and A/V receivers and amplifiers, as the format can improve transmission quality and the image at the receiving end.
  • Hardware card 20 also preferably features a capturing and initial encoding unit 38, for performing at least one, but preferably all, of the following tasks.
  • capturing unit 38 receives the analog signal from input-device unit 36 and digitizes the analog signal. The output of this process is the same original signal in a digital format.
  • Capturing unit 38 compresses the video data by using a compression algorithm, which is optionally and more preferably performed according to MPEG-2.
  • the output of this stage in the process is compressed digital data.
  • The MPEG-2 compression format is preferred since it enables some parameters of the output result to be controlled and modified, such as picture size, frame rate, etc.
  • system operator UI 32 (not shown; see Figure 1), the system operator can optionally and preferably modify those parameters, which are then set for the chip operation by the driver for hardware card 20, through the operation of application layer 22.
  • The default picture-size of the output result is optionally and more preferably 'Half-D1', which is 320 x 240 pixels per frame.
  • capturing unit 38 is implemented as a single chip and a memory buffer unit 40.
  • Memory buffer unit 40 is preferably implemented as a third party chip which also functions as a temporary data-storage in the process. Memory buffer unit 40 preferably controls the supply and demand of the data to capturing unit 38.
  • Hardware device 20 is also preferably in communication with a card driver (not shown), which is a software library for controlling the interaction of hardware device 20 with the external environment, such as the PCI BUS and the operating system of integrated device 12.
  • the card driver is also preferably used for configuration of hardware device 20.
  • hardware device 20 also preferably features a final encoding unit 28 (not shown; see Figure 1), which is an external software component for reading the compressed-data that was processed by capturing unit 38.
  • This 'reading' action is actually optionally and more preferably performed by 'playing-back' the compressed data, so the input to final encoding unit 28 is thus more preferably a decompressed digital feed.
  • Final encoding unit 28 preferably then re-compresses the decompressed feed by using compression algorithms, which can be standard compression algorithms. Optionally and more preferably, compression is performed according to MPEG-4.
  • Final encoding unit 28 is optionally third party software such as Microsoft Windows™ Media Tools, which involves the Microsoft Windows Media SDK (Software Development Kit)™, Windows Media Encoder™ and other components that are a part of the Windows™ operating system, such as the Microsoft DirectShow™ filters.
  • The output result of this stage is more preferably re-compressed digital data in MPEG-4 format, optionally and most preferably wrapped with some specific layers applied by the encoding environment, which in this case is the Microsoft Windows Media Tools™.
  • The final data format is one of the file or stream types that the Microsoft Windows™ environment produces, which are *.ASF, *.WMV or *.WMA.
  • The output data can then be transferred to one of the external environment components, such as storage device 24 of integrated device 12 (not shown; see Figure 1), an external storage device (not shown) or NIC 26 of integrated device 12 (not shown; see Figure 1).
  • MPEG encoding unit 46 performs MPEG-2 compression, and is more preferably implemented as a single chip (such as the Fusion™ Bt878A chip of Conexant, USA, for example). Capturing unit 38 may also optionally be implemented on that chip. If purchased "off the shelf", the software driver for such a chip is optionally and preferably used as the communicating interface to the remainder of hardware card 20.
  • MPEG-4 compression is preferably performed by a software application, which as shown in Figure 1 is final encoding unit 28, and is more preferably not located on hardware card 20, but instead is operated by integrated device 12 (not shown; see Figure 1) and/or another computational device.
  • Hardware card 20 also more preferably features an interface 48 to integrated device 12 itself (not shown; see Figure 1), which is most preferably implemented as a PCI bus interface.
  • power is also preferably supplied through interface 48, and is optionally and more preferably controlled through a power controller 50.
  • Figure 3 shows an exemplary method for operation with the present invention.
  • the following diagram describes the workflow for the system of Figures 1 and 2, with the optional basic input and output components.
  • an initial analog signal input 52 is received from the video camera or other video capture device of Figure 1 (shown as an input unit 54).
  • The analog signal could optionally be in the form of a composite or S-video signal, for example. More preferably, there are two sets of video input signals: a first set featuring four separate composite RCA inputs and a second set with one S-video input signal.
  • an input device unit 56 of a capturing and initial encoding unit 58 receives the analog signal.
  • This signal is processed according to the configuration set by the system driver which 'defines' the input source and the input format.
  • The parameters for this capture process may optionally and preferably be determined by the system operator through a system operator interface 62, as a set of capturing parameters 64. These capturing parameters 64 would then be used for the subsequent operations for the formation of the digital signal.
  • the analog signal is optionally stored temporarily in a data buffer 66 for the conversion process to form an uncompressed digital signal 68 by a capturing unit 70.
  • Various different formats for the input analog video signal may optionally be used, including but not limited to PAL and NTSC. These different formats may also optionally be used concurrently.
  • For PAL video signals, the frame rate is optionally and preferably 25 fps (frames per second), while for NTSC video signals, the frame rate is optionally and preferably 30 fps.
  • Multiple output resolutions are also optionally possible for each of these different formats: for example, PAL video data could optionally be output at resolutions of 192x144; 384x288; and 768x576 pixels per frame; while NTSC video data could optionally be output at resolutions of 160x120; 320x240; and 640x480 pixels per frame.
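The listed resolutions form a simple halving ladder: each lower tier halves both dimensions of the tier above it. A small helper makes the relationship explicit (editor's sketch; the function name is invented):

```python
# Each output tier halves both dimensions of the one above it, so the
# three resolutions per standard can be derived from the largest frame.
def resolution_tiers(width, height, n=3):
    # Returns n tiers from smallest to largest, halving per step.
    return [(width >> i, height >> i) for i in range(n)][::-1]

pal_tiers = resolution_tiers(768, 576)    # PAL full frame
ntsc_tiers = resolution_tiers(640, 480)   # NTSC full frame

assert pal_tiers == [(192, 144), (384, 288), (768, 576)]
assert ntsc_tiers == [(160, 120), (320, 240), (640, 480)]
```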
  • these frame rates, resolutions and video standards are only intended as examples, as many other different implementations are possible within the scope of the present invention.
  • YUV12, YUY2, YUV9, 15-bit RGB, 24-bit RGB, 32-bit RGBA and BTYUV are different examples of color formats used for encoding video data.
  • Y is the luminosity of the black and white signal.
  • U and V are color difference signals.
  • U is blue minus Y (B-Y).
  • V is red minus Y (R-Y).
  • YUV is used because it saves storage space and transmission bandwidth compared to RGB.
  • YUV is not compressed RGB; rather, it is the mathematical equivalent of RGB.
  • YUV elements are written in various ways: (1) Y, R-Y, B-Y; (2) Y, Cr, Cb; and (3) Y, Pr, Pb.
  • Certain types of equipment provide YUV component video outputs, typically as a set of three standard RCA connectors.
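The Y, B-Y, R-Y relationships above correspond to the standard luma weights (as in ITU-R BT.601); a minimal conversion sketch, using the analog-form scale factors (an editor's illustration, not part of the patent):

```python
# RGB -> YUV conversion (analog form, BT.601 luma weights).
# Inputs and outputs are floats in [0, 1] (U and V may be negative).
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (luma)
    u = 0.492 * (b - y)                      # scaled B-Y color difference
    v = 0.877 * (r - y)                      # scaled R-Y color difference
    return y, u, v

# Pure white carries full luminance and no color difference,
# which is why YUV separates brightness from color information.
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
assert abs(y - 1.0) < 1e-9 and abs(u) < 1e-9 and abs(v) < 1e-9
```

Because the transform is an invertible linear map, YUV is the mathematical equivalent of RGB rather than a compressed form of it, exactly as the text states.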
  • An initial encoding unit 72 then preferably receives instructions for encoding digital signal 68 for the first stage of compression. These instructions 74 optionally include values for such parameters as encoded picture size and output format, which are optionally and preferably received from the system operator as initial encoding parameters 76. After initial encoding unit 72 performs the first stage of encoding, more preferably according to MPEG-2 and most preferably resulting in the formation of digital MPEG data in the "half-D1" format, the resultant data is optionally stored in a buffer 78. The data is then preferably decompressed to form processed raw digital data which is passed to a final encoding unit 80.
  • Final encoding unit 80 then preferably performs the final compression of the data for transmission over a network such as the Internet.
  • The parameters for controlling this compression process are preferably received from the system operator as final encoding parameters 82, and optionally and more preferably include such parameters as encoded picture size and encoded file/stream type 84.
  • the distribution of the compressed data to a final destination(s) as a digital signal 88, such as an end-user computational device for example, is optionally and preferably controlled by distribution parameters 86. These distribution parameters are more preferably received from the system operator.
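The parameter flow of Figure 3 (capturing parameters 64, encoding parameters 76/82, distribution parameters 86) can be summarized as three configuration records. This is an editor's sketch: the record and field names are invented for illustration, not taken from the patent.

```python
# Hypothetical parameter records mirroring Figure 3's workflow.
# All names and defaults are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CapturingParams:        # set via the system operator interface
    input_source: str = "composite-1"
    input_format: str = "PAL"

@dataclass
class EncodingParams:         # used by the initial and final encoders
    picture_size: tuple = (320, 240)
    output_format: str = "WMV"

@dataclass
class DistributionParams:     # controls delivery of the final stream
    destination: str = "network"

# The three stages of the workflow, in order of application.
pipeline = (CapturingParams(), EncodingParams(), DistributionParams())
assert pipeline[1].picture_size == (320, 240)
```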
  • The device and system of the present invention, as constructed with regard to the preferred embodiments of Figure 2 and as operated with regard to the preferred embodiments of Figure 3, clearly showed better performance than background art devices when tested for the ability to compress data for storage.
  • Test video data, one minute in length and with a single quality and picture resolution, was used.
  • The table below shows the results for the present invention (labeled as "Hyper MPEG4"), in comparison to devices operating according to AVI, MPEG-1 and MPEG-2, and standard MPEG-4 (that is, compression only with MPEG-4, as opposed to the preferred embodiments of the present invention, which incorporate compression according to both MPEG-2 and MPEG-4).
  • The system and method of the present invention may optionally and preferably be used for both on-line video broadcasting and off-line video broadcasting.
  • On-line video broadcasting involves the direct streaming transmission of video data to a distribution point, such as an end user device for example, after the video data has been captured, processed and compressed.
  • Off-line video broadcasting is similar, except that the compressed video data is stored locally and/or transmitted to a remote storage device for future distribution or playback.
  • the system of the present invention features an application layer.
  • this application layer contains software functions which fall into two categories: the system operating environment; and the system service support functions.
  • the system operating environment preferably includes all of the previously described system core functions, together with the ability to configure these functions, operate them and also optionally connect to higher level applications, such as the system service support functions.
  • Examples of such basic operating environment functions include but are not limited to: a "plug and play" mechanism, which enables the hardware card of the present invention to be recognized by the operating system of the integrated device as a new hardware device, attached through the PCI interface; settings and system parameters which are set by the system operator and which are optionally protected with a password; and an authentication mechanism to protect features, settings and configuration of the system.
  • various types of transmission options are provided as previously described, including direct transmission of data, storage of data or a combination thereof.
  • archiving management software for managing the storage of the captured and compressed data.
  • archiving functions may optionally include a file search engine, for searching for video data by file; the provision of either automatic or manual determination of file names; sorting files by name, date, time and/or other file characteristics; uploading files to a remote distribution point Or end user device automatically or manually, with automatic uploading being performed according to a pre-defined policy and scheduling; and playing back file data upon request.

Abstract

A system and method for capturing an analog video signal, processing this signal to form processed video data, and efficiently compressing the video data. The compressed video data can then optionally be stored as a digital file and/or streamed directly as a digital signal over a network, such as the Internet for example. The present invention compresses the video data in a plurality of stages, including at least one stage for processing the analog data in order to increase the efficiency of compression, and a second stage for actual compression of the resultant digital video data.

Description

SYSTEM AND METHOD FOR RAPID VIDEO COMPRESSION
FIELD OF THE INVENTION
The present invention is of a system and method for rapid video compression, and in particular, for such a system and method which is both rapid and which is compatible with a wide variety of computational devices.
BACKGROUND OF THE INVENTION
Streaming video data to computational devices over a network, such as the Internet, or even over a more local network such as a LAN (local area network) or WAN (wide area network), can be difficult due to bandwidth restrictions. Indeed, even the transmission of video data over a network through other, non-streaming, mechanisms can be difficult because of the lack of bandwidth. Therefore, more efficient transmission of video data is clearly desirable. In order to increase the amount of video data which can be transmitted on any given amount of bandwidth, video data is typically transmitted in a compressed format. Compression of video data increases the amount of video data which can be transmitted, but may also increase the amount of time required to process the video data, both before transmission, and upon reception, of the data, before the video data can be played back. Efficient, rapid methods for video compression are thus more useful.
Two standard methods for video compression are MPEG-2 (Motion Picture Expert Group-2) and MPEG-4. These two methods were both developed by the same industry standards group, the Motion Picture Expert Group. However, the two methods differ in both functionality and availability on end-user computational devices (see for example http://www.cselt.it/mpeg/ as of April 15, 2001). MPEG-2 is currently used for DVD and digital television set-top boxes. According to this compression method, video data and audio data are divided into two separate streams, each of which is compressed separately according to the most efficient compression method for that type of data. The separate data streams must be recombined before being played back on an end-user device, such that the MPEG-2 standard includes mechanisms for ensuring that the video data and the audio data are synchronized. MPEG-2 has some advantages for controlling certain parameters and characteristics of the compressed audio and video data; however, end-user devices which are capable of handling this type of compressed data format are not currently as widespread as for MPEG-4.
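The recombination of separately compressed streams described above can be pictured with a minimal sketch: packets from independently encoded video and audio streams are merged back into presentation order by timestamp. This is a deliberate simplification of MPEG-2's actual mechanism (which carries presentation timestamps inside a program or transport stream); the function name and its inputs are invented for illustration.

```python
def interleave_streams(video_pts, audio_pts):
    # Tag each packet timestamp with its stream, then merge by timestamp
    # so that video and audio come out in presentation order.
    tagged = [(t, "video") for t in video_pts] + [(t, "audio") for t in audio_pts]
    return [label for _, label in sorted(tagged)]

# Video at 25 fps (40 ms apart), audio frames roughly 21 ms apart:
order = interleave_streams([0, 40, 80], [0, 21, 43, 64])
```

The merged order alternates between the two streams as their timestamps dictate, which is the essential property the synchronization mechanism must preserve.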
In contrast, MPEG-4 uses a system of "media objects", in which video data is divided into these objects for transmission to the end-user device. These media objects may include non-moving (still) images, video objects (such as a moving figure) and audio objects, as well as other types of objects. These objects are then integrated for being played back to the end user by the end-user device. MPEG-4 is widely available on end-user devices, particularly for those devices which are operated with one of the Windows™ operating systems (Microsoft, USA), as certain versions of this operating system incorporate a media player which is capable of decompressing and playing back video data which has been compressed according to MPEG-4. Generally, MPEG-4 has been designed to be suitable for compression of video data through the Internet/World Wide Web and for mobile computational devices, such as WAP (wireless application protocol) enabled cellular telephones for example. Thus, currently end-user devices are restricted according to the type of video data compression which they can receive and/or generate, because of the limitations of each of the above methods of video data compression.
SUMMARY OF THE INVENTION
The background art does not teach or suggest a method or system for efficient video compression which is suitable for a wide range of end-user computational devices and which is able to perform multi-stage compression within a single device or other implementation. The background art also does not teach or suggest such a method or system which uses the best qualities of more than one type of MPEG compression. The present invention overcomes these deficiencies of the background art by providing a system and method which is useful for providing compressed video data to a wide range of end-user computational devices. The present invention has been shown to be more efficient in the degree of compression of the video data which is achieved than other background art methods and devices, while still providing a solution which is compatible with a wide range of commonly available end-user devices. In addition, the present invention is also adaptable to many different types of end-user computational devices. The present invention achieves these goals by providing a system and method which are an integrated solution for capturing an analog video signal, processing this signal to form processed video data, and efficiently compressing the video data. The compressed video data can then optionally be stored as a digital file and/or streamed directly as a digital signal over a network, such as the Internet for example. The present invention compresses the video data in a plurality of stages, including at least one stage for processing the analog data in order to increase the efficiency of compression, and a second stage for actual compression of the resultant digital video data.
More preferably, the first stage also includes compression of the video data after digitization of the analog signal. Optionally and more preferably, the compressed video data from the first stage is decompressed before being recompressed in the second stage. Most preferably, the method of compression of the video data for the first stage is performed according to MPEG-2, while the method of compression of the video data for the second stage is performed according to MPEG-4. According to the present invention, there is provided a method for compressing video data, comprising: processing an analog video signal according to at least one parameter for determining at least one characteristic of the resultant compressed video data, including converting the processed analog video signal into digital video data; and compressing the digital video data to form the compressed video data, wherein at least one characteristic of the resultant compressed video data is determined according to the at least one parameter.
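The two-stage flow just described (first-stage encode, decompress, second-stage re-encode) can be sketched as follows. This is illustrative only: zlib stands in for the MPEG-2 and MPEG-4 codecs, which are obviously not reproduced here, and all function names are invented for the sketch.

```python
import zlib

def stage1_encode(raw: bytes) -> bytes:
    # First-stage compression (MPEG-2 in the description); zlib at a fast
    # level is a placeholder for the hardware capture-side codec.
    return zlib.compress(raw, 1)

def stage1_decode(data: bytes) -> bytes:
    # "Playing back" the first-stage output to recover a raw digital feed.
    return zlib.decompress(data)

def stage2_encode(raw: bytes) -> bytes:
    # Second-stage compression (MPEG-4 in the description); zlib at its
    # strongest level is again only a placeholder.
    return zlib.compress(raw, 9)

def two_stage_compress(raw: bytes) -> bytes:
    intermediate = stage1_encode(raw)       # stage 1: capture-side encode
    decoded = stage1_decode(intermediate)   # decompress before re-encoding
    return stage2_encode(decoded)           # stage 2: final encode for distribution
```

The structural point the sketch preserves is that the second encoder never sees the analog signal; it receives a decompressed digital feed whose characteristics were already shaped by the first stage.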
According to another embodiment of the present invention, there is provided a system for compressing video data, comprising: (a) an analog output device for producing an analog video signal; (b) an integrated hardware device for capturing the analog video signal, processing the analog video signal to form digital video data, and for compressing the digital video data in two stages to form compressed video data; and (c) a computational device for controlling the integrated hardware device and for being in communication with the integrated hardware device. Hereinafter, the term "computational device" includes, but is not limited to, any type of wireless device and any type of computer. Examples of such computational devices include, but are not limited to, handheld devices, including devices operating with the Palm OS®; hand-held computers, PDA (personal data assistant) devices, cellular telephones, any type of WAP (wireless application protocol) enabled device, portable computers of any type and wearable computers of any sort, which have an operating system. An end-user device is any type of computational device which is used for playing back video data to a user.
The present invention could be implemented in software, firmware or hardware. The present invention could be implemented as substantially any type of integrated circuit or other electronic device capable of performing the functions described herein. Also, the present invention can be described as a plurality of instructions being executed by a data processor.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is herein described, by way of example only, with reference to the accompanying drawings, wherein:
FIG. 1 is a schematic block diagram showing an exemplary system according to the present invention for two-stage video compression;
FIG. 2 shows exemplary architecture of a video card for insertion into a computational device according to the present invention; and
FIG. 3 is a flowchart of an exemplary method according to the present invention for operation with the system of Figure 1.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention is of a system and method for capturing an analog video signal, processing this signal to form processed video data, and efficiently compressing the video data. The compressed video data can then optionally be stored as a digital file and/or streamed directly as a digital signal over a network, such as the Internet for example. The present invention compresses the video data in a plurality of stages, including at least one stage for processing the analog data in order to increase the efficiency of compression, and a second stage for actual compression of the resultant digital video data.
More preferably, the first stage also includes compression of the video data after digitization of the analog signal. Optionally and more preferably, the compressed video data from the first stage is decompressed before being recompressed in the second stage. Most preferably, the method of compression of the video data for the first stage is performed according to MPEG-2, while the method of compression of the video data for the second stage is performed according to MPEG-4. According to preferred embodiments of the present invention, there is provided a system featuring a set of integrated hardware and software components, preferably including at least a hardware card with a PCI interface installed on a card slot which is compatible with PC (personal computer) hardware, and digital-video compression software which communicates with the hardware card. Examples of a card slot which is compatible with PC hardware include, but are not limited to, card slots structured according to the USB (universal serial bus) architecture standard, the IEEE 1394 ('FireWire') standard, and the ISA (industry standard architecture) standard. These components are preferably connected to a network, optionally an IP network such as a LAN, WAN and/or the Internet. Optionally and more preferably, these hardware and software components are combined into a single device, such as a set-top-box for example, which most preferably can be used to perform any one or more of the following tasks: capture a live feed from a video camera and transmit live streaming video data in real time over a network; capture live feed from a video camera and store the resultant video data as a file; capture any data from a playback device and transmit such captured data; and capture any data from a playback device and store such data as a file.
The data which is captured from the playback device, and which forms the input to the present invention for processing and compression, is preferably an analog signal, which may be in various formats, including but not limited to, VHS, Super VHS, and BetaCam. The resultant video data is in a format which is suitable for storage and/or playback by a computational device.
According to other preferred embodiments of the present invention, an application layer is more preferably featured. The application layer is laid over the above system, and interacts with the components of the system in order to provide various types of functionality, for example through software program modules. Examples of these different types of functionality include, but are not limited to, security and surveillance, entertainment, and a UI (User Interface) with which the system operator (user) interacts in order to operate the system of the present invention. The system operator is distinguished from the end user according to the type of interaction with the system; the former individual controls the various parameters for capturing and transmitting the video data, while the latter controls playing back the video data. Of course, the system operator and the end user could optionally be the same individual, for example when video data is played back for the system operator. These preferred components of the system of the present invention are more preferably used to perform a method according to the present invention for compressing video data. The result of the method of the present invention is a significantly lower data-rate output for given input and quality parameters (picture size, color type and depth, etc.). In other words, the captured file is significantly smaller in comparison to other processes with the same quality parameters; and the required streamed bandwidth is much lower than the output of background art streaming solutions, as shown by the test results below.
The principles and operation of a method and a system according to the present invention may be better understood with reference to the drawings and the accompanying description. Turning now to the drawings, Figure 1 shows an exemplary system according to the present invention for performing video compression in a plurality of stages. As shown, a system 10 preferably features at least an integrated device 12 according to the present invention. Optionally, integrated device 12 is implemented as a plurality of separate devices, although this implementation is less preferred. Integrated device 12 is connected to a source of a video signal, whether analog or digital, which is preferably a video camera 14 as shown, but which could also be a VCR or other device for recording, playing back, storing and/or generating video signals. Integrated device 12 then captures the video signals, and compresses these signals into compressed video data according to a process described in greater detail below, particularly with regard to the exemplary method of Figure 3. The compressed video data is then preferably transmitted through a network 16, such as a
LAN, WAN or the Internet for example, to a remote end-user device 18 as shown. End-user device 18 then decompresses and plays back the video data to the end user.
According to preferred embodiments of the present invention, integrated device 12 features a hardware card 20 according to the present invention, which is shown in more detail with regard to Figure 2 below. Hardware card 20 optionally and preferably communicates with an application layer 22, which is a unique software driver. Integrated device 12 is optionally and more preferably implemented as a computer, such that application layer 22 functions as the 'translator' between hardware card 20 and an operating system installed on integrated device 12. Integrated device 12 therefore also optionally and more preferably features a storage device 24, such as a flash memory device and/or a magnetic storage medium device or "hard drive", and a network card 26 for connecting to network 16. Integrated device 12 may also optionally feature user input & output devices, including but not limited to a keyboard, mouse or other pointing device, and a monitor (not shown).
Application layer 22 also preferably features a final encoding unit 28, which is more preferably implemented as an external software component which reads the data from hardware card 20, converts such data to its final format and delivers the data for distribution through network 16.
Optionally, application layer 22 is implemented as a software layer, which is installed on integrated device 12. Also optionally, application layer 22 may feature several different types of software functions, each of which performs a different task and each of which interacts with hardware card 20. Application layer 22 more preferably controls the functionality and performance that can potentially be utilized by using hardware card 20, and also uses hardware card 20 for capturing, storing and/or transmitting video data over network 16.
For example, application layer 22 may optionally feature such functions as a User Interface, which provides control over the modifiable parameters in the video/audio capturing and transmitting process, and also provides control over other operational applications which use hardware card 20; a security application; a surveillance application; and an entertainment application.
The user interface is optionally and more preferably provided to the system operator (user) through a system operator station 30, which is a computational device connected to integrated device 12. System operator station 30 enables the system operator to control the parameters of the compression process, and also enables the system operator to control the functions of integrated device 12. System operator station 30 preferably features a system operator UI 32, for communicating with hardware card 20 and application layer 22 of integrated device 12; and a local playback device 34 for receiving video data for playback from storage device 24.
Figure 2 shows an exemplary architecture of the hardware card for capturing, compressing and/or transmitting video data of Figure 1 in more detail. Hardware card 20 preferably features an input-device unit 36, which can optionally be attached to the remainder of hardware card 20 through a composite connection, S-Video connection, or any other analog/digital connection for connecting to integrated device 12 (not shown; see Figure 1). The input format can be NTSC (National TV Standards Committee) or PAL (Phase Alternating Line), for example. NTSC is a color TV standard that was developed in the U.S. and which is currently used in the U.S. for television broadcasts. NTSC broadcasts 30 interlaced frames per second (60 half frames per second, or 60 "fields" per second in TV jargon) at 525 lines of resolution. The signal is a composite of red, green and blue and includes an audio FM frequency and an MTS signal for stereo. PAL is a color TV standard that was developed in Germany and is used throughout Europe. It broadcasts 25 interlaced frames per second (50 half frames per second) at 625 lines of resolution. The color signals for PAL are maintained automatically, and the television set does not have a user-adjustable hue control. Input-device unit 36 is the physical connection point to integrated device 12 as a whole.
The connection-plugs for input-device unit 36 are preferably industry standard connectors, and are optionally third party components which can be purchased "off the shelf". These connectors are standard input connectors and could optionally be implemented as RCA connectors for example. RCA connectors are a standard plug and socket for a two-wire (signal and ground) coaxial cable that is widely used to connect audio and video components. Optionally and more preferably, there are four connection plugs 37 on hardware card 20, optionally according to one of the following combinations: 4 x Composite connections; or 3 x Composite connections + 1 x S-Video connection. Composite connectors rely upon the recording and transmission of video which mixes the color information and synchronization signals together; particular RCA connectors are commonly used for composite video on VCRs, camcorders and other consumer appliances. S-video (Super-video) is a format for recording and transmitting video by keeping luminance (Y) and color information (C) on separate channels. S-video uses a special 4-pin mini-DIN connector. Connectors of this type are widely used on camcorders, VCRs and A/V receivers and amplifiers, as the format can improve transmission quality and the image at the receiving end.
Hardware card 20 also preferably features a capturing and initial encoding unit 38, for performing at least one, but preferably all, of the following tasks. First, capturing unit 38 receives the analog signal from input-device unit 36 and digitizes the analog signal. The output of this process is the same original signal in a digital format.
Second, capturing unit 38 compresses the video data by using a compression algorithm, which is optionally and more preferably performed according to MPEG-2. The output of this stage in the process is compressed digital data. The MPEG-2 compression format is preferred since it enables some parameters of the output result to be controlled and modified, such as picture size, frame-rate etc. By using system operator UI 32 (not shown; see Figure 1), the system operator can optionally and preferably modify those parameters, which are then set for the chip operation by the driver for hardware card 20, through the operation of application layer 22. The default picture-size of the output result is optionally and more preferably 'Half-D1', which is 320 x 240 pixels per frame. Optionally and most preferably, capturing unit 38 is implemented as a single chip and a memory buffer unit 40. Memory buffer unit 40 is preferably implemented as a third party chip which also functions as a temporary data-storage in the process. Memory buffer unit 40 preferably controls the supply and demand of the data to capturing unit 38.
Hardware device 20 is also preferably in communication with a card driver (not shown), which is a software library for controlling the interaction of hardware device 20 with the external environment, such as the PCI BUS and the operating system of integrated device 12. The card driver is also preferably used for configuration of hardware device 20.
In addition, hardware device 20 also preferably features a final encoding unit 28 (not shown; see Figure 1), which is an external software component for reading the compressed data that was processed by capturing unit 38. This 'reading' action is optionally and more preferably performed by 'playing back' the compressed data, so the input to final encoding unit 28 is thus more preferably a decompressed digital feed. Final encoding unit 28 preferably then re-compresses the decompressed feed by using compression algorithms, which can be standard compression algorithms. Optionally and more preferably, compression is performed according to MPEG-4.
Final encoding unit 28 is optionally third party software such as Microsoft Windows™ Media Tools, which involves the Microsoft Windows Media SDK (Software Development Kit)™, Windows Media Encoder™ and other components that are a part of the Windows™ operating system, such as the Microsoft DirectShow™ filters. The output result of this stage is more preferably re-compressed digital data in MPEG-4 format, optionally and most preferably wrapped with some specific layers conducted by the encoding environment, which in this case is the Microsoft Windows Media Tools™. At this time, the final data format is one of the file or stream types that the Microsoft Windows™ environment produces, which are *.ASF, *.WMV or *.WMA. The output data can then be transferred to one of the external environment components, such as storage device 24 of integrated device 12 (not shown; see Figure 1), an external storage device (not shown) or NIC 26 of integrated device 12 (not shown; see Figure 1). MPEG encoding unit 46 performs MPEG-2 compression, which is more preferably implemented as a single chip (such as the Fusion™ bt878A chip of Conexant USA for example). Capturing unit 38 may also optionally be implemented on that chip. If purchased "off the shelf", the software driver for such a chip is optionally and preferably used as the communicating interface to the remainder of hardware card 20. MPEG-4 compression is preferably performed by a software application, which as shown in Figure 1 is final encoding unit 28, and is more preferably not located on hardware card 20, but instead is operated by integrated device 12 (not shown; see Figure 1) and/or another computational device.
Hardware card 20 also more preferably features an interface 48 to integrated device 12 itself (not shown; see Figure 1), which is most preferably implemented as a PCI bus interface. In addition, power is also preferably supplied through interface 48, and is optionally and more preferably controlled through a power controller 50.
Figure 3 shows an exemplary method for operation with the present invention. The following diagram describes the workflow for the system of Figures 1 and 2, with the optional basic input and output components.
As shown, an initial analog signal input 52 is received from the video camera or other video capture device of Figure 1 (shown as an input unit 54). The analog signal could optionally be in the form of a composite or S-video signal, for example. More preferably, there are two sets of video input signals: a first set featuring four separate composite RCA inputs and a second set with one S-video input signal.
Next, an input device unit 56 of a capturing and initial encoding unit 58 receives the analog signal. This signal is processed according to the configuration set by the system driver which 'defines' the input source and the input format. The parameters for this capture process may optionally and preferably be determined by the system operator through a system operator interface 62 as a set of capturing parameters 64. These capturing parameters 64 would then be used for the subsequent operations for the formation of the digital signal. The analog signal is optionally stored temporarily in a data buffer 66 for the conversion process to form an uncompressed digital signal 68 by a capturing unit 70. Various different formats for the input video analog signal may optionally be used, including but not limited to PAL and NTSC. These different formats may also optionally be used concurrently. For PAL video signals, the frame rate is optionally and preferably 25 fps (frames per second), while for NTSC video signals, the frame rate is optionally and preferably 30 fps. Multiple output resolutions are also optionally possible for each of these different formats: for example, PAL video data could optionally be output at resolutions of 192x144; 384x288; and 768x576 pixels per frame; while NTSC video data could optionally be output at resolutions of 160x120; 320x240; and 640x480 pixels per frame. Of course, these frame rates, resolutions and video standards are only intended as examples, as many other different implementations are possible within the scope of the present invention.
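The format-dependent frame rates and output resolutions listed above can be collected in a small lookup table, as the following sketch shows. The table holds only the example values from the description; the structure and names are invented for illustration.

```python
# Example frame rates and output resolutions per video standard,
# taken from the values given in the description.
VIDEO_STANDARDS = {
    "PAL":  {"fps": 25, "resolutions": [(192, 144), (384, 288), (768, 576)]},
    "NTSC": {"fps": 30, "resolutions": [(160, 120), (320, 240), (640, 480)]},
}

def output_mode(standard: str, level: int):
    # Return (width, height, fps) for a standard and a resolution level
    # (0 = lowest of the example resolutions, 2 = highest).
    info = VIDEO_STANDARDS[standard]
    width, height = info["resolutions"][level]
    return width, height, info["fps"]
```

A driver configured this way would resolve `output_mode("NTSC", 1)` to the 320x240 "Half-D1"-sized output at 30 fps mentioned earlier.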
Different types of image formats for output image data are also possible within the scope of the present invention, such as YUV12, YUY2, YUV9, 15 bit RGB, 24 bit RGB, 32 bit RGBA and BTYUV, all of which are different examples of color models used for encoding video data. Y is the luminosity of the black and white signal. U and V are color difference signals. U is blue minus Y (B-Y), and V is red minus Y (R-Y). In order to display YUV data on a computer screen, it must be converted into RGB through a process known as "color space conversion." YUV is used because it saves storage space and transmission bandwidth compared to RGB. YUV is not compressed RGB; rather, it is the mathematical equivalent of RGB. Also known as component video, the YUV elements are written in various ways: (1) Y, R-Y, B-Y, (2) Y, Cr, Cb and (3) Y, Pr, Pb. Certain types of equipment provide YUV component video outputs, which are a typical set of connectors using standard RCA connectors.
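The "color space conversion" mentioned above can be illustrated for a single pixel with a common BT.601-style conversion. The description does not specify particular coefficients, so those used here are a representative choice, not the ones any particular implementation of the invention would use.

```python
def yuv_to_rgb(y: int, u: int, v: int):
    # Convert one 8-bit YUV pixel to RGB using BT.601-style coefficients.
    # u carries B-Y (Cb) and v carries R-Y (Cr), each offset by 128
    # so that the zero color difference sits at mid-scale.
    d = u - 128
    e = v - 128
    r = y + 1.402 * e
    g = y - 0.344136 * d - 0.714136 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Note that a neutral pixel (u = v = 128) maps straight to a gray level equal to Y, which is why YUV degrades gracefully to black-and-white display.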
An initial encoding unit 72 then preferably receives instructions for encoding digital signal 68 for the first stage of compression. These instructions 74 optionally include values for such parameters as encoded picture size and output format, which are optionally and preferably received from the system operator as initial encoding parameters 76. After initial encoding unit 72 performs the first stage of encoding, more preferably according to MPEG-2 and most preferably resulting in the formation of digital MPEG data in the "half-D1" format, the resultant data is optionally stored in a buffer 78. The data is then preferably decompressed to form processed raw digital data which is passed to a final encoding unit 80.
Final encoding unit 80 then preferably performs the final compression of the data for transmission over a network such as the Internet. The parameters for controlling this compression process are preferably received from the system operator as final encoding parameters 82, and optionally and more preferably include such parameters as encoded picture size and encoded file/stream type 84.
The distribution of the compressed data to one or more final destinations as a digital signal 88, such as an end-user computational device for example, is optionally and preferably controlled by distribution parameters 86. These distribution parameters are more preferably received from the system operator.
The device and system of the present invention, as constructed with regard to the preferred embodiments of Figure 2 and as operated with regard to the preferred embodiments of Figure 3, clearly showed better performance in comparison to background art devices when tested for the ability to compress data for storage. Test video data was derived, with a single quality and picture resolution, which was one minute in length. The table below shows the results for the present invention (labeled as "Hyper MPEG4"), in comparison to devices operating according to AVI, MPEG1 and MPEG2, and standard MPEG4 (that is, compression only with MPEG4, as opposed to the preferred embodiments of the present invention which incorporate compression according to both MPEG2 and MPEG4).
[Table of test results not reproduced: output file sizes for one minute of test video under AVI, MPEG1, MPEG2, standard MPEG4 and Hyper MPEG4 compression.]
According to optional but preferred implementations and applications of the present invention, the system and method of the present invention may optionally and preferably be used for both on-line video broadcasting and for off-line video broadcasting. On-line video broadcasting involves the direct streaming transmission of video data to a distribution point, such as an end user device for example, after the video data has been captured, processed and compressed. Off-line video broadcasting is similar, except that the compressed video data is stored locally and/or transmitted to a remote storage device for future distribution or playback. In order to provide these different functions, as previously described, the system of the present invention features an application layer. Preferably, this application layer contains software functions which fall into two categories: the system operating environment; and the system service support functions.
The system operating environment preferably includes all of the previously described system core functions, together with the ability to configure these functions, operate them and also optionally connect to higher level applications, such as the system service support functions. Examples of such basic operating environment functions include but are not limited to: a "plug and play" mechanism, which enables the hardware card of the present invention to be recognized by the operating system of the integrated device as a new hardware device, attached through the PCI interface; settings and system parameters which are set by the system operator and which are optionally protected with a password; and an authentication mechanism to protect features, settings and configuration of the system. In addition, various types of transmission options are provided as previously described, including direct transmission of data, storage of data or a combination thereof.
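The password-protected settings and authentication mechanism mentioned above can be sketched as follows. The hashing scheme, class name and method names are assumptions for illustration only; the disclosure does not specify how the protection is implemented.

```python
import hashlib


class ProtectedSettings:
    """Illustrative sketch of operator-set system parameters guarded
    by a password. A SHA-256 digest of the password is stored rather
    than the password itself (an assumption, not from the source)."""

    def __init__(self, password: str):
        self._digest = hashlib.sha256(password.encode()).hexdigest()
        self._params = {}

    def _authenticated(self, password: str) -> bool:
        return hashlib.sha256(password.encode()).hexdigest() == self._digest

    def set_param(self, password: str, key: str, value) -> None:
        # Only an authenticated system operator may change settings.
        if not self._authenticated(password):
            raise PermissionError("authentication failed")
        self._params[key] = value

    def get_param(self, key: str):
        return self._params.get(key)
```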
Optionally and more preferably, archiving management software is provided for managing the storage of the captured and compressed data. These archiving functions may optionally include a file search engine, for searching for video data by file; the provision of either automatic or manual determination of file names; sorting files by name, date, time and/or other file characteristics; uploading files to a remote distribution point or end user device automatically or manually, with automatic uploading being performed according to a pre-defined policy and scheduling; and playing back file data upon request.
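The archiving functions listed above (automatic or manual file naming, search, and sorting by file characteristics) can be sketched as a minimal archive manager. The timestamp-based naming convention and all identifiers are assumptions for illustration, not part of the original disclosure.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional


@dataclass
class ArchivedFile:
    name: str
    created: datetime
    data: bytes = b""


class Archive:
    """Illustrative sketch of the archiving management functions:
    automatic/manual file naming, search by name, and sorting by
    name, date or other characteristics."""

    def __init__(self):
        self.files: List[ArchivedFile] = []

    def store(self, data: bytes, when: datetime,
              name: Optional[str] = None) -> ArchivedFile:
        # Automatic naming: a timestamp-based name when no manual
        # name is supplied (hypothetical convention).
        if name is None:
            name = when.strftime("capture_%Y%m%d_%H%M%S.mp4")
        f = ArchivedFile(name, when, data)
        self.files.append(f)
        return f

    def search(self, substring: str) -> List[ArchivedFile]:
        """File search engine: match on file name."""
        return [f for f in self.files if substring in f.name]

    def sorted_by(self, key: str) -> List[ArchivedFile]:
        """Sort files by any attribute, e.g. 'name' or 'created'."""
        return sorted(self.files, key=lambda f: getattr(f, key))
```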
Other functions of the system of the present invention may also optionally be automatically scheduled. In addition or alternatively, different functions within the system of the present invention may also optionally be controlled remotely.
According to another preferred embodiment of the present invention, the system can optionally support a point-to-point videoconference through an external software application, such that the hardware card of the present invention is used as the capturing device for a video/audio conference session. Examples of such external software applications include but are not limited to 'Windows™ 2000 dialer', 'NetMeeting™', and 'CUSeeMe™'.
According to still another preferred embodiment of the present invention, the system can optionally support a "live" help desk connection, such that the system operator and/or end user is able to directly access a customer-support center, for example through the previously described videoconference, chat and/or telephone communication. These may be implemented by using standardized video-conference software applications which are known in the art; however, the efficiency of operation of these applications is greatly increased according to the present invention, since the various parameters for processing and compressing the video data are provided to these applications, which can thus be adjusted for optimal performance with the present invention. It should be noted that although the previous description centers on the capture, processing and compression of video data, the present invention may also optionally be used for separate compression of video/image and audio data. In addition, it is also assumed that the previous references to "video" data include combined audio and video data, where appropriate.

Claims

WHAT IS CLAIMED IS:
1. A method for compressing video data, comprising: processing an analog video signal according to at least one parameter for determining at least one characteristic of the resultant compressed video data, including converting said processed analog video signal into digital video data; and compressing said digital video data to form the compressed video data, wherein at least one characteristic of the resultant compressed video data is determined according to said at least one parameter.
2. The method of claim 1, wherein said analog video signal is processed by: converting said analog video signal to digital video data; and compressing said digital video data to form compressed video data.
3. The method of claim 2, wherein compression of said digital video data is performed in two stages.
4. The method of claim 3, wherein said compressed video data is first uncompressed to form said digital video data, such that said digital video data is then compressed again to form the resultant compressed video data.
5. The method of claim 4, wherein said digital video data is compressed according to MPEG-2 and wherein the compressed video data is formed by compression according to MPEG-4.
6. The method of claim 5, wherein said digital video data is compressed according to half-D1 for MPEG-2.
7. The method of any of claims 2-6, wherein said digital video data is compressed according to said at least one parameter, such that said at least one parameter determines said at least one characteristic of the resultant compressed video data.
8. The method of claim 7, wherein said at least one parameter is selected from the group consisting of picture size, frame rate, resolution, and color parameters.
9. The method of claim 8, wherein said at least one parameter is manually set by a system operator.
10. The method of any of claims 1-9, wherein the resultant compressed video data is transmitted to an end-user device through a network, for being played back.
11. The method of claim 10, wherein the resultant compressed video data is first stored before being transmitted to said end-user device.
12. The method of claim 11, wherein the resultant compressed video data is stored as at least one file.
13. The method of claim 12, wherein said file is one of a plurality of files, and said plurality of files are searchable by the end user through said end-user device.
14. The method of claims 12 or 13, wherein said file is one of a plurality of files, and said plurality of files are sortable according to at least one file characteristic.
15. The method of any of claims 9-14, wherein said end-user device supports a videoconference function and wherein said analog video signal is received through said videoconference function.
16. The method of claim 15, wherein said videoconference function is used for customer support, such that an end user operating said end-user device obtains assistance through said videoconference function.
17. The method of claim 5, wherein transmission of the resultant compressed video data is performed automatically or manually.
18. The method of claim 17, wherein said automatic transmission is performed according to a schedule as determined by a system operator.
19. A system for compressing video data, comprising:
(a) an analog output device for producing an analog video signal;
(b) an integrated hardware device for capturing said analog video signal, processing said analog video signal to form digital video data, and for compressing said digital video data in two stages to form compressed video data; and
(c) a computational device for controlling said integrated hardware device and for being in communication with said integrated hardware device.
20. The system of claim 19, wherein said integrated hardware device comprises:
(i) an analog to digital conversion unit for converting said analog video signal to said digital video data;
(ii) a first compression unit for first compressing said digital video data in said first stage; and
(iii) a second compression unit for again compressing said digital video data in said second stage.
21. The system of claim 20, wherein said first compression unit also decompresses said compressed digital video data before said digital video data is given to said second compression unit.
22. The system of any of claims 19-21, wherein said digital video data is compressed according to MPEG-2 in said first stage and said digital video data is compressed according to MPEG-4 in said second stage.
23. The system of claim 22, wherein said digital video data is compressed according to half-D1 for MPEG-2.
24. The system of any of claims 19-23, wherein said digital video data is compressed according to said at least one parameter, such that said at least one parameter determines said at least one characteristic of said compressed video data.
PCT/IL2001/000509 2001-06-03 2001-06-03 System and method for rapid video compression WO2002100112A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IL2001/000509 WO2002100112A1 (en) 2001-06-03 2001-06-03 System and method for rapid video compression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IL2001/000509 WO2002100112A1 (en) 2001-06-03 2001-06-03 System and method for rapid video compression

Publications (1)

Publication Number Publication Date
WO2002100112A1 true WO2002100112A1 (en) 2002-12-12

Family

ID=11043055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2001/000509 WO2002100112A1 (en) 2001-06-03 2001-06-03 System and method for rapid video compression

Country Status (1)

Country Link
WO (1) WO2002100112A1 (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5491797A (en) * 1992-11-30 1996-02-13 Qwest Communications Schedulable automatically configured video conferencing system
EP0700214A2 (en) * 1994-08-30 1996-03-06 Hughes Aircraft Company Two stage video compression method and system
WO1997039584A1 (en) * 1996-04-12 1997-10-23 Imedia Corporation Video transcoder
EP0805592A2 (en) * 1996-05-03 1997-11-05 Intel Corporation Video transcoding with interim encoding format
EP0823822A2 (en) * 1996-08-05 1998-02-11 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus
US5819004A (en) * 1995-05-08 1998-10-06 Kabushiki Kaisha Toshiba Method and system for a user to manually alter the quality of previously encoded video frames
EP0907287A2 (en) * 1997-10-01 1999-04-07 Matsushita Electric Industrial Co., Ltd. Conversion of DV format encoded video data into MPEG format
WO1999018728A1 (en) * 1997-10-02 1999-04-15 General Datacomm, Inc. Interconnecting multimedia data streams having different compressed formats
EP1032217A2 (en) * 1999-01-07 2000-08-30 General Instrument Corporation Multi-functional transcoder for compressed bit stream
US6119178A (en) * 1997-11-25 2000-09-12 8×8 Inc. Communication interface between remote transmission of both compressed video and other data and data exchange with local peripherals
WO2001095633A2 (en) * 2000-06-09 2001-12-13 General Instrument Corporation Video size conversion and transcoding from mpeg-2 to mpeg-4


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ABDEL-MOTTALEB M ET AL: "Aspects of multimedia retrieval", PHILIPS JOURNAL OF RESEARCH, ELSEVIER, AMSTERDAM, NL, vol. 50, no. 1, 1996, pages 227 - 251, XP004008214, ISSN: 0165-5817 *
GAUCH S ET AL: "The vision digital video library", INFORMATION PROCESSING & MANAGEMENT, ELSEVIER, BARKING, GB, vol. 33, no. 4, 1 July 1997 (1997-07-01), pages 413 - 426, XP004087986, ISSN: 0306-4573 *
WEE S J ET AL: "Field-to-frame transcoding with spatial and temporal downsampling", IMAGE PROCESSING, 1999. ICIP 99. PROCEEDINGS. 1999 INTERNATIONAL CONFERENCE ON KOBE, JAPAN 24-28 OCT. 1999, PISCATAWAY, NJ, USA,IEEE, US, 24 October 1999 (1999-10-24), pages 271 - 275, XP010368707, ISBN: 0-7803-5467-2 *

Cited By (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL1024605C2 (en) * 2003-10-23 2005-04-27 Guillaume Valentine Fl Defosse Method and device for compressing video data.
WO2005041584A2 (en) * 2003-10-23 2005-05-06 Defosse Guillaume Method and device for compressing video data
WO2005041584A3 (en) * 2003-10-23 2005-06-30 Guillaume Defosse Method and device for compressing video data
US8156403B2 (en) 2006-05-12 2012-04-10 Anobit Technologies Ltd. Combined distortion estimation and error correction coding for memory devices
US8239735B2 (en) 2006-05-12 2012-08-07 Apple Inc. Memory Device with adaptive capacity
US8050086B2 (en) 2006-05-12 2011-11-01 Anobit Technologies Ltd. Distortion estimation and cancellation in memory devices
US8599611B2 (en) 2006-05-12 2013-12-03 Apple Inc. Distortion estimation and cancellation in memory devices
US8570804B2 (en) 2006-05-12 2013-10-29 Apple Inc. Distortion estimation and cancellation in memory devices
US8060806B2 (en) 2006-08-27 2011-11-15 Anobit Technologies Ltd. Estimation of non-linear distortion in memory devices
US7975192B2 (en) 2006-10-30 2011-07-05 Anobit Technologies Ltd. Reading memory cells using multiple thresholds
USRE46346E1 (en) 2006-10-30 2017-03-21 Apple Inc. Reading memory cells using multiple thresholds
US8145984B2 (en) 2006-10-30 2012-03-27 Anobit Technologies Ltd. Reading memory cells using multiple thresholds
US7924648B2 (en) 2006-11-28 2011-04-12 Anobit Technologies Ltd. Memory power and performance management
US8151163B2 (en) 2006-12-03 2012-04-03 Anobit Technologies Ltd. Automatic defect management in memory devices
US7900102B2 (en) 2006-12-17 2011-03-01 Anobit Technologies Ltd. High-speed programming of memory devices
US7881107B2 (en) 2007-01-24 2011-02-01 Anobit Technologies Ltd. Memory device with negative thresholds
US8151166B2 (en) 2007-01-24 2012-04-03 Anobit Technologies Ltd. Reduction of back pattern dependency effects in memory devices
US8369141B2 (en) 2007-03-12 2013-02-05 Apple Inc. Adaptive estimation of memory cell read thresholds
US8001320B2 (en) 2007-04-22 2011-08-16 Anobit Technologies Ltd. Command interface for memory devices
US8234545B2 (en) 2007-05-12 2012-07-31 Apple Inc. Data storage with incremental redundancy
US8429493 (en) 2007-05-12 2013-04-23 Apple Inc. Memory device with internal signal processing unit
US7925936B1 (en) 2007-07-13 2011-04-12 Anobit Technologies Ltd. Memory device with non-uniform programming levels
US8259497B2 (en) 2007-08-06 2012-09-04 Apple Inc. Programming schemes for multi-level analog memory cells
US8174905B2 (en) 2007-09-19 2012-05-08 Anobit Technologies Ltd. Programming orders for reducing distortion in arrays of multi-level analog memory cells
US8000141B1 (en) 2007-10-19 2011-08-16 Anobit Technologies Ltd. Compensation for voltage drifts in analog memory cells
US8068360B2 (en) 2007-10-19 2011-11-29 Anobit Technologies Ltd. Reading analog memory cells using built-in multi-threshold commands
US8527819B2 (en) 2007-10-19 2013-09-03 Apple Inc. Data storage in analog memory cell arrays having erase failures
US8270246B2 (en) 2007-11-13 2012-09-18 Apple Inc. Optimized selection of memory chips in multi-chips memory devices
US8225181B2 (en) 2007-11-30 2012-07-17 Apple Inc. Efficient re-read operations from memory devices
US8209588B2 (en) 2007-12-12 2012-06-26 Anobit Technologies Ltd. Efficient interference cancellation in analog memory cell arrays
US8085586B2 (en) 2007-12-27 2011-12-27 Anobit Technologies Ltd. Wear level estimation in analog memory cells
US8156398B2 (en) 2008-02-05 2012-04-10 Anobit Technologies Ltd. Parameter estimation based on error correction code parity check equations
US7924587B2 (en) 2008-02-21 2011-04-12 Anobit Technologies Ltd. Programming of analog memory cells using a single programming pulse per state transition
US7864573B2 (en) 2008-02-24 2011-01-04 Anobit Technologies Ltd. Programming analog memory cells for reduced variance after retention
US8230300B2 (en) 2008-03-07 2012-07-24 Apple Inc. Efficient readout from analog memory cells using data compression
US8059457B2 (en) 2008-03-18 2011-11-15 Anobit Technologies Ltd. Memory device with multiple-accuracy read commands
US8400858B2 (en) 2008-03-18 2013-03-19 Apple Inc. Memory device with reduced sense time readout
US8498151B1 (en) 2008-08-05 2013-07-30 Apple Inc. Data storage in analog memory cells using modified pass voltages
US7995388B1 (en) 2008-08-05 2011-08-09 Anobit Technologies Ltd. Data storage using modified voltages
US7924613B1 (en) 2008-08-05 2011-04-12 Anobit Technologies Ltd. Data storage in analog memory cells with protection against programming interruption
US8949684B1 (en) 2008-09-02 2015-02-03 Apple Inc. Segmented data storage
US8169825B1 (en) 2008-09-02 2012-05-01 Anobit Technologies Ltd. Reliable data storage in analog memory cells subjected to long retention periods
US8000135B1 (en) 2008-09-14 2011-08-16 Anobit Technologies Ltd. Estimation of memory cell read thresholds by sampling inside programming level distribution intervals
US8482978B1 (en) 2008-09-14 2013-07-09 Apple Inc. Estimation of memory cell read thresholds by sampling inside programming level distribution intervals
US8239734B1 (en) 2008-10-15 2012-08-07 Apple Inc. Efficient data storage in storage device arrays
US8261159B1 (en) 2008-10-30 2012-09-04 Apple, Inc. Data scrambling schemes for memory devices
US8713330B1 (en) 2008-10-30 2014-04-29 Apple Inc. Data scrambling in memory devices
US8774541B2 (en) 2008-11-05 2014-07-08 Sony Corporation Intra prediction with adaptive interpolation filtering for image compression
US8208304B2 (en) 2008-11-16 2012-06-26 Anobit Technologies Ltd. Storage at M bits/cell density in N bits/cell analog memory cell devices, M>N
US8248831B2 (en) 2008-12-31 2012-08-21 Apple Inc. Rejuvenation of analog memory cells
US8397131B1 (en) 2008-12-31 2013-03-12 Apple Inc. Efficient readout schemes for analog memory cell devices
US8174857B1 (en) 2008-12-31 2012-05-08 Anobit Technologies Ltd. Efficient readout schemes for analog memory cell devices using multiple read threshold sets
US8924661B1 (en) 2009-01-18 2014-12-30 Apple Inc. Memory system including a controller and processors associated with memory devices
US8228701B2 (en) 2009-03-01 2012-07-24 Apple Inc. Selective activation of programming schemes in analog memory cell arrays
US8832354B2 (en) 2009-03-25 2014-09-09 Apple Inc. Use of host system resources by memory controller
US8259506B1 (en) 2009-03-25 2012-09-04 Apple Inc. Database of memory read thresholds
US8238157B1 (en) 2009-04-12 2012-08-07 Apple Inc. Selective re-programming of analog memory cells
US8479080B1 (en) 2009-07-12 2013-07-02 Apple Inc. Adaptive over-provisioning in memory systems
US8495465B1 (en) 2009-10-15 2013-07-23 Apple Inc. Error correction coding over multiple memory pages
US8677054B1 (en) 2009-12-16 2014-03-18 Apple Inc. Memory management schemes for non-volatile memory devices
US8694814B1 (en) 2010-01-10 2014-04-08 Apple Inc. Reuse of host hibernation storage space by memory controller
US8677203B1 (en) 2010-01-11 2014-03-18 Apple Inc. Redundant data storage schemes for multi-die memory systems
US8572311B1 (en) 2010-01-11 2013-10-29 Apple Inc. Redundant data storage in multi-die memory systems
US8694853B1 (en) 2010-05-04 2014-04-08 Apple Inc. Read commands for reading interfering memory cells
US8572423B1 (en) 2010-06-22 2013-10-29 Apple Inc. Reducing peak current in memory systems
US8595591B1 (en) 2010-07-11 2013-11-26 Apple Inc. Interference-aware assignment of programming levels in analog memory cells
US9104580B1 (en) 2010-07-27 2015-08-11 Apple Inc. Cache memory for hybrid disk drives
US8645794B1 (en) 2010-07-31 2014-02-04 Apple Inc. Data storage in analog memory cells using a non-integer number of bits per cell
US8767459B1 (en) 2010-07-31 2014-07-01 Apple Inc. Data storage in analog memory cells across word lines using a non-integer number of bits per cell
US8856475B1 (en) 2010-08-01 2014-10-07 Apple Inc. Efficient selection of memory blocks for compaction
US8493781B1 (en) 2010-08-12 2013-07-23 Apple Inc. Interference mitigation using individual word line erasure operations
US8694854B1 (en) 2010-08-17 2014-04-08 Apple Inc. Read threshold setting based on soft readout statistics
US9021181B1 (en) 2010-09-27 2015-04-28 Apple Inc. Memory management for unifying memory cell conditions by using maximum time intervals
US11556416B2 (en) 2021-05-05 2023-01-17 Apple Inc. Controlling memory readout reliability and throughput by adjusting distance between read thresholds
US11847342B2 (en) 2021-07-28 2023-12-19 Apple Inc. Efficient transfer of hard data and confidence levels in reading a nonvolatile memory

Similar Documents

Publication Publication Date Title
WO2002100112A1 (en) System and method for rapid video compression
CN1309219C (en) Singnal main controller using infrared transmission ordor and bus transmission order to control AV device
US6675241B1 (en) Streaming-media input port
US8659638B2 (en) Method applied to endpoint of video conference system and associated endpoint
US20150229992A1 (en) Media Content Shifting
CN101778285B (en) A kind of audio-video signal wireless transmitting system and method thereof
US8931027B2 (en) Media gateway
EP1491054A1 (en) A method and system for a distributed digital video recorder
US20160100218A1 (en) Methods for receiving and sending video to a handheld device
US20110085046A1 (en) Data communication method and system using mobile terminal
WO2000076220A1 (en) System and method for streaming an enhanced digital video file
US6704493B1 (en) Multiple source recording
US7193649B2 (en) Image processing device supporting variable data technologies
US20050151836A1 (en) Video conferencing system
CN103404161A (en) Television signal processing method and device
KR20080086262A (en) Method and apparatus for sharing digital contents, and digital contents sharing system using the method
CN104735410A (en) Narrow bandwidth lower than 4 K/S video transmission method and system
CN114630051A (en) Video processing method and system
US20050039211A1 (en) High-quality, reduced data rate streaming video production and monitoring system
US6583807B2 (en) Videoconference system for wireless network machines and its implementation method
WO2000051129A1 (en) Non-linear multimedia editing system integrated into a television, set-top box or the like
US20060015929A1 (en) USB interface module for TV broadcasting
US20030122964A1 (en) Synchronization network, system and method for synchronizing audio
KR101480402B1 (en) Method and system for providing data from audio/visual source devices to audio/visual sink devices in a network
WO2001063911A2 (en) Non-linear multimedia editing system integrated into a television, set-top box or the like

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP