US20070058842A1 - Storage of video analysis data for real-time alerting and forensic analysis - Google Patents
- Publication number: US20070058842A1
- Application number: US 11/520,532
- Authority: US (United States)
- Prior art keywords: video data, processors, data change, change, frames
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/7335—Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
- G06F16/7837—Retrieval of video data using metadata automatically derived from the content, using objects detected or recognised in the video content
- G06F16/786—Retrieval of video data using metadata automatically derived from low-level visual features of the content, using motion, e.g. object motion or camera motion
- G06F17/40—Data acquisition and logging
- G06T7/20—Analysis of motion
- G06V10/235—Image preprocessing by selection of a specific region containing or referencing a pattern, based on user input or interaction
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19671—Addition of non-video data, i.e. metadata, to video stream
- G08B13/19676—Temporary storage, e.g. cyclic memory, buffer storage on pre-alarm
- G11B27/28—Indexing by using information signals recorded by the same method as the main recording
- G11B27/322—Indexing on separate auxiliary tracks of the same or an auxiliary record carrier; used signal is digitally coded
- H04N21/2353—Processing of additional data specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
- H04N21/45455—Input to filtering algorithms applied to a region of the image
- H04N21/45457—Input to filtering algorithms applied to a time segment
- H04N21/4728—End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
- H04N21/4828—End-user interface for program selection for searching program descriptors
Definitions
- the present invention relates to efficiently storing video data, and more specifically, storing information about visual changes that has been aggregated over a series of frames of the video data.
- Analyzing video streams to determine whether or not any interesting activities or objects are present is a resource-intensive operation.
- Software applications are used to analyze video data, attempting to recognize certain activities or objects in the video data.
- recognition applications exist for recognizing faces, gestures, vehicles, guns, motion, and the like. Often, such applications are used to analyze surveillance video streams for security purposes.
- a software application may be used to detect the particular object, and store data that records that the object was detected.
- the amount of storage space needed to record the detection of those objects is relatively small.
- FIG. 1 is a flow diagram that illustrates how video data may be stored, according to an embodiment of the invention.
- FIG. 2 is a block diagram that illustrates how video data change records may represent varying amounts of video data, according to an embodiment of the invention.
- FIG. 3 is a graphical depiction that illustrates how change information may be stored on a per-region basis, according to an embodiment of the invention.
- FIG. 4 is a block diagram that illustrates how video data change records may store specific and generalized change information, according to an embodiment of the invention.
- FIG. 5 is a block diagram of a computer system on which embodiments of the invention may be implemented.
- an efficient storage mechanism is proposed. Instead of storing information related to changes detected in video data on a frame-by-frame basis, the information is aggregated across all or most of the corresponding frames and stored as a single logical record in a storage system. For example, typical video cameras and display devices operate at approximately 24 video frames per second. If motion is detected within a particular view and the motion lasts for one minute, then instead of storing 1440 different records to represent the motion, the motion information is stored in a single record that represents the 1440 frames corresponding to the motion.
- the searches for visual changes that satisfy certain criteria can also be performed very efficiently. For example, if a user desired to search for a certain type of motion in 1440 frames, then only one record would have to be searched, as opposed to 1440 records.
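As a rough illustration of the storage saving, the per-frame motion detections can be collapsed into one record per contiguous run of motion. The following Python sketch is illustrative only; the names (`MotionRecord`, `aggregate_motion`) are assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class MotionRecord:
    start_frame: int   # first frame of a contiguous run of motion
    end_frame: int     # last frame of that run

def aggregate_motion(motion_flags):
    """Collapse per-frame motion flags into one record per contiguous run."""
    records, start = [], None
    for i, moving in enumerate(motion_flags):
        if moving and start is None:
            start = i
        elif not moving and start is not None:
            records.append(MotionRecord(start, i - 1))
            start = None
    if start is not None:
        records.append(MotionRecord(start, len(motion_flags) - 1))
    return records

# One minute of motion at 24 fps: 1440 frames, but only one record to store
# and only one record to search.
print(aggregate_motion([True] * 1440))
# [MotionRecord(start_frame=0, end_frame=1439)]
```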
- embodiments of the invention described herein are illustrated in the context of video surveillance systems. However, embodiments of the invention are not limited to that context. Embodiments of the invention are also relevant in other non-surveillance contexts, such as searching for certain motion patterns in a series of computer-generated frames.
- FIG. 1 is a flow diagram that illustrates how video data may be stored and used to search for changes detected in the video data, according to an embodiment of the invention.
- At step 102, video data that comprises a series of frames is received.
- At step 104, information about visual changes that are detected in the series of frames is generated.
- At step 106, the generated information is aggregated to generate a plurality of video data change records (VDCRs).
- At step 108, events of interest that satisfy specified search criteria are searched for by comparing the specified search criteria against change information in one or more of the plurality of video data change records.
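The four steps of FIG. 1 can be sketched end to end with toy stand-ins. The function names mirror the step numbers and are not an actual API; the aggregation here assumes a single contiguous run of changed frames.

```python
def receive_video():                       # step 102: receive frames
    return [{"motion": f in range(10, 30)} for f in range(100)]

def detect_changes(frames):                # step 104: generate change info
    return [i for i, f in enumerate(frames) if f["motion"]]

def aggregate(changed_frames):             # step 106: aggregate into VDCRs
    if not changed_frames:
        return []
    return [{"start": changed_frames[0], "end": changed_frames[-1]}]

def search(vdcrs, min_frames):             # step 108: compare criteria
    return [v for v in vdcrs if v["end"] - v["start"] + 1 >= min_frames]

vdcrs = aggregate(detect_changes(receive_video()))
print(search(vdcrs, min_frames=15))  # [{'start': 10, 'end': 29}]
```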
- changes detected in the same sequence of video frames may be aggregated at multiple levels of granularity.
- the changes may be aggregated at the entire-view level, the quadrant level, and the grid-point level.
- the changes may be aggregated at per-week, per-day, per-hour, per-minute and per-second levels of granularity, or at variable time intervals that depend on other criteria.
- a “video data change record” is a logical composition of one or more fields, items, attributes, and/or objects.
- a VDCR corresponds to a plurality of frames and includes change information (described below).
- a VDCR corresponds to a particular level of temporal and spatial granularity.
- a VDCR may contain information about one or more events that were detected in the video data within the spatial/temporal space associated with the VDCR.
- VDCRs may also store change information pertaining to other events that do not appear in the frames that correspond to the VDCR. For example, a VDCR may store information indicating that an audible alarm began ringing during the time interval associated with the VDCR, even though there is no indication of the alarm within the video stream itself.
- a VDCR may also include, but is not limited to, (a) a start time of when the first frame in the plurality of frames was captured, (b) an end time of when the last frame in the plurality of frames was captured, (c) a time duration indicating the difference between the start time and the end time, (d) type data indicating whether the change corresponds to a detection of motion or only a pixel change, (e) shape data indicating a shape (e.g., person, car) of a moving object that triggered the VDCR, (f) behavior data indicating a behavior (e.g., walking, running, driving) of a moving object that triggered the VDCR, and (g) an indication of whether the VDCR corresponds to an event or a specified time interval.
- a VDCR may also contain a reference to the actual video data that corresponds to the plurality of frames of the VDCR in order to enable a user of the storage system to view the corresponding video data. If a VDCR contains a start time, then the start time may be used as the reference.
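The fields enumerated above might be modeled as follows. The patent defines the VDCR only as a logical composition, so the concrete field names and types in this sketch are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VDCR:
    start_time: float                  # (a) capture time of the first frame
    end_time: float                    # (b) capture time of the last frame
    change_type: str                   # (d) "motion" or "pixel_change"
    shape: Optional[str] = None        # (e) e.g. "person", "car"
    behavior: Optional[str] = None     # (f) e.g. "walking", "driving"
    is_event: bool = True              # (g) event vs. fixed time interval
    video_ref: Optional[float] = None  # reference into the stored video
                                       # (the start time may serve as this)

    @property
    def duration(self) -> float:       # (c) derived from start and end
        return self.end_time - self.start_time

r = VDCR(start_time=0.0, end_time=60.0, change_type="motion",
         shape="person", behavior="walking", video_ref=0.0)
print(r.duration)  # 60.0
```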
- “Change information” is information that indicates visual changes that are detected in the temporal/spatial interval associated with a VDCR.
- the change information for changes associated with a VDCR is stored in the VDCR.
- Change information may indicate motion that is detected in the plurality of frames and/or a change in pixel values that is detected in the plurality of frames, such as brightness and hue.
- a pixel change may result from the shadow of a person that enters and leaves a view represented by the frames.
- a pixel change may also result from a light bulb turning on or off that affects the brightness of objects in the frames.
- the last frame in an event may appear as an exact duplicate of the first frame of the event.
- the change information may indicate the greatest amount of change. For example, if the light bulb mentioned above went out and then back on and the possible pixel values range from 0-100, the change information may indicate 100 instead of zero.
- the change information may further indicate all directions and/or speeds of the motion. For example, within a particular view, an object may move right, left, up, and down. Thus, the change information may indicate all directions. As another example, if the object moved at five different speeds in a certain direction, then the change information may indicate the largest speed.
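The aggregation rules described above (greatest pixel change, every direction seen, largest speed) could be folded over per-frame observations as in this sketch; the observation format is an assumption.

```python
def aggregate_change_info(observations):
    """Summarize per-frame observations, keeping the greatest pixel change,
    the union of all observed directions, and the largest speed."""
    summary = {"max_pixel_change": 0, "directions": set(), "max_speed": 0.0}
    for obs in observations:
        summary["max_pixel_change"] = max(summary["max_pixel_change"],
                                          obs.get("pixel_change", 0))
        summary["directions"] |= set(obs.get("directions", ()))
        summary["max_speed"] = max(summary["max_speed"], obs.get("speed", 0.0))
    return summary

# A light going out (change 100) and back on (change 100 again) is recorded
# as 100, not as the net change of zero.
obs = [{"pixel_change": 100}, {"pixel_change": 100},
       {"directions": ["left", "up"], "speed": 3.0},
       {"directions": ["right"], "speed": 5.0}]
print(aggregate_change_info(obs))
```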
- any method for detecting and calculating visual changes may be used.
- embodiments of the invention are not limited to any particular method.
- Change information may further include information on a per-region basis.
- a “region” is a portion of a two-dimensional view (e.g., captured by a video camera) of the video data.
- the view may be divided into multiple uniform regions, such as in a grid layout.
- a region may be of any arbitrary size and shape.
- change information may include motion and/or pixel change information for each specified region of the view for the duration of the plurality of frames that corresponds to the change information.
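Per-region change information over a uniform grid layout might be accumulated as sketched below; the view size, grid dimensions, and indexing scheme are assumptions for illustration.

```python
def region_of(x, y, view_w, view_h, cols, rows):
    """Map a pixel coordinate to a (col, row) grid region."""
    return (min(x * cols // view_w, cols - 1),
            min(y * rows // view_h, rows - 1))

def per_region_motion(points, view_w=640, view_h=480, cols=10, rows=10):
    """Count motion detections per region across the covered frames."""
    counts = {}
    for x, y in points:
        r = region_of(x, y, view_w, view_h, cols, rows)
        counts[r] = counts.get(r, 0) + 1
    return counts

# Two detections near the top-left corner, one near the bottom-right.
print(per_region_motion([(5, 5), (5, 6), (630, 470)]))
# {(0, 0): 2, (9, 9): 1}
```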
- An “event” is generally associated with a visual change detected in video data.
- an event may correspond to the detection of a person walking in a region of the view.
- the duration of the event is typically the length of time that the visual change occurs. Once no more visual change is detected, then the event may end.
- An event may be initiated, not only on the detection of visual changes within a view, but also upon the occurrence of an external event.
- an event may be triggered by a fire alarm. Once the fire alarm is detected, the frames of video data from that point on are used to generate a VDCR that represents the event. The event may end, for example, when the fire alarm ends or when an administrator of a video surveillance system indicates that the event is completed.
- a VDCR may correspond to a specified time interval instead of to an event. For example, regardless of whether a visual change is detected, a VDCR may be generated for each 5-minute interval after every hour. As another example, a VDCR may be generated for each 24-hour period.
- a VDCR may be generated from other VDCRs and not necessarily from the video data itself. For example, if a VDCR is generated for each one-hour period of each day, then a “day” VDCR may be generated directly from the twenty-four “hour” VDCRs that correspond to that day. Similarly, a “week” VDCR may be generated from seven “day” VDCRs, and so forth.
- a view-level VDCR may be generated based on the change information in the corresponding quadrant VDCRs, and the quadrant VDCRs may be generated based on the change information in the corresponding region VDCRs.
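Generating coarser VDCRs from finer ones, e.g. a "day" VDCR from twenty-four "hour" VDCRs, might look like the following sketch; the VDCR fields and the choice of aggregation functions are assumptions.

```python
def roll_up(vdcrs):
    """Combine several finer-grained VDCRs into one coarser VDCR."""
    return {
        "start_time": min(v["start_time"] for v in vdcrs),
        "end_time": max(v["end_time"] for v in vdcrs),
        "max_pixel_change": max(v["max_pixel_change"] for v in vdcrs),
        "directions": set().union(*(v["directions"] for v in vdcrs)),
    }

hours = [{"start_time": h * 3600, "end_time": (h + 1) * 3600,
          "max_pixel_change": h % 7,
          "directions": {"left"} if h % 2 else {"right"}}
         for h in range(24)]
day = roll_up(hours)        # a "day" VDCR built from 24 "hour" VDCRs
week = roll_up([day] * 7)   # the same operation works one level up
print(day["max_pixel_change"])  # 6
```

The same `roll_up` shape applies spatially: a quadrant-level VDCR could be rolled up from its region-level VDCRs, and a view-level VDCR from its quadrant-level VDCRs.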
- a VDCR may correspond to thousands or millions of frames of video data.
- the storage space required to store visual changes is much smaller than is required otherwise (e.g., storing a VDCR for each two-frame sequence where motion is detected).
- FIG. 2 is a block diagram that illustrates how video data change records may represent varying amounts of video data, according to an embodiment of the invention.
- Video data 201 comprises a series of frames. Each block of video data 201 may represent, for example, 100 frames.
- VDCR 202 represents 400 frames and VDCR 206 represents 600 frames.
- VDCRs 202-226 comprise two sets: a VDCR set 230 and a VDCR set 240.
- Each VDCR in VDCR set 230 (i.e., VDCRs 202-214) is generated for each event.
- VDCR 204 represents an event that lasted 600 frames
- VDCR 206 represents an event that lasted 200 frames.
- Each VDCR in VDCR set 240 (i.e., VDCRs 222-226) is generated for each hour.
- VDCR 222 may represent hour #1,
- VDCR 224 may represent hour #2, and
- VDCR 226 may represent hour #3. Therefore, a specific period of time may be represented by multiple VDCRs, each of which represents a different period of time.
- VDCR 222 contains references to VDCRs 202 and 204 and VDCRs 202 and 204 each contain a reference to VDCR 222 .
- VDCR 226 contains references to VDCRs 210 - 214 and VDCRs 210 - 214 each contain a reference to VDCR 226 .
- VDCRs may be stored on disk in specified tables.
- Each table may correspond to VDCRs of a certain type.
- each table of a plurality of tables may comprise VDCRs that correspond to a specified time interval (e.g., day table, week table, month table, etc.).
- each table of a plurality of tables may comprise VDCRs that correspond to certain time frames (e.g., all VDCRs generated in January, 2006, or all VDCRs generated during week #51, etc.).
- Embodiments of the invention are not limited to how VDCRs are organized on disk.
- a two-dimensional view of video data may be divided into multiple regions.
- a region may or may not be convex.
- Multiple regions within a view may be of different sizes and shapes.
- the change information of a particular region may indicate the amount of change in pixel values within that particular region.
- the change information of a particular region may indicate the direction and/or velocity of the detected motion within that particular region. For example, suppose a ball is thrown through the view of a camera and a VDCR is generated for the few frames that captured the event. The change information in every region of the view through which the ball traveled will indicate that motion was detected in that region, and may indicate the direction and the velocity of the ball.
- FIG. 3 is a graphical depiction that illustrates how change information may be stored on a per-region basis, according to an embodiment of the invention.
- a camera view is divided into multiple, non-overlapping rectangular regions.
- Each region in FIG. 3 indicates one or more directions of motion.
- region 302 indicates that at least four directions of motion have been detected for that particular region.
- not every region must specify a direction and/or speed, such as region 304.
- the change information for such a region may be empty or include some indicator indicating zero direction and/or speed.
- change information may be kept, not only in per-region VDCRs, but also in multi-region VDCRs. For example, if there are 100 regions within a view, and change information is maintained for each region for a given region-level VDCR, then view-level VDCRs may indicate change information for the entire view, quadrant-level VDCRs may indicate change information for each quadrant of the view, etc.
- the change information is abstracted to the level of the VDCR.
- change information for a view-level VDCR may contain 1/100th the information contained in the corresponding 100 region-level VDCRs.
- the change information for a single quadrant-level VDCR may contain 1/25th the information contained in the twenty-five region-level VDCRs that correspond to the quadrant.
- VDCRs are logical pieces of information, and do not necessarily correspond to distinct records within a repository.
- a single record or data structure within a repository may include a view-level VDCR, its corresponding quadrant-level VDCRs, and its corresponding region-level VDCRs.
- a single record or data structure may be used to store change information aggregated at the week level, the day level, the hour level, and the minute level. Data structures that store change data that has been aggregated at multiple levels of granularity are referred to herein as composite VDCRs.
- the nature of the repository used to store VDCRs may vary from implementation to implementation.
- the techniques described herein are not limited to any particular type of repository.
- the VDCRs may be stored in a multi-dimensional database, a relational database, or an object-relational database.
- separate relational tables are used to store VDCRs at different levels of granularity.
- one relational table may have rows that correspond to view-level VDCRs, while another relational table may have rows that correspond to region-level VDCRs.
- indexes may be used to efficiently locate the region-level VDCRs that correspond to a particular view-level VDCR.
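One possible concrete layout, assuming a relational store: a table per granularity level, plus an index on the view-level foreign key so the region-level rows for a given view-level VDCR can be located efficiently. The schema is an illustrative assumption (shown with SQLite for a self-contained demonstration), not the patent's own.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE view_vdcr (
                id INTEGER PRIMARY KEY, start_time REAL, end_time REAL)""")
db.execute("""CREATE TABLE region_vdcr (
                id INTEGER PRIMARY KEY, view_id INTEGER,
                col INTEGER, row INTEGER, max_speed REAL)""")
# Index that maps a view-level VDCR to its region-level VDCRs.
db.execute("CREATE INDEX idx_region_view ON region_vdcr(view_id)")

db.execute("INSERT INTO view_vdcr VALUES (1, 0.0, 3600.0)")
db.executemany("INSERT INTO region_vdcr VALUES (?, 1, ?, ?, ?)",
               [(i, i % 10, i // 10, float(i)) for i in range(100)])

# Locate the region-level VDCRs of view-level VDCR 1 with fast motion.
rows = db.execute("""SELECT col, row FROM region_vdcr
                     WHERE view_id = 1 AND max_speed > 95""").fetchall()
print(rows)  # [(6, 9), (7, 9), (8, 9), (9, 9)]
```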
- FIG. 4 is a block diagram that illustrates how composite video data change records may store specific and generalized change information, according to an embodiment of the invention.
- Video data 401 comprises a series of frames. Each block of video data 401 may represent any number of frames, such as one hundred. Thus, composite VDCR 402 may represent four hundred frames.
- the change information of VDCR 402 may be represented on a per-region basis, a per-quadrant basis, a per-view basis, and/or any other basis.
- VDCR 402 comprises view data 403 , quadrant data 404 , and region data 405 .
- VDCR 402 may comprise other information such as whether any visual change was detected in the frames that correspond to VDCR 402 and the type of visual change (e.g., pixel change, motion).
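A composite VDCR like VDCR 402, holding view data, quadrant data, and region data in one structure, might be laid out as follows; the dictionary layout, quadrant names, and grid size are assumptions.

```python
# One composite record covering four hundred frames at three granularities.
composite_vdcr = {
    "frames": (0, 399),                                  # frames represented
    "view": {"motion": True, "max_pixel_change": 80},    # view data
    "quadrants": {q: {"motion": q == "upper_left"}       # quadrant data
                  for q in ("upper_left", "upper_right",
                            "lower_left", "lower_right")},
    "regions": {(c, r): {"motion": False}                # region data
                for c in range(10) for r in range(10)},
}
# Mark motion in a single region; the coarser levels summarize it.
composite_vdcr["regions"][(1, 2)]["motion"] = True
print(sum(cell["motion"] for cell in composite_vdcr["regions"].values()))  # 1
```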
- search criteria may be specified in which all incoming video data is analyzed to determine whether any detected visual changes satisfy the search criteria.
- an alert may be triggered on the precise frame that the change information (accumulated thus far) first satisfied the search criteria.
- the accumulated change information is stored (in the manner described above) in a VDCR so that future searches with similar search criteria may return that VDCR.
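The real-time alerting behavior described above, accumulating change information frame by frame and firing on the precise frame the criteria are first satisfied, can be sketched as follows; the speed-based criterion is an assumed example.

```python
def first_alert_frame(frame_speeds, speed_threshold):
    """Return the index of the first frame at which the accumulated maximum
    speed exceeds the threshold, or None if the criteria are never met."""
    max_speed = 0.0
    for i, speed in enumerate(frame_speeds):
        max_speed = max(max_speed, speed)   # accumulated change information
        if max_speed > speed_threshold:
            return i                        # alert on this precise frame
    return None

print(first_alert_frame([2.0, 4.0, 21.5, 3.0], speed_threshold=20.0))  # 2
```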
- several levels of filtering may be performed for various reasons, some of which may include (1) reducing noise that may be generated when generating change information and (2) determining dominant motion areas and velocities within a scene.
- the change information of a particular VDCR may be filtered across adjacent regions within a frame, or across frames, of the corresponding plurality of frames and adjacent frames within the corresponding plurality of frames, using various methods that include, but are not limited to, smoothing filters, median filters, and multi-dimensional clustering algorithms.
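One of the filters mentioned, a median filter over adjacent regions, might be applied as follows to suppress isolated noisy regions while spatially coherent motion survives; the 3x3 neighborhood is an assumed choice.

```python
from statistics import median

def median_filter(grid):
    """Replace each cell's change value with the median of its 3x3
    neighborhood (clipped at the grid edges)."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            neigh = [grid[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = median(neigh)
    return out

# A lone spike (likely noise) is suppressed by its all-zero neighborhood.
noisy = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(median_filter(noisy))
```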
- the generated VDCRs facilitate fast searches across multiple events and specified time intervals. Because searches are executed against change information, as described above (which may be thought of as meta-data about visual changes), rather than the entire video data itself, the searches may be performed much faster than if a user was required to search through the entire video data or search on a frame-by-frame basis of each detected change.
- A user may specify search criteria that are compared against each VDCR. For example, a user may search for all VDCRs that indicate any motion, where the motion is more than 20 mph. As another example, a user may search for any VDCR that indicates a pixel change in the lower left quadrant amounting to a 50% change in brightness.
- Indexes may be generated in order to facilitate faster searches. Such indexes may be based on time, the type of visual change, the speed of a motion, the direction of a motion, etc.
- The search criteria of a particular search may include (1) multiple ranges of time, (2) the speed of motion in some regions, (3) the direction of motion in other regions, (4) an amount of pixel change in still other regions, (5) the shape and type of behavior of multiple detected objects, etc.
- The number of possible combinations of search criteria is virtually unlimited.
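The record-by-record comparison described above can be sketched as follows. The dictionary layout and field names ("motion", "max_speed_mph", "quadrant_brightness_change") are illustrative assumptions, since the patent describes VDCRs only logically:

```python
# Sketch of comparing search criteria against stored VDCRs. The field
# names are hypothetical, not a layout prescribed by the patent.

def matches(vdcr, predicates):
    """Return True if the VDCR satisfies every predicate."""
    return all(pred(vdcr) for pred in predicates)

vdcrs = [
    {"motion": True, "max_speed_mph": 35, "quadrant_brightness_change": {"lower_left": 10}},
    {"motion": True, "max_speed_mph": 12, "quadrant_brightness_change": {"lower_left": 55}},
    {"motion": False, "max_speed_mph": 0, "quadrant_brightness_change": {"lower_left": 0}},
]

# "All VDCRs that indicate any motion, where the motion is more than 20 mph."
fast_motion = [v for v in vdcrs
               if matches(v, [lambda v: v["motion"],
                              lambda v: v["max_speed_mph"] > 20])]

# "A pixel change in the lower left quadrant of a 50% change in brightness."
bright_change = [v for v in vdcrs
                 if matches(v, [lambda v: v["quadrant_brightness_change"]["lower_left"] >= 50])]

print(len(fast_motion), len(bright_change))  # 1 1
```

Because each predicate inspects a small record rather than frames of video, adding or combining criteria is cheap.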
- Change information that is generated from video data may be aggregated at different levels of spatial granularity.
- For example, the change information stored for a particular time period may include (1) view-level VDCRs that indicate change information relative to the entire view, (2) quadrant-level VDCRs that indicate change information for each of four quadrants of the view, and (3) region-level VDCRs that indicate change information for each of a thousand grid regions within the view.
- The search mechanism may make use of these different levels of granularity to improve search performance.
- For example, suppose a view is divided into one hundred non-overlapping regions.
- Suppose further that a user is searching for motion events that occurred over a particular week, and that a million region-level VDCRs have been generated for each region during that week.
- Suppose also that the search criteria include that a specified type of motion occurred within each of twenty-four specified regions of the view.
- If the entire search is performed at the region level of granularity, then twenty-four million region-level VDCRs will have to be inspected during the search.
- Instead, a multi-level search may be performed. Specifically, during the first phase of the multi-level search, each of a million view-level VDCRs may be inspected to find those view-level VDCRs that indicate that the specified motion occurred anywhere within the view. The determination may be based on the view-level change information in each view-level VDCR.
- The view-level change information of a view-level VDCR indicates whether motion was detected anywhere in the entire view during the frames associated with the view-level VDCR.
- Thus, the first-level search will involve one million comparisons (one for each view-level VDCR). For the purpose of explanation, assume that 50,000 view-level VDCRs matched the first-level search.
- In the second phase, quadrant-level VDCRs are inspected. However, rather than inspecting all 4 million of the quadrant-level VDCRs, only the quadrant-level VDCRs that correspond to the 50,000 matching view-level VDCRs are searched in the second-level search. Further, if the 24 regions specified in the search criteria fall within only two of the four quadrants, then the second-level search need only involve the quadrant-level VDCRs associated with those two quadrants. Thus, the second phase of the search will involve no more than 100,000 quadrant-level VDCRs.
- Each quadrant-level VDCR includes quadrant-level data that indicates whether motion was detected in any portion of the corresponding quadrant. For the purpose of explanation, assume that, based on the quadrant-level VDCRs, only 10,000 of the 50,000 view-level VDCRs included motion in those two quadrants.
- In the third phase, a region-level search is performed against the region-level VDCRs that correspond to the 10,000 view-level VDCRs.
- At most, 24 region-level VDCRs need to be inspected for each of the 10,000 view-level VDCRs.
- Thus, the number of region-level comparisons performed during the third-level search (240,000 in the present example) will typically be far fewer than the number of comparisons (24 million) that would have been performed if all searching were done at the region level of granularity.
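The three-phase narrowing above can be sketched with small numbers in place of the millions in the example. The composite-record layout (a view-level flag, four quadrant-level flags, and a set of motion regions) is an illustrative assumption:

```python
# Sketch of the coarse-to-fine multi-level search described above. Each
# composite record carries a view-level motion flag, four quadrant-level
# flags, and the set of regions (0-99) in which motion was detected.

def multi_level_search(composites, target_regions):
    """Find composites with motion in every one of the target regions."""
    # Phase 1: view level -- discard records with no motion anywhere.
    hits = [c for c in composites if c["view_motion"]]
    # Phase 2: quadrant level -- only quadrants containing target regions.
    quadrants = {r // 25 for r in target_regions}  # 100 regions, 4 quadrants
    hits = [c for c in hits
            if all(c["quadrant_motion"][q] for q in quadrants)]
    # Phase 3: region level -- the expensive check, now on few records.
    return [c for c in hits if target_regions <= c["motion_regions"]]

composites = [
    {"view_motion": False, "quadrant_motion": [False] * 4, "motion_regions": set()},
    {"view_motion": True, "quadrant_motion": [True, False, False, False], "motion_regions": {3}},
    {"view_motion": True, "quadrant_motion": [True, True, False, False], "motion_regions": {3, 30}},
]
found = multi_level_search(composites, target_regions={3, 30})
print(len(found))  # 1
```

Each phase only inspects records that survived the previous, coarser phase, which is the source of the 24 million to 240,000 reduction in the example above.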
- A search may also be separated into a multi-level search according to time. For example, suppose a user wants to find motion events that occurred between the hours of 1:00 AM and 5:00 AM during the past week. Further suppose that an hour-level VDCR exists for each hour and a day-level VDCR exists for each day. Thus, in the first search level, each day-level VDCR of the past week is examined to determine whether motion was detected in the corresponding day. In the second search level, each hour-level VDCR that is associated with a day-level VDCR that was identified in the first search level is examined to determine whether motion was detected in the corresponding hour.
- One level of a multi-level search may be performed based on time and another level of the multi-level search may be performed based on areas of the view. For example, suppose the search criteria specify motion that occurred within a certain area of a view between the hours of 1:00 AM and 5:00 AM during the past week. Thus, the first two levels of the search may be used to identify all hour-level/view-level VDCRs of the past week between 1:00 AM and 5:00 AM. Subsequent levels of the search may be used to identify all hour-level/region-level VDCRs with change information that indicates the specified motion in the specified area.
- Users may specify the search criteria for each level of a multi-level search.
- Alternatively, multi-level searches may be performed automatically and transparently to the user, beginning at relatively coarse temporal/spatial granularities and ending at the granularity of the search criteria that was specified by the user.
- For example, a single set of search criteria may be automatically divided (e.g., by a query compiler) into one or more general searches and one specific search. Any mechanism for dividing search criteria into a multi-level query may be used. Embodiments of the invention are not limited to any specific mechanism.
- FIG. 5 is a block diagram that illustrates a computer system 500 upon which an embodiment of the invention may be implemented.
- Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with bus 502 for processing information.
- Computer system 500 also includes a main memory 506 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 502 for storing information and instructions to be executed by processor 504 .
- Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504 .
- Computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504 .
- A storage device 510, such as a magnetic disk or optical disk, is provided and coupled to bus 502 for storing information and instructions.
- Computer system 500 may be coupled via bus 502 to a display 512 , such as a cathode ray tube (CRT), for displaying information to a computer user.
- An input device 514 is coupled to bus 502 for communicating information and command selections to processor 504 .
- Another type of user input device is cursor control 516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512.
- This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
- the invention is related to the use of computer system 500 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 506 . Such instructions may be read into main memory 506 from another machine-readable medium, such as storage device 510 . Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
- The term “machine-readable medium” refers to any medium that participates in providing data that causes a machine to operate in a specific fashion.
- various machine-readable media are involved, for example, in providing instructions to processor 504 for execution.
- Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
- Non-volatile media includes, for example, optical or magnetic disks, such as storage device 510 .
- Volatile media includes dynamic memory, such as main memory 506 .
- Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 502 .
- Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. All such media must be tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine.
- Machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
- Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 504 for execution.
- the instructions may initially be carried on a magnetic disk of a remote computer.
- the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
- a modem local to computer system 500 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
- An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 502 .
- Bus 502 carries the data to main memory 506 , from which processor 504 retrieves and executes the instructions.
- the instructions received by main memory 506 may optionally be stored on storage device 510 either before or after execution by processor 504 .
- Computer system 500 also includes a communication interface 518 coupled to bus 502 .
- Communication interface 518 provides a two-way data communication coupling to a network link 520 that is connected to a local network 522 .
- communication interface 518 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.
- communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
- Wireless links may also be implemented.
- communication interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
- Network link 520 typically provides data communication through one or more networks to other data devices.
- network link 520 may provide a connection through local network 522 to a host computer 524 or to data equipment operated by an Internet Service Provider (ISP) 526 .
- ISP 526 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 528 .
- Internet 528 uses electrical, electromagnetic or optical signals that carry digital data streams.
- the signals through the various networks and the signals on network link 520 and through communication interface 518 which carry the digital data to and from computer system 500 , are exemplary forms of carrier waves transporting the information.
- Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518 .
- a server 530 might transmit a requested code for an application program through Internet 528 , ISP 526 , local network 522 and communication interface 518 .
- the received code may be executed by processor 504 as it is received, and/or stored in storage device 510 , or other non-volatile storage for later execution. In this manner, computer system 500 may obtain application code in the form of a carrier wave.
Description
- This application claims priority to U.S. Provisional Patent Application Ser. No. 60/716,729 filed Sep. 12, 2005, the contents of which are incorporated herein in their entirety for all purposes.
- The present invention relates to efficiently storing video data, and more specifically, storing information about visual changes that has been aggregated over a series of frames of the video data.
- Analyzing video streams to determine whether or not any interesting activities or objects are present is a resource-intensive operation. Software applications are used to analyze video data, attempting to recognize certain activities or objects in the video data. For example, recognition applications exist for recognizing faces, gestures, vehicles, guns, motion, and the like. Often, such applications are used to analyze surveillance video streams for security purposes.
- If a user is interested in whether a particular object (e.g. face or gun) appears in a video stream, a software application may be used to detect the particular object, and store data that records that the object was detected. Typically, the amount of storage space needed to record the detection of those objects is relatively small. However, under some circumstances, one may not know ahead-of-time what events of interest will occur in a video stream. In such cases, one could theoretically try to detect and capture all possible changes that occur within the video stream. However, doing so would require a prohibitively large amount of storage space. Not only would storage capacity issues arise from storing all possible change information, but it would be difficult to perform searches against such a vast amount of information.
- Due to the impracticality of such an all-changes storage technique, current approaches for scanning for suspicious behavior captured in video necessarily employ human involvement. Not only is significant human involvement prohibitively expensive (especially for small to mid-size businesses), but people are also prone to error. Watching hours of live or recorded video is extremely fatiguing, and that fatigue may result in missed suspicious activity. Furthermore, a computer may operate continuously, whereas people require sleep and rest.
- Based on the foregoing, there is a need for efficiently storing motion and other change information to reduce the amount of data stored and to increase search speed.
- The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
- The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
-
FIG. 1 is a flow diagram that illustrates how video data may be stored, according to an embodiment of the invention; -
FIG. 2 is a block diagram that illustrates how video data change records may represent varying amounts of video data, according to an embodiment of the invention; -
FIG. 3 is a graphical depiction that illustrates how change information may be stored on a per-region basis, according to an embodiment of the invention; -
FIG. 4 is a block diagram that illustrates how video data change records may store specific and generalized change information, according to an embodiment of the invention; and -
FIG. 5 is a block diagram of a computer system on which embodiments of the invention may be implemented. - In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention.
- In order to (1) store a minimal amount of video data and (2) quickly search video data for certain events, an efficient storage mechanism is proposed. Instead of storing information related to changes detected in video data on a frame-by-frame basis, the information is aggregated across all or most of the corresponding frames and stored as a single logical record in a storage system. For example, typical video cameras and display devices operate at approximately 24 video frames per second. If motion is detected within a particular view and the motion lasts for one minute, then instead of storing 1440 different records to represent the motion, the motion information is stored in a single record that represents the 1440 frames corresponding to the motion.
- Because the amount of information that is stored is relatively small, the searches for visual changes that satisfy certain criteria can also be performed very efficiently. For example, if a user desired to search for a certain type of motion in 1440 frames, then only one record would have to be searched, as opposed to 1440 records.
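The aggregation just described can be sketched as follows, under the simplifying assumption that motion is detected per frame as a boolean flag: one minute of motion at 24 frames per second collapses from 1440 per-frame records into a single (start, end) record.

```python
# Sketch: collapse per-frame motion flags into one record per contiguous
# motion event, instead of one record per frame.

def aggregate_motion(frame_flags):
    """Return (first_frame, last_frame) pairs, one per run of motion."""
    events, start = [], None
    for i, moving in enumerate(frame_flags):
        if moving and start is None:
            start = i                      # a motion event begins
        elif not moving and start is not None:
            events.append((start, i - 1))  # the event ends
            start = None
    if start is not None:                  # motion continued to the last frame
        events.append((start, len(frame_flags) - 1))
    return events

# One minute of motion at 24 fps, padded by still frames on either side.
flags = [False] * 100 + [True] * 24 * 60 + [False] * 100
events = aggregate_motion(flags)
print(events)  # [(100, 1539)] -- 1440 frames, one record
```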
- The embodiments of the invention described herein are illustrated in the context of video surveillance systems. However, embodiments of the invention are not limited to that context. Embodiments of the invention are also relevant in other non-surveillance contexts, such as searching for certain motion patterns in a series of computer-generated frames.
- FIG. 1 is a flow diagram that illustrates how video data may be stored and used to search for changes detected in the video data, according to an embodiment of the invention. In step 102, video data that comprises a series of frames is received. In step 104, information about visual changes that are detected in the series of frames is generated. In step 106, the generated information is aggregated to generate a plurality of video data change records (VDCRs). Each video data change record corresponds to a plurality of frames and includes change information that indicates visual changes that were detected relative to the corresponding plurality of frames.
- In step 108, events of interest that satisfy specified search criteria are searched for by comparing the specified search criteria against change information in one or more of the plurality of video data change records. As shall be described in greater detail hereafter, changes detected in the same sequence of video frames may be aggregated at multiple levels of granularity. For example, in the spatial dimension, the changes may be aggregated at the entire-view level, the quadrant level, and the grid-point level. Similarly, in the temporal dimension, the changes may be aggregated at per-week, per-day, per-hour, per-minute and per-second levels of granularity, or at variable time intervals that depend on other criteria.
- A “video data change record” (VDCR) is a logical composition of one or more fields, items, attributes, and/or objects. A VDCR corresponds to a plurality of frames and includes change information (described below). A VDCR corresponds to a particular level of temporal and spatial granularity. A VDCR may contain information about one or more events that were detected in the video data within the spatial/temporal space associated with the VDCR. VDCRs may also store change information pertaining to other events that do not appear in the frames that correspond to the VDCR. For example, a VDCR may store information indicating that an audible alarm began ringing during the time interval associated with the VDCR, even though there is no indication of the alarm within the video stream.
- A VDCR may also include, but is not limited to, (a) a start time of when the first frame in the plurality of frames was captured, (b) an end time of when the last frame in the plurality of frames was captured, (c) a time duration indicating the difference between the start time and the end time, (d) type data indicating whether the change corresponds to a detection of motion or only a pixel change, (e) shape data indicating a shape (e.g., person, car) of a moving object that triggered the VDCR, (f) behavior data indicating a behavior (e.g., walking, running, driving) of a moving object that triggered the VDCR, and (g) an indication of whether the VDCR corresponds to an event or a specified time interval.
- A VDCR may also contain a reference to the actual video data that corresponds to the plurality of frames of the VDCR in order to enable a user of the storage system to view the corresponding video data. If a VDCR contains a start time, then the start time may be used as the reference.
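One possible in-memory shape covering fields (a) through (g) above; the class and field names are illustrative assumptions, since the patent defines the record only as a logical composition:

```python
# An illustrative VDCR structure; field names are assumptions, not the
# patent's own. The start time doubles as the reference into the raw video.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VDCR:
    start_time: float               # (a) capture time of the first frame
    end_time: float                 # (b) capture time of the last frame
    change_type: str                # (d) "motion" or "pixel_change"
    shape: Optional[str] = None     # (e) e.g., "person", "car"
    behavior: Optional[str] = None  # (f) e.g., "walking", "driving"
    is_event: bool = True           # (g) event vs. specified time interval

    @property
    def duration(self) -> float:
        """(c) The difference between the start time and the end time."""
        return self.end_time - self.start_time

    @property
    def video_ref(self) -> float:
        """Reference into the stored video: the start time suffices."""
        return self.start_time

record = VDCR(start_time=10.0, end_time=70.0, change_type="motion", shape="person")
print(record.duration, record.video_ref)  # 60.0 10.0
```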
- “Change information” is information that indicates visual changes that are detected in the temporal/spatial interval associated with a VDCR. In one embodiment, the change information for changes associated with a VDCR is stored in the VDCR. Change information may indicate motion that is detected in the plurality of frames and/or a change in pixel values, such as brightness and hue, that is detected in the plurality of frames. For example, a pixel change may result from the shadow of a person that enters and leaves a view represented by the frames. A pixel change may also result from a light bulb turning on or off, which affects the brightness of objects in the frames. In some instances, the last frame in an event may appear as an exact duplicate of the first frame of the event. For example, suppose a light bulb faded out and then back on. By simply differencing the pixel values of the first frame with the pixel values of the last frame, the difference may be zero. Thus, the change information may instead indicate the greatest amount of change. For example, if the light bulb mentioned above went out and then back on and the possible pixel values range from 0-100, the change information may indicate 100 instead of zero.
- Similarly, if the change information indicates a motion, then the change information may further indicate all directions and/or speeds of the motion. For example, within a particular view, an object may move right, left, up, and down. Thus, the change information may indicate all four directions. As another example, if the object moved at five different speeds in a certain direction, then the change information may indicate the largest speed.
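The light-bulb example above can be made concrete: tracking the running extreme across all frames records a change of 100, where naive first-versus-last differencing records zero. A minimal sketch:

```python
# Sketch: record the greatest change observed across the frames, not the
# first-vs-last difference, which can cancel out.

def greatest_change(brightness_per_frame):
    """Largest deviation from the initial brightness (values 0-100)."""
    base = brightness_per_frame[0]
    return max(abs(b - base) for b in brightness_per_frame)

# A light bulb fades out and back on: first and last frames are identical.
fade = [100, 60, 20, 0, 20, 60, 100]
print(greatest_change(fade))    # 100 -- the change that gets stored
print(abs(fade[-1] - fade[0]))  # 0   -- naive differencing misses it
```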
- Any method for detecting and calculating visual changes (whether just pixel change or motion) may be used. Thus, embodiments of the invention are not limited to any particular method.
- Change information may further include information on a per-region basis. A “region” is a portion of a two-dimensional view (e.g., captured by a video camera) of the video data. The view may be divided into multiple uniform regions, such as in a grid layout. However, a region may be of any arbitrary size and shape. Thus, change information may include motion and/or pixel change information for each specified region of the view for the duration of the plurality of frames that corresponds to the change information.
- An “event” is generally associated with a visual change detected in video data. For example, an event may correspond to the detection of a person walking in a region of the view. The duration of the event is typically the length of time that the visual change occurs. Once no more visual change is detected, then the event may end.
- An event may be initiated, not only on the detection of visual changes within a view, but also upon the occurrence of an external event. For example, an event may be triggered by a fire alarm. Once the fire alarm is detected, the frames of video data from that point on are used to generate a VDCR that represents the event. The event may end, for example, when the fire alarm ends or when an administrator of a video surveillance system indicates that the event is completed.
- Alternatively, a VDCR may correspond to a specified time interval instead of to an event. For example, regardless of whether a visual change is detected, a VDCR may be generated for each 5-minute interval after every hour. As another example, a VDCR may be generated for each 24 hour period.
- A VDCR may be generated from other VDCRs and not necessarily from the video data itself. For example, if a VDCR is generated for each one-hour period of each day, then a “day” VDCR may be generated directly from the twenty-four “hour” VDCRs that correspond to that day. Similarly, a “week” VDCR may be generated from seven “day” VDCRs, and so forth.
- Similarly, a view-level VDCR may be generated based on the change information in the corresponding quadrant VDCRs, and the quadrant VDCRs may be generated based on the change information in the corresponding region VDCRs.
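Such roll-ups need only the finer records, not the raw video. A sketch, with an assumed field layout, of building a “day” VDCR from twenty-four “hour” VDCRs:

```python
# Sketch: generate a coarser VDCR from finer-granularity VDCRs, without
# revisiting the video data itself. Field names are illustrative.

def roll_up(finer):
    """Aggregate change information from a list of finer VDCRs."""
    return {
        "start_time": min(v["start_time"] for v in finer),
        "end_time": max(v["end_time"] for v in finer),
        "motion": any(v["motion"] for v in finer),        # motion anywhere?
        "max_speed": max(v["max_speed"] for v in finer),  # greatest speed seen
    }

# Twenty-four "hour" VDCRs; motion occurred only during hour 13.
hours = [{"start_time": h * 3600.0, "end_time": (h + 1) * 3600.0,
          "motion": h == 13, "max_speed": 30 if h == 13 else 0}
         for h in range(24)]

day = roll_up(hours)                    # a "day" VDCR
print(day["motion"], day["max_speed"])  # True 30
```

The same function applies spatially: a quadrant VDCR could be rolled up from its region VDCRs, and a view VDCR from its quadrant VDCRs.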
- Because a single VDCR may correspond to thousands or millions of frames of video data, the storage space required to store visual changes is much smaller than would otherwise be required (e.g., by storing a VDCR for each two-frame sequence where motion is detected).
- FIG. 2 is a block diagram that illustrates how video data change records may represent varying amounts of video data, according to an embodiment of the invention. Video data 201 comprises a series of frames. Each block of video data 201 may represent, for example, 100 frames. Thus, VDCR 202 represents 400 frames and VDCR 206 represents 600 frames. VDCRs 202-226 are divided into two sets: a VDCR set 230 and a VDCR set 240. Suppose that each VDCR in VDCR set 230 (i.e., VDCRs 202-214) is generated based on events, as opposed to a pre-specified time interval that applies regardless of visual changes. For example, VDCR 204 represents an event that lasted 600 frames, whereas VDCR 206 represents an event that lasted 200 frames.
- Further suppose that each VDCR in VDCR set 240 (i.e., VDCRs 222-226) is generated for each hour. Thus, for example, VDCR 222 may represent hour #1, VDCR 224 may represent hour #2, and VDCR 226 may represent hour #3. Therefore, a specific period of time may be represented by multiple VDCRs that each represent different periods of time.
- In one embodiment, if a VDCR represents a number of frames that is also represented by another VDCR, then each VDCR contains a reference to the other VDCR, as illustrated. Thus, VDCR 222 contains references to VDCRs 202 and 204, and VDCRs 202 and 204 each contain a reference to VDCR 222. Similarly, VDCR 226 contains references to VDCRs 210-214, and VDCRs 210-214 each contain a reference to VDCR 226.
- VDCRs may be stored on disk in specified tables. Each table may correspond to VDCRs of a certain type. For example, each table of a plurality of tables may comprise VDCRs that correspond to a specified time interval (e.g., a day table, a week table, a month table, etc.). As another example, each table of a plurality of tables may comprise VDCRs that correspond to certain time frames (e.g., all VDCRs generated in January 2006, or all VDCRs generated during week #51, etc.). Embodiments of the invention are not limited to how VDCRs are organized on disk.
- As described above, a two-dimensional view of video data may be divided into multiple regions. A region may or may not be convex. Multiple regions within a view may be of different sizes and shapes.
- If a VDCR is generated based on a detected change in pixel values that is not associated with motion of an object, then the change information of a particular region may indicate the amount of change in pixel values within that particular region.
- If a VDCR is generated based on a motion event, then the change information of a particular region may indicate the direction and/or velocity of the detected motion within that particular region. For example, suppose a ball was thrown through the view of a camera, and a VDCR was generated for the few frames that captured the event. The change information in every region of the view through which the ball traveled will indicate that motion was detected in that region and may indicate the direction and velocity of the ball.
- FIG. 3 is a graphical depiction that illustrates how change information may be stored on a per-region basis, according to an embodiment of the invention. In this example, a camera view is divided into multiple, non-overlapping rectangular regions. Each region in FIG. 3 indicates one or more directions of motion. For example, region 302 indicates that at least four directions of motion have been detected for that particular region. As this example further illustrates, not every region must specify a direction and/or speed, such as region 304. In such a case, the change information for such a region may be empty or include an indicator indicating zero direction and/or speed.
- In one embodiment, change information may be kept, not only in per-region VDCRs, but also in multi-region VDCRs. For example, if there are 100 regions within a view, and change information is maintained for each region for a given region-level VDCR, then view-level VDCRs may indicate change information for the entire view, quadrant-level VDCRs may indicate change information for each quadrant of the view, etc. In each VDCR, the change information is abstracted to the level of the VDCR. Thus, the change information for a view-level VDCR may contain 1/100th the information contained in the corresponding 100 region-level VDCRs. Similarly, the change information for a single quadrant-level VDCR may contain 1/25th the information contained in the corresponding twenty-five region-level VDCRs that correspond to the quadrant.
- It should be noted that VDCRs are logical pieces of information, and do not necessarily correspond to distinct records within a repository. For example, a single record or data structure within a repository may include a view-level VDCR, its corresponding quadrant-level VDCRs, and its corresponding region-level VDCRs. Similarly, a single record or data structure may be used to store change information aggregated at the week level, the day level, the hour level, and the minute level. Data structures that store change data that has been aggregated at multiple levels of granularity are referred to herein as composite VDCRs.
- The nature of the repository used to store VDCRs may vary from implementation to implementation. The techniques described herein are not limited to any particular type of repository. For example, the VDCRs may be stored in a multi-dimensional database, a relational database, or an object-relational database. In one embodiment, separate relational tables are used to store VDCRs at different levels of granularity. Thus, one relational table may have rows that correspond to view-level VDCRs, while another relational table may have rows that correspond to region-level VDCRs. In such an embodiment, indexes may be used to efficiently locate the region-level VDCRs that correspond to a particular view-level VDCR.
-
FIG. 4 is a block diagram that illustrates how composite video data change records may store specific and generalized change information, according to an embodiment of the invention. Video data 401 comprises a series of frames. Each block of video data 401 may represent any number of frames, such as one hundred. Thus, composite VDCR 402 may represent four hundred frames. The change information of VDCR 402 may be represented on a per-region basis, a per-quadrant basis, a per-view basis, and/or any other basis. In this example, VDCR 402 comprises view data 403, quadrant data 404, and region data 405. VDCR 402 may comprise other information, such as whether any visual change was detected in the frames that correspond to VDCR 402 and the type of visual change (e.g., pixel change, motion).
- In one embodiment, search criteria may be specified in which all incoming video data is analyzed to determine whether any detected visual changes satisfy the search criteria. Thus, for an ongoing event (i.e., before a VDCR is generated for the event), an alert may be triggered on the precise frame at which the change information (accumulated thus far) first satisfied the search criteria. Once the event has completed, the accumulated change information is stored (in the manner described above) in a VDCR so that future searches with similar search criteria may return that VDCR.
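- The real-time alerting behavior described above can be illustrated with a minimal sketch (function and parameter names are hypothetical): change information accumulates frame by frame, and an alert fires on the first frame whose accumulated changes satisfy the criterion, before any VDCR is written.

```python
def first_alert_frame(frame_speeds, min_speed):
    """Return the index of the first frame where the accumulated peak
    motion speed meets the criterion, or None if the event never
    qualifies (in which case only the stored VDCR remains searchable)."""
    peak = 0.0
    for i, speed in enumerate(frame_speeds):
        peak = max(peak, speed)       # change information accumulated so far
        if peak >= min_speed:
            return i                  # alert triggered mid-event
    return None

# Alert fires on frame 2, the first frame where speed reaches 20 mph.
assert first_alert_frame([2.0, 5.5, 21.0, 18.0], min_speed=20.0) == 2
# A slow event never triggers; its VDCR is still stored for later search.
assert first_alert_frame([2.0, 3.0], min_speed=20.0) is None
```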
- In one embodiment, several levels of filtering may be performed for various reasons, including (1) reducing noise that may be introduced when generating change information and (2) determining dominant motion areas and velocities within a scene. The change information of a particular VDCR may be filtered across adjacent regions within a frame, or across adjacent frames of the corresponding plurality of frames, using various methods that include, but are not limited to, smoothing filters, median filters, and multi-dimensional clustering algorithms.
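- As one concrete possibility (an assumption, not the patent's exact method), a 3x3 median filter over a per-region motion-magnitude grid suppresses isolated single-region detections that are likely noise:

```python
def median_filter_3x3(grid):
    """Median-filter a 2D grid of per-region motion magnitudes,
    clamping the 3x3 window at the grid edges."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [grid[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            window.sort()
            out[r][c] = window[len(window) // 2]
    return out

# An isolated spike in one region (likely noise) is filtered out,
# while motion spanning several adjacent regions would survive.
noisy = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
assert median_filter_3x3(noisy)[1][1] == 0
```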
- With the storage techniques described above, the generated VDCRs facilitate fast searches across multiple events and specified time intervals. Because searches are executed against change information (which may be thought of as metadata about visual changes) rather than against the video data itself, the searches may be performed much faster than if a user were required to scan the entire video data or examine each detected change frame by frame.
- Thus, a user may specify search criteria that are compared against each VDCR. For example, a user may search for all VDCRs that indicate any motion faster than 20 mph. As another example, a user may search for any VDCR that indicates a pixel change in the lower-left quadrant amounting to a 50% change in brightness.
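- Such a comparison amounts to evaluating a predicate over each stored record. A minimal sketch (field names are assumptions) of the first example above:

```python
# Hypothetical stored VDCRs, reduced to the fields this search needs.
vdcrs = [
    {"id": 1, "motion": True,  "max_speed_mph": 35.0},
    {"id": 2, "motion": True,  "max_speed_mph": 12.0},
    {"id": 3, "motion": False, "max_speed_mph": 0.0},
]

# "All VDCRs that indicate any motion faster than 20 mph."
matches = [v["id"] for v in vdcrs if v["motion"] and v["max_speed_mph"] > 20]
assert matches == [1]
```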
- Multiple indexes may be generated in order to facilitate faster searches. Such indexes may be based on time, the type of visual change, the speed of a motion, the direction of a motion, etc.
- Furthermore, the manner in which change information is stored, and the varying types of information that a VDCR may include, make possible many types of searches. For example, the search criteria of a particular search may include (1) multiple ranges of time, (2) the speed of motion in some regions, (3) the direction of motion in other regions, (4) an amount of pixel change in still other regions, (5) the shape and type of behavior of multiple detected objects, etc. The number of possible search criteria is virtually unlimited.
- As described above, change information that is generated from video data may be aggregated at different levels of spatial granularity. For example, the change information stored for a particular time period may include (1) view-level VDCRs that indicate change information relative to the entire view, (2) quadrant-level VDCRs that indicate change information for each of four quadrants of the view, and (3) region-level VDCRs that indicate change information for each of a thousand grid regions within the view. The search mechanism may make use of these different levels of granularity to improve search performance.
- For example, suppose a view is divided into one hundred non-overlapping regions. Further, suppose that a user is searching for motion events that occurred over a particular week, and that a million region-level VDCRs have been generated for each region during that week. Suppose that the search criteria require that a specified type of motion occurred within each of twenty-four specified regions of the view. In this example, if the entire search is performed at the region level of granularity, then twenty-four million region-level VDCRs will have to be inspected during the search.
- Instead of performing the entire search at the region-level of granularity, a multi-level search may be performed. Specifically, during the first phase of the multi-level search, each of a million view-level VDCRs may be inspected to find those view-level VDCRs that indicate that the specified motion occurred anywhere within the view. The determination may be based on view-level change information in each view-level VDCR. The view-level change information of a view-level VDCR indicates whether motion was detected anywhere in the entire view during the frames associated with the view-level VDCR. In the present example, the first-level search will involve one million comparisons (one for each view-level VDCR). For the purpose of explanation, assume that 50,000 view-level VDCRs matched the first-level search.
- During the second phase of the multi-level search, quadrant-level VDCRs are inspected. However, rather than inspecting all 4 million quadrant-level VDCRs, only those that correspond to the 50,000 matching view-level VDCRs are searched. Further, if the 24 regions specified in the search criteria fall within only two of the four quadrants, then the second-level search need only involve the quadrant-level VDCRs associated with those two quadrants. Thus, the second phase of the search will involve no more than 100,000 quadrant-level VDCRs.
- Each quadrant-level VDCR includes quadrant-level data that indicates whether motion was detected in any portion of the corresponding quadrant. For the purpose of explanation, assume that, based on the quadrant-level VDCRs, only 10,000 view-level VDCRs of the 50,000 VDCRs included motion in those two quadrants.
- In the third-level search, a region-level search is performed against the region-level VDCRs that correspond to the 10,000 view-level VDCRs. When searching at the region level of granularity, 24 region-level VDCRs may need to be inspected for each of the 10,000 view-level VDCRs. However, because the candidate set of view-level VDCRs has been pruned down during the first two search phases, the number of region-level comparisons performed during the third-level search (240,000, in the present example) will typically be far fewer than the number of comparisons (24 million) that would have been performed if all searching were done at the region level of granularity.
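- The three-phase pruning can be sketched as follows (a minimal illustration; record structure and field names are assumptions, not the patented implementation). Coarse levels cheaply discard candidates so that only surviving records are examined at the finest granularity:

```python
def multi_level_search(view_vdcrs, target_quadrants, target_regions):
    # Phase 1: keep only views where motion occurred anywhere.
    survivors = [v for v in view_vdcrs if v["any_motion"]]
    # Phase 2: keep views with motion in a quadrant of interest.
    survivors = [v for v in survivors
                 if any(v["quadrants"][q] for q in target_quadrants)]
    # Phase 3: region-level check, performed only on the pruned set.
    return [v["id"] for v in survivors
            if all(v["regions"].get(r, False) for r in target_regions)]

views = [
    {"id": 1, "any_motion": True,  "quadrants": [1, 0, 0, 0],
     "regions": {(0, 2): True, (1, 3): True}},
    {"id": 2, "any_motion": True,  "quadrants": [0, 1, 0, 0],
     "regions": {(0, 7): True}},
    {"id": 3, "any_motion": False, "quadrants": [0, 0, 0, 0], "regions": {}},
]
# Only view 1 survives all three phases: view 3 is pruned at phase 1,
# view 2 at phase 2 (wrong quadrant), so phase 3 inspects one record.
hits = multi_level_search(views, target_quadrants=[0],
                          target_regions=[(0, 2), (1, 3)])
assert hits == [1]
```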
- As with areas of a view, a search may be separated into a multi-level search according to time. For example, suppose a user wants to find motion events that occurred between the hours of 1:00 AM and 5:00 AM during the past week. Further suppose that an hour-level VDCR exists for each hour, and a day-level VDCR exists for each day. Thus, in the first search level, each day-level VDCR of the past week is examined to determine whether motion was detected on the corresponding day. In the second search level, each hour-level VDCR that is associated with a day-level VDCR identified in the first search level is examined to determine whether motion was detected in the corresponding hour.
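- A hedged sketch of this day-then-hour pruning (all data and names are made up for illustration): hour-level records are consulted only for days whose day-level record shows motion.

```python
# Day-level VDCRs: did any motion occur on that day?
day_vdcrs = {"Mon": True, "Tue": False, "Wed": True}
# Hour-level VDCRs, keyed by (day, hour-of-day).
hour_vdcrs = {
    ("Mon", 2): True, ("Mon", 14): True,
    ("Wed", 3): False, ("Wed", 4): True,
}

# Level 1: days with any motion. Level 2: 1-5 AM hours within those days.
candidate_days = [d for d, moved in day_vdcrs.items() if moved]
hits = [(d, h) for (d, h), moved in hour_vdcrs.items()
        if moved and d in candidate_days and 1 <= h <= 5]
assert sorted(hits) == [("Mon", 2), ("Wed", 4)]
```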
- In one embodiment, one level of a multi-level search may be performed based on time and another level of the multi-level search may be performed based on areas of the view. For example, suppose search criteria specifies motion that occurred within a certain area of a view between the hours of 1:00 AM and 5:00 AM during the past week. Thus, the first two levels of the search may be used to identify all hour-level/view-level VDCRs of the past week between 1:00 AM and 5:00 AM. Subsequent levels of the search may be used to identify all hour-level/region-level VDCRs with change information that indicates the specified motion in the specified area.
- In one embodiment, users may specify the search criteria for each level of a multi-level search. In another embodiment, multi-level searches may be performed automatically and transparently to the user, beginning at relatively coarse temporal/spatial granularities and ending at the levels of granularity specified in the user's search criteria. Thus, a single set of search criteria may be automatically divided (e.g., by a query compiler) into one or more general searches and one specific search. Any mechanism for dividing search criteria into a multi-level query may be used. Embodiments of the invention are not limited to any specific mechanism.
-
FIG. 5 is a block diagram that illustrates a computer system 500 upon which an embodiment of the invention may be implemented. Computer system 500 includes a bus 502 or other communication mechanism for communicating information, and a processor 504 coupled with bus 502 for processing information. Computer system 500 also includes a main memory 506, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 502 for storing information and instructions to be executed by processor 504. Main memory 506 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Computer system 500 further includes a read only memory (ROM) 508 or other static storage device coupled to bus 502 for storing static information and instructions for processor 504. A storage device 510, such as a magnetic disk or optical disk, is provided and coupled to bus 502 for storing information and instructions. -
Computer system 500 may be coupled via bus 502 to a display 512, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 514, including alphanumeric and other keys, is coupled to bus 502 for communicating information and command selections to processor 504. Another type of user input device is cursor control 516, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 504 and for controlling cursor movement on display 512. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. - The invention is related to the use of
computer system 500 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 500 in response to processor 504 executing one or more sequences of one or more instructions contained in main memory 506. Such instructions may be read into main memory 506 from another machine-readable medium, such as storage device 510. Execution of the sequences of instructions contained in main memory 506 causes processor 504 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software. - The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using
computer system 500, various machine-readable media are involved, for example, in providing instructions to processor 504 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 510. Volatile media includes dynamic memory, such as main memory 506. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 502. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications. All such media must be tangible to enable the instructions carried by the media to be detected by a physical mechanism that reads the instructions into a machine. - Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
- Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to
processor 504 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 500 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 502. Bus 502 carries the data to main memory 506, from which processor 504 retrieves and executes the instructions. The instructions received by main memory 506 may optionally be stored on storage device 510 either before or after execution by processor 504. -
Computer system 500 also includes a communication interface 518 coupled to bus 502. Communication interface 518 provides a two-way data communication coupling to a network link 520 that is connected to a local network 522. For example, communication interface 518 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 518 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 518 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information. - Network link 520 typically provides data communication through one or more networks to other data devices. For example,
network link 520 may provide a connection through local network 522 to a host computer 524 or to data equipment operated by an Internet Service Provider (ISP) 526. ISP 526 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 528. Local network 522 and Internet 528 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 520 and through communication interface 518, which carry the digital data to and from computer system 500, are exemplary forms of carrier waves transporting the information. -
Computer system 500 can send messages and receive data, including program code, through the network(s), network link 520 and communication interface 518. In the Internet example, a server 530 might transmit a requested code for an application program through Internet 528, ISP 526, local network 522 and communication interface 518. - The received code may be executed by
processor 504 as it is received, and/or stored in storage device 510, or other non-volatile storage for later execution. In this manner, computer system 500 may obtain application code in the form of a carrier wave. - In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Claims (32)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/520,532 US20070058842A1 (en) | 2005-09-12 | 2006-09-12 | Storage of video analysis data for real-time alerting and forensic analysis |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US71672905P | 2005-09-12 | 2005-09-12 | |
US11/520,532 US20070058842A1 (en) | 2005-09-12 | 2006-09-12 | Storage of video analysis data for real-time alerting and forensic analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070058842A1 true US20070058842A1 (en) | 2007-03-15 |
Family
ID=37865593
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/520,532 Abandoned US20070058842A1 (en) | 2005-09-12 | 2006-09-12 | Storage of video analysis data for real-time alerting and forensic analysis |
US11/520,116 Expired - Fee Related US8553084B2 (en) | 2005-09-12 | 2006-09-12 | Specifying search criteria for searching video data |
US14/035,098 Expired - Fee Related US9224047B2 (en) | 2005-09-12 | 2013-09-24 | Specifying search criteria for searching video data |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/520,116 Expired - Fee Related US8553084B2 (en) | 2005-09-12 | 2006-09-12 | Specifying search criteria for searching video data |
US14/035,098 Expired - Fee Related US9224047B2 (en) | 2005-09-12 | 2013-09-24 | Specifying search criteria for searching video data |
Country Status (3)
Country | Link |
---|---|
US (3) | US20070058842A1 (en) |
KR (1) | KR20080075091A (en) |
WO (2) | WO2007033352A2 (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090315996A1 (en) * | 2008-05-09 | 2009-12-24 | Sadiye Zeyno Guler | Video tracking systems and methods employing cognitive vision |
US20100161615A1 (en) * | 2008-12-19 | 2010-06-24 | Electronics And Telecommunications Research Institute | Index anaysis apparatus and method and index search apparatus and method |
US20150015480A1 (en) * | 2012-12-13 | 2015-01-15 | Jeremy Burr | Gesture pre-processing of video stream using a markered region |
WO2015112668A1 (en) * | 2014-01-24 | 2015-07-30 | Cisco Technology, Inc. | Line rate visual analytics on edge devices |
US9123223B1 (en) | 2008-10-13 | 2015-09-01 | Target Brands, Inc. | Video monitoring system using an alarm sensor for an exit facilitating access to captured video |
WO2018152088A1 (en) * | 2017-02-14 | 2018-08-23 | Cisco Technology, Inc. | Generating and reviewing motion metadata |
US10225313B2 (en) | 2017-07-25 | 2019-03-05 | Cisco Technology, Inc. | Media quality prediction for collaboration services |
US10291597B2 (en) | 2014-08-14 | 2019-05-14 | Cisco Technology, Inc. | Sharing resources across multiple devices in online meetings |
US10375125B2 (en) | 2017-04-27 | 2019-08-06 | Cisco Technology, Inc. | Automatically joining devices to a video conference |
US10375474B2 (en) | 2017-06-12 | 2019-08-06 | Cisco Technology, Inc. | Hybrid horn microphone |
US10440073B2 (en) | 2017-04-11 | 2019-10-08 | Cisco Technology, Inc. | User interface for proximity based teleconference transfer |
US10477148B2 (en) | 2017-06-23 | 2019-11-12 | Cisco Technology, Inc. | Speaker anticipation |
US10516709B2 (en) | 2017-06-29 | 2019-12-24 | Cisco Technology, Inc. | Files automatically shared at conference initiation |
US10516707B2 (en) | 2016-12-15 | 2019-12-24 | Cisco Technology, Inc. | Initiating a conferencing meeting using a conference room device |
US10542126B2 (en) | 2014-12-22 | 2020-01-21 | Cisco Technology, Inc. | Offline virtual participation in an online conference meeting |
US10592867B2 (en) | 2016-11-11 | 2020-03-17 | Cisco Technology, Inc. | In-meeting graphical user interface display using calendar information and system |
US10623576B2 (en) | 2015-04-17 | 2020-04-14 | Cisco Technology, Inc. | Handling conferences using highly-distributed agents |
US10706391B2 (en) | 2017-07-13 | 2020-07-07 | Cisco Technology, Inc. | Protecting scheduled meeting in physical room |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7763074B2 (en) * | 2004-10-20 | 2010-07-27 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and methods for posterior dynamic stabilization of the spine |
KR20080035891A (en) * | 2006-10-20 | 2008-04-24 | 포스데이타 주식회사 | Image playback apparatus for providing smart search of motion and method of the same |
US20080263010A1 (en) * | 2006-12-12 | 2008-10-23 | Microsoft Corporation | Techniques to selectively access meeting content |
JP4420085B2 (en) * | 2007-08-20 | 2010-02-24 | ソニー株式会社 | Data processing apparatus, data processing method, program, and recording medium |
US8041077B2 (en) * | 2007-12-18 | 2011-10-18 | Robert Bosch Gmbh | Method of motion detection and autonomous motion tracking using dynamic sensitivity masks in a pan-tilt camera |
US9342594B2 (en) * | 2008-10-29 | 2016-05-17 | International Business Machines Corporation | Indexing and searching according to attributes of a person |
KR100933788B1 (en) * | 2009-07-13 | 2009-12-24 | (주)명정보기술 | A method for processing commands and data in write block device for harddisk forensic |
US9432639B2 (en) * | 2010-11-19 | 2016-08-30 | Honeywell International Inc. | Security video detection of personal distress and gesture commands |
US9226037B2 (en) | 2010-12-30 | 2015-12-29 | Pelco, Inc. | Inference engine for video analytics metadata-based event detection and forensic search |
JP5840399B2 (en) * | 2011-06-24 | 2016-01-06 | 株式会社東芝 | Information processing device |
US20130265418A1 (en) * | 2012-04-06 | 2013-10-10 | Chin-Teng Lin | Alarm chain based monitoring device |
JP6191160B2 (en) * | 2012-07-12 | 2017-09-06 | ノーリツプレシジョン株式会社 | Image processing program and image processing apparatus |
US9152872B2 (en) * | 2012-11-12 | 2015-10-06 | Accenture Global Services Limited | User experience analysis system to analyze events in a computer desktop |
KR102111135B1 (en) * | 2014-10-30 | 2020-05-14 | 에스케이텔레콤 주식회사 | Method for searching image based on image recognition and applying image search apparatus thereof |
RU2634225C1 (en) * | 2016-06-20 | 2017-10-24 | Общество с ограниченной ответственностью "САТЕЛЛИТ ИННОВАЦИЯ" (ООО "САТЕЛЛИТ") | Methods and systems for searching object in video stream |
KR102644782B1 (en) * | 2016-07-25 | 2024-03-07 | 한화비전 주식회사 | The Apparatus And The System For Monitoring |
JP6861484B2 (en) * | 2016-07-25 | 2021-04-21 | キヤノン株式会社 | Information processing equipment and its control method, computer program |
CN107977144B (en) * | 2017-12-15 | 2020-05-12 | 维沃移动通信有限公司 | Screen capture processing method and mobile terminal |
EP3648059B1 (en) | 2018-10-29 | 2021-02-24 | Axis AB | Video processing device and method for determining motion metadata for an encoded video |
CN112419639A (en) * | 2020-10-13 | 2021-02-26 | 中国人民解放军国防大学联合勤务学院 | Video information acquisition method and device |
CN112419638B (en) * | 2020-10-13 | 2023-03-14 | 中国人民解放军国防大学联合勤务学院 | Method and device for acquiring alarm video |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5969755A (en) * | 1996-02-05 | 1999-10-19 | Texas Instruments Incorporated | Motion based event detection system and method |
US6182069B1 (en) * | 1992-11-09 | 2001-01-30 | International Business Machines Corporation | Video query system and method |
US6516090B1 (en) * | 1998-05-07 | 2003-02-04 | Canon Kabushiki Kaisha | Automated video interpretation system |
US20060227997A1 (en) * | 2005-03-31 | 2006-10-12 | Honeywell International Inc. | Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing |
US7577199B1 (en) * | 2003-06-19 | 2009-08-18 | Nvidia Corporation | Apparatus and method for performing surveillance using motion vectors |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5802361A (en) * | 1994-09-30 | 1998-09-01 | Apple Computer, Inc. | Method and system for searching graphic images and videos |
US7194117B2 (en) * | 1999-06-29 | 2007-03-20 | The Research Foundation Of State University Of New York | System and method for performing a three-dimensional virtual examination of objects, such as internal organs |
US6553150B1 (en) * | 2000-04-25 | 2003-04-22 | Hewlett-Packard Development Co., Lp | Image sequence compression featuring independently coded regions |
US9892606B2 (en) * | 2001-11-15 | 2018-02-13 | Avigilon Fortress Corporation | Video surveillance system employing video primitives |
AU2003220287A1 (en) | 2002-03-14 | 2003-09-29 | General Electric Company | High-speed search of recorded video information to detect motion |
-
2006
- 2006-09-12 US US11/520,532 patent/US20070058842A1/en not_active Abandoned
- 2006-09-12 WO PCT/US2006/035960 patent/WO2007033352A2/en active Application Filing
- 2006-09-12 KR KR1020087008915A patent/KR20080075091A/en not_active Application Discontinuation
- 2006-09-12 WO PCT/US2006/035959 patent/WO2007033351A2/en active Application Filing
- 2006-09-12 US US11/520,116 patent/US8553084B2/en not_active Expired - Fee Related
-
2013
- 2013-09-24 US US14/035,098 patent/US9224047B2/en not_active Expired - Fee Related
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9019381B2 (en) | 2008-05-09 | 2015-04-28 | Intuvision Inc. | Video tracking systems and methods employing cognitive vision |
US10121079B2 (en) | 2008-05-09 | 2018-11-06 | Intuvision Inc. | Video tracking systems and methods employing cognitive vision |
US20090315996A1 (en) * | 2008-05-09 | 2009-12-24 | Sadiye Zeyno Guler | Video tracking systems and methods employing cognitive vision |
US9123223B1 (en) | 2008-10-13 | 2015-09-01 | Target Brands, Inc. | Video monitoring system using an alarm sensor for an exit facilitating access to captured video |
US9866799B1 (en) | 2008-10-13 | 2018-01-09 | Target Brands, Inc. | Video monitoring system for an exit |
US20100161615A1 (en) * | 2008-12-19 | 2010-06-24 | Electronics And Telecommunications Research Institute | Index anaysis apparatus and method and index search apparatus and method |
US10261596B2 (en) | 2012-12-13 | 2019-04-16 | Intel Corporation | Gesture pre-processing of video stream using a markered region |
US9720507B2 (en) * | 2012-12-13 | 2017-08-01 | Intel Corporation | Gesture pre-processing of video stream using a markered region |
US20150015480A1 (en) * | 2012-12-13 | 2015-01-15 | Jeremy Burr | Gesture pre-processing of video stream using a markered region |
US10146322B2 (en) | 2012-12-13 | 2018-12-04 | Intel Corporation | Gesture pre-processing of video stream using a markered region |
WO2015112668A1 (en) * | 2014-01-24 | 2015-07-30 | Cisco Technology, Inc. | Line rate visual analytics on edge devices |
US20150213056A1 (en) * | 2014-01-24 | 2015-07-30 | Cisco Technology, Inc. | Line rate visual analytics on edge devices |
EP3097694A1 (en) * | 2014-01-24 | 2016-11-30 | Cisco Technology, Inc. | Line rate visual analytics on edge devices |
US9600494B2 (en) * | 2014-01-24 | 2017-03-21 | Cisco Technology, Inc. | Line rate visual analytics on edge devices |
US10291597B2 (en) | 2014-08-14 | 2019-05-14 | Cisco Technology, Inc. | Sharing resources across multiple devices in online meetings |
US10778656B2 (en) | 2014-08-14 | 2020-09-15 | Cisco Technology, Inc. | Sharing resources across multiple devices in online meetings |
US10542126B2 (en) | 2014-12-22 | 2020-01-21 | Cisco Technology, Inc. | Offline virtual participation in an online conference meeting |
US10623576B2 (en) | 2015-04-17 | 2020-04-14 | Cisco Technology, Inc. | Handling conferences using highly-distributed agents |
US11227264B2 (en) | 2016-11-11 | 2022-01-18 | Cisco Technology, Inc. | In-meeting graphical user interface display using meeting participant status |
US10592867B2 (en) | 2016-11-11 | 2020-03-17 | Cisco Technology, Inc. | In-meeting graphical user interface display using calendar information and system |
US11233833B2 (en) | 2016-12-15 | 2022-01-25 | Cisco Technology, Inc. | Initiating a conferencing meeting using a conference room device |
US10516707B2 (en) | 2016-12-15 | 2019-12-24 | Cisco Technology, Inc. | Initiating a conferencing meeting using a conference room device |
WO2018152088A1 (en) * | 2017-02-14 | 2018-08-23 | Cisco Technology, Inc. | Generating and reviewing motion metadata |
US10515117B2 (en) | 2017-02-14 | 2019-12-24 | Cisco Technology, Inc. | Generating and reviewing motion metadata |
US10440073B2 (en) | 2017-04-11 | 2019-10-08 | Cisco Technology, Inc. | User interface for proximity based teleconference transfer |
US10375125B2 (en) | 2017-04-27 | 2019-08-06 | Cisco Technology, Inc. | Automatically joining devices to a video conference |
US10375474B2 (en) | 2017-06-12 | 2019-08-06 | Cisco Technology, Inc. | Hybrid horn microphone |
US10477148B2 (en) | 2017-06-23 | 2019-11-12 | Cisco Technology, Inc. | Speaker anticipation |
US11019308B2 (en) | 2017-06-23 | 2021-05-25 | Cisco Technology, Inc. | Speaker anticipation |
US10516709B2 (en) | 2017-06-29 | 2019-12-24 | Cisco Technology, Inc. | Files automatically shared at conference initiation |
US10706391B2 (en) | 2017-07-13 | 2020-07-07 | Cisco Technology, Inc. | Protecting scheduled meeting in physical room |
US10225313B2 (en) | 2017-07-25 | 2019-03-05 | Cisco Technology, Inc. | Media quality prediction for collaboration services |
Also Published As
Publication number | Publication date |
---|---|
US20140022387A1 (en) | 2014-01-23 |
WO2007033352A3 (en) | 2007-07-12 |
US8553084B2 (en) | 2013-10-08 |
WO2007033352A2 (en) | 2007-03-22 |
US20070061696A1 (en) | 2007-03-15 |
KR20080075091A (en) | 2008-08-14 |
US9224047B2 (en) | 2015-12-29 |
WO2007033351A2 (en) | 2007-03-22 |
WO2007033351A3 (en) | 2007-10-04 |
Similar Documents
Publication | Publication Date | Title
---|---|---
US20070058842A1 (en) | | Storage of video analysis data for real-time alerting and forensic analysis
US20200265085A1 (en) | | Searching recorded video
US9171075B2 (en) | | Searching recorded video
US8107740B2 (en) | | Apparatus and method for efficient indexing and querying of images in security systems and other systems
US6807306B1 (en) | | Time-constrained keyframe selection method
US20190228040A1 (en) | | Object search by description
US20130163864A1 (en) | | Video detection system and methods
US20100080477A1 (en) | | System, computer program product and associated methodology for video motion detection using spatio-temporal slice processing
CN108073858A (en) | | Crowd massing monitoring identifying system based on depth camera
Feris et al. | | Case study: IBM smart surveillance system
Aved | | Scene understanding for real time processing of queries over big data streaming video
e Souza et al. | | Survey on visual rhythms: A spatio-temporal representation for video sequences
Thomanek et al. | | A scalable system architecture for activity detection with simple heuristics
KR101170676B1 (en) | | Face searching system and method based on face recognition
Ju et al. | | A representative-based framework for parsing and summarizing events in surveillance videos
US10360253B2 (en) | | Systems and methods for generation of searchable structures respective of multimedia data content
Li et al. | | Streaming news image summarization
Srilakshmi et al. | | Shot boundary detection using structural similarity index
Lee et al. | | A data cube model for surveillance video indexing and retrieval
Koumousis et al. | | A new approach to gradual video transition detection
De Marsico et al. | | M-VIVIE: A multi-thread video indexer via identity extraction
Dande et al. | | VIDEO ANALYTICS ON DIFFERENT BUSINESS DOMAINS TECHNIQUES: A SURVEY
Hafiz et al. | | Event-handling based smart video surveillance system
Mahalakshmi et al. | | Efficient Video Feature Extraction and retrieval on Multimodal Search
CN117743634A (en) | | Object retrieval method, system and equipment
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: 3VR SECURITY, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: VALLONE, ROBERT P.; FLEISCHER, STEPHEN D.; PITTS, COLVIN H.; AND OTHERS. REEL/FRAME: 018312/0584. SIGNING DATES FROM 20060911 TO 20060912
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
| AS | Assignment | Owner name: OPUS BANK, CALIFORNIA. Free format text: SECURITY INTEREST; ASSIGNOR: 3VR SECURITY, INC. REEL/FRAME: 034609/0386. Effective date: 20141226
| AS | Assignment | Owner name: EAST WEST BANK, CALIFORNIA. Free format text: SECURITY INTEREST; ASSIGNOR: 3VR SECURITY, INC. REEL/FRAME: 044951/0032. Effective date: 20180215
| AS | Assignment | Owner name: 3VR SECURITY, INC., CALIFORNIA. Free format text: RELEASE BY SECURED PARTY; ASSIGNOR: OPUS BANK. REEL/FRAME: 048383/0513. Effective date: 20180306