US20130207969A1 - System for three-dimensional rendering of electrical test and measurement signals - Google Patents

System for three-dimensional rendering of electrical test and measurement signals

Info

Publication number
US20130207969A1
Authority
US
United States
Prior art keywords
dimensional
user
segments
data
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/755,287
Inventor
Justin Ralph Louise
Kevin Roy Francis
David James Yaw
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US13/755,287
Publication of US20130207969A1
Current legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G06T 15/80 Shading
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 13/00 Arrangements for displaying electric variables or waveforms
    • G01R 13/02 Arrangements for displaying electric variables or waveforms for displaying measured electric variables in digital form
    • G01R 13/0218 Circuits therefor
    • G01R 13/0236 Circuits therefor for presentation of more than one variable
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/203 Drawing of straight lines or curves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering

Definitions

  • The present invention relates to a system (and method) for three-dimensional (3-D) rendering of test and measurement signals, and particularly to a system having a computer system, or other microprocessor-based platform, which produces three-dimensional surfaces representing multiple signal channels on a display in accordance with data acquired, streaming, or previously stored in memory of the computer system.
  • the system is useful for three-dimensional visualization of the relationship between different channels of signals with user control of three-dimensional viewing position and angle to improve signal analysis over traditional two-dimensional display of test and measurement signals.
  • Although the system is described herein for test and measurement signals, other signals that vary over a domain and either are in, or are separable into, multiple channels may also be visualized by the computer system as 3-D surfaces.
  • a device typically collects sample data from one or more electrical test points over some period of time, whereby the value of a sample represents the voltage level of the given test point at a specific point in that timeline. Samples collected in one time-contiguous sequence are commonly considered as a single acquisition. Common tools in this field today include logic analyzers and digital storage oscilloscopes, such as those manufactured by Agilent Technologies, Tektronix Inc., and LeCroy Corp.
  • These systems typically have a dedicated hardware platform, or an attached personal computer coupled to the logic analyzer or digital storage oscilloscope, operating in accordance with software that collects, stores, and manipulates the data representing samples over one or more signal channels, and renders it to the user in a pseudo-real-time or non-real-time fashion on a display.
  • These systems commonly display the data to the user on the display as a two-dimensional graph, whereby the x-axis represents time, and the y-axis value describes the voltage of the test point at that time for a particular signal channel, as illustrated for example in FIG. 1 .
  • the user relies on this data representation to gain insight into the operation of the unit under test, thereby allowing detection of errors, anomalies, or proof that the device is operating properly.
  • U.S. Patent Application Publication No. 2005/0234670 describes viewing multiple channels, domains, or acquisitions simultaneously, but does not provide for a display of multiple channels and acquisitions (or domains) simultaneously in a single three-dimensional view on a display. Further, the systems described in the above cited patents and publication have limited flexibility in the organization and presentation of the data on a display, which restricts the user's ability to quickly visualize and compare data when analyzing complex systems.
  • An object of the present invention is to provide an improved system for rendering test and measurement data representing multiple channels which readily enables visualization of multiple channels in a three-dimensional (3-D) perspective as continuous or discontinuous surfaces aligned on a display, in which the user can observe the relationships between different channels.
  • Still another object of the present invention is to provide an improved system for rendering test and measurement data of multiple channels as continuous or discontinuous surfaces in three-dimensional perspective on a display, in which the data can represent real-time data for a device or system under test or stored data accessible to the system.
  • A yet further object of the present invention is to provide an improved system for rendering test and measurement data of multiple channels as continuous or discontinuous surfaces in three-dimensional perspective on a display, in which the rendering may smoothly change from a three-dimensional view to an orthogonal or two-dimensional view, and vice versa.
  • the present invention embodies a system having a computer (or other microprocessor based platform or system) having memory with acquired, streaming, or previously stored, data representing multiple channels of signals in which
  • the signal of each channel has a value (y) which varies over time (x), and a display coupled to the computer.
  • the computer system segments the data of each channel into segments, orders the segments, and renders on the display each of the segments as one or more lines in accordance with consecutive values of the data associated with the segment, in which the rendered segments are aligned in their order in depth (z) along a three-dimensional perspective with gaps between adjacently rendered segments, and lines are rendered extending from each line of each of the rendered segments to form a three-dimensional plane in the gap to the next successive one of the rendered ordered segments, thereby forming a three-dimensional continuous or discontinuous surface characterizing the channel.
  • the surfaces of the channels are aligned on the display, preferably enabling a user to view relationships of two or more different channels.
  • edges of two or more adjacent planes of the surface that lie along the same two of the three dimensions may appear joined to each other as a common plane, and when two planes meet along different ones of at least two of the three dimensions, the two planes may appear to meet to form an edge.
  • Each channel is preferably rendered as a surface having a different color on the display to distinguish the channels from each other, rendered with shading (or gradients along surface depth and between varied signal values) to enhance a three-dimensional view of surfaces, and/or a degree of translucency to enable viewing (and discernment) of the channel or different channel(s) when overlaid on the display.
  • the computer system segmentation of the data of each of the channels is in accordance with segment start and/or stop conditions which are predefined or user defined.
  • the ordering of segments for each of the channels may be in accordance with predefined or user defined conditions, or the order of the segments of the channel may be the order in which the segments are segmented.
  • the number of segments rendered for each of the channels on the display may also be a predefined or user defined condition.
  • the ordered segments rendered on the display for each of such one or more channels advance in depth (z) in the three-dimensional perspective as the computer system continues to segment the data representing the signal of the channel into segments, and then renders such segments with planes extending from one or more lines thereof as an addition to the surface of the channel on the display.
  • the rendered segments and planes extending from one or more lines thereof associated with the oldest segment(s) may be removed from the view, thereby providing a smoothly flowing view of multiple signal channels in a three-dimensional view whereby newer to older segments of each signals channel are viewable as a scrolling, aging surface in a perspective of depth.
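The scrolling, aging behavior described above can be sketched in a few lines. This is a minimal illustration only, with a hypothetical class and names that are not from the patent, assuming a fixed cap on the number of visible segments per channel:

```python
# Sketch of the scrolling, aging view: keep at most MAX_SEGMENTS per channel,
# so that as newly segmented data is appended along the depth (z) axis, the
# oldest segments (and the planes rendered from them) drop out of view.
# MAX_SEGMENTS and the helper names are illustrative, not from the patent.
from collections import deque

MAX_SEGMENTS = 4

class ChannelSurface:
    def __init__(self, max_segments=MAX_SEGMENTS):
        # deque(maxlen=...) discards the oldest entry automatically
        self.segments = deque(maxlen=max_segments)

    def add_segment(self, segment):
        self.segments.append(segment)

    def visible(self):
        """Segments ordered newest to oldest along depth (z)."""
        return list(reversed(self.segments))

surf = ChannelSurface()
for i in range(6):
    surf.add_segment([i, i + 1])
# Only the four newest segments remain; segment [5, 6] is nearest the viewer.
```

Using a bounded deque means removal of the oldest segment is implicit, which matches the smoothly flowing, aging view: each new segment pushes the display back in depth and the oldest ribbon falls away.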
  • the computer system, upon detection of a condition predefined by the user within the data of any one or more of the channels, may adjust or add to the number of surfaces displayed by rendering new surface(s) on the display aligned with other surfaces, segmenting and rendering in accordance with the subset(s) of the data associated with such condition. Consequently, as new signal data is acquired along the depth (z) of the three-dimensional perspective, the computer system may adjust (add or remove) a varying number of rendered surfaces depending on the number of occurrences of a specified condition within newly acquired or previously acquired data.
  • the computer system has user controls, such as a keyboard, mouse, touch screen surface upon the display, or the like, enabling the user to manipulate his/her view of the three-dimensional model representing the signal channels, such as, for example, changing the viewing position from which the rendered surfaces are oriented (or centered) so that the user can select the angle of view in, around, or along the model in all three dimensions.
  • the data representing the multiple signals may be from an acquisition device that is coupled by leads to a unit or device under test, where the acquisition device provides the data in near real-time to the computer system for processing into a three-dimensional view, or the data may be provided in real-time from the acquisition device, or such data may be any data representing multiple signals stored in memory or otherwise accessible to the computer.
  • the present invention also provides a method for visualizing in a three-dimensional view having the steps of: segmenting the data of a channel into segments in which each of the segments starts at a predefined or user defined condition, ordering the segments, and rendering on a display each of the segments as one or more lines in accordance with the values of the data associated with the segment, in which each of the rendered segments is aligned in the order on the display along a three-dimensional perspective with gaps between adjacently rendered segments, and each line of each one of the rendered segments extends to form a three-dimensional plane in the gap to the next successive one of the rendered ordered segments, thereby forming a three-dimensional continuous or discontinuous surface characterizing the channel.
  • the surface rendered represents one of a plurality of surfaces rendered on the display aligned with each other, in which each of the plurality of surfaces is provided by carrying out the segmenting, ordering, and rendering steps on data representing each one of the channels.
  • An advantage of the present invention over traditional two-dimensional viewing of test and measurement data is that the user is able to request the system to organize and display the data in ways that lead to quick comparison and identification of problems that are not easily discernible in a two-dimensional view. For example, take an acquisition with a complex pattern that repeats over time. With a conventional two-dimensional logic analyzer display, as shown for example in FIG. 1 , the user must either condense the time scale of the graph (zoom out in time), losing most of the detailed data, or scroll back and forth left and right to see if any of the instances of the pattern deviate in any detail from the expected. This is extremely difficult because there is no way to directly compare the occurrences with details on the screen at the same time.
  • the user can separate and reorganize each repetition of the pattern along the z dimension (depth) as a continuous or discontinuous surface viewed in three dimensions.
  • a deviation from the norm in any one of the repetitions in any channel(s) is quickly identifiable.
  • An analogy to this would be searching a group of hole-punched pages to find if, and which ones, might contain holes that are misaligned. Laid out next to each other on a long table, examining the sheets to find differences could take very long or completely miss them. Instead, if the sheets are stacked on top of each other, any variances can be quickly identified.
  • orthogonal refers to an orthogonal projection view and is technically still a view of a 3-D representation, but with zero perspective applied; the resulting image appears to the user to be a “flat” 2-D rendering.
  • when a 3-D representation is viewed orthogonally from a perpendicular vantage point, it becomes indiscernible from a 2-D graph.
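One way to picture the relationship between the perspective view and the zero-perspective orthogonal view is to blend between a perspective and an orthographic projection matrix. The sketch below is illustrative only and is not the patent's geometric method (which is described with FIGS. 29A-29C); all function names and parameter values are assumptions:

```python
# Sketch: linearly interpolate between a perspective and an orthographic
# projection matrix as an animation parameter t goes from 0 (full 3-D
# perspective) to 1 (flat orthogonal view with zero perspective).
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0]], dtype=float)

def orthographic(half_h, aspect, near, far):
    """Standard orthographic projection matrix (symmetric frustum)."""
    return np.array([
        [1.0 / (half_h * aspect), 0, 0, 0],
        [0, 1.0 / half_h, 0, 0],
        [0, 0, 2.0 / (near - far), (far + near) / (near - far)],
        [0, 0, 0, 1]], dtype=float)

def blended_projection(t, fov_y=np.pi / 3, aspect=16 / 9, near=0.1, far=100.0):
    """t = 0 -> full perspective; t = 1 -> flat orthogonal view."""
    p = perspective(fov_y, aspect, near, far)
    o = orthographic(half_h=1.0, aspect=aspect, near=near, far=far)
    return (1.0 - t) * p + t * o
```

At t = 1 the bottom-row perspective-divide term vanishes, so depth no longer foreshortens the image and the surface reads as a flat 2-D graph, consistent with the orthogonal view described above.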
  • the system and method of the present invention may operate to visualize signals in other domains, such as the frequency domain, or from other sources.
  • the signals may be associated with or represent other forms of data such as a stream of video for analysis of frames thereof for anomalies, or signals from any sources that are variable over a domain (not limited to test and measuring electrical signals) which a user desires to visualize that can be captured and stored in memory of the computer system.
  • FIG. 1 is an example of the output rendered on the display of a typical logic analyzer or digital oscilloscope, where the x-axis is time and the y-axis is the value (amplitude or voltage level);
  • FIG. 2 is a block diagram of the system of the present invention in which the computer system operates in accordance with software for enabling three-dimensional rendering of data representing multiple signal channels on a display;
  • FIG. 3 is an example of the output rendered by the system of FIG. 2 in displaying a three-dimensional view of four channels of electrical signals over time as four three-dimensional continuous or discontinuous surfaces;
  • FIG. 4 is a block diagram of the modules of the software operating on the computer system of FIG. 2 ;
  • FIG. 5 is an illustration showing an example of how the sequence of samples used to generate segments by the segment controller module of FIG. 4 consists of composite data samples representative of samples from all the individual channels at the same point in time;
  • FIG. 6A is an illustration showing an example of selection of segments by the segment controller module of FIG. 4 from a time-contiguous series of composite samples based on a user selectable condition;
  • FIG. 6B is an example of a two-dimensional array ordering by the segment controller module of FIG. 4 of the segments selected in FIG. 6A along z from earliest to latest;
  • FIG. 7 is a flow chart of the processes in software on the computer system of FIG. 2 for the three-dimensional model generator of FIG. 4 ;
  • FIG. 7A is an example of the operation of step 38 in the flow chart FIG. 7 ;
  • FIGS. 8A, 8B, 8C, and 8D graphically illustrate the processing of a single channel by the three-dimensional model generator of FIG. 4, as described in FIG. 7, from a two-dimensional array of segments of a signal channel into a three-dimensional surface, in which FIG. 8A represents the ordered segmented signal data in a two-dimensional array (x, z) in which each entry in the array has a value (y); FIG. 8B shows a graphical representation of the array as point locations in three-dimensional perspective; FIG. 8C shows a graphical representation of lines connecting time-adjacent points of each segment of FIG. 8B; and FIG. 8D shows a graphical representation of planes formed from such segments to form a three-dimensional channel surface, where the points are the vertices of the planes;
  • FIGS. 9A and 9B show different examples of single-bit digital signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIGS. 9C and 9D show different examples of analog or multi-bit digital signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIG. 10 is another example of digital or analog signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIG. 11 shows a schematic illustration of multiple three-dimensional channels as ordered surfaces rendered in a single view on a screen of the display of FIGS. 2 and 4 , in which the surfaces characterizing each of the channels are aligned with each other with a common time base (x) to facilitate viewing of the relationship of channels with respect to each other;
  • FIG. 12 shows an example of four three-dimensional channel surfaces rendered in a view on a screen of the display of FIGS. 2 and 4 having non-signal objects of three-dimensional planes extending through the channel surfaces to illustrate events in signals occurring at the same time along multiple signal channels, and measurement and grid markers associated with time;
  • FIG. 12A is an example of the output rendered by the system of FIG. 2 in visualizing a two-dimensional view of four channels of electrical signals over time, where three of the channels are digital and one of the channels is analog;
  • FIG. 12B is an example of the output rendered by the system of FIG. 2 in visualizing a three-dimensional view of same four channels of FIG. 12A ;
  • FIG. 13 illustrates the use of lower resolution sections of a rendered view of multiple signal channels on the display of FIGS. 2 and 4 as the view extends in virtual distance from the viewer;
  • FIGS. 14-20 show different representations of a three-dimensional view of multiple channels in which the user changes the viewing position, angle, or scale;
  • FIGS. 21-28 show different representations of a three-dimensional view of multiple channels changing from the three-dimensional perspective view of FIG. 21 to the two-dimensional or orthogonal view of FIG. 28 , with intermediate representations at FIGS. 22-27 to visualize the smooth transition;
  • FIGS. 29A-29C are collectively a diagram showing the geometric method for smooth transitioning on the display from a three-dimensional rendering to a two-dimensional or orthogonal view, as shown for example in FIGS. 21-28 to enable smooth transitions of the display by the model visualizer of FIG. 4 ;
  • FIGS. 30A-30B are collectively a flow chart of the process in software on the computer system of FIG. 2 for smooth transitioning on the display from a three-dimensional rendering to a two-dimensional or orthogonal view, as shown for example in FIGS. 21-28 ;
  • FIG. 31 is a ray diagram to illustrate the geometry for reorienting of the rendered three-dimensional model of multiple channels to a view perpendicular to a user selected point of interest by the model visualizer of FIG. 4 ;
  • FIG. 32 is a flow chart of the processes in software on the computer system of FIG. 2 for automatic reorienting of the rendered three-dimensional model of multiple channels to a view perpendicular to a user selected point of interest by the model visualizer of FIG. 4 ;
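The reorientation that the above flow chart describes can be roughly sketched as placing the viewpoint along the surface normal at the user-selected point of interest, so the view is perpendicular to it. This is an illustrative helper only, not the patent's algorithm, and all names are assumptions:

```python
# Sketch: reorient the viewpoint so it looks perpendicularly at a
# user-selected point of interest by positioning the camera along the
# surface normal at that point, at a fixed distance.
import numpy as np

def perpendicular_camera(point, normal, distance):
    """Return (eye, look_dir): camera position and unit viewing direction.

    point: (x, y, z) point of interest on a channel surface.
    normal: surface normal at that point (need not be unit length).
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    eye = np.asarray(point, dtype=float) + distance * n
    return eye, -n  # the camera looks back down the normal at the point

eye, look = perpendicular_camera(point=(2.0, 1.0, 0.0),
                                 normal=(0.0, 0.0, 1.0),
                                 distance=5.0)
# A front-on, perpendicular view of the point of interest.
```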
  • FIGS. 33-43 show different representations of a three-dimensional view of multiple channels smoothly reorienting to a view perpendicular to a user selected point of interest, and then transitioning to a two-dimensional or orthogonal view to perform in-depth analysis of the data and timing around the event;
  • FIGS. 43-49 show different representations of an orthogonal view of a three-dimensional view of multiple channels as the viewpoint is smoothly rotated around a point of interest from a top-down view to a traditional front-on view;
  • FIG. 50 shows an example of a control panel provided along the screen on the display of FIGS. 2 and 4 by the user interface module of FIG. 4 providing labels of the signal channels being rendered, the current activity of each signal channel, and current view and angle;
  • FIGS. 51A and 51B are similar to FIGS. 6A and 6B , respectively, in which the selection criteria of segments result in overlap and gaps of the samples within the segments in order to more generally show segmentation;
  • FIGS. 52-55 illustrate segmentation of signal data by the segmentation controller module of FIG. 4 to form a two-dimensional array of ordered segments based on different user defined parameters and the characteristics of the signals;
  • FIG. 56 illustrates the addition of a reference segment at the segmentation controller module of FIG. 4 that may be specified by the user and generated for inclusion in the set of segments;
  • FIG. 57 illustrates an example of the reordering of a two-dimensional array of segments by the segmentation controller module of FIG. 4 based on conditions such as values in the data, original source, or acquisition order;
  • FIG. 58 is the display of FIGS. 2 and 4 of a three-dimensional view of a sampled live or recorded video image streaming from a unit under test along with other signals relevant to analysis in the system of the present invention.
  • the system 10 of the present invention has a computer system 12 with software in accordance with the present invention for rendering on a display 13 coupled to the computer system.
  • the computer system 12 is connected to an acquisition device 15 , such as a LeCroy Model No. MS-250 or MS-500 Mixed Signal Oscilloscope, or other logic analyzer or digital oscilloscope, receiving electrical signals from a device (or unit) 16 under test via test leads 17 .
  • three leads 17 are shown for providing three electrical signals, but other numbers of leads may be used depending on the acquisition device 15 .
  • the acquisition device outputs digital data to the computer system 12 representing multiple channels of electrical signals received from leads 17 , where each channel represents a signal from one of the leads 17 having amplitude or value over time.
  • the computer system 12 stores the received data in its memory (RAM), and may also store the data in a file in memory (hard drive or optical drive) of the computer system for archival storage or for later non-real time rendering of the signals by the computer system on display 13 .
  • the computer system 12 has hardware and/or software enabling acquisition of data from the acquisition device 15 and storage of such data in its memory, as typically provided by the manufacturer of the acquisition device 15 for operating with the acquisition device.
  • the computer system 12 may represent a personal computer (PC) system, work station, laptop computer, or other microprocessor based platform, which is coupled to user controls 13 a, such as a keyboard, mouse, touch screen surface upon the display, or combination thereof, enabling the user to control the computer system 12 operation.
  • user controls 13 a may be interactive with software on the computer system 12 to provide a graphical user interface on display 13 .
  • the system 10 of the present invention includes computer system 12 , display 13 , and user controls 13 a, in which computer system 12 has a graphics and video card and software for operating same for interfacing and outputting to display 13 , as is typical of a PC based computer system; the system 10 may be part of an acquisition and display system 11 with acquisition device 15 .
  • the digital data representation of channels of electrical signals received in memory from the acquisition device 15 is processed by the computer system 12 for rendering on display 13 .
  • An example output on display 13 is shown in FIG. 3 , having four signal channels each represented by three-dimensional (3-D) surfaces 18 to provide a three-dimensional model, in which measurement grids (or scale) 20 represent the time base (x), and the height along each surface 18 is representative of the amplitude or value (y) of the signal.
  • although the 3-D view is illustrated as occupying the entire screen of display 13 in FIG. 12 , the view may be in a window on display 13 and may have a control panel as shown in FIG. 50 .
  • the computer system 12 segments the data in accordance with predefined or user defined start and/or end conditions of each segment, orders the segments in accordance with predefined or user defined conditions, renders on the display each of the segments as line(s) having variations in height (y) in accordance with the values of the data of such segment, in their order in depth (z) along a three-dimensional perspective (x, y, z) with gaps between adjacent segments, and then from each of the segments extends three-dimensional planes from each of the line(s) to the next segment in depth (z).
  • the third segment 19 of the topmost surface 18 is denoted by lines which vary (fall and rise) in height (y) along time (x), and for each part of such lines having the same height (y), a three-dimensional plane 21 is extended in a gap 19 b to abut lines of the next segment 19 a in depth (z).
  • Each segment of a channel on the display forms one ribbon of surface 18 , and the combination of such ribbons forms a continuous or discontinuous surface 18 .
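The ribbon construction described above can be sketched as follows. This is a minimal, hypothetical illustration (not the patent's implementation) that maps a two-dimensional array of segment values to 3-D vertices and to the quads bridging each gap between adjacent segments:

```python
# Sketch: build a 3-D channel surface from a two-dimensional array of
# ordered segments. Row z holds a segment's values y over sample index x;
# each pair of adjacent segments is bridged by quads (four-vertex planes)
# spanning the gap in depth (z). Equal-length rows are assumed here.

def surface_mesh(segment_rows, gap=1.0):
    """Return (vertices, quads) for a channel surface.

    vertices: list of (x, y, z) points.
    quads: 4-tuples of vertex indices forming the planes between
    segment z and segment z+1.
    """
    verts, quads = [], []
    width = len(segment_rows[0])
    for z, row in enumerate(segment_rows):
        for x, y in enumerate(row):
            verts.append((float(x), float(y), z * gap))
    for z in range(len(segment_rows) - 1):
        for x in range(width - 1):
            a = z * width + x            # (x,   z)
            b = a + 1                    # (x+1, z)
            c = (z + 1) * width + x + 1  # (x+1, z+1)
            d = (z + 1) * width + x      # (x,   z+1)
            quads.append((a, b, c, d))
    return verts, quads

rows = [[0, 1, 1], [0, 0, 1]]        # two segments of three samples each
verts, quads = surface_mesh(rows)
# 6 vertices, and 2 quads bridging the single gap between the segments
```

Where adjacent quads share the same height (y) they render as a single common plane, and where heights differ the quads meet along a visible edge, matching the surface behavior described in the specification.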
  • depth (z) in the perspective of a channel surface 18 relates to previous acquisitions of the channel, thereby enabling a view where a user can analyze the relationship of two or more different channels by their respective surfaces 18 along the relative time base (or scale) 20 over a series of independent acquisitions of such different channels.
  • the user may add three-dimensional planes at particular times along the time base through different channel surfaces 18 to assist in the analysis.
  • the data representing multiple channels of signals may come from the acquisition device 15 in real-time (e.g., streaming), but may also or alternatively represent a mathematical simulation, or data stored in a file in memory (hard drive, optical drive, volatile (RAM) memory, FLASH drive, or other memory storage device) of computer system 12 that is not acquired in real-time from acquisition device 15 , or data from any other system or acquisition device capable of producing a set of acquired signal data.
  • This enables system 10 to be portable or stand-alone as well as part of a complete acquisition and display system 11 with acquisition device 15 .
  • the software operating on the computer system 12 for generating the three-dimensional surfaces 18 is shown having modules or software components 14 , 22 , 24 , 26 , and 28 : user interface 14 , segment controller 22 , three-dimensional (3-D) model generator 24 , model enhancer 26 , and model visualizer 28 .
  • the model visualizer 28 interfaces with software and/or hardware 32 of the computer system 12 to render and animate the model on display 13 to produce the desired scene, while the user interface 14 allows the user to control the visualized output on the display by changing parameters to modules 22 , 24 , 26 , and 28 in response to user input from user controls 13 a.
  • These modules will now be described.
  • the segment controller module 22 receives data 17 a representing one or more separate channels of signals from acquisition device 15 and places it in a historical data store in memory of the computer system 12 .
  • the segment controller 22 uses the set of new and/or historical data to generate individual time-contiguous segments containing samples for each of the channels, and arranges them into a two-dimensional array of data.
  • the combined segment array of multiple channels may be considered a matrix. The exact format of such data is not restricted to a single representation.
  • Segmentation is performed on all available channels in parallel so as to maintain all time relationships.
  • the samples used for purposes of segmentation, and the samples in the generated segments, may be considered composite samples, whereby each composite sample contains the complete data of each of the included channels' samples from that same point in time.
  • This is represented in FIG. 5 , where Channels 0 to M (C0-CM), each containing samples 0 to N (C0S0-CMSN) with an amplitude or value (y) along time (x), are operated upon by the segment controller module 22 as a set of composite samples S0-SN (e.g., S0 contains the data of C0S0-CMS0).
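The composite-sample grouping above amounts to zipping the per-channel sample lists together. A minimal sketch, assuming equal-length per-channel lists (the function name is illustrative, not from the patent):

```python
# Sketch: form composite samples S0..SN, where each composite sample holds
# the value of every channel C0..CM at the same point in time.

def composite_samples(channels):
    """Zip per-channel sample lists into a list of composite samples.

    channels: list of M+1 lists, each with N+1 values (C0S0..CMSN).
    Returns N+1 tuples; tuple k contains (C0Sk, ..., CMSk).
    """
    if not channels:
        return []
    n = len(channels[0])
    if any(len(c) != n for c in channels):
        raise ValueError("all channels must have the same number of samples")
    return list(zip(*channels))

# Example: three channels, four samples each.
c0 = [0, 1, 1, 0]
c1 = [5, 5, 3, 3]
c2 = [9, 8, 7, 6]
samples = composite_samples([c0, c1, c2])
# samples[0] == (0, 5, 9): the data of C0S0, C1S0, C2S0
```

Segmenting these composite samples, rather than each channel independently, is what preserves the time relationships across all channels.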
  • a representation of the segmentation of data of multiple channels into segments is shown in a basic case in FIGS. 6A and 6B , and in a more general case in FIGS. 51A and 51B .
  • the data for each channel may be segmented along user selectable conditions such as: consecutive time slices, reoccurrence of a pattern, specific start and/or stop conditions, acquisition source, different acquisitions, or combinations thereof.
  • the user selectable condition may be a pattern, such as a switch from low to high or high to low, or a particular sequence of data values.
  • the segments may be uniform or non-uniform in length (x).
  • FIG. 6A shows an example of segmentation of a set of composite samples, containing data for one or more channels, into five segments labeled A to E in length (time), in which one or more amplitudes or values (y) are shown by boxes along time (x).
  • a two-dimensional array of this data ordering the segments A-E of FIG. 6A in depth z is shown in FIG. 6B .
  • Although the segments are shown non-uniform in FIGS. 6A and 6B , the segments may also be of uniform length (time).
  • the organization of the array in two dimensions may be different than shown, and the particular data structure and organization is not limited to that shown in FIG. 6 or 51 . Other segmentation of signals will be discussed later in connection with FIGS. 52-57 .
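As a hedged illustration of the consecutive-time-slice case, the segmentation into a two-dimensional array might be sketched as follows. The function and variable names are hypothetical, not from the patent.

```python
# Hypothetical sketch of segmentation by consecutive time slices
# (FIGS. 6A-6B): a stream of composite samples is cut into segments,
# and the list of segments forms the two-dimensional array 23
# (index z selects a segment, index x a sample within it).
def segment_fixed_length(samples, seg_len):
    """Split composite samples into consecutive time slices of seg_len."""
    return [samples[i:i + seg_len] for i in range(0, len(samples), seg_len)]

samples = ['S0', 'S1', 'S2', 'S3', 'S4']  # composite samples over time
array_23 = segment_fixed_length(samples, 2)
# the trailing segment may be shorter, i.e. non-uniform in length
assert array_23 == [['S0', 'S1'], ['S2', 'S3'], ['S4']]
```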
  • the user interface 14 provides the user with one or more screens on display 13 that enables the selection of parameters, such as selecting which of the available channels the user desires to view, the layered order of such channels, the maximum number of segments of each channel to be extracted and rendered in the view (e.g., 1 to 100), and start and/or stop conditions by which each channel will be segmented, such as described above.
  • the user can further select, via the user interface 14 , the color of each channel to be rendered, shading to be applied, and the degree or level of translucency of each channel.
  • the user can also add non-signal objects to the view such as one or more reference markers (at chosen times (x and/or z)), measurement grids and reference planes (at preset times (x and/or z)), and furthermore can adjust the color and translucency of such additional objects.
  • These parameters may be set to predefined default levels if the user does not wish to select user defined parameters.
  • the user interface 14 may use graphical user interface elements, such as menus, input fields, and the like, typical of a software user interface. Other means for entry of these parameters may be used, such as buttons or knobs along a housing having the display 13 , where the housing includes the computer system 12 , to select one or more of these parameters with or without a graphical user interface on display 13 .
  • the two-dimensional array 23 of data containing samples composed of one or more channels is input to the three-dimensional model generator 24 ( FIG. 4 ) to transform the data into a digital 3-D signal model.
  • the operation of three-dimensional model generator 24 is shown in FIG. 7 .
  • the computer system 12 selects a depth (number of segments) of an input buffer in memory of computer system 12 .
  • the three-dimensional model generator 24 separates the 2D array of composite samples into individual 2D arrays (step 34 ), one for each channel and then filters out (removes) any undesired channels (as specified by predefined or user defined conditions) not to be displayed (step 35 ). This reduces the amount of data that must be processed by the subsequent functions to only that requested by the user.
  • the model generator 24 uses the number and order of the channels, and the maximum number of segments, as selectable by the user via user controls 13 a to user interface 14 .
  • the sample data from individual channels are individually located within a y portion of the three-dimensional space. For each of the N number of 2D arrays of channel data, steps 36 - 44 are performed within the y portion assigned by the computer system for that channel. Based on the x and z indices of each sample within the array for that channel, a respective location on the x-z plane of the model is calculated (step 36 ). It is very common when sampling a test point to have a series of time-contiguous samples of the same value. Therefore, in a preferred embodiment the process reduces the workload of the system by eliminating extraneous points that do not describe changes in the signal level over time (step 38 ), as shown in the example of FIG. 7A . This is done so far as is possible without any loss of information.
  • step 38 provides for optimization, whereby extraneous vertices that do not represent change in the value of the signal are removed for efficiency in processing and rendering. Although preferable, step 38 may optionally be removed.
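Step 38 amounts to dropping interior samples whose value matches both neighbors, so that only value changes and the endpoints remain. A minimal sketch, assuming samples are (x, value) pairs; the helper name is invented for illustration:

```python
# Illustrative reduction of extraneous points (step 38): drop interior
# samples whose value equals both neighbors, keeping the endpoints so
# no change in signal level over time is lost.
def drop_extraneous(points):
    """points: list of (x, value); keep endpoints and value changes."""
    if len(points) <= 2:
        return points[:]
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        if not (prev[1] == cur[1] == nxt[1]):
            kept.append(cur)
    kept.append(points[-1])
    return kept

pts = [(0, 1), (1, 1), (2, 1), (3, 0), (4, 0), (5, 0)]
assert drop_extraneous(pts) == [(0, 1), (2, 1), (3, 0), (5, 0)]
```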
  • a y value is then calculated for each to create a location in 3-D space (step 40 ).
  • the y value relies on two components. Each channel is given a minimum and maximum y value in the model space within which all related samples will be located. The specific values for this y range are for presentation and clarity purposes to provide separation from the other channel surfaces 18 when rendered. In a preferred embodiment these are configurable by the user, as desired, and would not preclude the ability to overlap the locations of separate channels in the same space.
  • the second component in generating a y value for each sample is the stored value associated with it, e.g., voltage of the given test point at that time.
  • the final y value for the point is calculated as a location within the channel's y range which is proportional to that sample's value relative to the maximum value that can be represented for that channel based on the input source.
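The two-component y calculation can be sketched as a linear mapping of the sample's value into the channel's assigned y range. The names, the 8-bit source, and the range endpoints are invented for illustration:

```python
# Sketch of the y calculation (step 40): the sample's stored value is
# mapped proportionally into the channel's configured y range, where
# max_value is the largest value representable by the input source.
def sample_y(value, y_min, y_max, max_value):
    """Locate value within [y_min, y_max] proportionally to max_value."""
    return y_min + (value / max_value) * (y_max - y_min)

# channel assigned the y range [2.0, 3.0]; 8-bit input source
assert sample_y(0, 2.0, 3.0, 255) == 2.0
assert sample_y(255, 2.0, 3.0, 255) == 3.0
```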
  • Those 3-D points that are contiguous in time for each given channel are then connected by lines (step 42 ). In one specific embodiment this is accomplished by using vertical and horizontal lines to generate a digital representation by forming right-angled ‘steps’. In another embodiment direct angular lines are created to represent interpolation of the signal value between samples. This is useful, for example, if the source was an analog channel. Furthermore, multi-bit samples or combinations of multiple channels may be represented by bus symbols rather than a basic line. In any case, the user via the user interface 14 may select the desired form of presentation. A representation of the lines connecting contiguous points in time for each ordered segment 19 is shown in FIG. 8C for the example of FIGS. 8A and 8B , in which each pair of adjacently rendered ordered segments 19 is rendered with a gap 19 b (or space) between them for rendering of planes as will be described below.
  • lines are extruded or extend in the z dimension (step 44 ) in the three-dimensional perspective to form planes 21 along gaps 19 b, where common y values contiguous in time (x) along the same segment form a plane 21 (x, z), and different consecutive y values in time (x) along the same segment form an orthogonal step (y,z) or a sloped plane 21 (x,y,z).
  • the planes 21 are extruded in depth (z) such that the plane 21 for each segment 19 meets up with the following segment in depth z, thereby joining to provide a three-dimensional synthesized surface 18 for each channel that is easily discernible from different perspectives as shown in FIG. 8D .
  • Each segment 19 of a channel, once rendered on the display as one or more lines along the x,y axes with lines along the z axis forming planes 21 extending therefrom, represents one ribbon of a surface 18 .
  • common y values contiguous along the same segment 19 and among a series of ordered segments 19 along the x axis may form (or appear or rendered as) a common plane 21 a (x, z), and different consecutive y values along the same segment which are common among a series of ordered segments 19 may form a common orthogonal step (plane) 21 b (y,z) or a sloped plane 21 (x,y,z).
  • When edges of two or more adjacent planes of a channel are located along the same two of the three dimensions (x, y, z), they may appear joined to each other as a common plane, and when two planes meet along different ones of at least two of the three dimensions, such two planes appear to meet to form an edge.
  • surface 18 in the example of FIG. 8D is discontinuous between some of the planes 21 providing a discontinuous surface (see for example opening 21 c between five rendered planes, or openings 21 d and 21 e ).
  • the surface 18 may be continuous.
  • a discontinuous three-dimensional surface 18 may optionally be made continuous by rendering additional three-dimensional planes along such discontinuities, such as in openings 21 c, 21 d, and 21 e, and other openings where no rendered planes have edges adjacent to each other.
  • Other representations of surface 18 synthesis from data are shown for example in FIGS. 9A-9D , and FIG. 10 .
  • the surface may be formed from angular planes, or from a combination of angular and non-angular planes. This method is particularly useful when the given channel being represented is derived from analog data instead of digital.
  • the user controls the three-dimensional perspective, as will be shown later, to enable the user to view down the z axis of the surfaces 18 for each channel and thereby visualize variation patterns over a deep quantity of acquisitions or cycles (i.e., segments up to the maximum number of segments specified by the user).
  • a history FIFO of acquisitions, or data segments can be used to place new data at the front of surfaces 18 and fluidly “scroll” older acquisitions (or data segments) away from the user along the z axis in real-time.
  • the ordered segments 19 rendered on the display 13 for such channel advance in depth (z) in the three-dimensional perspective as the computer system 12 continues to segment the data representing the signal of the channels into segments 19 , which are then added to the surface 18 of such channel as a new ribbon to such surface 18 .
  • data segments which form an array 23 for one or more channels may be stored before, after, or concurrent with rendering on display 13 in memory of computer system 12 or external memory device accessible for storage by computer system 12 , and thereby provide an archive for later display of such segments for analysis.
  • the three-dimensional model generator 24 may generate multiple three-dimensional models 25 a for additional views in parallel. These additional views are generated from decimated, lower-resolution copies of input data 17 a produced by segment controller 22 as array data 23 a , and result in simplified versions 25 a of the base model 25 . These are then used later on by the model visualizer 28 to improve rendering efficiency and increase the volume of data that can be displayed at once while retaining responsiveness to the user and higher update rates of renderings on the display 13 , as will be described further below.
  • the 3-D model generator 24 receives the two-dimensional array of data 23 for one or more signal channels, and translates it into a three-dimensional model representation where the signal voltage amplitude and time are used to give each sample volume and location in three dimensions (x, y, and z). This produces a complete model where individual channels are viewed as 3-D surfaces 18 layered relative to each other in three-dimensional space over a common time (x and z).
  • the resulting model 25 is a record in memory (RAM) of the computer system 12 for all the channels to be rendered in a view of vertices in x, y, z space.
  • the record has for each channel the vertices of each segment (such as represented by FIG. 8B ) in an order (e.g., left to right) defining lines between vertices (such as represented by FIG. 8C ), and the vertices in an order (e.g., top to bottom) defining the planes or surfaces between segments (such as represented by FIG. 8D ).
  • the model is enhanced to improve its usefulness to the user by the model enhancer 26 .
  • most if not all of these enhancements are under the control of the user by controls 13 a to enable and configure as best suits their needs via the user interface 14 .
  • a different color is applied to each surface 18 , as selectable by the user via user interface 14 .
  • the area of planes 21 of a ribbon between adjacent samples is applied with a gradient, where the shade of the color approached at each sample point represents the voltage value at that same point.
  • each surface 18 is associated with values of data associated with the surface.
  • Each surface 18 thus preferably varies in one or more characteristics (e.g., color, intensity, shading, or gradient) to distinguish the surfaces representing different channels from each other, to distinguish different planes of the same surface 18 from each other, and to distinguish the areas of the planes of the same surface 18 from each other.
  • the user via the user interface 14 , is able to configure surfaces in the model to be applied with varied degrees of translucency.
  • This, combined with a 3-D vantage point, enables viewing one surface 18 through another of the same or through multiple layered surfaces 18 , and provides the ability for one pixel on the screen to give the user information on the value of multiple samples at once. Viewed from above, down along the direction of the y axis, this ability can be used to make asynchronous data between two or more channels instantly apparent.
  • Such capability is not possible with the conventional logic analyzer software for displaying two-dimensional signals of multiple channels.
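The translucency described above corresponds to standard "over" alpha compositing, which the 3-D renderer 32 would perform per pixel; the sketch below illustrates the blending arithmetic only, and the function name and colors are invented:

```python
# Hedged sketch of standard "over" alpha compositing: a translucent
# front color is mixed with the color behind it, which is why a layered
# surface 18 remains visible through another translucent surface.
def blend_over(src_rgb, src_alpha, dst_rgb):
    """Composite a translucent source color over an opaque destination."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))

# 50% translucent red over opaque blue yields an even mix of the two
assert blend_over((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0)) == (0.5, 0.0, 0.5)
```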
  • individual samples can also be further enhanced in the 3-D model with particular color, translucency, outlining, the appearance of glowing, or other special graphical characteristic effects as to provide for highlighting of desired points.
  • These enhancements are applicable based on a specific sample, or samples meeting given user criteria such as value.
  • sequences of samples can similarly be highlighted based on a certain sequential pattern or variance in either the x or z dimensions.
  • the user controls 13 a to interface 14 may enable the user to select desired value(s) or patterns within a channel to be highlighted by desired graphical characteristic(s).
  • reference planes 46 a, 46 b, and 46 c may be added that give scale and alignment information about the samples or identify special locations, such as grid planes 46 a, measurement markers 46 b, trigger points 46 c, and scale 20 .
  • the reference planes may extend through the channel surfaces 18 along the entire depth (z) of the view or less than the entire depth, as shown for example by reference planes 46 b.
  • non-signal objects 20 , 46 a, 46 b, and 46 c can be customized by the user via the user interface 14 with varying colors and translucencies so as not to be lost amongst or hide the signal data being shown around them.
  • An example of the 3-D model of surfaces 18 on a screen of display 13 is shown for example in FIG. 12 having vertical reference planes 46 a, 46 b, and 46 c with four surfaces 18 representing data of different channels.
  • the 3-D model and its associated surfaces 18 are shown in the figures in gray-scale, but typically each surface 18 , scale 20 , and reference planes 46 a, 46 b, and 46 c are in color as described herein.
  • channels may be representative of digital or analog sources or a combination thereof.
  • surfaces 18 may be rendered in an analog form or digital form based on user selection.
  • An example rendering of mixed analog and digital channels in a front-on orthogonal view is shown in FIG. 12A
  • an example of mixed analog and digital channels with three-dimensional perspective is shown in FIG. 12B .
  • the earlier described record defining model 25 is modified by model enhancer 26 to add a number (or code) for each vertex defining its color and translucency level.
  • this number may have four values (R, G, B, α), where the first three define the R (red), G (green), and B (blue) values, respectively, that describe the color (or color mixture) of the vertex, and the fourth byte is a value (α) giving the level of translucency of that vertex's color in accordance with its R, G, B values.
  • a completely opaque pure white vertex can be described as (1.0, 1.0, 1.0, 1.0), while a 50% transparent pure black vertex is described as (0.0, 0.0, 0.0, 0.5), and a slightly transparent yellow vertex can be described as (1.0, 1.0, 0.0, 0.9).
  • vertices defining the non-signal objects (e.g., reference plane(s) 46 a, 46 b, 46 c, and scale 20 ) and their color and translucency values are likewise added to the record.
  • the modified record represents a 3-D display model 27 which is used by model visualizer 28 to produce rendering instructions 29 representative of the visualization of the model 27 to a software/hardware renderer 32 for output on display 13 and thereby produce the desired visual image.
  • the visualizer 28 performs scaling of the model 27 in any or all of the three dimensions x, y, z based on predefined or user-defined conditions. This allows the user to condense each axis independently, altering the proportions and the amount of data that is displayed on the display 13 .
  • the visualizer 28 takes into account the user's simulated position in the 3-D environment and their viewing angle to determine the portion of model in view.
  • the user controls 13 a via the user interface 14 enable the user to input the desired scaling in x, y, z and select any change of simulated user position and viewing angle within or around the three-dimensional model.
  • the change of simulated user position may be performed using buttons on the user interface's keyboard that is coupled to the computer system 12 or clicking (pressing) a mouse button to select where on the image of the 3-D model will be the new viewing position, or clicking down a mouse button and holding down that button while dragging the image to move the position or angle of view about the current viewing position or angle, and releasing that button when the desired view is obtained.
  • Other means of using the user interface 14 may also be used to select or change viewing position and angle, including to top views, bottom views, side views, and any other angular view there between, as desired by the user to view the relationship between two or more channels, or patterns in a single channel.
  • the model visualizer 28 produces rendering instructions 29 for the view to be displayed.
  • the rendering instructions 29 are in a format and code defined by the three-dimensional software/hardware renderer 32 that receives such instructions.
  • the three-dimensional software/hardware renderer 32 is a component of the computer system 12 and has graphics libraries and (optionally) 3-D acceleration hardware to output a three-dimensional image in accordance with such instructions.
  • Such software/hardware renderer 32 enables a fast frame rate and three-dimensional rendering effects, and is often used for video-game rendering on personal computers, but has not been utilized in the field of display of test and measurement data.
  • Examples of commercially available three-dimensional software/hardware renderers 32 are commercial video accelerator hardware/software, such as an ATI Radeon or NVidia GeForce series graphics cards and their drivers.
  • the software of model visualizer 28 uses widely available OpenGL software libraries for interfacing to the card. Alternately, the Microsoft DirectX standard or other video graphics library and/or hardware may be chosen.
  • the model visualizer 28 logically separates the model 27 into sections in the x and/or z dimensions. Based on the viewing angle and virtual distance from the viewer, it then determines each individual section of the view to be displayed and chooses, out of multiple resolution (i.e., decimated) models produced by the 3-D model generator 24 and enhanced by model enhancer 26 which resolution model is most appropriate for each section. Decimated model sections are used when the size on display 13 (related to perspective distance) they are to be rendered at is incapable of effectively displaying additional information in the more detailed version of the model due to the pixel resolution (or other limitation) of the display. In this way, the model visualizer 28 is able to simplify the model without information loss to the user and still greatly decrease the amount of data that must be rendered.
  • a lower resolution representation of arrays 17 a is produced by reducing the number of samples in time (x) for each ordered segment, such as by collapsing the set of y values for each consecutive N samples in the arrays 23 a to represent a single y range (max and min) value pair (where N increases as resolution lowers).
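A sketch of this min/max decimation, assuming a flat list of y values per segment; the function name and data are illustrative only:

```python
# Hedged sketch of the described decimation: each run of n consecutive
# y values collapses to a (min, max) pair, preserving the vertical
# extent of the signal at the lower resolution.
def decimate_min_max(values, n):
    """Collapse every n consecutive values into one (min, max) pair."""
    return [(min(values[i:i + n]), max(values[i:i + n]))
            for i in range(0, len(values), n)]

vals = [3, 1, 4, 1, 5, 9, 2, 6]
assert decimate_min_max(vals, 4) == [(1, 4), (2, 9)]
```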
  • Each lower resolution representation of the array data 23 a is operated upon by generator 24 to produce model 25 a and then enhancer 26 to provide different models 27 a of model 27 of different resolution for visualizer 28 .
  • the visualizer 28 selects the vertices of records for each section of the final view from one of these models 27 and 27 a in accordance with time (x and/or z) as the virtual distance from the viewer increases and required resolution reduced.
  • An example of this is shown in FIG. 13 , showing three different versions 51 , 52 , 53 , labeled Sections A, B, and C, respectively, of the same original data 50 for two signal channels along different parts of a rendered view 49 . Although three sections are shown, there are more sections between Sections A, B, and C of different resolution levels of the signal channels.
  • Model visualizer 28 described above operates asynchronously. This is because the other components 24 and 26 focus on producing a 3-D model and therefore only need to operate when new data is input to the system or the user requests a change in their operation. In addition to when new data is input to the computer system 12 , the model visualizer 28 also operates whenever a new image of the 3-D model must be output to the display 13 , such as when the user wants to change the view. This approach also allows the model visualizer 28 to implement animation processes that improve the user experience without requiring continual user input or new data models to be generated.
  • the user interface 14 facilitates the user's interaction with system 10 by user controls 13 a. Once a view, such as shown in FIG. 12 or 12 B for example, is rendered on the display 13 , the user via the user controls 13 a through user interface 14 has freedom of movement within and around the 3-D view of surfaces 18 to select the viewing position and angle of view, as described earlier. This feature is comparable to that normally found in a video game and includes mouse and keyboard control for adjusting the user's X-Y-Z positions, yaw, pitch, and roll, but is not present in conventional logic analyzer or oscilloscope software. This gives the user the ability to view the data from any location around or inside the model quickly and intuitively.
  • A series of examples of the movement of a view of surfaces 18 a, 18 b, 18 c, and 18 d is shown in FIGS. 14-20 .
  • the user can perform the following controls using the user interface's mouse: hold the left mouse button and drag left, right, up, and down to move their view position left, right, up, and down in relation to the current viewing angle ( FIG. 15 shows an example moving viewing position up and left from FIG. 14 ); scroll the mouse wheel forward and back to move the user's viewpoint forward or back ( FIG. 16 shows an example of moving viewing position forward from FIG. 14 ); hold the right mouse button and drag left, right, up, and down to change their viewing angle (yaw and pitch) ( FIG. 17 shows tilt of the viewing angle to the right and down slightly from FIG. 14 ); and hold down the middle mouse button and drag left or right to change the 3-D model's scale in x lesser and greater respectively, as well as down and up to scale the view in y lesser and greater respectively.
  • An example of reducing the x scale from FIG. 14 is shown in FIG. 18 .
  • the scaling axes change for the middle mouse button drag controls when the user's current pitch and/or yaw angle is greater than 45 degrees. This correlates the adjustment of the model to the predominant direction the user is facing. For example, when the view is greater than 45 degrees down, dragging the middle mouse button up and down will scale the z dimension of the model instead of the y dimension, as shown in the before and after shots in FIGS. 19 and 20 .
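The 45-degree axis-switch rule might be expressed as follows (illustrative only; the patent does not give code, and the function name is invented):

```python
# Illustrative sketch of the 45-degree rule: vertical middle-button
# drags scale the y axis normally, but scale z once the view pitches
# past 45 degrees, matching the predominant direction the user faces.
def vertical_drag_axis(pitch_degrees):
    """Return which model axis a vertical drag scales at this pitch."""
    return 'z' if abs(pitch_degrees) > 45 else 'y'

assert vertical_drag_axis(10) == 'y'
assert vertical_drag_axis(-60) == 'z'
```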
  • the surfaces 18 a, 18 b 18 c, and 18 d can move in and out of view on display 13 as desired by the user as shown in FIGS. 14-20 .
  • the above described uses of the mouse's buttons and wheel are exemplary; other mouse buttons, keyboard buttons, or components of the user interface may be used to perform similar or additional functions.
  • a 3-D view is rendered to a 2-D display, such as a CRT or LCD monitor, with perspective, meaning that objects are drawn smaller as their virtual distance from the observer increases.
  • this is not always preferable as it can become difficult to do certain time comparisons of signal data in perspective.
  • traditional logic analyzers display their data, including historical data layering, in 2-D graphs.
  • the user may control the amount of perspective used by the model visualizer 28 in drawing the image. This enables the user to switch to and from a completely orthogonal (non-perspective) view which can mimic a traditional two-dimensional (2-D) logic analyzer display when viewed from a perpendicular front view.
  • the user may toggle between views via user controls 13 a buttons on a keyboard, or selection on menu, button, or other graphical element on the graphical user interface 14 provided on display 13 .
  • To avoid user disorientation in switching between perspective (3-D) and orthogonal (2-D) views, the model visualizer 28 enables smooth transitions between perspective and orthogonal views (or modes) and back again, thus allowing the user to readily understand the change.
  • Representative frames of this animation are shown in the eight perspective to orthogonal transition screenshots of FIGS. 21-28 , where FIG. 21 has the most perspective of surfaces 18 e, 18 f, and 18 g and FIG. 28 is a fully orthogonal view thereof as traces 18 e ′′, 18 f′′ and 18 g ′′, respectively, and the change there to denoted by version 18 e ′, 18 f′ and 18 g ′ through FIGS. 22-27 .
  • The smooth transition between 3-D and 2-D views is animated by the model visualizer 28 and relies on basic geometric calculations illustrated in FIGS. 29A-29C , which are applied to 3-D computer graphics by the model visualizer 28 .
  • These methods provide a way to fluidly change the amount of perspective without dramatically changing the current area and focus of channel samples visible within the display model 27 by adjusting both the viewing position and field of view steadily during the transition. This allows the perspective change to occur without disorienting the user or causing undesired side-effects on the resulting render on display 13 .
  • a software flow chart of the steps performed by the model visualizer 28 to achieve this automatic animation is shown in FIGS. 30A-30C . The result is a comfortable and intuitive feel to the user when switching back and forth between these 3-D and 2-D rendering modes.
  • a point of interest (denoted as POI) via the user controls 13 a for the user interface 14 on display 13 (step 54 ).
  • This POI may be any point representative of a data sample or object in the current 3-D rendered view on the display.
  • the user then presses a button on the graphic user interface 14 in the screen on display 13 or a keyboard button to initiate the change to orthogonal view (step 55 ).
  • the model visualizer 28 on computer system 12 then calculates virtual distance in 3-D space between the POI and the user's current viewpoint or “camera” (step 56 ). This distance is considered dA.
  • the computer system determines the angle that is half of the current vertical field of view (step 57 ). This value is described as fA, also shown in FIGS. 29A-29C , and is maintained in the memory of computer system 12 .
  • the computer system selects a discrete number of steps, called N, for the transition and initializes the current step count, called S, to N (step 59 ).
  • N is 50, but it could be any number greater than or equal to 1. Greater numbers result in a smoother, but longer transition.
  • a discrete time duration could be used instead, such that the transition occurs over a given time period independent of the number of transition frames that can be rendered by any given system during that time.
  • the computer system updates the current vertical field of view angle so that half of that angle is equal to ((S/N) × fA)², where the result is called fB (step 60 ).
  • the horizontal field of view is also always updated when the vertical field of view changes such that their ratio remains constant. This ratio may be any value predefined by the software. Squaring the stepped angle is used to provide a more linear transition from the user's perspective due to the trigonometric functions. However, any number of other mathematical functions could be utilized to create somewhat varying effects.
  • the computer system calculates a new virtual distance between the POI and the user's viewpoint called dB, which is equal to (dA × tan(fA)/tan(fB)) and updated in the system's memory (step 61 ).
  • the viewpoint is moved directly backwards in 3-D space based on its current view direction so that it is at the computed distance.
  • the model visualizer 28 is then ready to render a new scene of the 3-D model (step 62 ).
  • the software decrements S by 1 (step 63 ).
  • the new value of S is analyzed to see if it is still greater than 1 (step 64 ). If it is, then the process repeats back to step 60 and continues on again through steps 60 - 64 . Otherwise at step 64 , the model visualizer 28 is on the last step of the transition and switches the 3-D graphics library from perspective rendering to orthogonal rendering mode (step 65 ).
  • the computer system uses the previously calculated viewport height H to generate the vertical and horizontal distances around the POI for an orthogonal projection border, which is applied to the 3-D graphics library (step 66 ). Then the user's viewpoint is returned to the original 3-D location it was in at the beginning of the 3-D to 2-D transition process (step 67 ). Finally, this new scene is rendered with the changed settings and values (step 68 ), and the change to an orthogonal 2-D view is complete (step 69 ).
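One reading of the transition geometry (steps 60-61) is sketched below: the half field of view fA is stepped toward zero while the camera is pulled back so that the viewport height H = 2·d·tan(f) at the POI distance stays constant, keeping the POI the same apparent size on screen. The stepped-angle formula here (fA scaled by the squared step fraction) is one interpretation of the squaring described above, and all names are illustrative:

```python
import math

# Hedged sketch of one transition step, not the patent's actual code:
# shrink the half field of view, then move the camera back so that the
# viewport height H = 2 * d * tan(f) at the POI distance is unchanged.
def transition_step(d_a, f_a, s, n):
    """Return (new_distance, new_half_fov) for step s of n (s >= 1)."""
    f_b = f_a * (s / n) ** 2                    # stepped half field of view
    d_b = d_a * math.tan(f_a) / math.tan(f_b)   # step 61 distance update
    return d_b, f_b

d0, f0 = 10.0, math.radians(30)
h0 = 2 * d0 * math.tan(f0)                # viewport height at the POI
d1, f1 = transition_step(d0, f0, 25, 50)
assert abs(2 * d1 * math.tan(f1) - h0) < 1e-9   # H is preserved
```

As the step count S falls toward 1, fB approaches zero and dB grows very large, which is why the final step switches the graphics library into a true orthogonal mode rather than continuing the limit numerically.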
  • a key feature of rendering of a 3-D view of multiple channels is automated reorientation of the view position and angle.
  • With the freedom of movement provided by a full 3-D environment, the user can select a variety of off-center vantage points with regards to the 3-D view on display 13 .
  • the user starts from any location, illustrated as “1(Start)” and may click on, for purpose of selection, a point of interest in the 3-D view on display 13 , illustrated as “2(POI)” (step 70 ) and press a button on the graphic user interface 14 in the screen on display 13 or a keyboard button of user controller 13 a (step 71 ) to have the system 10 automatically reorient on that point.
  • the model visualizer 28 smoothly animates the transition along the dotting line, labeled “3” ( FIG. 31 ) until the perpendicular target vantage point, labeled “4(End)” is achieved.
  • the software calculates the virtual distance in 3-D between the starting viewpoint and the POI (step 72 ). This distance is called Dp.
  • the computer system calculates the point location in 3-D space (considered Pe) that is exactly Dp distance from the POI perpendicular along either the x, y, or z axis (step 73 ).
  • the axis chosen is dependent on which button or keystroke the user selected for the desired vantage point.
  • the computer system calculates the desired final viewing angle (called Ve) that will result in the POI being in the center of the field of view from location Pe (step 74 ).
  • the angle Ve will always be 0 degrees from the chosen axis and 90 degrees from the remaining two axes.
  • the software calculates the virtual 3-D distance (called Pd) between Ps and Pe (step 75 ), followed by calculating the angle difference (called Ad) between the starting view angle (called Vs) and Ve (step 76 ).
  • Pd is the virtual 3-D distance between Ps and Pe.
  • Ad is the angle difference between the starting view angle Vs and the final view angle Ve.
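The perpendicular vantage-point calculation of steps 72-74 can be sketched as follows. This is a minimal Python illustration under stated assumptions; the function and variable names are the author's, not part of the patented implementation.

```python
import math

def perpendicular_vantage(ps, poi, axis):
    """Sketch of steps 72-74: find the point Pe that is the same virtual
    distance Dp from the POI as the starting viewpoint Ps, displaced
    perpendicular along a single chosen axis (0 = x, 1 = y, 2 = z)."""
    # Step 72: virtual 3-D distance Dp between the starting viewpoint and the POI.
    dp = math.dist(ps, poi)
    # Step 73: Pe lies exactly Dp from the POI along the chosen axis only.
    pe = list(poi)
    pe[axis] += dp
    # Step 74: the final view angle Ve points straight down the chosen axis,
    # i.e. 0 degrees from that axis and 90 degrees from the other two.
    ve = [90.0, 90.0, 90.0]
    ve[axis] = 0.0
    return dp, tuple(pe), tuple(ve)

# Example: start at (3, 4, 0) looking at a POI at the origin, z axis chosen.
dp, pe, ve = perpendicular_vantage(ps=(3.0, 4.0, 0.0), poi=(0.0, 0.0, 0.0), axis=2)
```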
  • the model visualizer 28 enters a view animation and render loop to perform the transition in small steps until the target endpoint is reached.
  • the software checks to see if the current viewpoint position (called Pc) equals Pe and the current viewpoint angle (called Ac) equals Ve (step 78 ). If both position and angle are equal to the final desired values at step 78 , then the process is complete (step 79 ). Otherwise the software adjusts Pc to be 1/Nth of Pd closer to Pe (step 80 ). N is a discrete number of steps, predetermined by the software, over which to perform the transition.
  • N is 50, but it could be any number greater than or equal to 1. Greater numbers result in a smoother but longer transition. Additionally, a discrete time duration could be used instead, such that the transition occurs over a given time period independent of the number of transition frames that can be rendered by any given system during that time.
  • the computer system adjusts Ac to be 1/Nth of Ad closer to Ve (step 81 ). With the new field of view and viewpoint location values calculated, the model visualizer 28 is then ready to render a new scene of the 3-D view (step 82 ). Then the process repeats back to step 78 and continues from there until the final location and angle are reached.
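The stepwise animation loop of steps 78-82 can be sketched as follows. This is a hypothetical Python illustration (positions and angles treated as simple vectors of floats, N = 50 as in the text); a real implementation would render a scene on each pass.

```python
def animate_transition(ps, vs, pe, ve, n=50):
    """Sketch of steps 78-82: the current position Pc and angle Ac advance
    by 1/Nth of the total differences Pd and Ad on each frame until they
    reach the target position Pe and target angle Ve."""
    pc, ac = list(ps), list(vs)
    pd = [e - s for s, e in zip(ps, pe)]  # total positional difference Pd
    ad = [e - s for s, e in zip(vs, ve)]  # total angular difference Ad
    frames = 0
    while pc != list(pe) or ac != list(ve):       # step 78: target reached?
        pc = [c + d / n for c, d in zip(pc, pd)]  # step 80: advance position
        ac = [c + d / n for c, d in zip(ac, ad)]  # step 81: advance angle
        frames += 1
        if frames == n:
            # snap to the exact targets to avoid floating-point drift
            pc, ac = list(pe), list(ve)
        # step 82: a real implementation would render the new scene here
    return frames
```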
  • the model visualizer 28 calculates the necessary movement path and viewpoints, performing the entire process in a smooth transition effect so that the user is able to keep focus on their point of interest and the data around it. This process can be performed to orient the user's position and angle in 3-D space to be aligned perpendicular to the point of interest along any two of the three model axes at a time as chosen by the user.
  • this reorientation process can be combined with the perspective to orthogonal (or reverse) view transition method described above whereby, selecting a point of interest and making just a single click or key press, the user can invoke the computer system 12 to automatically center on the desired vantage point and transition to the desired amount of perspective or lack thereof, resulting in a single automatic and fluid transition such as that shown in the example of a view of surfaces 18 h, 18 i, 18 j, and 18 k of FIGS. 33-43 .
  • This is extremely useful for being able to move around the model in 3-D perspective to locate an anomaly, and then with a single click smoothly transition to a straight-on orthogonal view to perform in-depth analysis of the data and timing around the event.
  • Another click and the user can transition to a traditional front-on view as shown in FIGS. 43-49 , thus providing for an efficient and effective usage flow of the instrument.
  • This movement can be performed while remaining in orthogonal (non-perspective mode) as in FIGS. 43-49 , or by transitioning back to 3-D perspective during the movement.
  • Surfaces 18 h, 18 i, 18 j, and 18 k are preferably of different color and have a sufficient degree of translucency to enable discernment of transitions in such surfaces when overlaid as shown in FIGS. 33-48 .
  • Surfaces 18 i, 18 j, and 18 k thus may be visible under surface 18 h and through each other, but Surfaces 18 i, 18 j, and 18 k are not discernible in FIGS. 37-43 due to limitation of non-color presentation.
  • the user interface 14 may include a control panel 84 which may appear on the display 13 adjacent to the main rendered view on the screen of the 3-D surfaces 18 of multiple channels, as shown in FIG. 12 , and operated upon using the mouse of user controls 13 a.
  • the control panel 84 enables the user to understand the positioning of the viewpoint and signal channels in the 3-D model as well as provide an interface to customize the properties of those channels.
  • This panel shows representations of each channel currently included in the 3-D model. Each channel has a label 85 that can be customized by clicking on it and a graphical representation of the signal 86 - 92 . These are drawn with a color matching that used for the signal in the 3-D model itself.
  • the symbols may be located vertically within the panel at a location proportional to the channel's y location in the model, and are therefore not necessarily evenly distributed in area 93 of panel 84 .
  • the symbols are capable of being dragged up and down by the user to alter the channel's location and the 3-D model is then re-generated accordingly. By clicking on the symbols the user can alter further properties of the channel such as but not limited to: color, transparency, source, height, graphical representation style (analog, digital, bus, etc.), and numerical base (binary, hex, octal, decimal, etc.).
  • if an acquisition device 15 connected to this system is capable of producing live activity information on the current state of the signals, then the channel symbols in the panel are also used to portray this information. These are exemplified in the diagram as signal conditions of: rising 86 , low 87 , high 88 , toggling 89 , falling 90 , a stable hexadecimal bus 91 , and a bus with some of its signals changing 92 .
  • the control panel 84 also contains a camera symbol 94 that represents the user's current viewpoint in the y axis with relation to the 3-D view. Additionally, the angle of the graphic indicates the current pitch of the view 95 . In this way the symbol effectively provides useful orientation information, particularly when the point of view is very close to or within the surfaces 18 on display 13 itself and the main view rendering may be too close-up or confusing. Lastly, the camera symbol 94 is draggable in the y axis similar to the signals, such that the user can quickly relocate to a new vantage point of the view of surfaces 18 on display 13 .
  • the segment controller module 22 segments channels of the data into an array along a z dimension of ordered segments.
  • This array does not necessarily have to be uniform.
  • the z dimensional ordering of a stack of independent data segments may be selected, broken, and ordered based on user defined conditions, whereby each segment is comprised of a set of the time-ordered data. Examples of such conditions are shown in FIGS. 51-57 .
  • all of the data comes from a single original time-ordered acquisition, whereby segments can be any part of the original set, overlap, vary in size, and skip data.
  • Some preferred uses include choosing segments based on: different acquisitions, consecutive time slices ( FIG. 52 ), occurrence of a pattern ( FIG. 53 ), specific start and/or stop conditions ( FIGS. 54-55 ), data dependent sub-triggers, source device, and combinations thereof.
  • fake or simulated reference segments can be specified by the user and generated for inclusion in the set of segments for the purpose of creating a constant base case or comparison set as shown in FIG. 56 for example.
  • Other selection and ordering of segments may be used and such selection and ordering is not limited to those described herein.
  • the first factor is alignment.
  • the first sample of each segment does not necessarily have to be aligned in the x dimension (time), such as is shown in FIG. 51B .
  • segments could be aligned based on other conditions, such as location of samples matching a specified pattern, end sample, or a specific real-time time differential between segments.
  • the second factor in arranging the segments is order. While the preferred method is to order the segments in the z dimension in increasing real-time order, segments may also be reordered based on such conditions as values in the data, original source, or acquisition. An example of reordering of an array is shown in FIG. 57 .
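The pattern-based selection of segments described above (in the style of FIG. 53) can be sketched as follows. This is a hypothetical Python illustration; the function name and the choice of breaking a new segment at each pattern occurrence are assumptions for demonstration, not the patented implementation.

```python
def segment_on_pattern(samples, pattern):
    """Sketch of user-conditioned segmentation: a new segment begins at
    each occurrence of `pattern` in the time-ordered samples, and the
    segments are stacked along z in increasing real-time order."""
    # Locate every occurrence of the pattern in the sample stream.
    starts = [i for i in range(len(samples) - len(pattern) + 1)
              if samples[i:i + len(pattern)] == pattern]
    # Each segment runs from one occurrence to the next; the samples
    # after the last occurrence form the final segment.
    bounds = starts + [len(samples)]
    return [samples[bounds[k]:bounds[k + 1]] for k in range(len(starts))]

# Example: break a single-bit stream at each rising edge (pattern 0, 1).
segments = segment_on_pattern([0, 1, 1, 0, 1, 1, 0, 0], pattern=[0, 1])
```

As the text notes, segments produced this way need not be uniform in size, and other conditions (start/stop values, sub-triggers, source device) could replace the pattern test.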
  • system 10 may be used to analyze domain specific data.
  • One such example of useful analysis that could be assisted by the present invention is video signals where the acquisition device 15 ( FIG. 2 ) samples a video channel from a device or unit 16 under test, or such video signals may be from another source to the memory of computer system 12 .
  • Video communications typically include a vertical sync signal that signifies a new video frame and a horizontal sync signal that signifies a new horizontal line of video within that frame.
  • the sampled data can be rearranged such that the invention can display a 3-D representation of the live sampled video streaming from a unit under test, such as is represented in the example of FIG. 58 .
  • the acquisition device's trigger is set up to trigger on a rising edge of the vertical sync line.
  • a hardware sub-trigger or else a software programmed sub-trigger is set for a rising edge on the horizontal sync line.
  • the vertical sync trigger denotes the first segment of the 2-D array to be displayed by the invention, while the horizontal sync trigger would be used to “break” or denote the separation of each segment to be ordered in the z dimension of the array.
  • the z ordering places the first segment furthest away and the last segment closest to the front of the model, so as to mimic a top-to-bottom scanning of a video frame.
  • the result of such a setup is a full video frame of a time-contiguous acquisition that is then restructured into a 2-D array of data similar to what it actually represents.
  • the computer system 12 is then able to render it such that the actual complete video frame could be seen along with the control and/or other signals. This is extremely useful for determining the cause of anomalies or static in the video signal as the human eye could easily recognize them in the reconstituted frame.
  • the user can overlay the video data with various other signals to determine the cause of the problem.
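The video restructuring described above can be sketched as follows. This is a hypothetical Python illustration: it assumes the acquisition begins at the vertical-sync trigger, treats each rising edge on the horizontal-sync channel as a segment break, and reverses the z order so the first scan line renders furthest away.

```python
def frame_from_acquisition(video, hsync):
    """Sketch of the video example: the sampled video channel is broken
    into one segment (scan line) per horizontal-sync rising edge, and the
    segments are ordered so the last line is nearest the front of the
    model, mimicking top-to-bottom scanning of a video frame."""
    # Rising edges on the horizontal sync mark the start of each new line.
    breaks = [i for i in range(1, len(hsync))
              if hsync[i - 1] == 0 and hsync[i] == 1]
    bounds = [0] + breaks + [len(video)]
    lines = [video[bounds[k]:bounds[k + 1]] for k in range(len(bounds) - 1)]
    # Reverse the z order: first line furthest away, last line in front.
    return list(reversed(lines))

# Example: six video samples with one hsync rising edge at index 3.
frame = frame_from_acquisition(video=[5, 6, 7, 8, 9, 10],
                               hsync=[0, 0, 0, 1, 0, 0])
```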
  • Reference planes 46 may further be added to the rendered display.
  • the system 10 may be extended for use in analysis of data in non-time-ordered domains.
  • the system 10 can render in three dimensions, for purposes of visualization and analysis, a frequency domain representation of some or all of the input data 17 a; for example, as resulting from application of a Fourier transform to the data.
  • the system 10 may render the input data 17 a as a probability distribution; for example, as a histogram or other non time-domain representation, though the scope of the invention is not limited to solely those domains and applications specified herein.
  • the system 10 may also operate upon input data 17 a, or processed derivations thereof, to render 3-D representations on display 13 using non-Cartesian coordinate systems such as: 3-D spherical, cylindrical, or other coordinate systems, similar to that described, above using a Cartesian coordinate system.
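The frequency-domain rendering mentioned above can be sketched as follows. This is a hypothetical stdlib-only Python illustration of applying a discrete Fourier transform to one time-domain segment; a practical implementation would use an FFT library, and each segment's magnitudes would become one z slice of the rendered surface.

```python
import cmath

def dft_magnitudes(segment):
    """Sketch of a frequency-domain view of one segment: a direct DFT
    yields per-bin magnitudes that can be rendered as a surface slice
    along z, one slice per segment of the input data."""
    n = len(segment)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(segment)))
            for k in range(n)]

# Example: a tone with a period of two samples concentrates its energy
# in the corresponding frequency bins.
mags = dft_magnitudes([1.0, 0.0, -1.0, 0.0])
```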

Abstract

The system for three-dimensional rendering of signals has a computer system having acquired, streaming, or previously stored data in its memory representing multiple channels of signals in which each channel has a value which varies over a domain, e.g., time or frequency, and a display coupled to the computer. For each channel, the computer system segments the data of the channel into segments, orders the segments, renders on the display each of the segments, in which each of the rendered segments are aligned in such order along a three-dimensional perspective with gaps between adjacently rendered segments, and lines are rendered extending from each line of each one of the rendered segments to form a three-dimensional plane in the gap to the next successive one of the rendered ordered segments to form a three-dimensional continuous or discontinuous surface characterizing the channel. The surfaces of each of the channels are aligned on the display, and may be of different color, shading, and translucency, whereby channels of overlaid surfaces are viewable on the display.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This document is a continuing U.S. non-provisional utility patent application being filed under 37 CFR 1.53(b) that claims priority and benefit to U.S. non-provisional patent application Ser. No. 12/012,617, which was filed on Feb. 4, 2008, and that is entitled “System for Three-Dimensional Rendering of Electrical Test and Measurement Signals”, and which is also incorporated herein by reference in its entirety.
  • This document further claims priority and benefit to U.S. provisional utility patent application Ser. No. (61/592,998) (Confirmation No. 7158) filed on Jan. 31, 2012, that is entitled “System for Three-Dimensional Rendering of Electrical Test and Measurement Signals”, and which is also incorporated herein by reference in its entirety.
  • Priority is claimed to all of the above aforementioned patent applications, which are each incorporated herein by reference in their entirety.
  • FIELD OF THE INVENTION
  • This present invention relates to a system (method) for three-dimensional (3-D) rendering of test and measurement signals, and relates particularly to a system for three-dimensional rendering of test and measurement signals having a computer system, or other microprocessor based platform or system which produces three-dimensional surfaces representing multiple signal channels on a display in accordance with data acquired, streaming, or previously stored in memory of the computer system. The system is useful for three-dimensional visualization of the relationship between different channels of signals with user control of three-dimensional viewing position and angle to improve signal analysis over traditional two-dimensional display of test and measurement signals. Although the system is described herein for test and measurement signals, other signals that are variable over a domain which either are in, or are separable into, multiple channels, may also be visualized by the computer system as 3-D surfaces.
  • BACKGROUND OF THE INVENTION
  • in the field of test and measurement, a device typically collects sample data from one or more electrical test points over some period of time, whereby the value of a sample represents the voltage level of the given test point at a specific point in that timeline. Samples collected in one time-contiguous sequence are commonly considered as a single acquisition. Common tools in this field today include logic analyzers and digital storage oscilloscopes, such as those manufactured by Agilent Technologies, Tektronix Inc., and LeCroy Corp. These systems typically have a dedicated hardware platform, an attached personal computer coupled to the logic analyzer, or a digital storage oscilloscope, operating in accordance with software that can collect, store, and manipulate the data representing sample data over one or more signal channels, and renders such to the user in a pseudo real-time, or non real-time fashion on a display. These systems commonly display the data to the user on the display as a two-dimensional graph, whereby the x-axis represents time, and the y-axis value describes the voltage of the test point at that time for a particular signal channel, as illustrated for example in FIG. 1. The user relies on this data representation to gain insight into the operation of the unit under test, thereby allowing detection of errors, anomalies, or proof that the device is operating properly.
  • Although the typical two dimensional voltage versus time graph is useful for showing one sample per channel per column of pixels on the display, variations in repetitive waveforms over time are difficult to discern. Further, as devices under test become more complex and the number of channels in acquisition devices available to the user rises, the sampling rates of signals by acquisition devices results in a huge amount of data over multiple channels in the memory of the computer system or digital oscilloscope storing such data. As a result, it becomes problematic to render the large amount of data from multiple channels in test and measurement systems all at once on a display to the viewer in a meaningful manner, thereby making it more difficult for the user to visualize and identify data of interest at a particular channel, and especially among multiple channels.
  • Approaches to improve rendering of two-dimensional voltage time graphs are described for example in U.S. Pat. Nos. 6,151,010 and 7,216,046, which enable common persistence modes via overlays or density maps of a channel on a digital oscilloscope. A drawback of this approach is that it is difficult to observe patterns occurring in the channel. In U.S. Patent Application No. 2003/0006990, a digital oscilloscope displays waveform variations over time as a surface map graph. Such rendering is limited to a single channel at a time without correlation with any other channels. U.S. Patent Application Publication No. 2005/0234670 describes viewing multiple channels, domains, or acquisitions simultaneously, but does not provide for a display of multiple channels and acquisitions (or domains) simultaneously in a single three-dimensional view on a display. Further, the systems described in the above cited patents and publication have limited flexibility in the organization and presentation of the data on a display, which restricts the user's ability to quickly visualize and compare data when analyzing complex systems.
  • SUMMARY OF THE INVENTION
  • Accordingly, it is an object of the present invention to provide an improved system for rendering test and measurement data representing multiple channels which readily enables visualization of multiple channels in a three-dimensional (3-D) perspective as continuous or discontinuous surfaces aligned on a display in which the user can observe the relationships between different channels.
  • It is another object of the present invention to provide an improved system for rendering test and measurement data of multiple channels as continuous or discontinuous surfaces in three-dimensional perspective on a display in which a user can control one or more of the viewing position and angle with respect to the three-dimensional surfaces representing the channels to move in, around, or along the surfaces in all three dimensions to visualize data of interest.
  • Still another object of the present invention is to provide an improved system for rendering test and measurement data of multiple channels as continuous or discontinuous surfaces in three-dimensional perspective on a display, in which the data can represent real-time data for a device or system under test or stored data accessible to the system.
  • A yet further object of the present invention is to provide an improved system for rendering test and measurement data of multiple channels as continuous or discontinuous surfaces in three-dimensional perspective on a display, in which the rendering may smoothly change from a three-dimensional view to an orthogonal or two-dimensional view, and vice versa.
  • Briefly described, the present invention embodies a system having a computer (or other microprocessor based platform or system) having memory with acquired, streaming, or previously stored, data representing multiple channels of signals in which
  • the signal of each channel has a value (y) which varies over time (x), and a display coupled to the computer. For each channel, the computer system segments the data of the channel into segments, orders the segments, renders on the display each of the segments as one or more lines in accordance with consecutive values of the data associated with the segment, in which each of the rendered segments are aligned in their order in depth (z) along a three-dimensional perspective with gaps between adjacently rendered segments, and lines are rendered extending from each line of each one of the rendered segments to form a three-dimensional plane in the gap to the next successive one of the rendered ordered segments to form a three-dimensional continuous or discontinuous surface characterizing the channel. The surface of each of the channels are aligned on the display preferably for enabling a user to view relationships of two or more different channels.
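The surface construction described above (points, lines, then planes, as illustrated in FIGS. 8A-8D) can be sketched as follows. This is a hypothetical Python illustration: the quad layout and names are assumptions, and a real renderer would hand these vertices to a 3-D graphics library.

```python
def surface_quads(array, gap=1.0):
    """Sketch of building one channel surface: each entry of the 2-D
    (x, z) array of values y becomes a 3-D vertex, and each pair of
    x-adjacent vertices in segment z is extended across the inter-segment
    gap toward segment z+1 to form one rectangular plane (quad)."""
    quads = []
    for z, segment in enumerate(array[:-1]):
        for x in range(len(segment) - 1):
            y0, y1 = segment[x], segment[x + 1]
            # One plane spanning the gap between segment z and segment z+1;
            # the four tuples are the (x, y, z) vertices of the quad.
            quads.append(((x, y0, z * gap), (x + 1, y1, z * gap),
                          (x + 1, y1, (z + 1) * gap), (x, y0, (z + 1) * gap)))
    return quads

# Example: three ordered segments of two samples each yield two quads.
quads = surface_quads([[0, 1], [1, 0], [0, 0]])
```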
  • For each channel surface, the edges of two or more adjacent planes of the surface that are located along the same two of the three-dimensions may appear joined to each other as a common plane, and when two planes meet along different ones of at least two of the three-dimensions such two planes may appear to meet to form an edge.
  • Each channel is preferably rendered as a surface having a different color on the display to distinguish the channels from each other, rendered with shading (or gradients along surface depth and between varied signal values) to enhance a three-dimensional view of surfaces, and/or a degree of translucency to enable viewing (and discernment) of the channel or different channel(s) when overlaid on the display.
  • The computer system segmentation of the data of each of the channels is in accordance with segment start and/or stop conditions which are predefined or user defined. The ordering of segments for each of the channels may be in accordance with predefined or user defined, conditions, or the order of the segments of the channel is defined by order in which the segments are segmented. The number of segments rendered for each of the channels on the display may also be predefined or user defined condition.
  • In one embodiment, when additional (or newer) signal data for one or more of the channels is received by the system, the ordered segments rendered on the display for each of such one or more channels advances in the depth (z) in the three-dimensional perspective as the computer system continues to segment the data representing the signal of the channel into segments, and then render such segments with planes extending from one or more lines thereof as an addition to the surface of the channel on the display. Consequently, as each of such one or more channels advance in depth with the addition of new ordered segments, the rendered segments and planes extending from one or more lines thereof associated with the oldest segment(s) may be removed from the view, thereby providing a smoothly flowing view of multiple signal channels in a three-dimensional view whereby newer to older segments of each signals channel are viewable as a scrolling, aging surface in a perspective of depth.
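The scrolling, aging surface described above can be sketched as follows. This is a hypothetical Python illustration using a bounded deque; the names and the fixed segment limit are assumptions for demonstration, not the patented implementation.

```python
from collections import deque

def make_scroller(max_segments):
    """Sketch of the scrolling surface: a bounded deque keeps only the
    newest segments, so appending a new segment past the limit silently
    drops the oldest and its planes disappear from the rendered view."""
    return deque(maxlen=max_segments)

# Example: a view limited to 3 segments; appending a 4th evicts the oldest.
view = make_scroller(max_segments=3)
for seg in ([0, 1], [1, 1], [1, 0], [0, 0]):
    view.append(seg)  # each new segment advances the surface in depth (z)
```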
  • In another embodiment, the computer system upon detection of a specified condition predefined by the user within the data of any one or more of the channels may adjust or add to the number of surfaces displayed by rendering new surface(s) on the display aligned with other surfaces by segmenting and rendering in accordance with subset(s) of the data associated with such condition. Consequently, as new signal data is acquired along the depth (z) of the three-dimensional perspective for one or more of the rendered surfaces, the computer system may adjust (add/remove) a varying number of rendered surfaces dependent on the number of occurrences of a specified condition within newly acquired or previously acquired data.
  • The computer system has user controls, such as a keyboard, mouse, touch screen surface upon the display, or the like, enabling the user to manipulate his/her view of the three-dimensional model representing the signal channels, such as for e.g., changing the viewing position from which the rendered surfaces are oriented (or centered) such that the user can select the angle of view of the model in, around, or along the model in all three dimensions.
  • The data representing the multiple signals may be from an acquisition device that is coupled by leads to a unit or device under test, where the acquisition device provides the data in near real-time to the computer system for processing into a three-dimensional view, or the data may be provided in real-time from the acquisition device, or such data may be any data representing multiple signals in memory stored or accessible to computer.
  • The present invention also provides a method for visualizing in a three-dimensional view having the steps of: segmenting the data of a channel into segments in which each of the segments starts at a predefined or user defined condition, ordering the segments, and rendering on a display each of the segments as one or more lines in accordance with the values of the data associated with the segment, in which each of the rendered segments are aligned in the order on the display along a three-dimensional perspective with gaps between adjacently rendered segments, and each line of each one of the rendered segments extends to form a three-dimensional plane in the gap to the next successive one of the rendered ordered segments to form a three-dimensional continuous or discontinuous surface characterizing the channel. The surface rendered represents one of a plurality of surfaces rendered on the display aligned with each other in which each of the plurality of surfaces are provided by carrying out the segmenting, ordering, and rendering steps on data representing each one of the channels.
  • An advantage of the present invention over traditional two-dimensional viewing of test and measurement data is that the user is able to request the system to organize and display the data in ways that lead to quick comparison and identification of problems that are not easily discernible via a two-dimensional view. For example, take an acquisition with a complex repetitive (over time) pattern in it. With a conventional two-dimensional logic analyzer display, as shown for example in FIG. 1, the user must either condense the time scale of the graph (zoom out in time) losing most of the detailed data, or scroll back and forth left and right to see if any of the instances of the pattern deviate in any detail from the expected. This is extremely difficult because there is no way to directly compare the occurrences with details on the screen at the same time. With the present system the user can separate and reorganize each repetition of the pattern along the z dimension (depth) as a continuous or discontinuous surface viewed in three dimensions. By providing any possible view around, along, or inside this model, a variation in any one of the repetitions in any channel(s) is quickly identifiable from the norm. An analogy to this would be searching a group of hole-punched pages to find if, and which ones, might contain holes that are misaligned. Laid out next to each other on a long table, examining the sheets to find differences could take very long or completely miss them. Instead, if the sheets are stacked on top of each other, any variances can be quickly identified.
  • The terms “orthogonal” and “two-dimensional” or “2-D” are used herein synonymously. While orthogonal refers to an orthogonal projection view, which is technically still a view of a 3-D representation but with zero perspective applied, the resulting image appears to the user to be a “flat” 2-D rendering. Thus in this invention when the 3-D representation is viewed orthogonally from a perpendicular vantage point it becomes indiscernible from a 2-D graph.
  • Although the system and method of the present invention are described in connection with test and measuring electrical signals in the time domain, the system and method may operate to visualize signals in other domains, such as a frequency domain, or from other sources. For example, the signals may be associated with or represent other forms of data such as a stream of video for analysis of frames thereof for anomalies, or signals from any sources that are variable over a domain (not limited to test and measuring electrical signals) which a user desires to visualize that can be captured and stored in memory of the computer system.
  • This brief description of the invention is intended only to provide a brief overview of subject matter disclosed herein according to one or more illustrative embodiments, and does not serve as a guide to interpreting the claims or to define or limit the scope of the invention, which is defined only by the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • So that the manner in which the features of the invention can be understood, a detailed description of the invention may be had by reference to certain embodiments, some of which are illustrated in the accompanying drawings. It is to be noted, however, that the drawings illustrate only certain embodiments of this invention and are therefore not to be considered limiting of its scope, for the scope of the invention can encompass other equally effective embodiments. The drawings are not necessarily to scale. The emphasis of the drawings is generally being placed upon illustrating the features of certain embodiments of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views. Differences between like parts may cause those parts to be indicated with different numerals. Unlike parts are indicated with different numerals.
  • The foregoing objects, features and advantages of the invention will become more apparent from a reading of the following description in connection with the accompanying drawings, in which:
  • FIG. 1 is an example of the output rendered on the display of a typical logic analyzer or digital oscilloscope where x axis is time and y axis is the value (amplitude or Voltage level);
  • FIG. 2 is a block diagram of the system of the present invention in which the computer system operates in accordance with software for enabling three-dimensional rendering of data representing multiple signal channels on a display;
  • FIG. 3 is an example of the output rendered by the system of FIG. 2 in displaying a three-dimensional view of four channels of electrical signals over time as four three-dimensional continuous or discontinuous surfaces;
  • FIG. 4 is block diagram of the modules of the software operating on the computer system of FIG. 2;
  • FIG. 5 is an illustration showing an example of how the sequence of samples used to generate segments by the segment controller module of FIG. 4 are composite data samples representative of samples from all the individual channels from the same point in time;
  • FIG. 6A is an illustration showing an example of selection of segments by segment controller module of FIG. 4 from a time-contiguous series of composite samples based on a user selectable condition;
  • FIG. 6B is an example of a two-dimensional array ordering by the segment controller module of FIG. 4 of segments along z from most earlier to later of those selected in FIG. 6A;
  • FIG. 7 is a flow chart of the processes in software on the computer system of FIG. 2 for the three-dimensional model generator of FIG. 4;
  • FIG. 7A is an example of the operation of step 38 in the flow chart FIG. 7;
  • FIGS. 8A, 8B, 8C, and 8D graphically illustrate the processing of a single channel by the three-dimensional model generator of FIG. 4 as described in FIG. 7 from a two-dimensional array of segments of a signal channel into a three-dimensional surface, in which FIG. 8A represents the ordered segmented signal data in a two-dimensional array (x,z) in which each entry in the array has a value (y); FIG. 8B shows a graphical representation of the array as point locations in three-dimensional perspective; FIG. 8C shows a graphical representation of lines connecting time-adjacent points of each segment of FIG. 8B; and FIG. 8D shows a graphical representation of planes formed from such segments to form a three-dimensional channel surface, where the points are the vertices of the planes;
  • FIGS. 9A and 9B show different examples of single-bit digital signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIGS. 9C and 9D show different examples of analog or multi-bit digital signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIG. 10 is another example of digital or analog signal data rendered in two dimensions (x, y), and repeated segments of the same data rendered in three dimensions (x, y, z) in which adjacent samples form planes having depth along the z axis;
  • FIG. 11 shows a schematic illustration of multiple three-dimensional channels as ordered surfaces rendered in a single view on a screen of the display of FIGS. 2 and 4, in which the surfaces characterizing each of the channels are aligned with each other along a common time base (x) to facilitate viewing of the relationship of the channels with respect to each other;
  • FIG. 12 shows an example of four three-dimensional channel surfaces rendered in a view on a screen of the display of FIGS. 2 and 4 having non-signal objects of three-dimensional planes extending through the channel surfaces to illustrate events in signals occurring at the same time along multiple signal channels, and measurement and grid markers associated with time;
  • FIG. 12A is an example of the output rendered by the system of FIG. 2 in visualizing a two-dimensional view of four channels of electrical signals over time where three of the channels are digital and one of the channels is analog;
  • FIG. 12B is an example of the output rendered by the system of FIG. 2 in visualizing a three-dimensional view of same four channels of FIG. 12A;
  • FIG. 13 illustrates the use of lower resolution sections of a rendered view of multiple signal channels on a display of FIGS. 2 and 4 as the view extends in virtual distance from the viewer;
  • FIGS. 14-20 show different representations of a three-dimensional view of multiple channels in which the user changes the viewing position, angle, or scale;
  • FIGS. 21-28 show different representations of a three-dimensional view of multiple channels changing from the three-dimensional perspective view of FIG. 21 to the two-dimensional or orthogonal view of FIG. 28, with the smooth transition visualized at the intermediate representations of FIGS. 22-27;
  • FIGS. 29A-29C are collectively a diagram showing the geometric method for smooth transitioning on the display from a three-dimensional rendering to a two-dimensional or orthogonal view, as shown for example in FIGS. 21-28 to enable smooth transitions of the display by the model visualizer of FIG. 4;
  • FIGS. 30A-30B are collectively a flow chart of the process in software on the computer system of FIG. 2 for smooth transitioning on the display from a three-dimensional rendering to a two-dimensional or orthogonal view, as shown for example in FIGS. 21-28;
  • FIG. 31 is a ray diagram to illustrate the geometry for reorienting of the rendered three-dimensional model of multiple channels to a view perpendicular to a user selected point of interest by the model visualizer of FIG. 4;
  • FIG. 32 is a flow chart of the processes in software on the computer system of FIG. 2 for automatic reorienting of the rendered three-dimensional model of multiple channels to a view perpendicular to a user selected point of interest by the model visualizer of FIG. 4;
  • FIGS. 33-43 show different representations of a three-dimensional view of multiple channels smoothly reorienting to a view perpendicular to a user selected point of interest, and then transitioning to a two-dimensional or orthogonal view to perform in-depth analysis of the data and timing around the event;
  • FIGS. 43-49 show different representations of an orthogonal view of a three-dimensional view of multiple channels as the viewpoint is smoothly rotated around a point of interest from a top-down view to a traditional front-on view;
  • FIG. 50 shows an example of a control panel provided along the screen on the display of FIGS. 2 and 4 by the user interface module of FIG. 4 providing labels of the signal channels being rendered, the current activity of each signal channel, and current view and angle;
  • FIGS. 51A and 51B are similar to FIGS. 6A and 6B, respectively, in which the selection criteria of segments result in overlap and gaps of the samples within the segments in order to more generally show segmentation;
  • FIGS. 52-55 illustrate segmentation of signal data by the segmentation controller module of FIG. 4 to form a two-dimensional array of ordered segments based on different user defined parameters and the characteristics of the signals;
  • FIG. 56 illustrates the addition of a reference segment at the segmentation controller module of FIG. 4 that may be specified by the user and generated for inclusion in the set of segments;
  • FIG. 57 illustrates an example of the reordering of a two-dimensional array of segments, by the segmentation controller module of FIG. 4, based on conditions such as values in the data, original source, or acquisition order; and
  • FIG. 58 is the display of FIGS. 2 and 4 of a three-dimensional view of a sampled live or recorded video image streaming from a unit under test along with other signals relevant to analysis in the system of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring to FIG. 2, the system 10 of the present invention has a computer system 12 with software in accordance with the present invention for rendering on a display 13 coupled to the computer system. The computer system 12 is connected to an acquisition device 15, such as a LeCroy Model No. MS-250 or MS-500 Mixed Signal Oscilloscope, or other logic analyzer or digital oscilloscope, receiving electrical signals from a device (or unit) 16 under test via test leads 17. For purposes of example, three leads 17 are shown for providing three electrical signals, but other numbers of leads may be used depending on the acquisition device 15. The acquisition device outputs digital data to the computer system 12 representing multiple channels of electrical signals received from leads 17, where each channel represents a signal from one of the leads 17 having amplitude or value over time. The analog electrical signals are converted by the acquisition device 15 into digital data format.
  • The computer system 12 stores the received data in its memory (RAM), and may also store the data in a file in memory (hard drive or optical drive) of the computer system for archival storage or for later non-real time rendering of the signals by the computer system on display 13. The computer system 12 has hardware and/or software enabling acquisition of data from the acquisition device 15 and storage of such data in its memory, as typically provided by the manufacturer of the acquisition device 15 for operating with the acquisition device.
  • The computer system 12 may represent a personal computer (PC) system, work station, laptop computer, or other microprocessor based platform, which is coupled to user controls 13 a, such as a keyboard, mouse, touch screen surface upon the display, or combination thereof, enabling the user to control the computer system 12 operation. Such user controls 13 a may be interactive with software on the computer system 12 to provide a graphical user interface on display 13. Although the system 10 of the present invention includes computer system 12, display 13, and user controls 13 a, in which computer system 12 has a graphics and video card and software for operating same for interfacing and outputting to display 13 as typical of a PC based computer system, the system 10 may be part of an acquisition and display system 11 with acquisition device 15.
  • The digital data representation of channels of electrical signals received in memory from the acquisition device 15 is processed by the computer system 12 for rendering on display 13. An example output on display 13 is shown in FIG. 3 having four signal channels each represented by three-dimensional (3-D) surfaces 18 to provide a three-dimensional model, in which measurement grids (or scale) 20 represent the time base (x), and the heights along each surface 18 are representative of the amplitude or value (y) of the signal. Although the 3-D view is illustrated as being along the entire screen of display 13 in FIG. 12, the view may be in a window on display 13 and may have a control panel of FIG. 50.
  • As will be described in more detail later, for each channel the computer system 12 segments the data in accordance with predefined or user defined start and/or end conditions of each segment, orders the segments in accordance with predefined or user defined conditions, and renders on the display each of the segments as line(s) having variations in height (y) in accordance with the values of the data of such segment, in their order in depth (z) along a three-dimensional perspective (x, y, z) with gaps between adjacent segments, and then extends three-dimensional planes from each of the line(s) of each segment to the next segment in depth (z).
  • For example, the third segment 19 of the topmost surface 18 is denoted by lines which vary (fall and rise) in height (y) along time (x), and for each part of such lines having the same height (y) a three-dimensional plane 21 is extended in a gap 19 b to abut lines of the next segment 19 a in depth (z). Each segment of a channel on the display forms one ribbon of surface 18, and the combination of such ribbons forms a continuous or discontinuous surface 18. In one example, depth (z) in the perspective of a channel surface 18 relates to previous acquisitions of the channel, thereby enabling a view where a user can analyze the relationship of two or more different channels by their respective surfaces 18 along the relative time base (or scale) 20 over a series of independent acquisitions of such different channels. As will be shown below in FIG. 12, the user may add three-dimensional planes at particular times along the time base through different channel surfaces 18 to assist in the analysis.
  • The data representing multiple channels of signals may be from the acquisition device 15 in real-time (e.g., streaming), but may also, or alternatively, represent a mathematical simulation, a data file stored in memory (hard drive, optical drive, volatile (RAM) memory, FLASH drive, or other memory storage device) of computer system 12 that is not acquired in real-time from acquisition device 15, or data from any other system or acquisition device capable of producing a set of acquired signal data other than the acquisition device 15 connected to computer system 12. This enables system 10 to be portable or stand-alone as well as part of a complete acquisition and display system 11 with acquisition device 15.
  • Referring now to FIG. 4, the software operating on the computer system 12 for generating the three-dimensional surfaces 18 is shown having modules or software components 14, 22, 24, 26, and 28: user interface 14, segment controller 22, three-dimensional model (3-D) generator 24, model enhancer 26, and model visualizer 28. The model visualizer 28 interfaces with software and/or hardware 32 of the computer system 12 to render and animate the model on display 13 to produce the desired scene, while the user interface 14 allows the user to control the visualized output on the display by changing parameters to modules 22, 24, 26, and 28 in response to user input from user controls 13 a. Each of these modules will now be described.
  • The segment controller module 22 receives data 17 a representing one or more separate channels of signals from acquisition device 15 and places it in a historical data store in memory of the computer system 12. The segment controller 22 then uses the set of new and/or historical data to generate individual time-contiguous segments containing samples for each of the channels, and arranges them into a two-dimensional array of data. The combined segment array of multiple channels may be considered a matrix. The exact format of such data is not restricted to a single representation.
  • Segmentation is performed on all available channels in parallel so as to maintain all time relationships. As such, the samples used for purposes of segmentation, and the samples in the generated segments, may be considered composite samples, whereby each composite sample contains the complete data of each of the included channels' samples from that same point in time. This is represented in FIG. 5, where Channels 0 to M (C0-CM), each containing samples 0 to N (C0S0-CMSN) having an amplitude or value (y) along time (x), are operated upon by the segment controller module 22 as a set of composite samples S0-SN (e.g., S0 contains the data of C0S0-CMS0).
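  • By way of a non-limiting illustration (not part of the original specification, with hypothetical names), the forming of composite samples from per-channel sample lists described above may be sketched as:

```python
# Hypothetical sketch of composite sampling: each composite sample Si
# gathers the value of every channel at time index i (cf. FIG. 5).

def composite_samples(channels):
    """Zip M equal-length channels of N samples into N composite samples.

    channels: list of per-channel sample sequences [C0, C1, ..., CM].
    Returns tuples S0..SN, where Si = (C0Si, C1Si, ..., CMSi).
    """
    return list(zip(*channels))

# Example: three channels (C0-C2), four samples each.
c0 = [0, 1, 1, 0]
c1 = [1, 1, 0, 0]
c2 = [0, 0, 1, 1]
samples = composite_samples([c0, c1, c2])
# samples[0] holds the value of every channel at time index 0.
```

Segmenting such tuples, rather than each channel separately, is one way the time relationships among channels are preserved.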
  • A representation of the segmentation of data of multiple channels into segments is shown in a basic case in FIGS. 6A and 6B and in a more general case in FIGS. 51A and 51B. The data for each channel may be segmented along user selectable conditions such as: consecutive time slices, reoccurrence of a pattern, specific start and/or stop conditions, acquisition source, different acquisitions, or combinations thereof. For example, the user selectable condition may be a pattern, such as a switch from low to high or high to low, or a particular sequence of data values. The segments may be uniform or non-uniform in length (x).
  • FIG. 6A shows an example of segmentation of a set of composite samples, containing data for one or more channels, into five segments labeled A to E in length (time) in FIG. 6A, in which one or more amplitudes or values (y) are shown by boxes along time (x). A two-dimensional array of this data ordering the segments A-E of FIG. 6A in depth z is shown in FIG. 6B. As more data of the represented channel(s) is acquired it may be segmented and added to the bottom of the array shown in FIG. 6B. Although the segments are shown non-uniform in FIGS. 6A and 6B, the segments may also be of uniform length (time). The organization of the array in two dimensions may be different than shown, and the particular data structure and organization is not limited to that shown in FIG. 6 or 51. Other segmentation of signals will be discussed later in connection with FIGS. 52-57.
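  • As a further non-limiting sketch (illustrative names only, not the patent's implementation), segmentation on a user-selectable start condition, such as a low-to-high switch, may proceed as:

```python
# Illustrative sketch: split a time-contiguous stream of composite
# samples into segments wherever a user-selectable start condition
# holds, stacking the segments oldest-first like rows of FIG. 6B.

def segment(samples, start_cond):
    """Begin a new segment at every index i where start_cond is True."""
    segments, current = [], []
    for i, s in enumerate(samples):
        if current and start_cond(samples, i):
            segments.append(current)
            current = []
        current.append(s)
    if current:
        segments.append(current)
    return segments

# Example condition: a low-to-high switch on channel 0 of each
# composite sample (one of the user-selectable patterns mentioned).
rising = lambda ss, i: i > 0 and ss[i - 1][0] == 0 and ss[i][0] == 1
stream = [(0,), (1,), (1,), (0,), (1,), (0,)]
rows = segment(stream, rising)
```

Other conditions named above (consecutive time slices, specific stop conditions, acquisition source) would simply substitute a different `start_cond`, so the segments may be uniform or non-uniform in length.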
  • To set up the system 10, the user interface 14 provides the user with one or more screens on display 13 that enable the selection of parameters, such as selecting which of the available channels the user desires to view, the layered order of such channels, the maximum number of segments of each channel to be extracted and rendered in the view (e.g., 1 to 100), and start and/or stop conditions by which each channel will be segmented, such as described above. The user can further select, via the user interface 14, the color of each channel to be rendered, shading to be applied, and the degree or level of translucency of each channel.
  • If desired, the user can also add non-signal objects to the view such as one or more reference markers (at chosen times (x and/or z)), measurement grids and reference planes (at preset times (x and/or z)), and furthermore can adjust the color and translucency of such additional objects. These parameters may be set to predefined default levels if the user does not wish to select user defined parameters. The user interface 14 may use graphical user interface elements, such as menus, input fields, and the like, typical of a software user interface. Other means for entry of these parameters may be used, such as buttons or knobs along a housing having the display 13, where housing includes the computer system 12, to select one or more of these parameters with or without a graphical user interface on display 13.
  • First, the two-dimensional array 23 of data, containing samples composed of one or more channels, is input to the three-dimensional model generator 24 (FIG. 4) to transform the data into a digital 3-D signal model. The operation of three-dimensional model generator 24 is shown in FIG. 7.
  • Based on user or default settings, the computer system 12 selects a depth (number of segments) of an input buffer in memory of computer system 12. The three-dimensional model generator 24 separates the 2D array of composite samples into individual 2D arrays (step 34), one for each channel and then filters out (removes) any undesired channels (as specified by predefined or user defined conditions) not to be displayed (step 35). This reduces the amount of data that must be processed by the subsequent functions to only that requested by the user. The model generator 24 uses the number and order of the channels, and the maximum number of segments, as selectable by the user via user controls 13 a to user interface 14.
  • The sample data from individual channels are individually located within a y portion of the three-dimensional space. For each of the N 2D arrays of channel data, steps 36-44 are performed within the y portion assigned by the computer system for that channel. Based on the x and z indices of each sample within the array for that channel, a respective location on the x-z plane of the model is calculated (step 36). It is very common when sampling a test point to have a series of time-contiguous samples of the same value. Therefore, in a preferred embodiment the process reduces the workload of the system by eliminating extraneous points that do not describe changes in the signal level over time (step 38), as is shown in the example of FIG. 7A. This is done so far as is possible without any loss of information. In the FIG. 7A example, vertices or points between vertices or points 37 a and 37 b, and 37 c and 37 d, are removed as indicated by arrows between such vertices. Thus step 38 provides for optimization, whereby extraneous vertices that do not represent change in the value of the signal are removed for efficiency in processing and rendering. Although preferable, step 38 may optionally be removed.
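  • The pruning of step 38 may be sketched, purely for illustration (assumed names, not the specification's code), as dropping the interior points of constant-value runs while keeping the run endpoints, so no change in signal level is lost:

```python
# Illustrative sketch of step 38: remove extraneous points inside runs
# of equal values; the first and last point of each run are retained so
# every change in signal level over time is still described.

def prune(points):
    """points: list of (x, value) pairs ordered in time."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # cur is extraneous only if both neighbors share its value
        if not (prev[1] == cur[1] == nxt[1]):
            kept.append(cur)
    kept.append(points[-1])
    return kept

pts = [(0, 1), (1, 1), (2, 1), (3, 0), (4, 0), (5, 1)]
# the interior point (1, 1) of the constant run at value 1 is removed
```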
  • From the remaining planar points a y value is then calculated for each to create a location in 3-D space (step 40). The y value relies on two components. Each channel is given a minimum and maximum y value in the model space within which all related samples will be located. The specific values for this y range are for presentation and clarity purposes to provide separation from the other channel surfaces 18 when rendered. In a preferred embodiment these are configurable by the user, as desired, and would not preclude the ability to overlap the locations of separate channels in the same space. The second component in generating a y value for each sample is the stored value associated with it, e.g., voltage of the given test point at that time. The final y value for the point is calculated as a location within the channel's y range which is proportional to that sample's value relative to the maximum value that can be represented for that channel based on the input source. A representation of the x and z location of each entry in the array of FIG. 8A, and the height y at such location, is shown in FIG. 8B.
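  • The y calculation of step 40 reduces to a proportional mapping, sketched here with assumed names (not part of the original specification):

```python
# Illustrative sketch of step 40: map a sample's stored value into the
# channel's assigned y band, proportional to the maximum value that can
# be represented for that channel by the input source.

def sample_y(value, max_value, ch_y_min, ch_y_max):
    """Return the final y location within [ch_y_min, ch_y_max]."""
    return ch_y_min + (value / max_value) * (ch_y_max - ch_y_min)

# Channel assigned the band y = 2.0 .. 3.0; 8-bit samples (max 255).
y_mid = sample_y(128, 255, 2.0, 3.0)   # lands just above mid-band
```

Because each channel's band is configurable, separate channels may be kept apart for clarity or deliberately overlapped in the same space.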
  • Those 3-D points that are contiguous in time for each given channel are then connected by lines (step 42). In one specific embodiment this is accomplished by using vertical and horizontal lines to generate a digital representation by forming right angled ‘steps’. In another embodiment direct angular lines are created to represent interpolation of the signal value between samples. This is useful for example if the source was an analog channel. Furthermore, multi-bit samples or combinations of multiple channels may be represented by bus symbols rather than a basic line. In any case, the user via the user interface 14 may select the desired form of presentation. A representation of the lines connecting contiguous points in time for each ordered segment 19 is shown in FIG. 8C for the example of FIGS. 8A and 8B, in which each pair of adjacently rendered ordered segments 19 is rendered with a gap 19 b (or space) between them for rendering of planes as will be described below.
  • While connected lines are useful in a two-dimensional graph, they are extremely difficult to understand in a three-dimensional environment as a line is not a three-dimensional object and has no volume. To provide depth, lines are extruded or extended in the z dimension (step 44) in the three-dimensional perspective to form planes 21 along gaps 19 b, where common y values contiguous in time (x) along the same segment form a plane 21 (x, z), and different consecutive y values in time (x) along the same segment form an orthogonal step (y, z) or a sloped plane 21 (x, y, z). The planes 21 are extruded in depth (z) such that the plane 21 for each segment 19 meets up with the following segment in depth z, thereby joining to provide a three-dimensional synthesized surface 18 for each channel that is easily discernible from different perspectives as shown in FIG. 8D.
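  • The extrusion of step 44 may be sketched, again only as a non-limiting illustration with assumed names, by turning each span of a segment's step-style line into a quadrilateral plane spanning the gap to the next segment in depth:

```python
# Illustrative sketch of step 44: extrude the right-angled 'step' line
# of one segment into planes 21 spanning from this segment (z_front)
# to the next segment in depth (z_back).

def extrude_quads(segment_pts, z_front, z_back):
    """segment_pts: [(x, y), ...] step points of one segment, ordered in x.

    Returns quads as 4-tuples of (x, y, z) vertices: constant-height
    spans become flat (x, z) planes; steps become orthogonal (y, z)
    riser planes.
    """
    quads = []
    for (x0, y0), (x1, y1) in zip(segment_pts, segment_pts[1:]):
        if y0 == y1:   # constant height: flat plane at height y0
            quads.append(((x0, y0, z_front), (x1, y0, z_front),
                          (x1, y0, z_back), (x0, y0, z_back)))
        else:          # change in value: orthogonal riser plane at x1
            quads.append(((x1, y0, z_front), (x1, y1, z_front),
                          (x1, y1, z_back), (x1, y0, z_back)))
    return quads

# One segment stepping from height 1 down to 0 at x = 2.
q = extrude_quads([(0, 1), (2, 1), (2, 0), (4, 0)], 0.0, 1.0)
```

A sloped plane 21 (x, y, z), as used for analog interpolation, would be produced by the same loop if the riser branch instead connected (x0, y0) to (x1, y1).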
  • Each segment 19 of a channel on the display, once rendered as one or more lines along the x, y axes with planes 21 extending from them along the z axis, represents one ribbon of a surface 18. As shown in FIG. 8D, common y values contiguous along the same segment 19 and among a series of ordered segments 19 along the x axis may form (or appear or be rendered as) a common plane 21 a (x, z), and different consecutive y values along the same segment which are common among a series of ordered segments 19 may form a common orthogonal step (plane) 21 b (y, z) or a sloped plane 21 (x, y, z). In other words, when the edges of two or more adjacent planes of a channel are located along the same two of the three dimensions (x, y, z), they may appear joined to each other as a common plane, and when two planes meet along different ones of at least two of the three dimensions, such two planes appear to meet to form an edge.
  • For the purpose of illustration, surface 18 in the example of FIG. 8D is discontinuous between some of the planes 21, providing a discontinuous surface (see for example opening 21 c between five rendered planes, or openings 21 d and 21 e). Depending on the values of the segments being rendered, the surface 18 may be continuous. Further, a discontinuous three-dimensional surface 18 may optionally be made continuous by rendering additional three-dimensional planes along such discontinuities, such as in openings 21 c, 21 d, and 21 e, and other openings where no rendered planes have edges adjacent to each other.
  • Other representations of surface 18 synthesis from data are shown for example in FIGS. 9A-9D, and FIG. 10. As shown for example in FIGS. 9B and 9D, the surface may be formed from angular planes, or from a combination of angular and non-angular planes. This method is particularly useful when the given channel being represented is derived from analog data instead of digital.
  • Performing the process of FIG. 7 on each of the channels in parallel creates a series of surfaces 18 which, distributed on the y axis, combine together to form a layered three-dimensional view, as shown for example in FIG. 11. In the resulting three-dimensional view, the representations of same-time samples from different channels are fixed relative to each other in the x and z dimensions. Thus, the time relationships amongst elements of the channel surfaces 18 remain clear while still allowing the simultaneous display of multiple channels to the user. Furthermore, once the view of the layered surfaces 18 is shown on display 13, the user controls the three-dimensional perspective, as will be shown later, to enable the user to view down the z axis of the surfaces 18 for each channel and thereby visualize variation patterns over a deep quantity of acquisitions or cycles (i.e., segments up to the maximum number of segments specified by the user).
  • In a preferred embodiment, a history FIFO of acquisitions, or data segments, can be used to place new data at the front of surfaces 18 and fluidly “scroll” older acquisitions (or data segments) away from the user along the z axis in real-time. In other words, when each channel has additional (or newer) signals, the ordered segments 19 rendered on the display 13 for such channel advances in the depth (z) in the three-dimensional perspective as the computer system 12 continues to segment the data representing the signal of the channels into segments 19 which are then added to the surface 18 of such channel as a new ribbon to such surface 18.
  • Consequently, as each channel advances in depth with the addition of new ordered segments, ribbons of the surface 18 associated with the oldest segments at the back of the surface 18 (i.e., greater than the maximum number desired by the user) may be removed from the view. Thus, a smoothly flowing view of multiple signal channels in a three-dimensional view is provided, whereby newer to older segments of each signal channel are viewable as a flowing surface 18 in a perspective of depth with the surface modifying its shape as values (y) of the signal change among consecutive ordered segments.
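  • The history FIFO behavior described above may be sketched, with illustrative names only, using a bounded double-ended queue: new segments enter at the front of the surface while segments beyond the user's maximum scroll off the back:

```python
from collections import deque

# Illustrative sketch (not the specification's implementation) of the
# history FIFO: newest segments at the front (small z), oldest dropped
# once the user's maximum number of segments is exceeded.

class SegmentHistory:
    def __init__(self, max_segments):
        self._fifo = deque(maxlen=max_segments)

    def push(self, segment):
        """Add the newest segment; the oldest ribbon falls off when full."""
        self._fifo.appendleft(segment)

    def ribbons(self):
        """Segments ordered front (newest) to back (oldest) along z."""
        return list(self._fifo)

hist = SegmentHistory(max_segments=3)
for seg_id in ["A", "B", "C", "D"]:
    hist.push(seg_id)
# "A" has scrolled off the back; the view shows D (front), C, B
```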
  • This creates an advantageous effect whereby the user can watch trends and cycles in signal timing by taking a perspective view down the depth of the three-dimensional view. Further, data segments which form an array 23 for one or more channels may be stored before, after, or concurrent with rendering on display 13 in memory of computer system 12 or external memory device accessible for storage by computer system 12, and thereby provide an archive for later display of such segments for analysis.
  • In another embodiment, the three-dimensional model generator 24 may generate multiple three-dimensional model views 25 a in parallel. These additional views are generated from decimated, lower resolution copies of input data 17 a by segment controller 22 as array data 23 a, and result in simplified versions 25 a of the base model 25. These are then used later on by the model visualizer 28 to improve rendering efficiency and increase the volume of data that can be displayed at once while retaining responsiveness to the user and higher update rates of renderings on the display 13, as will be described further below.
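  • A minimal sketch of such decimation (illustrative only; real implementations may filter before downsampling) is simply keeping every n-th sample to build each lower-resolution copy:

```python
# Illustrative sketch: build lower-resolution copies of the input data
# for simplified parallel models (cf. the reduced-detail distant
# sections of FIG. 13).

def decimate(samples, factor):
    """Keep every factor-th sample to form a lower-resolution copy."""
    return samples[::factor]

full = list(range(16))
levels = [full, decimate(full, 2), decimate(full, 4)]
# levels[2] is a quarter-resolution copy usable far from the viewer
```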
  • Thus, the 3-D model generator 24 receives the two-dimensional array of data 23 for one or more signal channels, and translates it into a three-dimensional model representation where the signal voltage amplitude and time are used to give each sample volume and location in three dimensions (x, y, and z). This produces a complete model where individual channels are viewed as 3-D surfaces 18 layered relative to each other in three-dimensional space over a common time (x and z).
  • The resulting model 25 is a record in memory (RAM) of the computer system 12 for all the channels to be rendered in a view of vertices in x, y, z space. For example, the record has for each channel the vertices of each segment (such as represented by FIG. 8B) in an order (e.g., left to right) defining lines between vertices (such as represented by FIG. 8C), and the vertices in an order (e.g., top to bottom) defining the planes or surfaces between segments (such as represented by FIG. 8D).
  • Once the input sample data has been generated into a three-dimensional model 25 (such as shown for example in FIG. 11), the model is enhanced to improve its usefulness to the user by the model enhancer 26. In a preferred embodiment most if not all of these enhancements are under the control of the user by controls 13 a to enable and configure as best suits their needs via the user interface 14. In order to clearly identify channel surfaces 18, a different color is applied to each surface 18, as selectable by the user via user interface 14. Furthermore, the area of planes 21 of a ribbon between adjacent samples is applied with a gradient, where the shade of the color approached at each sample point represents the voltage value at that same point.
  • In this way variances in voltage and samples of the same value become more visually apparent. Thus, height and/or one or more of color or intensity of each surface 18 is associated with values of data associated with the surface. Each surface 18 thus preferably varies in one or more characteristics (e.g., color, intensity, shading, or gradient) to distinguish the surfaces representing different channels from each other, to distinguish different planes of the same surface 18 from each other, and to distinguish the areas of each plane from each other on the same surface 18.
  • In addition to color, the user, via the user interface 14, is able to configure surfaces in the model to be applied with varied degrees of translucency. This, combined with a 3-D vantage point, enables viewing one surface 18 through another of the same or through multiple layered surfaces 18, and provides the ability for one pixel on the screen to give the user information on the value of multiple samples at once. Viewed from above and down along the direction of the y axis this ability can be used to make asynchronous data between two or more channels instantly apparent. Such capability is not possible with the conventional logic analyzer software for displaying two-dimensional signals of multiple channels.
  • Further, individual samples can also be further enhanced in the 3-D model with particular color, translucency, outlining, the appearance of glowing, or other special graphical characteristic effects as to provide for highlighting of desired points. These enhancements are applicable based on a specific sample, or samples meeting given user criteria such as value. Furthermore, sequences of samples can similarly be highlighted based on a certain sequential pattern or variance in either the x or z dimensions. The user controls 13 a to interface 14 may enable the user to select desired value(s) or patterns within a channel to be highlighted by desired graphical characteristic(s).
  • To facilitate usefulness for analysis, non-signal objects are added to the 3-D model. These objects include reference planes 46 a, 46 b, and 46 c that give scale and alignment information about the samples or identify special locations, such as grid planes 46 a, measurement markers 46 b, trigger points 46 c, and scale 20. The reference planes may extend through the channel surfaces 18 along the entire depth (z) of the view or less than the entire depth, as shown for example by reference planes 46 b.
  • Furthermore these non-signal objects 20, 46 a, 46 b, and 46 c can be customized by the user via the user interface 14 with varying colors and translucencies so as not to be lost amongst or hide the signal data being shown around them. An example of the 3-D model of surfaces 18 on a screen of display 13 is shown for example in FIG. 12 having vertical reference planes 46 a, 46 b, and 46 c with four surfaces 18 representing data of different channels. For purposes of illustration, the 3-D model and their associated surfaces 18 shown in the figures are shown in gray-scale, but typically each surface 18, scale 20, and reference planes 46 a, 46 b, and 46 c are of color as described herein.
  • As mentioned earlier channels may be representative of digital or analog sources or a combination thereof. To accommodate the given source domain channel surfaces 18 may be rendered in an analog form or digital form based on user selection. An example rendering of mixed analog and digital channels in a front-on orthogonal view is shown in FIG. 12A, while an example of mixed analog and digital channels with three-dimensional perspective is shown in FIG. 12B. Fundamentally there is little difference between analog and digital herein and all of the features described herein apply to both forms.
  • The earlier described record defining model 25 is modified by model enhancer 26 to add a number (or code) for each vertex defining its color and translucency level. For example, this number may have four values (R, G, B, α), where the first three define the R (red), G (green), and B (blue) values, respectively, that describe the color (or color mixture) of the vertex, and the fourth value (α) is the level of translucency of that vertex of the color in accordance with its R, G, B values. For example, a completely opaque pure white vertex can be described as (1.0, 1.0, 1.0, 1.0), while a 50% transparent pure black vertex is described as (0.0, 0.0, 0.0, 0.5), and a slightly transparent yellow vertex can be described as (1.0, 1.0, 0.0, 0.9). Further added to the record are vertices defining the non-signal objects (e.g., reference plane(s) 46 a, 46 b, 46 c, and scale 20) and their color and translucency values.
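  The (R, G, B, α) vertex encoding described above can be sketched as follows. This is an illustrative sketch only; the helper name `vertex_color` and the range check are ours, not part of the patent text.

```python
def vertex_color(r, g, b, alpha=1.0):
    """Pack a vertex color as the (R, G, B, alpha) tuple described above.

    All components are floats in [0.0, 1.0]; alpha = 1.0 is fully opaque
    and alpha = 0.0 is fully transparent.  (Hypothetical helper.)
    """
    for c in (r, g, b, alpha):
        if not 0.0 <= c <= 1.0:
            raise ValueError("components must be in [0.0, 1.0]")
    return (r, g, b, alpha)

# The examples from the description:
opaque_white = vertex_color(1.0, 1.0, 1.0, 1.0)
half_transparent_black = vertex_color(0.0, 0.0, 0.0, 0.5)
slightly_transparent_yellow = vertex_color(1.0, 1.0, 0.0, 0.9)
```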
  • The modified record represents a 3-D display model 27, which is used by model visualizer 28 to produce rendering instructions 29, representative of the visualization of the model 27, for a software/hardware renderer 32 for output on display 13, thereby producing the desired visual image. First, the visualizer 28 performs scaling of the model 27 in any or all of the three dimensions x, y, z based on predefined or user-defined conditions. This allows the user to condense each axis independently, altering the proportions and the amount of data that is displayed on the display 13.
  • Next, the visualizer 28 takes into account the user's simulated position in the 3-D environment and their viewing angle to determine the portion of the model in view. The user controls 13 a, via the user interface 14, enable the user to input the desired scaling in x, y, z and select any change of simulated user position and viewing angle within or around the three-dimensional model. The change of simulated user position may be performed using buttons on the user interface's keyboard coupled to the computer system 12; by clicking (pressing) a mouse button to select where on the image of the 3-D model the new viewing position will be; or by clicking down a mouse button and holding it while dragging the image to move the position or angle of view about the current viewing position or angle, releasing the button when the desired view is obtained. Other means of using the user interface 14 may also be used to select or change viewing position and angle, including top views, bottom views, side views, and any other angular view therebetween, as desired by the user to view the relationship between two or more channels, or patterns in a single channel.
  • After the model 27 (FIG. 4) is scaled and the viewing position and angle updated to the last values held in the memory of computer system 12 (or a new viewing position or angle as selected by the user), the model visualizer 28 produces rendering instructions 29 for the view to be displayed. The rendering instructions 29 are in a format and code defined by the three-dimensional software/hardware renderer 32 that receives such instructions. The three-dimensional software/hardware renderer 32 is a component of the computer system 12 and has graphics libraries and (optionally) 3-D acceleration hardware to output a three-dimensional image in accordance with such instructions.
  • Such a software/hardware renderer 32 enables a fast frame rate and three-dimensional rendering effects, and is often used for video game rendering on personal computers, but has not been utilized in the field of display of test and measurement data. Examples of commercially available three-dimensional software/hardware renderers 32 are video accelerator hardware/software, such as ATI Radeon or NVidia GeForce series graphics cards and their drivers. The software of model visualizer 28 uses widely available OpenGL software libraries for interfacing to the card. Alternately, the Microsoft DirectX standard or another video graphics library and/or hardware may be chosen.
  • When using such a library there are common programming techniques that should be applied to achieve better performance. These techniques are well documented in the field of computer graphics and are described in, for example, the publications: OpenGL Architecture Review Board, Dave Shreiner, et al., OpenGL Programming Guide: The Official Guide to Learning OpenGL, Version 2, 5th ed., Boston, Mass.: Addison-Wesley, 2006; and OpenGL Architecture Review Board, Dave Shreiner, OpenGL Reference Manual: The Official Reference Document to OpenGL, Version 1.4, 4th ed., Boston, Mass.: Addison-Wesley, 2004.
  • Preferably, the model visualizer 28 logically separates the model 27 into sections in the x and/or z dimensions. Based on the viewing angle and virtual distance from the viewer, it then determines each individual section of the view to be displayed and chooses, out of the multiple resolution (i.e., decimated) models produced by the 3-D model generator 24 and enhanced by model enhancer 26, which resolution model is most appropriate for each section. Decimated model sections are used when the size on display 13 (related to perspective distance) at which they are to be rendered is incapable of effectively displaying the additional information in the more detailed version of the model, due to the pixel resolution (or other limitation) of the display. In this way, the model visualizer 28 is able to simplify the model without information loss to the user and still greatly decrease the amount of data that must be rendered.
  • For example, a lower resolution representation of arrays 17 a is produced by reducing the number of samples in time (x) for each ordered segment, such as by collapsing the set of y values for each N consecutive samples in the arrays 23 a into a single y range (max and min) value pair (where N increases as resolution lowers). Each lower resolution representation of the array data 23 a is operated upon by generator 24 to produce model 25 a and then by enhancer 26 to provide different models 27 a, of different resolution than model 27, for visualizer 28. The visualizer 28 selects the vertices of records for each section of the final view from one of these models 27 and 27 a in accordance with time (x and/or z), as the virtual distance from the viewer increases and the required resolution is reduced. An example of this is shown in FIG. 13, showing three different versions 51, 52, 53, labeled Sections A, B, and C, respectively, of the same original data 50 for two signal channels along different parts of a rendered view 49. Although three sections are shown, there are more sections between Sections A, B, and C of different resolution levels of the signal channels.
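  The min/max collapsing step described above can be sketched as follows. This is a hypothetical illustration of the decimation idea; the function name and list-based representation are ours, not the patent's implementation.

```python
def decimate_min_max(samples, n):
    """Collapse each run of n consecutive y samples into a single
    (min, max) range pair, producing a lower-resolution version of
    the segment as described for the decimated models."""
    out = []
    for i in range(0, len(samples), n):
        chunk = samples[i:i + n]          # up to n consecutive samples
        out.append((min(chunk), max(chunk)))
    return out

# Eight samples collapsed with N = 4 become two (min, max) pairs:
print(decimate_min_max([0, 3, 1, 2, 5, 4, 9, 8], 4))  # [(0, 3), (4, 9)]
```

  Rendering a (min, max) pair as a filled vertical span preserves the signal envelope, so no visible information is lost at the reduced pixel size while the vertex count drops by roughly a factor of N/2.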
  • Model visualizer 28 described above operates asynchronously. This is because the other components 24 and 26 focus on producing a 3-D model and therefore only need to operate when new data is input to the system or the user requests a change in their operation. In addition to when new data is input to the computer system 12, the model visualizer 28 also operates whenever a new image of the 3-D model must be output to the display 13, such as when the user wants to change the view. This approach also allows the model visualizer 28 to implement animation processes that improve the user experience without requiring continual user input or new data models to be generated.
  • The user interface 14 facilitates the user's interaction with system 10 by user controls 13 a. Once a view, such as shown in FIG. 12 or 12B for example, is rendered on the display 13, the user via the user controls 13 a through user interface 14 has freedom of movement within and around the 3-D view of surfaces 18 to select the viewing position and angle of view, as described earlier. This feature is comparable to that normally found in a video game and includes mouse and keyboard control for adjusting the user's X-Y-Z positions, yaw, pitch, and roll, but is not present in conventional logic analyzer or oscilloscope software. This gives the user the ability to view the data from any location around or inside the model quickly and intuitively.
  • A series of examples of the movement of a view of surfaces 18 a, 18 b, 18 c, and 18 d is shown in FIGS. 14-20. Starting from an original view, such as that shown in FIG. 14, the user can perform the following controls using the user interface's mouse: hold the left mouse button and drag left, right, up, or down to move the view position left, right, up, or down relative to the current viewing angle (FIG. 15 shows an example of moving the viewing position up and left from FIG. 14); scroll the mouse wheel forward and back to move the user's viewpoint forward or back (FIG. 16 shows an example of moving the viewing position forward from FIG. 14); hold the right mouse button and drag left, right, up, or down to change the viewing angle (yaw and pitch) (FIG. 17 shows a tilt of the viewing angle to the right and slightly down from FIG. 14); and hold down the middle mouse button and drag left or right to scale the 3-D model in x lesser or greater, respectively, or drag down and up to scale the view in y lesser or greater, respectively. An example of reducing the x scale from FIG. 14 is shown in FIG. 18.
  • The scaling axes for the middle-mouse-button drag controls change when the user's current pitch and/or yaw angle is greater than 45 degrees. This correlates the adjustment of the model to the predominant direction the user is facing. For example, when the view is angled more than 45 degrees down, dragging the middle mouse button up and down will scale the z dimension of the model instead of the y dimension, as shown in the before and after views in FIGS. 19 and 20. The surfaces 18 a, 18 b, 18 c, and 18 d can move in and out of view on display 13 as desired by the user, as shown in FIGS. 14-20. The above described use of the mouse's buttons and wheel is exemplary; other mouse buttons, keyboard buttons, or components of the user interface may be used to perform similar or additional functions.
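  The 45-degree axis switch for the vertical drag can be sketched as a one-line selection rule. This is an illustrative sketch; the function name is ours, and the behaviour at exactly 45 degrees is an assumption the patent text does not pin down.

```python
def vertical_drag_axis(pitch_deg):
    """Choose which model axis a vertical middle-button drag scales.

    Per the description, a pitch steeper than 45 degrees switches the
    vertical drag from scaling the y dimension to the z dimension.
    """
    return "z" if abs(pitch_deg) > 45 else "y"

print(vertical_drag_axis(10))   # y  (looking roughly straight ahead)
print(vertical_drag_axis(-60))  # z  (looking steeply down)
```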
  • Typically a 3-D view is rendered to a 2-D display, such as a CRT or LCD monitor, with perspective, meaning that objects are drawn smaller as their virtual distance from the observer increases. In the case of logic analyzer display software this is not always preferable, as it can become difficult to do certain time comparisons of signal data in perspective. This is part of the reason why traditional logic analyzers display their data, including historical data layering, in 2-D graphs. To account for this in system 10, the user may control the amount of perspective used by the model visualizer 28 in drawing the image. This enables the user to switch to and from a completely orthogonal (non-perspective) view, which can mimic a traditional two-dimensional (2-D) logic analyzer display when viewed from a perpendicular front view. The user may toggle between views via user controls 13 a (buttons on a keyboard), or by selecting a menu, button, or other graphical element of the graphical user interface 14 provided on display 13.
  • To avoid user disorientation in switching between perspective (3-D) and orthogonal (2-D) views, the model visualizer 28 enables smooth transitions between perspective and orthogonal views (or modes) and back again, thus allowing the user to readily understand the change. Representative frames of this animation are shown in the eight perspective-to-orthogonal transition screenshots of FIGS. 21-28, where FIG. 21 has the most perspective of surfaces 18 e, 18 f, and 18 g and FIG. 28 is a fully orthogonal view thereof as traces 18 e″, 18 f″, and 18 g″, respectively, with the transition thereto denoted by versions 18 e′, 18 f′, and 18 g′ through FIGS. 22-27. The end result is a 2-D graph where the ordered segments of each channel surface 18, which had been presented in depth extending away from the user, now appear flatly overlaid on each other in the 2-D graph. The opposite takes place when changing from a 2-D orthogonal view to a 3-D perspective view. Other numbers of steps may be used to enable smoother transitions between 3-D and 2-D views on display 13.
  • The smooth transition between 3-D and 2-D views is animated by the model visualizer 28 and relies on basic geometric calculations illustrated in FIGS. 29A-29C which are applied to 3-D computer graphics by the model visualizer 28. These methods provide a way to fluidly change the amount of perspective without dramatically changing the current area and focus of channel samples visible within the display model 27 by adjusting both the viewing position and field of view steadily during the transition. This allows the perspective change to occur without disorienting the user or causing undesired side-effects on the resulting render on display 13. A software flow chart of the steps performed by the model visualizer 28 to achieve this automatic animation is shown in FIGS. 30A-30C. The result is a comfortable and intuitive feel to the user when switching back and forth between these 3-D and 2-D rendering modes.
  • For example, when the user wishes to change a 3-D perspective view into a 2-D (orthogonal) view they will first select a point of interest (denoted as POI) via the user controls 13 a for the user interface 14 on display 13 (step 54). This POI may be any point representative of a data sample or object in the current 3-D rendered view on the display. The user then presses a button on the graphic user interface 14 in the screen on display 13 or a keyboard button to initiate the change to orthogonal view (step 55). The model visualizer 28 on computer system 12 then calculates virtual distance in 3-D space between the POI and the user's current viewpoint or “camera” (step 56). This distance is considered dA. The computer system then determines the angle that is half of the current vertical field of view (step 57). This value is described as fA, also shown in FIGS. 29A-29C, and is maintained in the memory of computer system 12.
  • The computer system then calculates the virtual spatial distance of half of the view (visible perpendicular plane area) height at the POI (defined as H) using the formula H=tan(fA)×dA (step 58). Next, the computer system uses a discrete number of steps, called N, for the transition and initializes the current step count, called S, to N (step 59). In this example N is 50, but it could be any number greater than or equal to 1. Greater numbers result in a smoother, but longer, transition. Additionally, a discrete time duration could be used instead, such that the transition occurs over a given time period independent of the number of transition frames that can be rendered by any given system during that time.
  • At this point the model visualizer 28 is ready to begin the transition process. The computer system then updates the current vertical field of view angle so that half of that angle is equal to ((S/N)×fA)², where the result is called fB (step 60). The horizontal field of view is also always updated when the vertical field of view changes such that their ratio remains constant. This ratio may be any value predefined by the software. Squaring the stepped angle is used to provide a more linear transition from the user's perspective, due to the trigonometric functions involved. However, any number of other mathematical functions could be utilized to create somewhat varying effects.
  • Next the computer system calculates a new virtual distance between the POI and the user's viewpoint, called dB, which is equal to dA×tan(fA)/tan(fB), and updates it in the system's memory (step 61). The viewpoint is moved directly backwards in 3-D space based on its current view direction so that it is at the computed distance. With the new field of view and viewpoint location values calculated, the model visualizer 28 is then ready to render a new scene of the 3-D model (step 62).
  • Afterwards the software decrements S by 1 (step 63). The new value of S is analyzed to see if it is still greater than 1 (step 64). If it is, then the process repeats back to step 60 and continues on again through steps 60-64. Otherwise at step 64, the model visualizer 28 is on the last step of the transition and switches the 3-D graphics library from perspective rendering to orthogonal rendering mode (step 65).
  • Next the computer system uses the previously calculated viewport height H to generate the vertical and horizontal distances around the POI for an orthogonal projection border, which is applied to the 3-D graphics library (step 66). Then the user's viewpoint is returned to the original 3-D location it was in at the beginning of the 3-D to 2-D transition process (step 67). Finally, this new scene is rendered with the changed settings and values (step 68), and the change to an orthogonal 2-D view is complete (step 69).
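  The per-frame geometry of the animation loop above (steps 58-64) can be sketched as follows, using the names from the description (dA, fA, fB, dB, H, N, S). This is an illustrative sketch of the stated formulas, not the patent's actual implementation; the function name is ours, and angles are assumed to be in radians.

```python
import math

def perspective_to_orthogonal(dA, fA, n_steps=50):
    """Animate the perspective-to-orthogonal transition.

    Each frame narrows the vertical half field of view to
    fB = ((S/N) * fA)**2 and pulls the camera back to
    dB = dA * tan(fA) / tan(fB), which keeps the visible half-height
    at the point of interest constant at H = tan(fA) * dA.
    """
    H = math.tan(fA) * dA                      # step 58: half view height at POI
    frames = []
    for S in range(n_steps, 0, -1):            # steps 59-64: S counts down from N
        fB = ((S / n_steps) * fA) ** 2         # step 60: squared stepped angle
        dB = dA * math.tan(fA) / math.tan(fB)  # step 61: new camera distance
        frames.append((fB, dB))                # step 62: render this frame
    return H, frames

# Starting 10 units from the POI with a 30-degree half field of view:
H, frames = perspective_to_orthogonal(dA=10.0, fA=math.radians(30))
```

  Because tan(fB)·dB = tan(fA)·dA on every frame, the plane through the point of interest keeps a constant apparent size as the field of view collapses toward the orthogonal limit, which is exactly why the switch to orthogonal mode at the last step is not jarring.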
  • A key feature of rendering of a 3-D view of multiple channels is automated reorientation of the view position and angle. With the freedom of movement provided by a full 3-D environment, the user can select a variety of off-center vantage points with regards to the 3-D view on display 13. Often to provide efficient analysis it is desirable to reorient on a particular point of interest, particularly by achieving a “straight-on” vantage perpendicular to the 3-D view. This is achieved by using the view geometry exemplified in FIG. 31 and the process in the software operating on computer system 12 for enabling automatic reorientation is shown in FIG. 32.
  • Using user interface 14, the user starts from any location, illustrated as “1(Start)”, and may click on, for purposes of selection, a point of interest in the 3-D view on display 13, illustrated as “2(POI)” (step 70), and press a button on the graphic user interface 14 in the screen on display 13 or a keyboard button of user controls 13 a (step 71) to have the system 10 automatically reorient on that point. As large jumps in rendered views during this transition can be extremely disorienting from the user's perspective, the model visualizer 28 smoothly animates the transition along the dotted line, labeled “3” (FIG. 31), until the perpendicular target vantage point, labeled “4(End)”, is achieved. First the software calculates the virtual distance in 3-D between the starting viewpoint and the POI (step 72). This distance is called Dp.
  • Next the computer system calculates the point location in 3-D space (considered Pe) that is exactly Dp distance from the POI, perpendicular along either the x, y, or z axis (step 73). The axis chosen is dependent on which button or keystroke the user selected for the desired vantage point. Then the computer system calculates the desired final viewing angle (called Ve) that will result in the POI being in the center of the field of view from location Pe (step 74). The angle Ve will always be 0 degrees from the chosen axis and 90 degrees from the remaining two axes.
  • Next the software calculates the virtual 3-D distance (called Pd) between the starting viewpoint position (called Ps) and Pe (step 75), followed by calculating the angle difference (called Ad) between the starting view angle (called Vs) and Ve (step 76). Once the computer system 12 has calculated the start and end locations and angles using the current position, angle, and the POI, the model visualizer 28 enters a view animation and render loop to perform the transition in small steps until the target endpoint is reached. At this point the software checks to see if the current viewpoint position (called Pc) equals Pe and the current viewpoint angle (called Ac) equals Ve (step 78). If both position and angle are equal to the final desired values at step 78 then the process is complete (step 79). Otherwise the software adjusts Pc to be 1/Nth of Pd closer to Pe (step 80). N is a discrete number of steps predetermined by the software over which to perform the transition.
  • In this example N is 50, but it could be any number greater than or equal to 1. Greater numbers result in a smoother, but longer, transition. Additionally, a discrete time duration could be used instead, such that the transition occurs over a given time period independent of the number of transition frames that can be rendered by any given system during that time. Next the computer system adjusts Ac to be 1/Nth of Ad closer to Ve (step 81). With the new field of view and viewpoint location values calculated, the model visualizer 28 is then ready to render a new scene of the 3-D view (step 82). Then the process repeats back to step 78 and continues from there until the final location and angle are reached.
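  The reorientation loop above (steps 72-82) amounts to stepping the viewpoint position and angle a fixed 1/N of their total change per frame, so both arrive at the target together after N frames. The following is an illustrative sketch under that reading; the function name and the linear-interpolation form are ours, points are (x, y, z) tuples, and angles are scalar degrees.

```python
def reorient(ps, pe, vs, ve, n_steps=50):
    """Animate the viewpoint from (ps, vs) to (pe, ve) in n_steps frames.

    Each frame advances the position and angle by 1/N of the total
    distance Pd and total angle difference Ad, per steps 80-81.
    """
    ad = ve - vs                                             # step 76
    pc, ac = ps, vs
    for s in range(1, n_steps + 1):
        frac = s / n_steps                                   # fraction of path covered
        pc = tuple(a + (b - a) * frac for a, b in zip(ps, pe))  # step 80
        ac = vs + ad * frac                                  # step 81
        # ...step 82: render the scene from viewpoint pc at angle ac...
    return pc, ac

pc, ac = reorient(ps=(0.0, 0.0, 5.0), pe=(3.0, 4.0, 5.0), vs=0.0, ve=90.0)
```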
  • The result is that the model visualizer 28 calculates the necessary movement path and viewpoints, performing the entire process in a smooth transition effect so that the user is able to keep focus on their point of interest and the data around it. This process can be performed to orient the user's position and angle in 3-D space to be aligned perpendicular to the point of interest along any two of the three model axes at a time, as chosen by the user.
  • Furthermore, in a specific embodiment, this reorientation process can be combined with the perspective-to-orthogonal (or reverse) view transition method described above, whereby, by selecting a point of interest and making just a single click or key press, the user can invoke the computer system 12 to automatically center on the desired vantage point and transition to the desired amount of perspective or lack thereof, resulting in a single automatic and fluid transition such as that shown in the example of a view of surfaces 18 h, 18 i, 18 j, and 18 k in FIGS. 33-43. This is extremely useful for being able to move around the model in 3-D perspective to locate an anomaly, and then with a single click smoothly transition to a straight-on orthogonal view to perform in-depth analysis of the data and timing around the event.
  • With another click the user can transition to a traditional front-on view as shown in FIGS. 43-49, thus providing for an efficient and effective usage flow of the instrument. This movement can be performed while remaining in orthogonal (non-perspective) mode as in FIGS. 43-49, or by transitioning back to 3-D perspective during the movement. Surfaces 18 h, 18 i, 18 j, and 18 k are preferably of different colors and have a sufficient degree of translucency to enable discernment of transitions in such surfaces when overlaid as shown in FIGS. 33-48. Surfaces 18 i, 18 j, and 18 k thus may be visible under surface 18 h and through each other, but surfaces 18 i, 18 j, and 18 k are not discernible in FIGS. 37-43 due to the limitations of non-color presentation.
  • Referring to FIG. 50, the user interface 14 may include a control panel 84 which may appear on the display 13 adjacent to the main rendered view on the screen of the 3-D surfaces 18 of multiple channels, as shown in FIG. 12, and operated upon using the user interface's mouse of user controls 13 a. The control panel 84 enables the user to understand the positioning of the viewpoint and signal channels in the 3-D model, as well as providing an interface to customize the properties of those channels. This panel shows representations of each channel currently included in the 3-D model. Each channel has a label 85 that can be customized by clicking on it and a graphical representation of the signal 86-92. These are drawn with a color matching that used for the signal in the 3-D model itself.
  • Additionally, the symbols may be located vertically within the panel at a location proportional to the channel's y location in the model, and are therefore not necessarily evenly distributed in area 93 of panel 84. Furthermore, the symbols are capable of being dragged up and down by the user to alter the channel's location, and the 3-D model is then re-generated accordingly. By clicking on the symbols the user can alter further properties of the channel such as, but not limited to: color, transparency, source, height, graphical representation style (analog, digital, bus, etc.), and numerical base (binary, hex, octal, decimal, etc.).
  • If an acquisition device 15 connected to this system is capable of producing live activity information on the current state of the signals, then the channel symbols in the panel are also used to portray this information. These are exemplified in the diagram as signal conditions of: rising 86, low 87, high 88, toggling 89, falling 90, a stable hexadecimal bus 91, and a bus with some of its signals changing 92.
  • The control panel 84 also contains a camera symbol 94 that represents the user's current viewpoint in the y axis with relation to the 3-D view. Additionally, the angle of the graphic indicates the current pitch of the view 95. In this way the symbol effectively provides useful orientation information, particularly when the point of view is very close to or within the surfaces 18 on display 13 itself and the main view rendering may be too close-up or confusing. Lastly, the camera symbol 94 is draggable in the y axis similar to the signals, such that the user can quickly relocate to a new vantage point of the view of surfaces 18 on display 13.
  • As described earlier in connection with FIG. 4, the segment controller module 22 segments channels of the data into an array along a z dimension of ordered segments. This array does not necessarily have to be uniform. The z dimensional ordering of a stack of independent data segments may be selected, broken, and ordered based on user defined conditions, whereby each segment is comprised of a set of the time-ordered data. Examples of such conditions are shown in FIGS. 51-57.
  • In the example of FIG. 51A, all of the data comes from a single original time-ordered acquisition, whereby segments can be any part of the original set, overlap, vary in size, and skip data. Some preferred uses include choosing segments based on: different acquisitions, consecutive time slices (FIG. 52), occurrence of a pattern (FIG. 53), specific start and/or stop conditions (FIGS. 54-55), data dependent sub-triggers, source device, and combinations thereof. In addition, fake or simulated reference segments can be specified by the user and generated for inclusion in the set of segments for the purpose of creating a constant base case or comparison set, as shown in FIG. 56 for example. Other selection and ordering of segments may be used, and such selection and ordering is not limited to those described herein.
  • Once the data samples contained in each segment are selected, two other factors affect stacking the segments in the z dimension to create an array of data to be modeled. The first factor is alignment. The first sample of each segment does not necessarily have to be aligned in the x dimension (time), as is shown in FIG. 51B. Alternately, segments could be aligned based on other conditions, such as the location of samples matching a specified pattern, the end sample, or a specific real-time time differential between segments. The second factor in arranging the segments is order. While the preferred method is to order the segments in the z dimension in increasing real-time order, segments may also be reordered based on such conditions as values in the data, original source, or acquisition. An example of reordering of an array is shown in FIG. 57.
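  One of the alignment conditions above, aligning segments on the first occurrence of a specified pattern, can be sketched as follows. This is a hypothetical illustration; the function name, the offset convention, and the choice to leave non-matching segments unshifted are all ours.

```python
def align_on_pattern(segments, pattern):
    """Return an x offset per segment so that the first occurrence of
    `pattern` in each segment lines up across the z-stacked array.

    Each segment is a list of samples; a segment with no match is
    left unshifted (offset measured from the latest-starting match).
    """
    def first_match(seg):
        k = len(pattern)
        for i in range(len(seg) - k + 1):
            if seg[i:i + k] == pattern:
                return i
        return 0  # no match: leave the segment where it is

    positions = [first_match(seg) for seg in segments]
    anchor = max(positions)                 # align everything to this x
    return [anchor - p for p in positions]  # shift per segment

segs = [[0, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 1, 1, 0]]
print(align_on_pattern(segs, [1, 1]))  # [1, 2, 0]
```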
  • In addition to analysis of test and measurement data, system 10 may be used to analyze domain specific data. One example of useful analysis that could be assisted by the present invention is that of video signals, where the acquisition device 15 (FIG. 2) samples a video channel from a device or unit 16 under test, or where such video signals are provided from another source to the memory of computer system 12. Video communications typically include a vertical sync signal that signifies a new video frame and a horizontal sync signal that signifies a new horizontal line of video within that frame. Using these signals as well as the video data itself, the sampled data can be rearranged such that the invention can display a 3-D representation of the live sampled video streaming from a unit under test, such as is represented in the example of FIG. 58. To achieve this, the acquisition device's trigger is set up to trigger on a rising edge of the vertical sync line.
  • Additionally, either a hardware sub-trigger or a software programmed sub-trigger is set for a rising edge on the horizontal sync line. The vertical sync trigger denotes the first segment of the 2-D array to be displayed by the invention, while the horizontal sync trigger is used to “break”, or denote the separation of, each segment to be ordered in the z dimension of the array. In this example, the z ordering would place the first segment furthest away and the last segment closest to the front of the model, so as to mimic a top-to-bottom scanning of a video frame.
  • Note that this is reversed from the example figure which shows the vertical sync trigger in front simply for demonstration purposes. This is a different mode of operation than an acquisition based z dimension as discussed earlier that would cause a historical aging or scrolling effect. In this case, instead of an aging model that scrolls, the entire model would be updated at once. When a new acquisition containing information for a new video frame is available, the entire model would update again.
  • The result of such a setup is a full video frame of a time-contiguous acquisition that is then restructured into a 2-D array of data similar to what it actually represents. The computer system 12 is then able to render it such that the actual complete video frame can be seen along with the control and/or other signals. This is extremely useful for determining the cause of anomalies or static in the video signal, as the human eye can easily recognize them in the reconstituted frame. Using the translucency of different channels, the user can overlay the video data with various other signals to determine the cause of the problem. Reference planes 46 may further be added to the rendered display.
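  The "breaking" of the acquisition into z-ordered line segments at each horizontal sync edge can be sketched as follows. This is an illustrative sketch only: in practice the sub-triggering happens in the acquisition hardware or driver, and the function name and list representation are ours.

```python
def split_on_rising_edges(hsync, video):
    """Break one acquisition into segments at each rising edge of the
    horizontal sync channel, so each segment holds one video line.

    hsync and video are equal-length sample lists; hsync is 0/1.
    """
    edges = [i for i in range(1, len(hsync))
             if hsync[i - 1] == 0 and hsync[i] == 1]   # rising-edge indices
    bounds = [0] + edges + [len(video)]
    return [video[a:b] for a, b in zip(bounds, bounds[1:])]

hsync = [0, 0, 1, 0, 0, 1, 0, 0]
video = [10, 11, 20, 21, 22, 30, 31, 32]
print(split_on_rising_edges(hsync, video))
# [[10, 11], [20, 21, 22], [30, 31, 32]]
```

  Reversing the resulting list before stacking gives the z ordering described above, with the first line furthest away to mimic top-to-bottom scanning.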
  • Furthermore, the system 10 may be extended for use in analysis of data in non-time-ordered domains. For example, the system 10 can render in three dimensions, for purposes of visualization and analysis, a frequency domain representation of some or all of the input data 17 a; for example, as resulting from application of a Fourier transform to the data. In another embodiment, the system 10 may render the input data 17 a as a probability distribution; for example, as a histogram or other non-time-domain representation, though the scope of the invention is not limited to solely those domains and applications specified herein.
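  As a minimal stand-in for the Fourier-transform representation mentioned above, a naive discrete Fourier transform over one channel's samples could look like the following. This sketch is ours, not the patent's; a real implementation would use an FFT library rather than this O(n²) sum.

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum of a real-valued sample list,
    one magnitude per frequency bin k = 0 .. n-1."""
    n = len(samples)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(samples)))
            for k in range(n)]

# A single-cycle cosine over 8 samples concentrates its energy in
# bins 1 and 7 (magnitude n/2 = 4 each), with the rest near zero:
mags = dft_magnitudes([math.cos(2 * math.pi * i / 8) for i in range(8)])
```

  Each channel's magnitude spectrum could then be fed into the same segment-stacking and surface-generation path as time-domain data, with frequency replacing time on the x axis.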
  • Additionally, while the 3-D modeling herein has been discussed in terms of the three-dimensional Cartesian coordinate system (x,y,z), the system 10 may also operate upon input data 17 a, or processed derivations thereof, to render 3-D representations on display 13 using non-Cartesian coordinate systems such as: 3-D spherical, cylindrical, or other coordinate systems, similar to that described, above using a Cartesian coordinate system.
  • From the foregoing description it will be apparent that there has been provided an improved system and method for three-dimensional rendering of electrical test and measurement signals, as well as for analysis of video and other applications of signals. The illustrated description as a whole is to be taken as illustrative and not as limiting of the scope of the invention. Such variations, modifications, and extensions, which are within the scope of the invention, will undoubtedly become apparent to those skilled in the art.
  • This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

Claims (3)

What is claimed is:
1. A system for displaying a three dimensional visual representation of one or more signals from a user controllable viewing perspective, the system including:
a data input component for acquiring a first set of signal values that each represent a measurement of at least a first signal over a period of time, and wherein each of said signal values has an associated time value;
a computing component for computing a first three dimensional representation and a second three dimensional representation of at least said first signal over time by representing each of said signal values with respect to X, Y and Z axes that are located within a virtual three dimensional space and that are each oriented in a direction that is orthogonal relative to other said axes; and
wherein said first three dimensional representation is computed based upon a first virtual viewing perspective, and wherein said second three dimensional representation is computed based upon a second virtual viewing perspective, and wherein said first and second virtual viewing perspectives are each defined in association with different virtual locations within said three dimensional space, and wherein at least said second virtual perspective is specified by a user of the system while viewing said three dimensional representation from said first viewing perspective.
2. The system of claim 1 wherein no more than one of said first viewing perspective and said second viewing perspective is equivalent to an orthogonal and two dimensional viewing perspective of said signals.
3. The system of claim 1 wherein said three dimensional representation transitions from said first viewing perspective to said second viewing perspective while maintaining a fixed size of a point of interest within said three dimensional representation.
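One way the transition described in claim 3 could be realized, offered only as an illustrative sketch and not as the claimed method, is to move the virtual camera along an orbit of constant radius about the point of interest, so that its distance (and hence its apparent size under a fixed field of view) never changes:

```python
import math

def orbit_transition(center, radius, angle_start, angle_end, steps):
    """Yield camera positions orbiting `center` at a fixed radius, so a
    point of interest at `center` keeps a constant apparent size."""
    for i in range(steps + 1):
        a = angle_start + (angle_end - angle_start) * i / steps
        yield (center[0] + radius * math.cos(a),
               center[1] + radius * math.sin(a),
               center[2])

# Transition a quarter turn around the origin in 8 steps
cams = list(orbit_transition((0.0, 0.0, 0.0), 5.0, 0.0, math.pi / 2, 8))
# Every intermediate camera stays exactly 5 units from the point of interest.
dists = [math.dist(c, (0.0, 0.0, 0.0)) for c in cams]
```

Interpolating the camera angle rather than its raw position is what keeps the radius, and therefore the size of the point of interest on screen, fixed throughout the transition.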
US13/755,287 2008-02-04 2013-01-31 System for three-dimensional rendering of electrical test and measurement signals Abandoned US20130207969A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/755,287 US20130207969A1 (en) 2008-02-04 2013-01-31 System for three-dimensional rendering of electrical test and measurement signals

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/012,617 US8502821B2 (en) 2008-02-04 2008-02-04 System for three-dimensional rendering of electrical test and measurement signals
US201261592998P 2012-01-31 2012-01-31
US13/755,287 US20130207969A1 (en) 2008-02-04 2013-01-31 System for three-dimensional rendering of electrical test and measurement signals

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/012,617 Continuation US8502821B2 (en) 2008-02-04 2008-02-04 System for three-dimensional rendering of electrical test and measurement signals

Publications (1)

Publication Number Publication Date
US20130207969A1 true US20130207969A1 (en) 2013-08-15

Family

ID=40931209

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/012,617 Active 2030-10-14 US8502821B2 (en) 2008-02-04 2008-02-04 System for three-dimensional rendering of electrical test and measurement signals
US13/755,287 Abandoned US20130207969A1 (en) 2008-02-04 2013-01-31 System for three-dimensional rendering of electrical test and measurement signals
US13/958,265 Abandoned US20130314419A1 (en) 2008-02-04 2013-08-02 System for three-dimensional rendering of electrical test and measurement signals

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/012,617 Active 2030-10-14 US8502821B2 (en) 2008-02-04 2008-02-04 System for three-dimensional rendering of electrical test and measurement signals

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/958,265 Abandoned US20130314419A1 (en) 2008-02-04 2013-08-02 System for three-dimensional rendering of electrical test and measurement signals

Country Status (3)

Country Link
US (3) US8502821B2 (en)
EP (1) EP2250625A1 (en)
WO (1) WO2009099572A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8754885B1 (en) * 2012-03-15 2014-06-17 Google Inc. Street-level zooming with asymmetrical frustum

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7940236B2 (en) * 2007-04-20 2011-05-10 Global Oled Technology Llc Passive matrix electro-luminescent display system
US8949233B2 (en) * 2008-04-28 2015-02-03 Alexandria Investment Research and Technology, Inc. Adaptive knowledge platform
US8494250B2 (en) * 2008-06-06 2013-07-23 Siemens Medical Solutions Usa, Inc. Animation for conveying spatial relationships in three-dimensional medical imaging
US8473854B2 (en) * 2008-08-19 2013-06-25 Rockwell Automation Technologies, Inc. Visualization profiles and templates for auto-configuration of industrial automation systems
US20110040537A1 (en) * 2009-08-17 2011-02-17 Sap Ag Simulation for a multi-dimensional analytical system
JP5586203B2 (en) * 2009-10-08 2014-09-10 株式会社東芝 Ultrasonic diagnostic apparatus, ultrasonic image processing apparatus, and ultrasonic image processing program
US10197600B2 (en) 2011-04-29 2019-02-05 Keysight Technologies, Inc. Oscilloscope with internally generated mixed signal oscilloscope demo mode stimulus, and integrated demonstration and training signals
US9116011B2 (en) 2011-10-21 2015-08-25 Here Global B.V. Three dimensional routing
US8553942B2 (en) 2011-10-21 2013-10-08 Navteq B.V. Reimaging based on depthmap information
US9047688B2 (en) 2011-10-21 2015-06-02 Here Global B.V. Depth cursor and depth measurement in images
US9024970B2 (en) 2011-12-30 2015-05-05 Here Global B.V. Path side image on map overlay
US9404764B2 (en) 2011-12-30 2016-08-02 Here Global B.V. Path side imagery
US20130262015A1 (en) * 2012-03-29 2013-10-03 Travis W. L. White Annotating Measurement Results Data Capture Images with Instrumentation Configuration
US20130262016A1 (en) * 2012-03-29 2013-10-03 Travis W. L. White Automatically Configuring a Measurement System Using Measurement Results Data Capture Images Annotated with Instrumentation Configuration
US10095659B2 (en) 2012-08-03 2018-10-09 Fluke Corporation Handheld devices, systems, and methods for measuring parameters
US9541579B2 (en) * 2012-09-25 2017-01-10 Tektronix, Inc. Methods and systems for generating displays of waveforms
US11641536B2 (en) * 2013-03-15 2023-05-02 Fluke Corporation Capture and association of measurement data
US20150016269A1 (en) * 2013-07-09 2015-01-15 Tektronix, Inc. Frame analysis - a new way to analyze serial and other packetized data
US9766270B2 (en) 2013-12-30 2017-09-19 Fluke Corporation Wireless test measurement
US10109085B2 (en) * 2014-01-08 2018-10-23 Walmart Apollo, Llc Data perspective analysis system and method
WO2016080079A1 (en) * 2014-11-21 2016-05-26 富士フイルム株式会社 Time series data display control device, method and program for operating same, and system
JP6346674B2 (en) * 2014-11-21 2018-06-20 富士フイルム株式会社 Time-series data display control device, operating method and program thereof, and system
US10062411B2 (en) 2014-12-11 2018-08-28 Jeffrey R. Hay Apparatus and method for visualizing periodic motions in mechanical components
US10108325B2 (en) 2014-12-11 2018-10-23 Rdi Technologies, Inc. Method of analyzing, displaying, organizing and responding to vital signals
US10332287B2 (en) * 2015-11-02 2019-06-25 Rohde & Schwarz Gmbh & Co. Kg Measuring device and method for visually presenting a signal parameter in a displayed signal
US11245593B2 (en) * 2016-04-25 2022-02-08 Vmware, Inc. Frequency-domain analysis of data-center operational and performance metrics
CN106370906A (en) * 2016-09-30 2017-02-01 成都定为电子技术有限公司 Electric signal time-frequency-amplitude three-dimensional characteristic measurement and display system and method
US10627910B2 (en) 2017-02-21 2020-04-21 Adobe Inc. Stroke operation prediction for three-dimensional digital content
US10657682B2 (en) * 2017-04-12 2020-05-19 Adobe Inc. Drawing curves in space guided by 3-D objects
US10628997B2 (en) * 2017-08-24 2020-04-21 Emilio Santos Method for generating three-dimensional models from constrained sketches and an instruction set
US11423551B1 (en) 2018-10-17 2022-08-23 Rdi Technologies, Inc. Enhanced presentation methods for visualizing motion of physical structures and machinery
US11373317B1 (en) 2020-01-24 2022-06-28 Rdi Technologies, Inc. Measuring the speed of rotation or reciprocation of a mechanical component using one or more cameras
US11282213B1 (en) 2020-06-24 2022-03-22 Rdi Technologies, Inc. Enhanced analysis techniques using composite frequency spectrum data
US11322182B1 (en) 2020-09-28 2022-05-03 Rdi Technologies, Inc. Enhanced visualization techniques using reconstructed time waveforms
CN115742562B (en) * 2023-01-05 2023-04-21 东方合智数据科技(广东)有限责任公司 Intelligent monitoring method, device and equipment for printing packaging equipment and storage medium

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3769541A (en) * 1971-09-28 1973-10-30 Gen Electric Line width modulated display system
US5960118A (en) * 1995-07-06 1999-09-28 Briskin; Miriam Method for 2D and 3D images capturing, representation, processing and compression
US5959607A (en) * 1996-10-17 1999-09-28 Hewlett-Packard Company Trace coloring system and method for a signal measurement device having a color display
US6151010A (en) * 1996-05-24 2000-11-21 Lecroy, S.A. Digital oscilloscope display and method therefor
US6502045B1 (en) * 1999-05-19 2002-12-31 Ics Systems, Inc. Unified analog/digital waveform software analysis tool with video and audio signal analysis methods
US20030006990A1 (en) * 2001-05-31 2003-01-09 Salant Lawrence Steven Surface mapping and 3-D parametric analysis
US20030058243A1 (en) * 2001-09-21 2003-03-27 Faust Paul G. Delivery and display of measurement instrument data via a network
US20030063097A1 (en) * 2001-09-28 2003-04-03 Xerox Corporation Detection and segmentation of sweeps in color graphics images
US20040044479A1 (en) * 2002-09-04 2004-03-04 Sansone Stanley A. Method and apparatus for interferometry, spectral analysis, and three-dimensional holographic imaging of hydrocarbon accumulations and buried objects
US6707474B1 (en) * 1999-10-29 2004-03-16 Agilent Technologies, Inc. System and method for manipulating relationships among signals and buses of a signal measurement system on a graphical user interface
US6741887B1 (en) * 2000-12-01 2004-05-25 Ge Medical Systems Information Technologies, Inc. Apparatus and method for presenting periodic data
US6810346B2 (en) * 2002-01-31 2004-10-26 Agilent Technologies, Inc. Composite eye diagrams
US20050094879A1 (en) * 2003-10-31 2005-05-05 Michael Harville Method for visual-based recognition of an object
US20060146009A1 (en) * 2003-01-22 2006-07-06 Hanno Syrbe Image control
US20070046952A1 (en) * 2005-09-01 2007-03-01 Hitachi Communication Technologies, Ltd. Apparatus for measuring waveform of optical electric filed, optical transmission apparatus connected thereto and a method for producing the optical transmission apparatus
US20080177182A1 (en) * 2007-01-24 2008-07-24 Kabushiki Kaisha Toshiba Ultrasonic imaging apparatus and method for acquiring ultrasonic image
US7599814B2 (en) * 2006-04-27 2009-10-06 Hrl Laboratories, Llc System and method for computing reachable areas
US20100125816A1 (en) * 2008-11-20 2010-05-20 Bezos Jeffrey P Movement recognition as input mechanism
US7843429B2 (en) * 1997-08-22 2010-11-30 Pryor Timothy R Interactive video based games using objects sensed by TV cameras
US8452435B1 (en) * 2006-05-25 2013-05-28 Adobe Systems Incorporated Computer system and method for providing exploded views of an assembly

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3638066A (en) * 1970-08-21 1972-01-25 Thomas O Paine Contourograph system for monitoring electrocardiograms
US4032912A (en) * 1974-10-03 1977-06-28 General Electric Company Intensity modulated display system
DE2908424C2 (en) * 1978-05-08 1980-12-18 Dr.-Ing. J.F. Toennies Erben Kg, 7800 Freiburg Method and arrangement for the representation of electrical space curves
US4818931A (en) * 1987-02-19 1989-04-04 Hewlett-Packard Company Vector analyzer with display markers and linear transform capability
US4961155A (en) * 1987-09-19 1990-10-02 Kabushiki Kaisha Toyota Chuo Kenkyusho XYZ coordinates measuring system
EP0462289B1 (en) * 1989-12-28 1994-11-02 Kabushiki Kaisha Toyota Chuo Kenkyusho Apparatus for measuring three-dimensional coordinates
US5739807A (en) * 1991-09-13 1998-04-14 Tektronix, Inc. Method for presenting complex number waveforms
US5241302A (en) * 1991-09-13 1993-08-31 Tektronix, Inc. Method for displaying signal characteristics
US5214508A (en) * 1992-02-14 1993-05-25 Tektronix, Inc. Spatial bandwidth testing for digital data-compressed video systems
US5801312A (en) * 1996-04-01 1998-09-01 General Electric Company Method and system for laser ultrasonic imaging of an object
WO1997039360A2 (en) * 1996-04-02 1997-10-23 Lecroy Corporation Apparatus and method for measuring time intervals with very high resolution
US6442730B1 (en) * 1997-01-27 2002-08-27 Lecroy Corporation Recording medium failure analysis apparatus and method
US6298085B1 (en) * 1997-10-23 2001-10-02 Sony Corporation Source encoding using shuffling of data to provide robust error recovery in a burst error-environment
US7051309B1 (en) * 1999-02-16 2006-05-23 Crosetto Dario B Implementation of fast data processing with mixed-signal and purely digital 3D-flow processing boars
US6859742B2 (en) * 2001-07-12 2005-02-22 Landis+Gyr Inc. Redundant precision time keeping for utility meters
US6965383B2 (en) * 2001-12-11 2005-11-15 Lecroy Corporation Scaling persistence data with interpolation
US6735554B2 (en) * 2002-05-16 2004-05-11 Tektronix, Inc. Method and apparatus for representing complex vector data
US6745148B2 (en) * 2002-06-03 2004-06-01 Agilent Technologies, Inc. Intelligent test point selection for bit error rate tester-based diagrams
US20040017399A1 (en) * 2002-07-25 2004-01-29 Beck Douglas James Markers positioned in the trace of a logic analyzer snap to locations defined by clock transitions
US6781584B2 (en) * 2002-07-26 2004-08-24 Agilent Technologies, Inc. Recapture of a portion of a displayed waveform without loss of existing data in the waveform display
US7519874B2 (en) * 2002-09-30 2009-04-14 Lecroy Corporation Method and apparatus for bit error rate analysis
US7103400B2 (en) * 2002-11-08 2006-09-05 Koninklijke Philips Electronics, N.V. Artifact elimination in time-gated anatomical imaging
US20060075212A1 (en) * 2002-12-31 2006-04-06 Zeroplus Technology Co., Ltd. Programmable logic analyzer data analyzing method
WO2004059334A1 (en) * 2002-12-31 2004-07-15 Zeroplus Technology Co., Ltd Programmable logic analyzer data analyzing method
US7216046B2 (en) * 2003-03-19 2007-05-08 Tektronix, Inc. Method of generating a variable persistence waveform database
AU2003230733A1 (en) * 2003-04-08 2004-11-26 Chen, Chung-Chin Logic analyzer data retrieving circuit and its retrieving method
JP4468677B2 (en) * 2003-05-19 2010-05-26 オリンパス株式会社 Ultrasonic image generation method and ultrasonic image generation program
US7223611B2 (en) * 2003-10-07 2007-05-29 Hewlett-Packard Development Company, L.P. Fabrication of nanowires
US20050183066A1 (en) * 2004-02-17 2005-08-18 Jabori Monji G. Correlating debugger
US7236900B2 (en) * 2004-04-20 2007-06-26 Tektronix, Inc. Three dimensional correlated data display
WO2005121814A1 (en) 2004-06-07 2005-12-22 Zeroplus Technology Co., Ltd. Logic analyzer and method of analyzing waveform data using the same
US7589728B2 (en) * 2004-09-15 2009-09-15 Lecroy Corporation Digital oscilloscope display and method for image quality improvement
US7280930B2 (en) * 2005-02-07 2007-10-09 Lecroy Corporation Sequential timebase
US7505039B2 (en) * 2005-07-21 2009-03-17 Lecroy Corporation Track of statistics
US20070061629A1 (en) * 2005-08-15 2007-03-15 Thums Eric E Drop and drag logic analyzer trigger
WO2007051075A1 (en) * 2005-10-28 2007-05-03 Brigham And Women's Hospital, Inc. Ultrasound imaging
EP1963805A4 (en) * 2005-12-09 2010-01-06 Univ Columbia Systems and methods for elastography imaging
WO2008027520A2 (en) * 2006-08-30 2008-03-06 The Trustees Of Columbia University In The City Of New York Systems and methods for composite elastography and wave imaging
US7456779B2 (en) * 2006-08-31 2008-11-25 Sierra Nevada Corporation System and method for 3D radar image rendering
JP4329838B2 (en) * 2007-04-18 2009-09-09 ソニー株式会社 Image signal processing apparatus, image signal processing method, and program
US8384365B2 (en) * 2007-06-15 2013-02-26 The Regents Of The University Of Colorado, A Body Corporate Multi-phase modulator
US20090061730A1 (en) * 2007-08-30 2009-03-05 Chung-Chin Chen Bra cups with air-permeability feature


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Metratek, Test and Measurement Solutions; user manual 1993-2007. *


Also Published As

Publication number Publication date
WO2009099572A1 (en) 2009-08-13
US20090195536A1 (en) 2009-08-06
US8502821B2 (en) 2013-08-06
US20130314419A1 (en) 2013-11-28
EP2250625A1 (en) 2010-11-17

Similar Documents

Publication Publication Date Title
US8502821B2 (en) System for three-dimensional rendering of electrical test and measurement signals
Hurter et al. Fiberclay: Sculpting three dimensional trajectories to reveal structural insights
US5590271A (en) Interactive visualization environment with improved visual programming interface
US7978210B2 (en) Detail-in-context lenses for digital image cropping and measurement
Chen et al. Visual abstraction and exploration of multi-class scatterplots
US6480194B1 (en) Computer-related method, system, and program product for controlling data visualization in external dimension(s)
KR102029055B1 (en) Method and apparatus for high-dimensional data visualization
US8907948B2 (en) Occlusion reduction and magnification for multidimensional data presentations
US20100011309A1 (en) Data visualisation systems
US20040125138A1 (en) Detail-in-context lenses for multi-layer images
EP1351122A2 (en) Virtual three-dimensional display
EP1692664B1 (en) System for displaying images with multiple attributes
EP2587456A2 (en) Method and systems for generating a dynamic multimodal and multidimensional presentation
US10261306B2 (en) Method to be carried out when operating a microscope and microscope
Schulze-Döbold et al. Volume rendering in a virtual environment
US11113868B2 (en) Rastered volume renderer and manipulator
Klinker An environment for telecollaborative data exploration
US6469702B1 (en) Method and system for editing function curves in two dimensions
US7046241B2 (en) Oriented three-dimensional editing glyphs
KR100419999B1 (en) Method for providing a graphic user interface for a volume rendering image
Haroz et al. Seeing the difference between cosmological simulations
Zhong Studying black and white textures for visualization on e-ink displays
Trautner Relevanzorientierte Exploration von Molekulardynamik-Simulationen
CN117373013A (en) Visual version comparison method and device for three-dimensional model
Abidin et al. Murvis: enhancing the visualization of multiple response survey

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION