WO2015011487A1 - Monitoring the performance of a computer - Google Patents

Monitoring the performance of a computer

Info

Publication number
WO2015011487A1
WO2015011487A1 (PCT/GB2014/052275)
Authority
WO
WIPO (PCT)
Prior art keywords
user
computer
wait
duration
cursor
Prior art date
Application number
PCT/GB2014/052275
Other languages
French (fr)
Inventor
David Mchattie
Jeremy Barker
Original Assignee
David Mchattie
Jeremy Barker
Priority date
Filing date
Publication date
Application filed by David Mchattie and Jeremy Barker
Publication of WO2015011487A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3055Monitoring arrangements for monitoring the status of the computing system or of the computing system component, e.g. monitoring if the computing system is on, off, available, not available
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity for performance assessment
    • G06F11/3419Recording or statistical evaluation of computer activity for performance assessment by assessing time
    • G06F11/3438Recording or statistical evaluation of computer activity by monitoring of user actions
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/349Performance evaluation by tracing or monitoring for interfaces, buses
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/835Timestamp
    • G06F2201/88Monitoring involving counting

Definitions

  • the present invention is concerned with measuring a wait duration specific to a user interaction with a computer by detecting a wait state in which the computer is not available for user input in order to determine how much time a computer is available to a user during a user session.
  • GB2370140A discloses a method for determining system resource parameters to improve response time for specific users.
  • US6046816A discloses measuring the total time between a user requesting a print of a document and the completion of the print job.
  • JP2-105236A discloses a method for counting the response time in a time-sharing system.
  • US2012144246 discloses systems and methods for monitoring operational performance of at least one application. The approach does not use any explicit instrumentation, and relies on performance monitoring of at least one Core Responsive Element (CRE).
  • the CRE may be linked to a wait cursor.
  • Computer response time has been defined as the elapsed time between the end of an inquiry or demand on a computer system and the beginning of a response; for example, the length of the time between an indication of the end of an inquiry and the display of the first character of the response at a user terminal.
  • the computer is in a wait state of varying duration.
  • the response time can appear to be unacceptably long to the user.
  • an indication is given to the user that the computer or application is in a wait state and not available for user input, usually because it is busy performing an operation, or the operating system is active working on another task. This can be done, for example, by the input cursor ceasing to flash.
  • a common way of letting the user know the computer is in such a wait state and cannot accept user input is to change the mouse cursor.
  • the wait cursor is, for example, an hourglass in Windows® before Vista® and many other systems, a spinning ring in Windows Vista, a watch in classic Mac OS, or a spinning ball in Mac OS X.
  • This wait state can be detected.
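By way of illustration only, the detection just mentioned can be sketched as a comparison of the current cursor identifier against a set of known wait-cursor identifiers. The identifiers and function names below are hypothetical stand-ins for a platform-specific cursor query and do not appear in the specification.

```python
# Hypothetical sketch: classify cursor identifiers and detect
# transitions into and out of the wait state on a cursor change.
WAIT_CURSOR_IDS = {"hourglass", "spinning_ring", "watch", "spinning_ball"}

def is_wait_cursor(cursor_id):
    """Return True if the identifier denotes a wait cursor."""
    return cursor_id in WAIT_CURSOR_IDS

def detect_wait_state_change(previous_id, current_id):
    """Classify a cursor change as entering or leaving the wait state."""
    if previous_id == current_id:
        return None                     # no cursor change occurred
    if is_wait_cursor(current_id):
        return "entered_wait_state"
    if is_wait_cursor(previous_id):
        return "left_wait_state"
    return None                         # change between two normal cursors
```

In a real agent the identifiers would come from the operating system; the string identifiers here merely mirror the wait cursors named in the description.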
  • Another wait time is associated with logging on to the computer or system, in which the operating system is setting up parameters specific to the user, or returning from a standby or hibernated state. This can generate a wait state cursor, or it can be a display with a moving progress bar on it. The operating system is unavailable for user input during this phase of its operation.
  • a further wait state occurs as a result of poor network performance, in which the operating system or a program running on the operating system is waiting for information to be delivered or received by an external network, as for example when accessing email or a web site.
  • wait states conspire to reduce the productivity of the user, can increase a user's stress level, and can generally lead to a poor user experience.
  • Obtaining an objective measure of the delay caused to a user during these wait states by a lack of computer resources (of whatever kind: e.g. memory, processor, connectivity, device-related, network speed, operating system state, application system state) while interacting with a computer has long been a desired goal but, owing to the difficulty of pinpointing the source of the delay precisely, it has not so far been achieved.
  • the wait cursor itself does not accurately reflect application performance (as experienced by a User).
  • the wait cursor can change (to a non-wait cursor) for a number of reasons not associated with a change in state of the measured application, (i.e. the event within the application that triggered the wait cursor).
  • the User can choose another application to interact with, which can change the cursor state (i.e. move the cursor) from the wait state associated with application 1 to an application that is not in a wait state, such as application 2, application 3 and so on.
  • the prior art thus addresses only the total wait-state time associated with a single application. It does not address the cumulative wait states experienced by the User in their normal day-to-day interaction with many applications.
  • the present invention measures the wait cursor's cumulative effect on the User's interaction with systems, software and applications: a true interpretation and reflection of the User experience specific to system and application performance.
  • the invention primarily measures the degradation of the User's experience and efficiency caused by extended wait times (as indicated by the display of the wait cursor), not application performance. The prior art measures the total wait event applicable to only one application at a time, not the present invention's measurement of cumulative wait events. While tracking the wait cursor is not a credible method for measuring application performance, it is a reliable method for understanding the User experience and efficiency, as it tracks what the User is actually doing and experiencing, not irrelevant machine data specific to what an application is doing.
  • the present invention measures a wait duration specific to a user interaction with a computer. It involves the following steps: detecting a wait state in which the user is waiting for the computer; and measuring a duration of the wait state.
  • the step of detecting the wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor.
  • a property of the wait cursor includes the one or more applications being used by the user.
  • the duration of the wait state is a period of time during which the cursor is a wait cursor.
  • the method includes the additional steps of: collecting performance data relating to the one or more applications and the duration of the wait state. This means that the measure and cause of the delay experienced by the user is determined from the performance data.
  • the method includes the additional steps of: starting a user waiting meter when a wait cursor is displayed, and stopping the user waiting meter when either a non-wait cursor is displayed or the user selects another application to continue working.
  • the duration of the wait state is a duration of time the user waiting meter was running.
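The user waiting meter described in the two preceding paragraphs can be sketched as follows. The class and method names are illustrative, and the clock is injected as a callable so the meter can be driven deterministically; none of these identifiers appear in the specification.

```python
# Hypothetical sketch of the "user waiting meter": it starts when a wait
# cursor is displayed and stops when a non-wait cursor appears or the
# user switches to another application.
class UserWaitingMeter:
    def __init__(self, clock):
        self.clock = clock        # callable returning the current time (s)
        self.started_at = None    # None while the meter is not running
        self.total_wait = 0.0     # accumulated wait duration

    def on_wait_cursor_displayed(self):
        if self.started_at is None:
            self.started_at = self.clock()

    def _stop(self):
        if self.started_at is not None:
            self.total_wait += self.clock() - self.started_at
            self.started_at = None

    def on_non_wait_cursor_displayed(self):
        self._stop()

    def on_user_selects_other_application(self):
        self._stop()
```

The duration of the wait state is then the time the meter spent running, accumulated in `total_wait`.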
  • the present invention also includes a method for measuring a user experience metric specific to a user interaction with a computer. This comprises any of the above steps and additionally includes the steps of: measuring a duration of user activity to give a user active value.
  • the user experience metric is a function of the sum of all user active values for a given time period and the sum of all user wait durations for the given time period.
  • measuring a duration of user activity to give a user active value comprises the steps of: starting a user active meter when a user presses a key on a keyboard or clicks a mouse button and stopping the user active meter when the user does not press the keyboard key or click the mouse button for a preset duration of time.
  • the user active value is a duration of time the user active meter was running.
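A minimal sketch of the user active meter and the user experience metric, under two stated assumptions: input activity is represented as a sorted list of event timestamps in seconds, and the metric is taken, purely for illustration, as the fraction of engaged time not spent waiting. The specification does not fix the exact functional form of the metric, and these names are not from the patent.

```python
# Sketch of the "user active meter": runs of key presses and mouse
# clicks separated by gaps no longer than the idle timeout count as one
# active period from the run's first event to its last. This is one
# plausible reading of the meter's start/stop rule.
def user_active_value(event_times, idle_timeout):
    """Total active duration given sorted input-event timestamps (s)."""
    if not event_times:
        return 0.0
    total = 0.0
    run_start = prev = event_times[0]
    for t in event_times[1:]:
        if t - prev > idle_timeout:     # gap too long: close this run
            total += prev - run_start
            run_start = t
        prev = t
    total += prev - run_start           # close the final run
    return total

def user_experience_metric(total_active, total_wait):
    """One illustrative metric: fraction of engaged time not spent waiting."""
    return total_active / (total_active + total_wait)
```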
  • the performance of the computer as experienced by the user is determined from a ratio of a duration of the wait time to a duration of time the computer is available for use by the user, wherein the duration of time the computer is available for use by the user is the total duration of the user session minus the wait time.
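The ratio just described can be computed directly; a small sketch under the stated definitions (available time is the total session duration minus the wait time):

```python
# Performance as experienced by the user: wait time divided by the time
# the computer was actually available (session duration minus wait time).
def wait_to_available_ratio(session_duration, wait_time):
    available = session_duration - wait_time
    if available <= 0:
        raise ValueError("wait time must be less than the session duration")
    return wait_time / available
```

A lower ratio indicates a computer that was available for more of the session.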
  • the present invention provides a method for monitoring the performance of a computer when in use by a user, comprising the steps of: detecting a wait state in which the user is waiting for the computer; and collecting performance data relating to a duration of the wait state. This provides an assessment of the period of time the user is waiting for the computer to complete an action, process or command.
  • the step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
  • the method assesses the performance provided to the user of the computer from the performance data.
  • the step of collecting performance data relating to the wait state includes measuring a duration of the wait state. This provides an assessment of the period of time the user is waiting for the computer to complete an action, process or command.
  • the step of detecting a wait state includes detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor.
  • the wait cursor is a clear indicator of the kind of wait state a user experiences whilst using the computer.
  • the pointing device is selected from the group consisting of: mouse, touch pad and touch screen.
  • the wait state can be determined from the user's use of a range of input devices.
  • Input devices may include eye movement, motion detection, voice control or any similar means allowing a user to interact with information shown on a screen.
  • the duration of the wait state is the elapsed time between an end of a user input and a beginning of a computer response. This gives a measure of the time during which the user is unable to use the computer.
  • the duration of the wait state relates to a single user session.
  • the duration of the wait state is a sum of all the durations of a wait state during a user session. This gives an indication of the total amount of time the user has not been able to use the computer.
  • the step of collecting performance data additionally comprises the step of: measuring a duration the computer is in use by the user. This provides information on how much time the user had been able to use the computer.
  • the duration of time the computer is in use by the user is the duration of a user session. This provides information on the total amount of time the user had been able to use the computer.
  • the performance of the computer is measured by the duration of the wait state and its ratio to the duration the computer is in use. This gives an indication of the proportion of the session the computer has been unavailable for use.
  • the step of collecting performance data additionally comprises the step of measuring a duration the computer is available for use by the user.
  • This provides information on the availability of the computer for use by the user.
  • the duration the computer is available for use by the user is a sum of all the durations the computer is available for use during a user session. This provides information on the total amount of time the computer was available for use by the user.
  • the performance of the computer is measured as a ratio between the duration of the wait state and the duration of when the computer is available. This gives an indication of the proportion of time the computer has been unavailable for use.
  • the step of detecting a wait state includes detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input. This provides an assessment of the period of time the user is waiting for the computer to become available for use.
  • the step of collecting performance data relating to the wait state includes data relating to resource utilisation.
  • data relating to resource utilisation includes one or more of: CPU utilisation, memory utilisation, disk queue utilisation, and disk free space. This is especially important where the performance of the computer is slowed because of a lack of memory or disk space or other resource.
  • the step of collecting performance data relating to the wait state includes data relating to the network application.
  • data relating to resource utilisation includes one or more of: network upload speed and network download speed. This is especially important where the performance of the computer is slowed because of a poor, badly configured or busy network connection.
  • the step of collecting performance data relating to the wait state includes the additional step of: analysing the performance data; and identifying a lack of resources.
  • a duration of the wait state may be reduced by increasing availability of resources.
  • the performance data is analysed to provide a report, wherein the performance of the computer is improved.
  • the report shows that a key resource is limited - improvements to the computer or its connections can lead to an immediate improvement in performance.
  • the performance data is uploaded for analysis.
  • the computer is one of a server, a desktop computer, a laptop computer, a mobile computer, or a smart phone.
  • the method may be applied to any computing device that the user may choose to use.
  • the present invention provides a method for measuring a delay experienced by a user interacting with a computer comprising the steps of: detecting a state in which the user is waiting for the computer and collecting performance data relating to a duration of the wait state.
  • the step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor, or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
  • the method measures the delay experienced by the user from the performance data.
  • the present invention provides a method for determining a cause of a delay experienced by a user interacting with a computer comprising the steps of: detecting a wait state in which the user is waiting for the computer and collecting performance data relating to a duration of the wait state.
  • the step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
  • the method determines a cause of the delay experienced by the user from the performance data.
  • Figure 1 shows a schematic for a method for monitoring the performance of a computer
  • Figure 2 shows a schematic for a method for monitoring the performance of a computer, wherein the wait state is associated with a wait state cursor;
  • Figure 3 shows a schematic for a method for monitoring the performance of a computer, wherein the wait state is associated with a logon event
  • Figure 4 shows a schematic showing how computer performance can be monitored and improved.
  • the present invention measures a user's subjective experience of using one or more applications running on a computer. It does this by measuring the duration of wait states when the user cannot do any work on the application and also measuring the duration of active states, when the user is able to utilise the application normally.
  • the user's subjective experience, which can be termed a user experience metric, a user experience index, or a user experience rating, is derived from these two parameters. Importantly, this approach does not measure the performance of the computer per se, but is a measure of the productivity of the user interacting with applications running on the computer.
  • Figure 1 shows a method for monitoring the performance of a computer 100 when in use by user 102.
  • a wait state in which the user is waiting for the computer is detected and its duration can be measured. This can be because an application running on the computer with which the user is interacting is not available for user input (e.g. when a wait cursor is displayed, or during login).
  • performance data relating to the wait state are collected. This data can include the duration of a wait state, the application the user is utilising, and resource utilisation data, amongst others.
  • the data relating to the wait state are stored. The performance of the computer is determined from the performance data.
  • the performance of the computer can be improved by analysing the stored data.
  • the method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data.
  • the cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
  • a wait duration specific to a user interaction with a computer can be measured, and involves the steps of: detecting a wait state in which the user is waiting for the computer and measuring a duration of the wait state.
  • the step of detecting a wait state includes:
  • Changing windows to escape the wait cursor is detected by the current invention because the application being used is a property of the wait cursor; the prior art is unable to achieve this, because it does not envisage that the wait cursor can change due to User action as well as application state changes. This is achieved by starting a user waiting meter when a wait cursor is displayed and stopping it when one of the following happens: a non-wait cursor is displayed; or the user selects another application to continue working.
  • the duration of the wait state is a duration of time the user waiting meter was running.
  • Figure 2 shows a method for monitoring the performance of a computer 100 when in use by a user 102 interacting with computer 100 by means of a pointing device 202, which may be a mouse as shown, or it may be a touch pad, touch screen or other similar input device.
  • the input device is any device which allows a user to interact with information shown, and it may also include eye movement, limb motion, neural activity or voice activation.
  • the wait state is detected in steps 204 and 206.
  • in step 204 a change in the pointing device cursor is detected, and in step 206 the change is assessed: is it a change to a wait cursor?
  • the wait state cursor (such as an hourglass in Windows® before Vista® and many other systems, spinning ring in Windows Vista, watch in classic Mac OS, or spinning ball in Mac OS X) is displayed when the mouse cursor is in the corresponding window. If the cursor is a wait cursor, performance data relating to the wait state are collected. These data include a duration of the wait state and resource utilisation, and in steps 208 to 216, these are measured. In step 208, a wait timer and various resource utilisation counters are actuated. Alternatively, and usefully when the wait state of the operating system is so prolonged that the operating system seems frozen, the wait timer logs the time stamp at which the wait state started.
  • in step 212 the cursor is monitored, and if, in step 214, the cursor remains a wait cursor, step 212 is repeated. If in step 214 the cursor is no longer a wait cursor, then in step 216 the wait timer and resource utilisation counters are halted. The duration of the wait state is thus the period of time during which the cursor is a wait cursor. In step 110, performance data relating to the wait state are stored. The method measures a duration of the delay, and/or determines a cause of the delay experienced by the user from the performance data.
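The loop of steps 204 to 216 can be sketched as a polling routine. The cursor source, resource sampler, wait-cursor test and clock are injected callables so the loop can be exercised with scripted data; in a real agent they would query the operating system. All names are illustrative, not from the specification.

```python
# Sketch of the Figure 2 loop, entered once a wait cursor has been
# detected (steps 204/206 are assumed to have already fired).
def measure_wait_event(poll_cursor, sample_resources, is_wait, clock):
    """Poll until the wait cursor clears (steps 208-216).

    Returns (wait_duration, resource_samples).
    """
    start = clock()                     # step 208: actuate the wait timer
    samples = [sample_resources()]      # step 208: start utilisation counters
    while is_wait(poll_cursor()):       # steps 212/214: still a wait cursor?
        samples.append(sample_resources())
    return clock() - start, samples     # step 216: halt timer and counters
```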
  • the method of the present invention can be achieved by an agent running on the computer monitoring the cursor, be it a mouse, touch pad, touch screen or other pointing device cursor, and detects when the cursor becomes a wait state cursor (an hourglass or spinning circle that temporarily replaces the arrow). When the wait state has been detected, the agent then:
  • Initialises resource utilisation counters i.e. sets them to zero, for example for one or more of the following:
  • Disk 0 Queue Utilisation 'RU3' in Integer Units e.g. 0 to 10.
  • the approach isn't limited to the monitoring of a cursor on a computer having a graphical user interface, but also includes any input form in which a wait state occurs when the computer is not available for input of data by the user.
  • the user can measure a duration of time the computer is in use by a user. This can be the total duration of a user session, or it can be the duration of time the computer is available for use by the user, which is the difference between total duration of a user session and wait time 'W3'.
  • the data collected may be used to provide a measure of computer performance as experienced by a user from, for example, a ratio of a duration of wait time 'W3' to total duration of a user session, or a ratio of a duration of wait time 'W3' to the duration of time the computer is available for use by the user.
  • Figure 3 shows a method for monitoring the performance of a computer 100 when in use by a user 102, where the user is logging on to computer 100 and waiting for the computer to accept input.
  • a logon event is detected and the monitoring of the login process, and collection of performance data relating to the login process begins. These data include a duration of the login wait state.
  • the time stamp immediately following the logon event is captured. If there has been a logon audit event 306, the start time of the logon audit event is captured.
  • the duration of the wait state associated with logon is determined by the difference between the time stamp immediately following the logon event and the start time of the logon audit event.
  • the duration of the wait state is thus the period of time during which the computer is not available for user input.
  • the resultant performance data, i.e. the logon delay time, are stored.
  • the method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data.
  • the cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
  • the method of the present invention can be achieved by an agent running on the computer and which is actuated following logon.
  • When user 102 logs onto the target PC, the agent is automatically started and reads the Windows Event Log for the latest logon event to determine when the Windows OS time-stamped the start of the logon sequence. The agent subtracts the start logon timestamp from the current time to determine the Logon Delay and writes the result to the log file as a "LogonDelay" entry.
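The timestamp arithmetic the agent performs can be sketched as below. Reading the Windows Event Log is platform-specific and is not reproduced here, so the logon start timestamp is simply a parameter; the entry key "LogonDelay" follows the description above, while the function name is an invented placeholder.

```python
# Sketch of the logon-delay measurement: current time minus the
# timestamp the operating system recorded at the start of the logon
# sequence, recorded as a "LogonDelay" entry.
def logon_delay_entry(logon_start, now):
    """Build the log entry for the logon delay, in seconds."""
    delay = now - logon_start
    if delay < 0:
        raise ValueError("logon start timestamp lies in the future")
    return {"LogonDelay": delay}
```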
  • the performance of the computer is measured by the duration of the wait state.
  • the method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data.
  • the cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
  • Figure 4 shows a schematic showing how computer performance can be monitored and improved.
  • Data stored at step 110 can be uploaded to a server.
  • the data is uploaded when the computer is idle, for example when there has been no mouse or keyboard activity for at least 60 seconds.
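The idle test used to gate the upload can be sketched as follows, assuming the agent keeps a timestamp of the last mouse or keyboard event (how that timestamp is obtained is platform-specific and omitted here):

```python
# The computer counts as idle once there has been no mouse or keyboard
# activity for at least 60 seconds, per the description above.
IDLE_THRESHOLD_SECONDS = 60

def ready_to_upload(last_input_time, now, threshold=IDLE_THRESHOLD_SECONDS):
    """True once the input-free interval reaches the idle threshold."""
    return now - last_input_time >= threshold
```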
  • the server analyses the data to improve the performance of the computer, and in step 504, provides a report.
  • the method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data.
  • the cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
  • WaitEvent data are averaged with other WaitEvent entries on the same day, for example, the sum of all 'RU1' data divided by the number of 'RU1' entries. This is repeated for each of the 'RU2' to 'RU4' Resource Utilisation data to form the Summarised WaitEvent data.
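The daily summarisation can be sketched as below, under the assumption that each WaitEvent is represented as a mapping carrying its 'RU1' to 'RU4' readings; the representation and function name are illustrative.

```python
# Average each resource-utilisation field over all WaitEvents recorded
# on the same day, producing the Summarised WaitEvent data.
def summarise_wait_events(events, fields=("RU1", "RU2", "RU3", "RU4")):
    """Mean of each RU field across the day's events; {} if no events."""
    if not events:
        return {}
    return {f: sum(e[f] for e in events) / len(events) for f in fields}
```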
  • the summarised WaitEvent data are displayed, on demand by the user, within a User determinable date range (for example, in 1 day units) on to a GUI driven by the server, typically a web page.
  • the summary includes one or more of the following:
  • Total Wait Time (Total Lost Time). A presentation of the cost of the Total Wait Time is determined by converting the Total Wait Time to hours and then multiplying by a cost per hour figure, as defined and stored in the User's local parameter file on the User's PC, or centrally on a server. An Application Wait Time Rating is determined by comparison with predetermined thresholds, e.g. Poor, OK, Good, and presented to the GUI.
  • the thresholds are stored centrally and are considered global to the whole system, to provide consistency and facilitate comparisons between disparate PCs and installations.
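The threshold comparison can be sketched as follows. The specification names only the bands (Poor, OK, Good); the numeric limits below are invented placeholders for illustration.

```python
# Map a Total Wait Time (seconds) to one of the named rating bands.
# The 120 s / 600 s limits are hypothetical; real deployments would use
# the centrally stored global thresholds.
def wait_time_rating(total_wait_seconds, good_limit=120, ok_limit=600):
    if total_wait_seconds <= good_limit:
        return "Good"
    if total_wait_seconds <= ok_limit:
        return "OK"
    return "Poor"
```

Storing the limits centrally, as the description notes, keeps ratings comparable across machines.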
  • a raw data export facility can also be provided to allow organisations to perform their own customised analysis.
  • the aggregated LogonWait data are displayed, on demand by the user, on a GUI driven by the server, typically a web page, including:
  • a Logon Delay Rating is determined by comparison with predetermined thresholds, e.g. Poor, OK, Good
  • the aggregated NetSpeed data are displayed, on demand by the user, on a GUI driven by the server, typically a web page, including:
  • Network Speed Ratings (upload and download), which are determined by comparison with predetermined thresholds, e.g. Poor, OK, Good
  • the approach can be applied to detecting and/or measuring any existing wait state parameter provided directly or indirectly by the operating system kernel of a computer (i.e. any device capable of being so monitored) or, if such a parameter is absent, to supplying one by modifying or supplementing the kernel of the operating system, and to recording, by means of whatever accurate timing algorithm or timing resource is available on the device, the changes in state in a local data file held on the computer's storage medium (or on another connected computer).
  • the present invention would simulate a hardware layer input "beneath" the operating system to allow a wait state flag to be set and passed through whatever operating system was then installed upon the computer.
  • the method of the invention monitors events, such as commencing a login to a connected resource and achieving login, or accessing a resource on a computer connected by network to the computer on which the monitoring is taking place, and information about the events is captured to the data file.
  • Accurate timing means are utilised (such that the synchronicity of the monitored events is preserved) to capture resource utilisation of the user's computer by means of known activity measures (for example, disk data transfer rate relative to the maximum data transfer rate for the disk; memory data transfer rate; number of read/write requests; and so forth), such that measures of the relative intensity of resource utilisation are synchronously captured in the data file.
  • Retrieval of the data for either a) near real time monitoring or b) archival retrieval of the records in the data file enables determinations of which resource constraints generate which wait state.
  • This process may be conducted partly or wholly by analysis of the local data file, and/or partly or wholly by global analysis and comparison of patterns revealed from analysis of the community database of all possible performance data collected from monitored devices of similar type and configuration.
  • the present invention is based on the observation that the User is the most valuable component of a computer system, and making a User ineffective, by providing them with a computer, software or applications that do not perform well in order to save money, is undoubtedly a false economy. Users perform actions in a synchronous manner, i.e. one after the other, so it is relatively easy to capture and record each User activity; by contrast, actually understanding what the User is experiencing from the activity of dozens of concurrent applications running in an asynchronous Windows environment is far harder. So, by taking the User as the point of observation, the wait cursor is an accurate and useful measure of User experience.
  • the user active meter starts
  • the User active value is output to a file in the readings folder
  • a wait cursor is displayed (hourglass or blue donut)
  • the User waiting value is output to a file in the readings folder
  • Total Active Time: the sum of all User active values for a given time period
  • Total Wait Time: the sum of all User wait values for a given time period
  • Hidden Cost (of keeping valuable Users waiting): User Cost Per Hour multiplied by Total Wait Time (in hours) for a given time period.
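By way of illustration only, the derived metrics defined above (Total Active Time, Total Wait Time and Hidden Cost) could be computed from the recorded readings as in the following Python sketch; the record layout and field names are hypothetical stand-ins for the files written to the readings folder:

```python
def summarise_readings(readings, cost_per_hour):
    """Aggregate User active/wait readings into the derived metrics.

    `readings` is a hypothetical list of {"kind": "active"|"wait",
    "value": seconds} entries standing in for the readings folder files.
    """
    total_active = sum(r["value"] for r in readings if r["kind"] == "active")
    total_wait = sum(r["value"] for r in readings if r["kind"] == "wait")
    # Hidden Cost = User Cost Per Hour x Total Wait Time (in hours)
    hidden_cost = cost_per_hour * (total_wait / 3600.0)
    return {"total_active_time": total_active,
            "total_wait_time": total_wait,
            "hidden_cost": hidden_cost}
```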

Abstract

A method for monitoring the performance of a computer (100) when in use by user (102) has a first step (106), in which a wait state is detected during a user session (104) where the user is waiting for the computer. This can be because a process running on the computer means that the computer is not available for user input (e.g. when a wait cursor is displayed, or during login). In step (108) performance data relating to the wait state are collected. This data can include the duration of a wait state and resource utilisation data, amongst others. In step (110), the data relating to the wait state are stored. The performance of the computer is determined from the performance data. In step (112), the performance of the computer can be improved by analysing the stored data. The method measures a duration of the delay, and/or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.

Description

Monitoring the Performance of a Computer
Technical Field
The present invention is concerned with measuring a wait duration specific to a user interaction with a computer by detecting a wait state in which the computer is not available for user input in order to determine how much time a computer is available to a user during a user session.
Background Art
GB2370140A discloses a method for determining system resource parameters to improve response time for specific users. US6046816A discloses measuring the total time between a user requesting a print of a document and the completion of the print job. JP2-105236A discloses a method for counting the response time in a time-sharing system. US2012144246 discloses systems and methods for monitoring operational performance of at least one application. The approach does not use any explicit instrumentation, and relies on performance monitoring of at least one Core Responsive Element (CRE). The CRE may be linked to a wait cursor.
Computer response time has been defined as the elapsed time between the end of an inquiry or demand on a computer system and the beginning of a response; for example, the length of the time between an indication of the end of an inquiry and the display of the first character of the response at a user terminal. In other words, the computer is in a wait state of varying duration. For simple operations, the response time can appear to be instantaneous to the user.
For other operations where the response time is longer, an indication is given to the user that the computer or application is in a wait state and not available for user input, usually because it is busy performing an operation, or the operating system is active working on another task. This can be done, for example, by the input cursor ceasing to flash. On computer operating systems using a graphical user interface, for example those running on personal computers, laptops, smart phones and the like, a common way of letting the user know the computer is in such a wait state and cannot accept user input is to change the mouse cursor. The wait cursor (an hourglass in Windows® before Vista® and many other systems, spinning ring in Windows Vista, watch in classic Mac OS, or spinning ball in Mac OS X) is displayed when the mouse cursor is in the corresponding window. This wait state can be detected.
Another wait time is associated with logging on to the computer or system, in which the operating system is setting up parameters specific to the user, or returning from a standby or hibernated state. This can generate a wait state cursor, or it can be a display with a moving progress bar on it. The operating system is unavailable for user input during this phase of its operation.
A further wait state occurs as a result of poor network performance, in which the operating system or a program running on the operating system is waiting for information to be delivered or received by an external network, as for example when accessing email or a web site.
These wait states conspire to reduce the productivity of the user, can increase a user's stress level, and can generally lead to a poor user experience. Obtaining an objective measure of the delay caused to a user during these wait states by a lack of computer resources (of whatever kind: e.g. memory, processor, connectivity, device-related, network speed, operating system state, application system state) while interacting with a computer has long been a desired goal, but one that, owing to the difficulty of precisely pinpointing the source of the delay, has not so far been achieved.
The wait cursor itself does not accurately reflect application performance (as experienced by a User). The wait cursor can change (to a non-wait cursor) for a number of reasons not associated with a change in state of the measured application (i.e. the event within the application that triggered the wait cursor). For example, the User can choose another application to interact with, which can change the wait cursor state (i.e. move the cursor) from the wait state associated with application 1 to application 2 or application 3, and so on, which are not in a wait state. The prior art thus addresses only the total time of the wait state associated with an application. It does not address the cumulative wait states experienced by the User in their normal day-to-day interaction with many applications. In other words, it is assumed that a User executes an application synchronously, i.e. waits for an application task (or wait cursor) to finish before moving to another task, which is what is measured in the prior art. However, Users are actually capable of executing many tasks at the same time if needed, moving between applications and interacting with applications asynchronously.
What is needed is an approach which measures the wait event specific to the User interaction, and not the wait event specific to the application.
Disclosure of Invention
The present invention measures the wait cursor's cumulative effect on the User's interaction with systems, software and applications - a true interpretation and reflection of the User experience specific to system and application performance. The invention primarily measures the degradation of the users' experience/efficiency caused by extended wait times (as indicated by the display of the wait cursor) and not application performance; the prior art measures the total wait event applicable to only one application at a time, not the present invention's measurement of cumulative wait events. While tracking the wait cursor is not a credible method for measuring application performance, it is a reliable method for understanding the User experience/efficiency, as it tracks what the User is actually doing and experiencing, not irrelevant machine data specific to what an application is doing. The User might not even be in interaction with the application whilst in a wait state - for example, the application is 'hanging' so the User "goes and gets a coffee." The present invention measures a wait duration specific to a user interaction with a computer. It involves the following steps: detecting a wait state in which the user is waiting for the computer; and measuring a duration of the wait state. The step of detecting the wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor. A property of the wait cursor includes the one or more applications being used by the user. The duration of the wait state is a period of time during which the cursor is a wait cursor. This means that a delay experienced by a user interacting with one or more applications running on the computer is measured. Preferably, the method includes the additional step of: collecting performance data relating to the one or more applications and the duration of the wait state.
This means that the measure and cause of the delay experienced by the user is determined from the performance data.
Preferably, the method includes the additional steps of: starting a user waiting meter when a wait cursor is displayed and stopping the user waiting meter when one of the following occurs: either a non-wait cursor is displayed, or the user selects another application to continue working. The duration of the wait state is a duration of time the user waiting meter was running.
The present invention also includes a method for measuring a user experience metric specific to a user interaction with a computer. This comprises any of the above steps and additionally includes the step of: measuring a duration of user activity to give a user active value. The user experience metric is a function of the sum of all user active values for a given time period and a sum of all user wait values for the given time period.
Preferably, measuring a duration of user activity to give a user active value comprises the steps of: starting a user active meter when a user presses a key on a keyboard or clicks a mouse button and stopping the user active meter when the user does not press the keyboard key or click the mouse button for a preset duration of time. The user active value is a duration of time the user active meter was running.
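A minimal sketch of such a user active meter, assuming timestamps are supplied in seconds so the logic can be exercised without real keyboard or mouse hooks, might be:

```python
class UserActiveMeter:
    """Starts on a key press or mouse click; stops once no input
    arrives for `idle_timeout` seconds. Each recorded user active
    value is the duration of time the meter was running."""

    def __init__(self, idle_timeout=30.0):
        self.idle_timeout = idle_timeout
        self._start = None        # time the meter started running
        self._last_input = None   # time of the most recent input
        self.active_values = []   # completed user active values

    def on_input(self, t):
        # Called for each key press or mouse click at time t (seconds).
        if self._start is not None and t - self._last_input > self.idle_timeout:
            # The previous active period ended idle_timeout seconds
            # after its last input; record how long the meter ran.
            self.active_values.append(
                self._last_input + self.idle_timeout - self._start)
            self._start = None
        if self._start is None:
            self._start = t       # meter starts on this input
        self._last_input = t
```

Whether the idle timeout itself counts towards the active value is an interpretation; the sketch includes it, since the meter is described as running until it is stopped.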
Preferably, the performance of the computer as experienced by the user is determined from a ratio of a duration of the wait time to a duration of time the computer is available for use by the user, wherein the duration of time the computer is available for use by the user is the total duration of the user session minus the wait time.
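By way of illustration, the ratio described above could be computed as follows (a sketch; all durations in seconds):

```python
def user_experience_ratio(session_duration, total_wait):
    """Performance as experienced by the user: the wait time divided by
    the time the computer was available for use, where available time
    is the total session duration minus the wait time."""
    available = session_duration - total_wait
    return total_wait / available
```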
According to one aspect, the present invention provides a method for monitoring the performance of a computer when in use by a user, comprising the steps of: detecting a wait state in which the user is waiting for the computer; and collecting performance data relating to a duration of the wait state. This provides an assessment of the period of time the user is waiting for the computer to complete an action, process or command. The step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
The method assesses the performance provided to the user of the computer from the performance data. Preferably, the step of collecting performance data relating to the wait state includes measuring a duration of the wait state. This provides an assessment of the period of time the user is waiting for the computer to complete an action, process or command.
Preferably, the step of detecting a wait state includes detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor. The wait cursor is a clear indicator of the kind of wait state a user experiences whilst using the computer.
Preferably, the pointing device is selected from the group consisting of: mouse, touch pad and touch screen. Advantageously, the wait state can be determined from the user's use of a range of input devices. Input devices may include eye movement, motion detection, voice control or any similar means allowing a user to interact with information shown on a screen.
Preferably, the duration of the wait state is the elapsed time between an end of a user input and a beginning of a computer response. This gives a measure of the time during which the user is unable to use the computer. Typically the duration of the wait state relates to a single user session.
Preferably, the duration of the wait state is a sum of all the durations of a wait state during a user session. This gives an indication of the total amount of time the user has not been able to use the computer. Preferably, the step of collecting performance data additionally comprises the step of: measuring a duration the computer is in use by the user. This provides information on how much time the user had been able to use the computer.
Preferably, the duration of time the computer is in use by the user is the duration of a user session. This provides information on the total amount of time the user had been able to use the computer.
Preferably, the performance of the computer is measured by the duration of the wait state and its ratio to the duration the computer is in use. This gives an indication of the proportion of the session the computer has been unavailable for use.
Preferably, the step of collecting performance data additionally comprises the step of measuring a duration the computer is available for use by the user. This provides information on the availability of the computer for use by the user. Preferably, the duration the computer is available for use by the user is a sum of all the durations the computer is available for use during a user session. This provides information on the total amount of time the computer was available for use by the user.
Preferably, the performance of the computer is measured as a ratio between the duration of the wait state and the duration of when the computer is available. This gives an indication of the proportion of time the computer has been unavailable for use.
Preferably, the step of detecting a wait state includes detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input. This provides an assessment of the period of time the user is waiting for the computer to become available for use.
Preferably, the step of collecting performance data relating to the wait state includes data relating to resource utilisation. Preferably, data relating to resource utilisation includes one or more of: CPU utilisation, memory utilisation, disk queue utilisation, and disk free space. This is especially important where the performance of the computer is slowed because of a lack of memory or disk space or other resource.
Preferably, the step of collecting performance data relating to the wait state includes data relating to the network application. Preferably, data relating to resource utilisation includes one or more of: network upload speed and network download speed. This is especially important where the performance of the computer is slowed because of a poor, badly configured or busy network connection.
Preferably, the step of collecting performance data relating to the wait state includes the additional step of: analysing the performance data; and identifying a lack of resources. A duration of the wait state may be reduced by increasing availability of resources.
Preferably, the performance data is analysed to provide a report, wherein the performance of the computer is improved. This is useful if, for example, the report shows that a key resource is limited - improvements to the computer or its connections can lead to an immediate improvement in performance.
Preferably, the performance data is uploaded for analysis. This means that the analysis does not consume user resources. Preferably, the computer is one of a server, a desktop computer, a laptop computer, a mobile computer, a smart phone. The method may be applied to any computing device that the user may choose to use.
According to a further aspect, the present invention provides a method for measuring a delay experienced by a user interacting with a computer comprising the steps of: detecting a wait state in which the user is waiting for the computer and collecting performance data relating to a duration of the wait state. The step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
The method measures the delay experienced by the user from the performance data.
According to a further aspect, the present invention provides a method for determining a cause of a delay experienced by a user interacting with a computer comprising the steps of: detecting a wait state in which the user is waiting for the computer and collecting performance data relating to a duration of the wait state. The step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input.
The method determines a cause of the delay experienced by the user from the performance data.
Brief Description of Drawings
Preferred embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a schematic for a method for monitoring the performance of a computer;
Figure 2 shows a schematic for a method for monitoring the performance of a computer, wherein the wait state is associated with a wait state cursor;
Figure 3 shows a schematic for a method for monitoring the performance of a computer, wherein the wait state is associated with a logon event;
Figure 4 shows a schematic showing how computer performance can be monitored and improved.
Modes for Carrying Out the Invention
The present invention measures a user's subjective experience of using one or more applications running on a computer. It does this by measuring the duration of wait states, when the user cannot do any work on the application, and also measuring the duration of active states, when the user is able to utilise the application normally. The user's subjective experience, which can be termed a user experience metric, a user experience index, or a user experience rating, is derived from these two parameters. Importantly, this approach does not measure the performance of the computer per se, but is a measure of the productivity of the user interacting with applications running on the computer.
Figure 1 shows a method for monitoring the performance of a computer 100 when in use by user 102. During user session 104, at step 106, a wait state in which the user is waiting for the computer is detected and its duration can be measured. This can be because an application running on the computer with which the user is interacting is not available for user input (e.g. when a wait cursor is displayed, or during login). In step 108 performance data relating to the wait state are collected. These data can include the duration of a wait state, the application the user is utilising, and resource utilisation data, amongst others. In step 110, the data relating to the wait state are stored. The performance of the computer is determined from the performance data. In step 112, the performance of the computer can be improved by analysing the stored data. The method measures a duration of the delay, and/or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
Thus a wait duration specific to a user interaction with a computer can be measured, and involves the steps of: detecting a wait state in which the user is waiting for the computer and measuring a duration of the wait state. The step of detecting a wait state includes:
detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein a property of the wait cursor includes the one or more applications being used by the user and wherein the duration of the wait state is a period of time during which the cursor is a wait cursor. This means that, in use, a delay experienced by a user interacting with one or more applications running on the computer is measured.
Changing windows to escape the wait cursor is detected by the current invention because the application being used is a property of the wait cursor; the prior art is unable to achieve this, because it does not envisage that the wait cursor can change due to User action as well as application state changes. This is achieved by starting a user waiting meter when a wait cursor is displayed and stopping it when one of the following happens: a non-wait cursor is displayed; or the user selects another application to continue working. Here the duration of the wait state is a duration of time the user waiting meter was running.
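A sketch of such a user waiting meter follows. It is fed with hypothetical (timestamp, cursor state, active application) observations rather than real cursor hooks, and ends a wait episode either when a non-wait cursor appears or when the user moves to a different application:

```python
class UserWaitingMeter:
    """Times wait-cursor episodes; an episode ends when the cursor
    reverts to a non-wait cursor or when the user switches to another
    application to continue working."""

    def __init__(self):
        self._start = None   # time the current wait episode began
        self._app = None     # application the wait cursor belongs to
        self.wait_values = []

    def update(self, t, cursor_is_wait, active_app):
        if self._start is None:
            if cursor_is_wait:
                # Wait cursor displayed: start the waiting meter.
                self._start, self._app = t, active_app
        elif not cursor_is_wait or active_app != self._app:
            # Non-wait cursor displayed, or the user escaped to
            # another application: stop the meter and record the value.
            self.wait_values.append(t - self._start)
            self._start = None
            if cursor_is_wait:
                # A wait cursor in the new application starts a new episode.
                self._start, self._app = t, active_app
```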
Figure 2 shows a method for monitoring the performance of a computer 100 when in use by a user 102 interacting with computer 100 by means of a pointing device 202, which may be a mouse as shown, or it may be a touch pad, touch screen or other similar input device. The input device is any device which allows a user to interact with information shown, and it may also include eye movement, limb motion, neural activity or voice activation. The wait state is detected in steps 204 and 206. In step 204, a change in pointing device cursor is detected, and in step 206 the change is assessed: is it a change to a wait cursor? The wait state cursor (such as an hourglass in Windows® before Vista® and many other systems, spinning ring in Windows Vista, watch in classic Mac OS, or spinning ball in Mac OS X) is displayed when the mouse cursor is in the corresponding window. If the cursor is a wait cursor, performance data relating to the wait state are collected. These data include a duration of the wait state and resource utilisation, and in steps 208 to 216, these are measured. In step 208, a wait timer and various resource utilisation counters are actuated. Alternatively, and usefully when the wait state of the operating system is so prolonged that the operating system seems frozen, the wait timer logs the time stamp at which the wait state started. In step 212 the cursor is monitored, and if, in step 214, the cursor remains a wait cursor, step 212 is repeated. If in step 214 the cursor is no longer a wait cursor, then in step 216 the wait timer and resource utilisation counters are halted. The duration of the wait state is thus the period of time during which the cursor is a wait cursor. In step 110, performance data relating to the wait state are stored. The method measures a duration of the delay, and/or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources. By way of example, the method of the present invention can be achieved by an agent running on the computer which monitors the cursor, be it a mouse, touch pad, touch screen or other pointing device cursor, and detects when the cursor becomes a wait state cursor (an hourglass or spinning circle that temporarily replaces the arrow). When the wait state has been detected, the agent then:
Starts the wait state timer to measure the length of the wait state (point 'W1')
Initialises resource utilisation counters, i.e. sets them to zero, for example for one or more of the following:
CPU 0 Utilisation;
Memory Utilisation;
Disk 0 Queue Utilisation; and
Disk 0 Free Space.
The agent continues to monitor the cursor, and when it detects that the cursor is no longer a wait state cursor, the agent then: Stops the wait state timer (point 'W2') and calculates the wait time 'W2' - 'W1' = 'W3' in milliseconds
Retrieves the resource utilisation counters for the resource allocation counters, for example, one or more of the following:
CPU 0 Utilisation 'RU1' as a percentage;
Memory Utilisation 'RU2' in Mbytes;
Disk 0 Queue Utilisation 'RU3' in Integer Units (e.g. 0 to 10); and
Disk 0 Free Space 'RU4' in Mbytes.
Writes the wait state data (including the Wait Time 'W3') to the log file as a "WaitState" entry.
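The agent loop described above might be sketched as follows; the cursor names, counter field names and the sampling interface are illustrative assumptions, not the actual agent implementation:

```python
WAIT_CURSORS = {"hourglass", "spinning_ring", "watch", "spinning_ball"}

def monitor_wait_states(samples, read_counters):
    """Sketch of the agent's wait-state loop.

    `samples` is an iterable of (timestamp_ms, cursor_name) pairs,
    standing in for polling the real cursor; `read_counters` stands in
    for the OS resource counters (RU1..RU4) and returns a dict such as
    {"cpu_pct": ...}. Returns the "WaitState" log entries.
    """
    entries = []
    w1 = None
    for t, cursor in samples:
        waiting = cursor in WAIT_CURSORS
        if w1 is None and waiting:
            w1 = t                        # point 'W1': wait cursor appeared
        elif w1 is not None and not waiting:
            w3 = t - w1                   # 'W3' = 'W2' - 'W1' in milliseconds
            entry = {"type": "WaitState", "wait_ms": w3}
            entry.update(read_counters()) # counters sampled at end of wait
            entries.append(entry)
            w1 = None
    return entries
```

Note that the sketch samples the counters once at the end of the wait for brevity, whereas the agent described above initialises them at 'W1' and retrieves them at 'W2'.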
The approach is not limited to the monitoring of a cursor on a computer having a graphical user interface, but also includes any input form in which a wait state occurs when the computer is not available for input of data by the user. Optionally, the user can measure a duration of time the computer is in use by a user. This can be the total duration of a user session, or it can be the duration of time the computer is available for use by the user, which is the difference between the total duration of a user session and the wait time 'W3'. The data collected may be used to provide a measure of computer performance as experienced by a user from, for example, a ratio of a duration of wait time 'W3' to the total duration of a user session, or a ratio of a duration of wait time 'W3' to the duration of time the computer is available for use by the user.
Figure 3 shows a method for monitoring the performance of a computer 100 when in use by a user 102, where the user is logging on to computer 100 and waiting for the computer to accept input. In step 302, a logon event is detected, and the monitoring of the login process and the collection of performance data relating to the login process begin. These data include a duration of the login wait state. In step 304, the time stamp immediately following the logon event is captured. If there has been a logon audit event 306, the start time of the logon audit event is captured. In step 308 the duration of the wait state associated with logon is determined from the difference between the time stamp immediately following the logon event and the start time of the logon audit event. The duration of the wait state is thus the period of time during which the computer is not available for user input. The resultant performance data, i.e. the logon delay time, are logged at step 110. The method measures a duration of the delay, and/or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
By way of example, the method of the present invention can be achieved by an agent running on the computer which is actuated following logon. When user 102 logs onto the target PC, the agent is automatically started and reads the Windows Event Log for the latest logon event to determine when the Windows OS time stamped the start of the logon sequence. The agent subtracts the start logon timestamp from the current time to determine the Logon Delay and writes the result to the log file as a "LogonDelay" entry.
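A simplified illustration of this Logon Delay calculation, using a hypothetical in-memory event list in place of the Windows Event Log, might be:

```python
def logon_delay_entry(event_log, now):
    """Sketch: find the most recent logon event and subtract its
    timestamp from the current time to give the Logon Delay, as the
    agent does with the Windows Event Log. `event_log` is a
    hypothetical list of {"event": ..., "time": seconds} records."""
    latest = max((e for e in event_log if e["event"] == "logon"),
                 key=lambda e: e["time"])
    return {"type": "LogonDelay", "delay_s": now - latest["time"]}
```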
In the above, the performance of the computer is measured by the duration of the wait state. The method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
Figure 4 shows a schematic showing how computer performance can be monitored and improved. Data stored at step 110 can be uploaded to a server. In order for the uploading not to lead to a reduction in the performance of the computer, the data is uploaded when the computer is idle, for example when there has been no mouse or keyboard activity for at least 60 seconds. In step 502, the server analyses the data to improve the performance of the computer, and in step 504, provides a report. The method measures a duration of the delay, and / or determines a cause of the delay experienced by the user from the performance data. The cause of the delay is typically due to a lack of resources, and analysing the performance data and identifying a lack of resources means that a duration of the wait state may be reduced by increasing availability of resources.
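The idle-time gate on uploading described above might be sketched as:

```python
def safe_to_upload(now, last_input_time, idle_threshold=60.0):
    """Upload only when there has been no mouse or keyboard activity
    for at least `idle_threshold` seconds (times in seconds), so the
    transfer never competes with the user for resources."""
    return (now - last_input_time) >= idle_threshold
```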
For example, WaitEvent data are averaged with other WaitEvent entries on the same day, for example, the sum of all 'RU1' data divided by the number of 'RU1' entries. This is repeated for each of the 'RU2' - 'RU4' Resource Utilisation data to form the Summarised WaitEvent data.
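The daily averaging described above might be sketched as follows, with lower-case field names standing in for the 'RU1'-'RU4' counters:

```python
def summarise_wait_events(day_entries):
    """Average each resource-utilisation counter over all WaitState
    entries for one day: the sum of each counter divided by the
    number of entries."""
    n = len(day_entries)
    keys = ("ru1", "ru2", "ru3", "ru4")
    return {k: sum(e[k] for e in day_entries) / n for k in keys}
```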
The summarised WaitEvent data are displayed, on demand by the user, within a User determinable date range (for example, in 1 day units) on to a GUI driven by the server, typically a web page. The summary includes one or more of the following:
Total Wait Time (Total Lost Time). A presentation of the cost of the Total Wait Time is determined by converting the Total Wait Time to hours and then multiplying by a cost per hour figure, as defined and stored in the User's local parameter file on the User's PC, or centrally on a server. An Application Wait Time Rating is determined by comparison with predetermined thresholds, e.g. Poor, OK, Good, and presented to the GUI. The thresholds are stored centrally and are considered global to the whole system, to provide consistency and facilitate comparisons between disparate PCs and installations.
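A rating function against such thresholds might be sketched as follows; the threshold values shown in the test are illustrative only:

```python
def wait_time_rating(total_wait_s, good_max_s, ok_max_s):
    """Rate a Total Wait Time against predetermined thresholds.
    The thresholds are assumed to be stored centrally so that ratings
    are comparable across disparate PCs; lower wait time is better."""
    if total_wait_s <= good_max_s:
        return "Good"
    if total_wait_s <= ok_max_s:
        return "OK"
    return "Poor"
```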
A raw data export facility can also be provided to allow organisations to perform their own customised analysis.
Similarly, the LogonDelay event data are aggregated with other LogonDelay events on the same day.
The aggregated LogonWait data are displayed, on demand by the user, on a GUI driven by the server, typically a web page, including:
A Logon Delay Rating is determined by comparison with predetermined thresholds, e.g. Poor, OK, Good.
Similarly, the NetSpeed data are aggregated with other NetSpeed entries on the same day.
The aggregated NetSpeed data are displayed, on demand by the user, on a GUI driven by the server, typically a web page, including:
Network Speed Ratings (upload and download), which are determined by comparison with predetermined thresholds, e.g. Poor, OK, Good.
The approach can be applied to detecting and/or measuring any existing wait state parameter provided directly or indirectly by the operating system kernel of a computer (i.e. any device capable of being so monitored) or, if such a parameter is absent, to supplying one by modifying or supplementing the kernel of the operating system, and to recording, by means of whatever accurate timing algorithm or timing resource is available on the device, the changes in state in a local data file held on the computer's storage medium (or on another connected computer). In an alternative embodiment, if there were no reliable operating system wait state flag, the present invention would simulate a hardware layer input "beneath" the operating system to allow a wait state flag to be set and passed through whatever operating system was then installed upon the computer.
The method of the invention monitors events, such as commencing a login to a connected resource and achieving login, or accessing a resource on a computer connected by network to the computer on which the monitoring is taking place, and information about the events is captured to the data file. Accurate timing means are utilised (such that the synchronicity of the monitored events is preserved) to capture resource utilisation of the user's computer by means of known activity measures (for example, disk data transfer rate relative to the maximum data transfer rate for the disk; memory data transfer rate; number of read/write requests; and so forth), such that measures of the relative intensity of resource utilisation are synchronously captured in the data file.
Retrieval of the data, for either a) near-real-time monitoring or b) archival retrieval of the records in the data file, enables determination of which resource constraints generate which wait state. This process may be conducted partly or wholly by analysis of the local data file, and/or partly or wholly by global analysis and comparison of patterns revealed from analysis of the community database of all performance data collected from monitored devices of similar type and configuration.
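One simple way to attribute a wait state to a resource constraint is to pick the resource whose utilisation reading was highest while the wait cursor was shown. This is only a heuristic sketch of the attribution step, not the community-pattern analysis the text describes; the `RU1`–`RU4` field names follow the summarisation example above and are an assumption.

```python
def likely_cause(wait_event, ru_fields=("RU1", "RU2", "RU3", "RU4")):
    """Return the Resource Utilisation field with the highest reading
    captured synchronously with the wait event - a naive heuristic for
    which resource constraint generated the wait state."""
    return max(ru_fields, key=lambda field: wait_event[field])
```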
Collation of such data from all monitored devices enables further analysis, to provide supplementary scoring and recommendations for detected patterns and related causes of wait states on monitored devices of similar type and configuration. The present invention is based on the observation that the User is the most valuable component of a computer system, and that making a User ineffective, by providing them with a computer, software or applications that do not perform well in order to save money, is undoubtedly a false economy. Users perform actions in a synchronous manner, i.e. one after the other, so it is relatively easy to capture and record each User activity, as compared with trying to understand, from the activity of dozens of concurrent applications running asynchronously in a Windows environment, what the User is actually experiencing. By taking the User as the point of observation, the wait cursor is an accurate and useful measure of User Experience/Effectiveness. For example:
User Active Meter Mechanism
1. The user presses a key on the keyboard or clicks a mouse button
a. The user active meter starts
2. The user does not press a keyboard key or click a mouse button for 60 seconds (an arbitrary figure)
a. The User active meter stops
b. The User active value is output to a file in the readings folder
User Waiting Meter Mechanism
1. A wait cursor is displayed (hourglass or blue donut)
a. The User waiting meter starts
2. Then one of two things happens:
A. The User waits until the wait cursor changes to a non-wait cursor
a. The User waiting meter stops
b. The User waiting value is output to a file in the readings folder
B. The User decides not to wait for the current task to complete (which would cancel the wait cursor) and selects another application to continue working
a. The User waiting meter stops
b. The User waiting value is output to a file in the readings folder
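The waiting-meter mechanism above can be sketched as a small timer object. This is an illustrative sketch, not the patented implementation: the class name, the readings-file path, and the use of `time.monotonic` are all assumptions made for the example (the source does not specify a timing API).

```python
import os
import time

class UserWaitingMeter:
    """Times how long a wait cursor (hourglass or spinner) is displayed.

    start() would be called when a wait cursor appears; stop() when the
    cursor reverts to a non-wait cursor, or when the user abandons the
    task and switches to another application. On stop, the waiting value
    is appended to a file in the readings folder, as in the mechanism
    described above."""

    def __init__(self, readings_path="readings/user_waiting.log"):
        self.readings_path = readings_path
        self._started_at = None

    def start(self):
        # Ignore repeated start calls while a wait is already being timed.
        if self._started_at is None:
            self._started_at = time.monotonic()

    def stop(self):
        if self._started_at is None:
            return None
        waited = time.monotonic() - self._started_at
        self._started_at = None
        directory = os.path.dirname(self.readings_path)
        if directory:
            os.makedirs(directory, exist_ok=True)
        with open(self.readings_path, "a") as f:
            f.write(f"{waited:.3f}\n")
        return waited
```

A monotonic clock is used rather than wall-clock time so that the measured interval is unaffected by system clock adjustments.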
Analysis and Reporting
Total Active Time = the sum of all User active values for a given time period
Total Wait Time = the sum of all User wait values for a given time period
User Experience Rating = (Total Wait Time / Total Active Time ) * 100
Hidden Cost (of keeping valuable Users waiting) = User Cost Per Hour multiplied by Total Wait Time (in hours) for a given time period.
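The two reporting formulas above translate directly into code. A minimal sketch, assuming times are held in seconds (the source does not fix a unit):

```python
def user_experience_rating(total_wait_seconds, total_active_seconds):
    """User Experience Rating = (Total Wait Time / Total Active Time) * 100."""
    return (total_wait_seconds / total_active_seconds) * 100

def hidden_cost(total_wait_seconds, user_cost_per_hour):
    """Hidden Cost = User Cost Per Hour * Total Wait Time (in hours)."""
    return user_cost_per_hour * (total_wait_seconds / 3600.0)
```

For example, a user who was active for an hour but spent six minutes of it watching a wait cursor has a rating of 10, and at 40 per hour a half-hour of accumulated waiting costs 20.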

Claims

1. A method for measuring a wait duration specific to a user interaction with a computer, comprising the steps of:
detecting a wait state in which the user is waiting for the computer; and
measuring a duration of the wait state;
in which the step of detecting a wait state includes: detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein a property of the wait cursor includes the one or more applications being used by the user and wherein the duration of the wait state is a period of time during which the cursor is a wait cursor;
wherein, in use, a delay experienced by a user interacting with one or more applications running on the computer is measured.
2. A method according to claim 1, comprising the additional step of:
collecting performance data relating to the one or more applications and the duration of the wait state;
wherein, in use, the measure and cause of the delay experienced by the user is determined from the performance data.
3. A method according to claim 1 or claim 2, comprising the additional steps of:
starting a user waiting meter when a wait cursor is displayed; and
stopping a user waiting meter when one of the following occurs:
a non-wait cursor is displayed; or
the user selects another application to continue working;
wherein duration of the wait state is a duration of time the user waiting meter was running.
4. A method for measuring a user experience metric specific to a user interaction with a computer comprising the steps of:
measuring a wait duration specific to a user interaction with a computer according to any of claims 1 to 3;
measuring a duration of user activity to give a user active value; and
wherein the user experience metric is a function of the sum of all user active values for a given time period and a sum of all user durations for the given time period.
5. A method according to claim 4, in which measuring a duration of user activity to give a user active value comprises the steps of:
starting a user active meter when a user presses a key on a keyboard or clicks a mouse button; and
stopping the user active meter when the user does not press the keyboard key or click the mouse button for a preset duration of time;
wherein the user active value is a duration of time the user active meter was running.
6. A method according to any of claims 1 to 5, wherein a performance of the computer as experienced by the user is determined from a ratio of a duration of the wait time to a duration of time the computer is available for use by the user, wherein the duration of time the computer is available for use by the user is the total duration of the user session minus the wait time.
7. A method according to any preceding claim, in which the step of detecting a wait state includes detecting a logon event.
8. A method according to claim 7, including the additional step of collecting performance data relating to the wait state, wherein the performance data includes data relating to resource utilisation.
9. A method according to claim 8, in which the data relating to resource utilisation includes one or more of: CPU utilisation, memory utilisation, disk queue utilisation, disk free space, network upload speed and network download speed.
10. A method according to any of claims 2 to 9, in which the performance data is analysed to provide a report, wherein the performance of the computer is improved.
11. A method according to any preceding claim, in which the computer is one of: a server, a desktop computer, a laptop computer, a mobile computer and a smart phone.
12. A method according to claim 1, in which the pointing device is selected from the group consisting of: mouse, touch pad and touch screen.
13. A method for determining a cause of a delay experienced by a user interacting with a computer comprising the steps of:
detecting a wait state in which the user is waiting for the computer;
collecting performance data relating to a duration of the wait state;
in which the step of detecting a wait state includes:
detecting a change in a cursor associated with a pointing device being used by the user and determining if the cursor is a wait cursor, wherein the duration of the wait state is a period of time during which the cursor is a wait cursor; or
detecting a logon event, wherein the duration of the wait state is the period of time during which the computer is not available for user input;
wherein, in use, the cause of the delay experienced by the user is determined from the performance data.
14. A method substantially as herein described with reference to the drawings.
PCT/GB2014/052275 2013-07-24 2014-07-24 Monitoring the performance of a computer WO2015011487A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1313239.4A GB2519735A (en) 2013-07-24 2013-07-24 Monitoring the performance of a computer
GB1313239.4 2013-07-24

Publications (1)

Publication Number Publication Date
WO2015011487A1 true WO2015011487A1 (en) 2015-01-29

Family

ID=49119229

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2014/052275 WO2015011487A1 (en) 2013-07-24 2014-07-24 Monitoring the performance of a computer

Country Status (2)

Country Link
GB (2) GB2519735A (en)
WO (1) WO2015011487A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02105236A (en) 1988-10-13 1990-04-17 Nec Corp System for counting response time in time sharing system
US5696702A (en) * 1995-04-17 1997-12-09 Skinner; Gary R. Time and work tracker
US6046816A (en) 1997-07-01 2000-04-04 Adobe Systems Incorporated Print data flow operation standardized test technique
GB2370140A (en) 2000-08-31 2002-06-19 Hewlett Packard Co A delay accounting method for computer system response time improvement
US20050088410A1 (en) * 2003-10-23 2005-04-28 Apple Computer, Inc. Dynamically changing cursor for user interface
US20120144246A1 (en) 2010-12-02 2012-06-07 Microsoft Corporation Performance monitoring for applications without explicit instrumentation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW282525B (en) * 1994-06-17 1996-08-01 Intel Corp
US5872976A (en) * 1997-04-01 1999-02-16 Landmark Systems Corporation Client-based system for monitoring the performance of application programs
US7072800B1 (en) * 2002-09-26 2006-07-04 Computer Associates Think, Inc. Application response monitor
US7243265B1 (en) * 2003-05-12 2007-07-10 Sun Microsystems, Inc. Nearest neighbor approach for improved training of real-time health monitors for data processing systems
IL181041A0 (en) * 2007-01-29 2007-07-04 Deutsche Telekom Ag Improved method and system for detecting malicious behavioral patterns in a computer, using machine learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIKE TSYKIN ET AL: "END-TO-END RESPONSE TIME AND BEYOND : DIRECT MEASUREMENT OF SERVICE LEVELS", 31 January 1998 (1998-01-31), XP055146856, Retrieved from the Internet <URL:http://www.cmgaus.org/cmga_web_root/proceedings/1998/tsykin98.pdf> [retrieved on 20141015] *

Also Published As

Publication number Publication date
GB2519735A (en) 2015-05-06
GB2518504A (en) 2015-03-25
GB201413179D0 (en) 2014-09-10
GB201313239D0 (en) 2013-09-04

Similar Documents

Publication Publication Date Title
EP3019971B1 (en) Methods and systems for performance monitoring for mobile applications
US10592522B2 (en) Correlating performance data and log data using diverse data stores
Ding et al. Log2: A {Cost-Aware} logging mechanism for performance diagnosis
Reiss et al. Towards understanding heterogeneous clouds at scale: Google trace analysis
US11210189B2 (en) Monitoring performance of computing systems
US20040267548A1 (en) Workload profiling in computers
US10360140B2 (en) Production sampling for determining code coverage
US20160350197A1 (en) Measuring user interface responsiveness
Kim et al. FEPMA: Fine-grained event-driven power meter for android smartphones based on device driver layer event monitoring
WO2017039892A1 (en) Estimation of application performance variation without a priori knowledge of the application
US11360872B2 (en) Creating statistical analyses of data for transmission to servers
US9471237B1 (en) Memory consumption tracking
US20110209160A1 (en) Managed Code State Indicator
Rameshan et al. Hubbub-scale: Towards reliable elastic scaling under multi-tenancy
CN112346962B (en) Control data testing method and device applied to control testing system
US10708344B1 (en) Continuous performance management in information processing systems
CN113850506A (en) Method and device for evaluating working quality, storage medium and electronic equipment
US11379777B2 (en) Estimating a result of configuration change(s) in an enterprise
WO2015011487A1 (en) Monitoring the performance of a computer
Cornejo et al. In the field monitoring of interactive application
CN108255669A (en) The method and system of the batch processing of application performed in monitoring Basis of Computer Engineering facility
JPH05181631A (en) Real-time-device resource monitor for data processor having dynamic variable renewal and support for automatic boundary
Bissig et al. Towards measuring real-world performance of android devices
WO2022180863A1 (en) User operation recording device and user operation recording method
US20230107604A1 (en) Quality evaluation apparatus, quality evaluation method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14762050

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14762050

Country of ref document: EP

Kind code of ref document: A1