US20070233638A1 - Method and system for providing cost model data for tuning of query cache memory in databases - Google Patents


Info

Publication number
US20070233638A1
US20070233638A1 (application US11/278,373)
Authority
US
United States
Prior art keywords
query
cache
size
token
stored
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/278,373
Other versions
US7502775B2 (en)
Inventor
Matthew Carroll
Christian Garcia-Arellano
Sam Lightstone
Michael Snowbell
Adam Storm
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US11/278,373
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LIGHTSTONE, SAM SAMPSON, SNOWBELL, MICHAEL JOSHUA, STORM, ADAM J., CARROLL, MATTHEW JAMES, GARCIA-ARELLANO, CHRISTIAN MARCELO
Publication of US20070233638A1
Application granted
Publication of US7502775B2
Legal status: Expired - Fee Related; adjusted expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 Data processing: database and file management or data structures
    • Y10S 707/99931 Database or file accessing
    • Y10S 707/99932 Access augmentation or optimizing
    • Y10S 707/99933 Query processing, i.e. searching
    • Y10S 707/99951 File or database maintenance
    • Y10S 707/99952 Coherency, e.g. same view to multiple users
    • Y10S 707/99953 Recoverability

Definitions

  • the present invention relates to computer database systems, and more particularly to the tuning of query cache memory in database systems.
  • Database management systems store extensive amounts of organized information for access by users of the systems.
  • the system responds to specific queries from the user as to which data from the database to examine, retrieve and/or present.
  • users provide queries in a particular syntax of a particular database language that is used by the system, such as Structured Query Language (SQL) format.
  • DBMSs utilize computer main memory for a variety of tasks. Important tasks include sorting and caching, including caching data pages (i.e., the tables of the database) as well as caching queries. Queries are stored in a query cache to speed up query processing.
  • One such cache is the query cache (e.g., the "package cache" in the DB2 product from IBM Corporation).
  • a query cache typically stores the SQL text for each query issued to the DBMS, as well as an executable form of the query which is the result of compiling the query.
  • the query cache is searched to determine if the DBMS has the executable form of the query already stored in the cache.
  • If the executable form is already present, this indicates that the same query was previously received by the DBMS and compiled into its executable form at that time. Thus, the expensive and time-intensive process of compiling the query is avoided at later times if the query is already available in the query cache.
  • the purpose of the query cache is to avoid incurring this cost as much as possible.
  • The storage capacity (size) of the query cache determines how many queries can be stored and thus how likely a query is to be found in the cache. When the size of the query cache is exceeded, older queries are typically removed from the cache to make room for newer queries. In general, the smaller the query cache, the more cache "misses" occur, in which the query is not found and compiling of the query must therefore be performed. However, if the size of the query cache is too large, then valuable memory storage is wastefully allocated when a smaller cache size would be adequate, and when a different memory area of the system may be in greater need of the storage.
  • setting a proper cache size is an important task in a DBMS.
  • Traditionally, setting the size of the query cache was a manual task forced upon a database administrator.
  • Typically, the administrator would set the query cache size once initially and not make frequent updates.
  • Another approach was to set the query cache size by having a program initially ask the user a set of questions and apply a heuristic based on the answers to those questions.
  • a problem with such solutions is that they do not adapt to changes occurring in the database system, and they are based on equations which are only rough approximations of query size and performance of the system.
  • Other solutions are likewise inflexible, requiring that simulations be run at particular pre-designated cache sizes to find a more optimal cache size for the system.
  • A method for providing cost data for tuning a size of a query cache in a database system includes receiving a query, checking a query cache to determine whether a cache hit or a cache miss occurs, and, when a cache miss occurs, determining the processing time that would be saved by avoiding the cache miss and the cache size needed to avoid it.
  • a similar aspect of the invention is provided for a computer program product comprising a computer readable medium including program instructions for implementing similar features.
  • a system for providing cost data for tuning a size of a query cache in a database system includes a query cache provided in memory of the database system, a simulation area provided in the memory of the database system, and at least one processor in communication with the query cache and the simulation area.
  • The processor is operative to check the query cache to determine whether a cache hit or a cache miss occurs and, when a cache miss occurs, to determine the processing time that would be saved by avoiding the miss and the cache size needed to avoid it.
  • The present invention provides a method and system allowing cost data relating query cache size to processing time to be accurately determined; this data can be used to tune the query cache of a database system to any desired size.
  • the invention also allows such cost data to be determined based on actual use of a database system, rather than projected use.
  • FIG. 1 is a block diagram illustrating a system suitable for use with the present invention
  • FIG. 2 a is a diagrammatic illustration of a query cache and simulation area for use with the present invention
  • FIG. 2 b is a diagrammatic illustration of a stub of the present invention for storing in the simulation area of FIG. 2 a;
  • FIG. 3 is a flow diagram illustrating a method of the present invention for providing cost data for tuning the size of a query cache in a database system
  • FIGS. 4 a and 4 b are graphs illustrating examples of histograms which can be used to store and present data points determined by the present invention.
  • the present invention relates to computer database systems, and more particularly to the tuning of query cache memory in database systems.
  • the following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements.
  • Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art.
  • the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
  • the present invention is mainly described in terms of particular systems provided in particular implementations. However, one of ordinary skill in the art will readily recognize that this method and system will operate effectively in other implementations. For example, the system implementations usable with the present invention can take a number of different forms. The present invention will also be described in the context of particular methods having certain steps. However, the method and system operate effectively for other methods having different and/or additional steps not inconsistent with the present invention.
  • the invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
  • the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
  • a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
  • Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk.
  • Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus.
  • the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
  • Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
  • To describe the features of the present invention more particularly, please refer to FIGS. 1-4 b in conjunction with the discussion below.
  • FIG. 1 is a block diagram illustrating a system 10 suitable for use with the present invention.
  • System 10 includes exemplary components, such as a server 12 and a client 14 .
  • Server 12 is implemented as a computer system or electronic device.
  • the server 12 can be a mainframe computer, desktop computer, workstation, portable computer, or electronic device.
  • One or more microprocessors and memory devices (not shown) control the operation of the server, including data manipulation, computation, input/output, and other typical functions.
  • the query cache and simulation area used in the present invention (as well as other caches, like a data page cache) can be implemented in memory of the server 12 , or alternatively in other data storage, such as hard disk, and the functions of the invention can be provided by at least one processor of the server 12 .
  • The server 12 includes one or more database servers 16, which are implemented on the server 12 and perform functions and operations specific to a database system.
  • a database server 16 can include the necessary software to, for example, send and receive appropriate database commands and data, read and write data to a database, allocate space for and organize data, etc.
  • the database server 16 can be implemented based on a known standard, such as that for the DB2 product from IBM Corporation, or other database standards.
  • the database server 16 can include the necessary program instructions and/or components to implement the present invention as described below with reference to FIG. 3 .
  • One or more databases 18 are included in database server 16 , where a database 18 is an entity that includes user data as well as information and functions needed to implement the data storage.
  • the database server 16 organizes data in database 18 according to rules and structures as specified by the database creator and administrator. For example, the data can be organized into tables and table spaces.
  • the storage of data in the database 18 is implemented using one or more storage devices, such as hard disks, magnetic tape, optical storage devices (CD, DVD, etc.), memory, or other types of storage media.
  • the database 18 can be distributed among multiple computer systems or electronic devices, such as multiple servers 12 .
  • Client 14 can be any suitable computer system or electronic device that allows a user (or program) to communicate with the server 12 .
  • the client 14 can implement an application 22 , which can be a database interface application, for example.
  • The database interface allows the user to input queries to the database server 16, search for data stored in the database 18 that satisfies the queries, and have that data returned to the application 22.
  • user access can be provided to the data of the database 18 using a database language, such as Structured Query Language (SQL).
  • interface application 22 can be implemented on the server 12 to allow access to database server 16 for a user.
  • the communication between client 14 and server 12 is allowed over a communication layer 24 , which can include transmission wires, wireless communication devices, internal circuits to the server 12 , or any electronic communication medium and mechanism.
  • FIG. 2 a is a diagrammatic illustration of a query cache 50 and a simulation area 52 for use with the present invention.
  • Query cache 50 is storage space allocated in memory of the database system 16 for storing queries. When a query is stored, it can be stored as a “query object” 54 in the cache 50 , the query object 54 including the text query that was input (e.g., an SQL statement), and a compiled form of the query (“executable query”, also known as a “section”).
  • the executable query is in a form that can be executed by the database system, and can include many different optimizations to execute efficiently, as is well known to those of skill in the art.
  • the query cache 50 stores a number of query objects 54 based on the size of the cache 50 and the size of each query object 54 .
  • the size of query objects 54 can vary based on the contents of the query instruction.
  • the simulation area 52 is an area in memory storage of database server 16 that is devoted to the storage of “stubs” 56 of the present invention.
  • the simulation area 52 can be an allocated portion of memory (or other storage) of the system that is independent of the query cache 50 , and, for example, can be in a separate area of memory or in an adjacent area of memory to the cache 50 .
  • a stub 56 of the present invention includes a token 58 , which is a representation of a query object 54 that takes much less storage space than the query object 54 .
  • the simulation area 52 can be made much smaller than the query cache 50 , since the stubs 56 require much smaller storage space.
  • tokens 58 are created and stored for query objects 54 that have been evicted from the query cache 50 to make room for new queries. The creation and use of tokens 58 is described in greater detail below with respect to FIG. 3 .
  • a stub 56 also includes a size (S) parameter 60 , which indicates the size of the query object 54 that was evicted from the query cache 50 , which the token 58 of that stub 56 represents.
  • Stub 56 additionally includes an identification number (N) 62 , which is a number assigned to the stub 56 when it is stored in the simulation area and indicates a place in the order in which the stub 56 was inserted in the simulation area 52 , i.e., the number N for a stub 56 indicates that it is the Nth stub to be inserted in the simulation area.
  • the size parameter 60 and identification number 62 are used to determine the size that the cache 50 needs to be under certain circumstances, which is described in greater detail below with respect to FIG. 3 .
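Taken together, the stub fields described above (token 58, size parameter 60, and identification number 62) amount to one small fixed-size record per evicted query. The following Python sketch is illustrative only; the class and field names are not from the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Stub:
    """Compact stand-in for an evicted query object (illustrative layout)."""
    token: int  # token 58: deterministic, space-efficient representation of the query
    size: int   # size parameter 60 (S): size of the evicted query object
    seq: int    # identification number 62 (N): insertion order in the simulation area

# A stub occupies a few machine words, far less than a full query object,
# which is why the simulation area can be much smaller than the query cache.
stub = Stub(token=0x1234ABCD, size=4096, seq=7)
```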
  • FIG. 3 is a flow diagram illustrating a method 100 of the present invention for providing time cost data for tuning the size of a query cache 50 in a database system.
  • Method 100 can be implemented, for example, on the same computer or electronic device (or one of the computer systems or devices) that implements the database server 16 and database 18 .
  • Method 100 can be implemented by program instructions or code. Program instructions can be stored on a computer readable medium.
  • the method 100 can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software.
  • the method begins at 102 , and in step 104 , the process receives a query, which was originally input by a user (including a human user, or a different program or device).
  • the query is in a standard format or language to allow it to be processed by the database system. For example, queries are commonly provided in SQL format.
  • In step 106, the process searches the query cache 50 for a stored query that is the same as the query received in step 104.
  • a stored query if present, would have been received at an earlier time and stored at that time in the cache 50 .
  • the process searches by comparing the text query statements in the cache 50 with the text in the received query.
  • other information in the query objects 54 can also be searched to determine whether the received query is the same as a stored query.
  • In step 108, the process determines whether there has been a cache hit or a cache miss. If the same query as the received query has been found stored in the query cache 50, it is a hit; otherwise it is a miss. If there is a cache hit, the process continues to step 136, described below. If there is a cache miss, the process continues to step 110, in which it is determined whether the query cache 50 has insufficient space to fit the received query in its storage area. If the query cache 50 has sufficient space, the process continues to step 118, described below.
  • If there is insufficient space, then in step 112 a stub 56 is created for a selected query object 54 stored in the cache 50, which will be evicted (purged) from the cache 50.
  • the selected query object 54 for eviction can be any of the stored query objects 54 , selected based on different criteria in different embodiments.
  • One selection criterion is age: the oldest query object 54 is selected, i.e., the query object 54 that was executed the furthest in the past. This can be determined, for example, based on timestamp values stored in the query objects 54 in the cache.
  • a created stub 56 includes a token 58 , a size parameter 60 , and an identification number 62 , as described above with reference to FIG. 2 b.
  • the token 58 is a representation of the evicted query that requires much less storage space than the query object 54 , and can be created using any of a variety of processes.
  • the token generation process should be deterministic, to always provide the same token 58 from the same query; should provide a token 58 small relative to the size of the typical size of a query object 54 ; and, as much as possible, should provide a unique token 58 for each different query.
  • A cyclical redundancy check (CRC) can be performed on the textual part of the query to generate the resulting value as a token 58; for example, a 64-bit CRC can provide enough bits to handle the typical number of tokens 58 that may be needed in many current database systems.
  • the CRC can be provided based on the executable form of the query.
  • Other embodiments can use other methods to generate a token 58 , such as particular hash algorithms.
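As a concrete sketch of such a token generator: the patent text suggests a 64-bit CRC over the query text, but Python's standard library has no 64-bit CRC, so this illustrative version truncates a SHA-256 digest to 64 bits instead. It preserves the three stated properties: deterministic, small relative to a query object, and unlikely to collide for distinct queries.

```python
import hashlib

def query_token(sql_text: str) -> int:
    """Derive a deterministic 64-bit token from the query text.

    A truncated SHA-256 digest stands in for the 64-bit CRC named in the
    text; any deterministic, low-collision 64-bit function would do.
    """
    digest = hashlib.sha256(sql_text.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")  # keep the first 8 bytes (64 bits)
```

The same query text always yields the same token, so a token stored in a stub can later be matched against the token of a newly received query.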
  • In step 114, the selected query object 54 is evicted from the query cache 50.
  • In step 116, the stub 56 created in step 112 for the evicted query is inserted in the simulation area 52 of memory storage. The process then continues to step 118.
  • Step 118 is performed after query eviction in steps 112-116, or directly if the query cache 50 has sufficient space as determined in step 110.
  • In step 118, the process creates a stub 56 for the received query.
  • the created stub 56 includes a token 58 , a size parameter 60 , and an identification number 62 .
  • The process then searches the simulation area 52 for a stored token 58 that is the same as the token 58 created in step 118 for the received query.
  • In step 122, the process determines whether the search for the token 58 in the simulation area 52 was successful. If not, the process continues to step 124, in which the received query is compiled and the query object 54 stored in the query cache 50 as normal (the time to compile the query can also be computed, similarly as in step 130, described below, for use in step 138). The process then continues to step 126 to execute the executable query, as is well known to those of skill in the art, and the process is complete at 128. It should be noted that different processes or threads can perform the compilation and/or execution of the query, and these can be the same as or different from the process of the present invention for determining cost data for tuning the query cache size.
  • If the token 58 is found in the simulation area 52, the process continues to step 130, in which the received query is compiled and its query object 54 is stored in the query cache 50.
  • The processing time to compile the received query is also measured in this step. This can be performed, for example, by taking a timestamp t1 before the query is compiled, taking a timestamp t2 when the compilation is complete, and subtracting t1 from t2.
  • In other embodiments, the "processing time" measured here can be based on other operations. For example, if a pre-compiled form of the query is retrieved from storage, the processing time would be the measured I/O time needed to retrieve the pre-compiled query, rather than the time for compiling it directly.
  • While processing time is usually referred to herein as time to compile, this term can also refer to other processing operations needed to obtain a compiled form of the query.
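The timestamp-subtraction measurement described above can be sketched as follows; `compile_fn` is a hypothetical placeholder for whatever operation produces the executable form (compilation, or I/O to fetch a pre-compiled form):

```python
import time

def timed_compile(compile_fn, sql_text):
    """Measure the processing time spent obtaining the executable query.

    Returns the executable form together with the elapsed time, which is
    the cost that a cache hit would have avoided.
    """
    t1 = time.perf_counter()          # timestamp before obtaining the executable form
    executable = compile_fn(sql_text)
    t2 = time.perf_counter()          # timestamp after it is complete
    return executable, t2 - t1        # compiled form plus measured processing time
```

`time.perf_counter()` is used rather than wall-clock time because it is monotonic and intended for measuring short elapsed intervals.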
  • In step 132, a larger size of the cache 50 is determined, this larger size being the size the cache 50 needs to be to avoid the cache miss that occurred.
  • the query cache 50 would have to be large enough to accommodate the storage of the received query, so that this query would be available as a cached query when the query was received.
  • this determination is made by using the simulation area 52 and the stored stub 56 that matched the received query.
  • The size of the query cache 50 needed to avoid the cache miss is the current query cache size, plus the sum of the sizes S of all stubs 56 in the simulation area 52 that have an identification number 62 less than or equal to the identification number N of the matching stub 56. Many different optimizations can be used to approximate this sum.
  • One method is to order the stubs 56 in the simulation area 52 by their identification numbers 62 and traverse the stubs 56, from the lowest identification number up to N, adding the size value S in each stub 56 to the total sum.
  • Other methods can be used in other embodiments.
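The straightforward summation described above can be sketched as follows, with stubs represented as hypothetical (identification number, size) pairs:

```python
def cache_size_to_avoid_miss(current_size, stubs, match_seq):
    """Size the cache would have needed so the matched query was never evicted.

    Sums the sizes S of all stubs whose identification number is less than
    or equal to the matching stub's number N, and adds that to the current
    cache size. `stubs` is an iterable of (seq, size) pairs (illustrative).
    """
    return current_size + sum(size for seq, size in stubs if seq <= match_seq)

# E.g. a 100-unit cache with three evicted stubs; a match on the stub with
# seq 2 implies the cache needed 100 + 10 + 30 = 140 units to avoid the miss.
stubs = [(1, 10), (2, 30), (3, 5)]
```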
  • In step 134, the processing time (as determined in step 130) and the larger cache size (as determined in step 132) are stored in memory or other storage for future reference.
  • every pair of time and size is stored as an individual data point.
  • an optimization can be used to reduce the storage requirements of the processing time and size data points. For example, each time datapoint can be added to a histogram whose bins represent cache size intervals. FIG. 4 a shows one example of such a histogram, described below.
  • After step 134, the process continues to step 126, where the compiled, executable query is executed as in standard query processing, and the process is complete at step 128.
  • In step 136 (reached on a cache hit), a cache size is determined, this cache size being the minimum size needed to allow the cache hit. This can be performed by summing the sizes of all query objects 54 in the cache 50 that are newer than or equal in age to the query object 54 that matched the received query. The size of each query object 54 can be stored in the query cache 50 and retrieved in this step, or, alternatively, computed on the fly during this step.
  • the query objects 54 can be traversed and ordered based on their time stamps indicating when the query object 54 was stored in the cache 50 , and the newest queries' sizes added to the total sum of sizes.
  • A more approximate measure of the minimum cache size can be obtained by maintaining a running average size of the query objects 54 stored in the query cache 50, keeping a count of the number of objects 54 in the query cache 50, and summing the average size over the appropriate number of query objects 54.
  • This average size approximation can also be performed for step 132 ; for example, the increase in cache size needed to avoid the cache miss can be the average query object size multiplied by the number of objects in the simulation area which have an identification number 62 less than or equal to the identification number 62 of the matching token 58 .
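The exact (non-averaged) computation of step 136 can be sketched as follows, with cached query objects represented as hypothetical (timestamp, size) pairs; the sizes of objects newer than or the same age as the matched object are summed:

```python
def min_cache_size_for_hit(query_objects, matched_ts):
    """Minimum cache size that still retains the matched query object.

    Under an oldest-first eviction policy, the matched object survives as
    long as the cache can hold it plus everything newer, so the minimum
    size is the sum of sizes of objects with timestamp >= the match's.
    `query_objects` is an iterable of (timestamp, size) pairs (illustrative).
    """
    return sum(size for ts, size in query_objects if ts >= matched_ts)

# E.g. three cached objects; a hit on the object stored at time 9 means
# the cache must at least hold the objects stored at times 9 and 12.
cached = [(5, 20), (9, 40), (12, 15)]
```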
  • In step 138, the minimum cache size determined in step 136 and the processing time taken to originally compile (or otherwise obtain) the received query are stored in memory or other storage for future reference.
  • the time taken to originally compile the received query is retrieved from the query object 54 in the cache 50 (this time value was stored in the cache 50 in the query object 54 at time of compilation).
  • This data indicates the minimum cache size needed to not incur the associated time cost data point (that time indicating the time to compile the query statement). If the cache 50 is reduced below this minimum cache size, then the time to compile is the time cost.
  • every pair of time and cache size is stored as an individual data point.
  • an optimization can be used to reduce the storage requirements of the time and size data points. For example, each time datapoint can be added to a histogram whose bins represent cache size intervals. FIG. 4 b shows one example of such a histogram, described below.
  • After step 138, the process continues to step 126, where the compiled, executable query is executed as in standard query processing, and the process is complete at 128.
  • steps 136 - 138 are optional. In some embodiments, only the size and time data points determined from steps 130 and 132 are stored and used for tuning query cache 50 size (in such an embodiment, the processing time of compilation of a query may not need to be stored in the query cache 50 , since it will not be needed for the step 138 ). Thus, only the data points for time saved when increasing the cache size would be available. However, in many embodiments, for small amounts of cache size changes, an estimate of the time cost for decreasing the query cache size by a certain amount can be approximated as the same as the equivalent time savings for increasing the query cache size by the same amount.
  • FIG. 4 a is a graph illustrating an example of a histogram 200 of the present invention for storing and presenting data points determined by steps 130 - 134 of the process of FIG. 3 .
  • Histogram 200 has a horizontal axis representing cache size, and a vertical axis representing processing time saved for the cache sizes. Time and size data points are aggregated by the histogram.
  • each bucket along the horizontal axis represents a size increase of 10 megabytes over the current cache size (buckets can be other sizes in other histogram embodiments).
  • the time from step 130 is added to the time saved value of the bucket corresponding to the cache size determined in step 132 .
  • each bucket includes a count or sum of all the time saved that would occur if the query cache 50 were increased in size by an amount in the range of cache sizes represented by that bucket.
  • The processing time saved for a desired cache size tuning adjustment can be determined from the histogram by summing the time values of the appropriate histogram buckets for the amount of increase to the desired size. For example, if the current query cache size is 20 megabytes and it is desired to know how much time is saved by increasing the cache size to 50 megabytes, then the increase is 30 megabytes, and the histogram buckets representing 0-10, 10-20, and 20-30 megabytes (buckets 202, 204, and 206) are selected and their time saved values are summed to find the total time saved.
  • histogram data (or other stored data) can be reset to zero or shifted when any update of the cache size is implemented. This is because some of the stored data of the histogram is related to the old cache size and will no longer be pertinent to the new query cache size. For example, if the query cache size is increased by 10 megabytes, then the whole histogram can be reset to zero; or in other embodiments the pertinent buckets of the histogram can be zeroed and the remaining buckets shifted, e.g., the 10-20 megabyte bucket can become the 0-10 megabyte bucket.
  • FIG. 4 b is a graph illustrating an example of a histogram 210 of the present invention for storing and presenting data points determined by steps 136 and 138 of the process of FIG. 3 .
  • Histogram 210 has a horizontal axis representing cache sizes, and a vertical axis representing the processing time cost if the cache 50 were reduced in size below the different minimum cache sizes. Time and size data points are aggregated by the histogram.
  • each bucket along the horizontal axis represents a size decrease of 10 megabytes under the current cache size.
  • the time from step 138 is added to the time value of the bucket corresponding to the cache size determined in step 136 .
  • each bucket includes a count or sum of all the time cost that would be incurred if the query cache 50 were reduced in size by an amount in the range of cache sizes represented by that bucket.
  • the time cost for a cache size decrease can be determined from the histogram by summing the time values for the appropriate histogram buckets for the amount of size decrease from the current size. For example, if the current query cache size is 30 megabytes, and it is desired to know how much processing time would be lost by decreasing the cache size to 10 megabytes, then the amount of decrease is 20 megabytes, and the histogram buckets representing 0-10 and 10-20 megabytes (buckets 212 and 214) are selected and the time values of these buckets are summed to find the total time cost (it should be noted that the “time cost” value is actually a negative value in histogram 210).
  • the present invention advantageously allows cost model data for a query cache 50 to be provided for use in tuning the size of a query cache 50 .
  • This cost model data can be useful for an administrator of the system who is tuning the query cache size to achieve a more optimal performance of the database system by showing the time cost for any desired change in size of the query cache 50 , i.e., how beneficial it is to add memory to the query cache 50 , and how costly it is to take memory away from the query cache 50 .
  • the present invention is able to provide accurate cost model data based on the actual operation of the database system, rather than projected operation.
  • the cost model data can also be used in an automatic system that can self-tune cache sizes.
  • a database system can include several memory areas, each used for a different purpose, such as the query cache, a data page cache, etc.
  • Each of the memory areas may be in use concurrently and therefore in competition for memory. Since each of the memory areas uses memory in different ways, it can be difficult to decide which memory area is most in need of memory.
  • the cost model data of the present invention can be used in an automatic system that looks at all the different memory areas and caches with different cost models and makes adjustments to different caches based on total time costs and overall performance.
  • a query cache cost model of the present invention can be used with a data page cache cost model for a data page cache of the database system. If the query cache model suggests that 5 seconds may be saved if 10 megabytes of memory are added to the query cache, and the data page cache cost model suggests that only 1 second is lost if 10 megabytes of memory are taken from the data page cache, then the automated system can decide to give the query cache 50 the 10 megabytes from the data page cache, for a net gain of 4 seconds less processing time.
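The arithmetic of this example can be sketched as follows (an illustrative fragment; the function name is an assumption, not taken from the patent):

```python
def net_transfer_benefit(query_cache_saving_s: float, page_cache_cost_s: float) -> float:
    """Net processing time gained by moving memory from the data page cache
    to the query cache: the time the query cache cost model says would be
    saved, minus the time the data page cache cost model says would be lost."""
    return query_cache_saving_s - page_cache_cost_s

# From the example above: 5 seconds saved vs. 1 second lost gives a net
# gain of 4 seconds, so an automated tuner would make the transfer.
```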

Abstract

Providing cost model data for tuning query cache memory size in database systems. In one aspect, a query is received, and a query cache is checked to determine whether a stored query matches the received query. When a cache miss has occurred due to no stored queries matching the received query, the processing time saved by the avoidance of the cache miss and a cache size needed to avoid the cache miss are determined.

Description

    FIELD OF THE INVENTION
  • The present invention relates to computer database systems, and more particularly to the tuning of query cache memory in database systems.
  • BACKGROUND OF THE INVENTION
  • Database management systems (DBMS's) store extensive amounts of organized information for access by users of the systems. In a database system environment, the system responds to specific queries from the user as to which data from the database to examine, retrieve and/or present. Typically, users provide queries in a particular syntax of a particular database language that is used by the system, such as Structured Query Language (SQL) format.
  • DBMS's utilize computer main memory for a variety of tasks. Important tasks include sorting and caching, including caching data pages (i.e., the tables of the database), as well as caching queries. Queries are stored in a query cache to speed up query processing. A query cache (e.g., “package cache” in the DB2 product from IBM Corporation) typically stores the SQL text for each query issued to the DBMS, as well as an executable form of the query which is the result of compiling the query. When a query is issued, the query cache is searched to determine if the DBMS has the executable form of the query already stored in the cache. If the executable form is already present, this indicates that the same query was previously received by the DBMS and compiled into its executable form at that time. Thus, the expensive and time-intensive process of compiling the query is avoided at later times if the query is already available in the query cache. The purpose of the query cache is to avoid incurring this cost as much as possible.
  • The storage capacity (size) of the query cache determines how many queries can be stored and thus determines how likely it is that a query will be found in the cache. When the size of the query cache is exceeded, older queries are typically removed from the cache to make room for newer queries. In general, the smaller the query cache, the greater the number of cache “misses” that occur, in which the query is not found and must therefore be compiled. However, if the size of the query cache is too large, then valuable memory storage would be wastefully allocated when a smaller cache size would be adequate, and when a different memory area of the system may be in greater need of the memory storage.
  • Thus, setting a proper cache size is an important task in a DBMS. In prior systems, setting the size of the query cache was a manual task forced upon a database administrator. Typically, the administrator would set the query cache one time initially and not make frequent updates. For example, there are implementations that set the query cache size based on a program initially asking the user a set of questions and using a heuristic based on the answers to set the query cache size. A problem with such solutions is that they do not adapt to changes occurring in the database system, and they are based on equations which are only rough approximations of query size and performance of the system. Other solutions are likewise inflexible, requiring that simulations be run at particular pre-designated cache sizes to find a more optimal cache size for the system.
  • Accordingly, what is needed is the ability to provide accurate estimates of how much processing time is saved, and how much processing time is lost, when tuning a query cache to any desired size, at any time during the actual operation of a database system, to assist the tuning of the size of that query cache. The present invention addresses such a need.
  • SUMMARY OF THE INVENTION
  • The invention of the present application relates to providing cost model data for tuning query cache memory size in database systems. In one aspect of the invention, a method for providing cost data for tuning a size of a query cache in a database system includes receiving a query, checking a query cache to determine whether a cache hit or a cache miss occurs, determining the processing time saved by the avoidance of the cache miss and a cache size needed to avoid the cache miss when the cache miss occurs. A similar aspect of the invention is provided for a computer program product comprising a computer readable medium including program instructions for implementing similar features.
  • In another aspect of the invention, a system for providing cost data for tuning a size of a query cache in a database system includes a query cache provided in memory of the database system, a simulation area provided in the memory of the database system, and at least one processor in communication with the query cache and the simulation area. The processor is operative to check the query cache to determine whether a cache hit or a cache miss occurs, and determine the processing time saved by the avoidance of the cache miss and a cache size needed to avoid the cache miss when the cache miss occurs.
  • The present invention provides a method and system allowing cost data related to query cache size and processing time to be accurately determined; this data can then be used for tuning the query cache of a database system to any desired size. The invention also allows such cost data to be determined based on actual use of a database system, rather than projected use.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 is a block diagram illustrating a system suitable for use with the present invention;
  • FIG. 2 a is a diagrammatic illustration of a query cache and simulation area for use with the present invention;
  • FIG. 2 b is a diagrammatic illustration of a stub of the present invention for storing in the simulation area of FIG. 2 a;
  • FIG. 3 is a flow diagram illustrating a method of the present invention for providing cost data for tuning the size of a query cache in a database system; and
  • FIGS. 4 a and 4 b are graphs illustrating examples of histograms which can be used to store and present data points determined by the present invention.
  • DETAILED DESCRIPTION
  • The present invention relates to computer database systems, and more particularly to the tuning of query cache memory in database systems. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.
  • The present invention is mainly described in terms of particular systems provided in particular implementations. However, one of ordinary skill in the art will readily recognize that this method and system will operate effectively in other implementations. For example, the system implementations usable with the present invention can take a number of different forms. The present invention will also be described in the context of particular methods having certain steps. However, the method and system operate effectively for other methods having different and/or additional steps not inconsistent with the present invention.
  • The invention can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements. In a preferred embodiment, the invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
  • Furthermore, the invention can take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • The medium can be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. Examples of a computer-readable medium include a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk. Current examples of optical disks include compact disk-read only memory (CD-ROM), compact disk-read/write (CD-R/W) and DVD.
  • A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
  • Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers.
  • Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
  • To more particularly describe the features of the present invention, please refer to FIGS. 1-4 b in conjunction with the discussion below.
  • FIG. 1 is a block diagram illustrating a system 10 suitable for use with the present invention. System 10 includes exemplary components, such as a server 12 and a client 14.
  • Server 12 is implemented as a computer system or electronic device. For example, the server 12 can be a mainframe computer, desktop computer, workstation, portable computer, or electronic device. One or more microprocessors and memory devices (not shown) control the operation of the server, including data manipulation, computation, input/output, and other typical functions. For example, the query cache and simulation area used in the present invention (as well as other caches, like a data page cache) can be implemented in memory of the server 12, or alternatively in other data storage, such as hard disk, and the functions of the invention can be provided by at least one processor of the server 12.
  • The server 12 includes one or more database servers 16, which are implemented on the server 12 and perform functions and operations specific to a database system. A database server 16 can include the necessary software to, for example, send and receive appropriate database commands and data, read and write data to a database, allocate space for and organize data, etc. For example, the database server 16 can be implemented based on a known standard, such as that for the DB2 product from IBM Corporation, or other database standards. The database server 16 can include the necessary program instructions and/or components to implement the present invention as described below with reference to FIG. 3.
  • One or more databases 18 are included in database server 16, where a database 18 is an entity that includes user data as well as information and functions needed to implement the data storage. The database server 16 organizes data in database 18 according to rules and structures as specified by the database creator and administrator. For example, the data can be organized into tables and table spaces. The storage of data in the database 18 is implemented using one or more storage devices, such as hard disks, magnetic tape, optical storage devices (CD, DVD, etc.), memory, or other types of storage media. In an alternate embodiment, the database 18 can be distributed among multiple computer systems or electronic devices, such as multiple servers 12.
  • Client 14 can be any suitable computer system or electronic device that allows a user (or program) to communicate with the server 12. The client 14 can implement an application 22, which can be a database interface application, for example. The database interface allows the user to input queries to the database server 16, to search for data stored in the database 18 that satisfies the queries, and return that data to the application 22. For example, user access can be provided to the data of the database 18 using a database language, such as Structured Query Language (SQL). In alternate embodiments, interface application 22 can be implemented on the server 12 to allow access to database server 16 for a user.
  • The communication between client 14 and server 12 (or application 22 and database server 16) is allowed over a communication layer 24, which can include transmission wires, wireless communication devices, internal circuits to the server 12, or any electronic communication medium and mechanism.
  • FIG. 2 a is a diagrammatic illustration of a query cache 50 and a simulation area 52 for use with the present invention. Query cache 50 is storage space allocated in memory of the database server 16 for storing queries. When a query is stored, it can be stored as a “query object” 54 in the cache 50, the query object 54 including the text query that was input (e.g., an SQL statement), and a compiled form of the query (“executable query”, also known as a “section”). The executable query is in a form that can be executed by the database system, and can include many different optimizations to execute efficiently, as is well known to those of skill in the art. Other information can also be stored in the query object 54, such as one or more timestamps, e.g., when the query started and/or finished compiling, and/or a time value indicating how long the compilation took. Also, other information such as various configuration parameters, environmental factors, etc. can be stored in the query object 54. The query cache 50 stores a number of query objects 54 based on the size of the cache 50 and the size of each query object 54. The size of query objects 54 can vary based on the contents of the query.
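A query object 54 can be sketched as a small record holding the fields described above; the field names here are illustrative assumptions, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class QueryObject:
    """Sketch of a query object 54 stored in the query cache 50."""
    sql_text: str          # the text query as input (e.g., an SQL statement)
    executable: bytes      # compiled, executable form of the query (the "section")
    compile_time_s: float  # how long compilation took (used later for cost data)
    stored_at: float       # timestamp used, e.g., for age-based eviction
```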
  • The simulation area 52 is an area in memory storage of database server 16 that is devoted to the storage of “stubs” 56 of the present invention. The simulation area 52 can be an allocated portion of memory (or other storage) of the system that is independent of the query cache 50, and, for example, can be in a separate area of memory or in an adjacent area of memory to the cache 50.
  • As shown in FIG. 2 b, a stub 56 of the present invention includes a token 58, which is a representation of a query object 54 that takes much less storage space than the query object 54. In typical implementations, the simulation area 52 can be made much smaller than the query cache 50, since the stubs 56 require much smaller storage space. According to the present invention, tokens 58 are created and stored for query objects 54 that have been evicted from the query cache 50 to make room for new queries. The creation and use of tokens 58 is described in greater detail below with respect to FIG. 3.
  • A stub 56 also includes a size (S) parameter 60, which indicates the size of the query object 54 that was evicted from the query cache 50, which the token 58 of that stub 56 represents. Stub 56 additionally includes an identification number (N) 62, which is a number assigned to the stub 56 when it is stored in the simulation area and indicates a place in the order in which the stub 56 was inserted in the simulation area 52, i.e., the number N for a stub 56 indicates that it is the Nth stub to be inserted in the simulation area. The size parameter 60 and identification number 62 are used to determine the size that the cache 50 needs to be under certain circumstances, which is described in greater detail below with respect to FIG. 3.
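A stub 56 is correspondingly small; the following sketch shows the three fields described above (field names are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class Stub:
    """Sketch of a stub 56 stored in the simulation area 52 (FIG. 2b)."""
    token: int      # token 58: compact, deterministic representation of the evicted query
    size: int       # size parameter 60 (S): size of the evicted query object 54
    insert_id: int  # identification number 62 (N): this is the Nth stub inserted
```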
  • FIG. 3 is a flow diagram illustrating a method 100 of the present invention for providing time cost data for tuning the size of a query cache 50 in a database system. Method 100 can be implemented, for example, on the same computer or electronic device (or one of the computer systems or devices) that implements the database server 16 and database 18. Method 100 can be implemented by program instructions or code. Program instructions can be stored on a computer readable medium. Alternatively, the method 100 can be implemented in hardware (logic gates, etc.), or in a combination of hardware and software.
  • The method begins at 102, and in step 104, the process receives a query, which was originally input by a user (including a human user, or a different program or device). The query is in a standard format or language to allow it to be processed by the database system. For example, queries are commonly provided in SQL format.
  • In step 106, the process searches the query cache 50 for a stored query that is the same as the query received in step 104. Such a stored query, if present, would have been received at an earlier time and stored at that time in the cache 50. Typically, the process searches by comparing the text query statements in the cache 50 with the text in the received query. Alternatively, other information in the query objects 54 can also be searched to determine whether the received query is the same as a stored query.
  • In step 108, the process determines if there has been a cache hit or a cache miss. If the same query as the received query has been found stored in the query cache 50, it is a hit; otherwise it is a miss. If there is a cache hit, the process continues to step 136, described below. If there is a cache miss, the process continues to step 110, in which it is determined whether the query cache 50 has insufficient space to fit the received query in its storage area. If the query cache 50 has sufficient space, the process continues to step 118, described below.
  • If the query cache 50 has insufficient space, the process continues to step 112, in which a stub 56 is created for a selected query object 54 stored in the cache 50, which will be evicted (purged) from the cache 50. The selected query object 54 for eviction can be any of the stored query objects 54, selected based on different criteria in different embodiments. One selection criterion is age: the oldest query object 54 is selected, i.e., the query object 54 that was executed the furthest in the past. This can be determined, for example, based on timestamp values stored in the query objects 54 in the cache. According to the present invention, a created stub 56 includes a token 58, a size parameter 60, and an identification number 62, as described above with reference to FIG. 2 b.
  • The token 58 is a representation of the evicted query that requires much less storage space than the query object 54, and can be created using any of a variety of processes. The token generation process should be deterministic, always producing the same token 58 from the same query; should produce a token 58 that is small relative to the typical size of a query object 54; and, as much as possible, should produce a unique token 58 for each different query. For example, a cyclic redundancy check (CRC) can be performed on the textual part of the query, with the resulting value used as the token 58; a 64-bit CRC can provide enough bits to handle the number of tokens 58 needed in many current database systems. Alternatively, the CRC can be computed over the executable form of the query. Other embodiments can use other methods to generate a token 58, such as particular hash algorithms.
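Token generation might be sketched as follows. The patent suggests a 64-bit CRC over the query text; as a stand-in (Python's standard library offers only a 32-bit CRC), this sketch derives a 64-bit token from a hash digest, which has the same required properties: deterministic, compact, and unlikely to collide.

```python
import hashlib

def query_token(sql_text: str) -> int:
    """Deterministic 64-bit token 58 for a query's text.

    Stand-in for the 64-bit CRC described above: the same query text always
    yields the same token, and distinct queries almost always yield
    distinct tokens.
    """
    digest = hashlib.sha256(sql_text.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")
```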
  • In step 114, the selected query object 54 is evicted from the query cache 50. In step 116, the stub 56 created in step 112 for the evicted query is inserted in the simulation area 52 of memory storage. The process then continues to step 118.
  • Step 118 is performed after query eviction in steps 112-116, or if the query cache 50 has sufficient space as determined in step 110. In step 118, the process creates a stub 56 for the received query. As described above, the created stub 56 includes a token 58, a size parameter 60, and an identification number 62. In step 120, the process searches the simulation area 52 for a stored token 58 that is the same as the token 58 created in step 118 for the received query.
  • In step 122, the process determines whether the search for the token 58 in the simulation area 52 is successful. If not, the process continues to step 124, in which the received query is compiled and the query object 54 stored in the query cache 50 as normal (the time to compile the query can also be computed, similarly as in step 132, described below, for use in step 138). The process then continues to step 126 to execute the executable query as is well known to those of skill in the art, and the process is complete at 128. It should be noted that different processes or threads can perform the compilation and/or execution of the query, which can be the same or different from the process of the present invention for determining cost data for tuning the query cache size.
  • If a token 58 for the received query was matched in the simulation area 52 in step 122, then the process continues to step 130, in which the received query is compiled and its query object 54 is stored in the query cache 50. Furthermore, the processing time to compile the received query is also measured in this step. This can be performed, for example, by taking a time stamp t1 before the query is compiled, taking a time stamp t2 when the compilation is complete, and subtracting time t1 from time t2. In some alternate embodiments, the “processing time” measured here can be based on other operations. For example, some embodiments may have all queries pre-compiled and stored in data storage; the “processing time” in this case would be the measured I/O time needed to retrieve the pre-compiled query, rather than the time for compiling it directly. Thus, while the term “processing time” may usually be referred to herein as time to compile, this term can also refer to other processing operations needed to obtain a compiled form of the query.
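The timestamp-based measurement in step 130 can be sketched as follows (a minimal illustration; `compile_fn` is an assumed stand-in for the database's query compiler):

```python
import time

def compile_with_timing(sql_text, compile_fn):
    """Compile a query and measure the processing time (step 130):
    take a timestamp t1 before compiling, a timestamp t2 after,
    and report the difference t2 - t1."""
    t1 = time.monotonic()
    executable = compile_fn(sql_text)
    t2 = time.monotonic()
    return executable, t2 - t1
```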
  • In step 132, a larger size of the cache 50 is determined, this larger size being the size the cache 50 needs to be to avoid the cache miss that occurred. To avoid the miss, the query cache 50 would have to be large enough to accommodate the storage of the received query, so that this query would be available as a cached query when the query was received. In the described embodiment, this determination is made by using the simulation area 52 and the stored stub 56 that matched the received query. The size of the query cache 50 needed to avoid the cache miss is the current query cache size, plus the sum of the sizes S of all tokens 58 in the simulation area 52 that have an identification number 62 less than or equal to the identification number N of the matching token 58. Many different optimizations can be used to approximate this sum. For example, one method is to order the stubs 56 in the simulation area 52 by their identification numbers 62 and traverse all stubs 56 in the simulation area 52, from lowest ID number to N, adding the size value S in each stub 56 to the total sum. Other methods can be used in other embodiments.
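The size computation of step 132 can be sketched as follows (an illustration under the assumption that each stub is represented as an `(insert_id, size)` pair; names are not from the patent):

```python
def size_needed_to_avoid_miss(current_cache_size, stubs, matching_id):
    """Cache size that would have avoided the miss (step 132): the current
    query cache size plus the sum of the sizes S of all stubs whose
    identification number N is <= that of the matching stub."""
    return current_cache_size + sum(
        size for insert_id, size in stubs if insert_id <= matching_id
    )
```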
  • In step 134, the processing time (as determined in step 130) and the larger cache size (as determined in step 132) are stored in memory or other storage for future reference. In some embodiments, every pair of time and size is stored as an individual data point. In other embodiments, an optimization can be used to reduce the storage requirements of the processing time and size data points. For example, each time datapoint can be added to a histogram whose bins represent cache size intervals. FIG. 4 a shows one example of such a histogram, described below.
  • After step 134, the process continues to step 126 where the compiled, executable query is executed as in standard query processing, and the process is complete at step 128.
  • If a cache hit occurred in step 108 when searching the query cache 50 for the received query, then the process continues to step 136. In step 136, a cache size is determined, this cache size being the minimum size needed to allow the cache hit. This can be performed by summing the size of all query objects 54 in the cache 50, which are newer or equal in age to the query object 54 that matched the received query. The size of each query object 54 can be stored in the query cache 50 and retrieved in this step, or, alternatively, the size of the query object 54 computed on the fly during this step. For example, the query objects 54 can be traversed and ordered based on their time stamps indicating when the query object 54 was stored in the cache 50, and the newest queries' sizes added to the total sum of sizes. In alternate embodiments, a more approximate measure of the minimum cache size can be estimated by maintaining a running average size of query object 54 stored in the query cache 50, keeping a count of the number of objects 54 in the query cache 50, and summing the average size for the appropriate number of query objects 54. This average size approximation can also be performed for step 132; for example, the increase in cache size needed to avoid the cache miss can be the average query object size multiplied by the number of objects in the simulation area which have an identification number 62 less than or equal to the identification number 62 of the matching token 58.
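The exact (non-approximate) computation of step 136 can be sketched as follows (an illustration under the assumption that each cached query object is represented as a `(timestamp, size)` pair; names are not from the patent):

```python
def min_size_for_hit(cached_objects, matched_timestamp):
    """Minimum cache size that still allows the hit (step 136): the sum of
    the sizes of all query objects newer than, or the same age as, the
    object that matched the received query."""
    return sum(size for ts, size in cached_objects if ts >= matched_timestamp)
```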
  • In step 138, the minimum cache size determined in step 136, and the processing time taken to originally compile (or otherwise obtain) the received query, are stored in memory or other storage for future reference. The time taken to originally compile the received query is retrieved from the query object 54 in the cache 50 (this time value was stored in the cache 50 in the query object 54 at time of compilation). This data indicates the minimum cache size needed to not incur the associated time cost data point (that time indicating the time to compile the query statement). If the cache 50 is reduced below this minimum cache size, then the time to compile is the time cost.
  • In some embodiments, every pair of time and cache size is stored as an individual data point. In other embodiments, an optimization can be used to reduce the storage requirements of the time and size data points. For example, each time datapoint can be added to a histogram whose bins represent cache size intervals. FIG. 4 b shows one example of such a histogram, described below.
  • After step 138, the process continues to step 126 where the compiled, executable query is executed as in standard query processing, and the process is complete at 128.
  • It should be noted that steps 136-138 are optional. In some embodiments, only the size and time data points determined from steps 130 and 132 are stored and used for tuning query cache 50 size (in such an embodiment, the processing time of compilation of a query may not need to be stored in the query cache 50, since it will not be needed for the step 138). Thus, only the data points for time saved when increasing the cache size would be available. However, in many embodiments, for small amounts of cache size changes, an estimate of the time cost for decreasing the query cache size by a certain amount can be approximated as the same as the equivalent time savings for increasing the query cache size by the same amount.
  • FIG. 4 a is a graph illustrating an example of a histogram 200 of the present invention for storing and presenting data points determined by steps 130-134 of the process of FIG. 3. Histogram 200 has a horizontal axis representing cache size, and a vertical axis representing processing time saved for the cache sizes. Time and size data points are aggregated by the histogram.
  • Over time, as queries are received and the query cache 50 is used, the histogram is filled with a number of data points resulting from multiple iterations of the steps 130-134 of the process of FIG. 3. The histogram can be used to assist the determination of the consequences of resizes of the query cache 50. In example histogram 200, each bucket along the horizontal axis represents a size increase of 10 megabytes over the current cache size (buckets can be other sizes in other histogram embodiments). When storing a time and size data point determined by steps 130 and 132, the time from step 130 is added to the time saved value of the bucket corresponding to the cache size determined in step 132. Thus, each bucket includes a count or sum of all the time saved that would occur if the query cache 50 were increased in size by an amount in the range of cache sizes represented by that bucket.
  • The processing time saved for a desired cache size tuning adjustment can be determined from the histogram by summing the time values of the appropriate histogram buckets for the amount of increase to the desired size. For example, if the current query cache size is 20 megabytes, and it is desired to know how much time is saved by increasing the cache size to 50 megabytes, then the increase is 30 megabytes, and the histogram buckets representing 0-10, 10-20, and 20-30 megabytes (buckets 202, 204, and 206) are selected and the time saved values of these buckets are summed to find the total time saved.
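The accumulate-and-query behavior of histogram 200 described above can be sketched as follows. This is an illustrative sketch assuming fixed-width buckets; the class and method names are not from the patent.

```python
class TimeSavedHistogram:
    """Sketch of histogram 200: fixed-width buckets of cache-size increases,
    each accumulating the processing time that would be saved by growing the
    cache by an amount in that bucket's range."""

    def __init__(self, bucket_width: int, num_buckets: int):
        self.bucket_width = bucket_width          # e.g. 10 (megabytes)
        self.buckets = [0.0] * num_buckets

    def add_point(self, size_increase: int, time_saved: float) -> None:
        """Steps 130-134: credit the bucket covering this size increase
        (overflows are clamped into the last bucket)."""
        idx = min(size_increase // self.bucket_width, len(self.buckets) - 1)
        self.buckets[idx] += time_saved

    def time_saved_for_increase(self, size_increase: int) -> float:
        """Sum all buckets up to the desired increase, e.g. a 30 MB growth
        sums the 0-10, 10-20 and 20-30 MB buckets."""
        n = min(-(-size_increase // self.bucket_width), len(self.buckets))
        return sum(self.buckets[:n])
```

For example, with 10 MB buckets, data points of 2.0 s at a 5 MB increase, 3.0 s at 15 MB, and 1.5 s at 25 MB yield a predicted 6.5 s saved for a 30 MB growth.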
  • In some embodiments, histogram data (or other stored data) can be reset to zero or shifted when any update of the cache size is implemented. This is because some of the stored data of the histogram is related to the old cache size and will no longer be pertinent to the new query cache size. For example, if the query cache size is increased by 10 megabytes, then the whole histogram can be reset to zero; or in other embodiments the pertinent buckets of the histogram can be zeroed and the remaining buckets shifted, e.g., the 10-20 megabyte bucket can become the 0-10 megabyte bucket.
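The bucket-shifting variant mentioned above can be sketched as a standalone function. The full-reset alternative is simply replacing every bucket with zero; the names here are illustrative.

```python
def shift_buckets_after_growth(buckets: list[float], bucket_width: int,
                               growth: int) -> list[float]:
    """After the cache is actually grown by `growth`, the buckets covered by
    that growth no longer describe a future increase; drop them, shift the
    remaining buckets down, and pad the tail with zeroed buckets (so the old
    10-20 MB bucket becomes the new 0-10 MB bucket, and so on)."""
    shift = growth // bucket_width
    return buckets[shift:] + [0.0] * min(shift, len(buckets))
```

This keeps data collected for larger increases still usable relative to the new, bigger cache size, instead of discarding everything on each resize.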
  • FIG. 4 b is a graph illustrating an example of a histogram 210 of the present invention for storing and presenting data points determined by steps 136 and 138 of the process of FIG. 3. Histogram 210 has a horizontal axis representing cache sizes, and a vertical axis representing the processing time cost if the cache 50 were reduced in size below the different minimum cache sizes. Time and size data points are aggregated by the histogram.
  • Over time, as the query cache 50 is used, the histogram is filled with a number of data points resulting from steps 136 and 138 of the process of FIG. 3. The histogram can be used to assist the determination of the consequences of resizes of the query cache 50. In histogram 210, each bucket along the horizontal axis represents a size decrease of 10 megabytes under the current cache size. When storing a time and size data point determined by steps 136 and 138, the time from step 138 is added to the time value of the bucket corresponding to the cache size determined in step 136. Thus, each bucket includes a count or sum of all the time cost that would be incurred if the query cache 50 were reduced in size by an amount in the range of cache sizes represented by that bucket.
  • The time cost for a cache size decrease can be determined from the histogram by summing the time values of the appropriate histogram buckets for the amount of size decrease from the current size. For example, if the current query cache size is 30 megabytes, and it is desired to know the time cost of decreasing the cache size to 10 megabytes, then the amount of decrease is 20 megabytes, and the histogram buckets representing 0-10 and 10-20 megabytes (buckets 212 and 214) are selected and the time values of these buckets are summed to find the total time cost (it should be noted that the "time cost" value is actually a negative value in histogram 210).
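The query against histogram 210 mirrors the time-saved lookup. A minimal sketch, assuming positive cost magnitudes are stored (the figure itself plots them as negative values) and fixed-width buckets:

```python
def time_cost_of_decrease(cost_buckets: list[float], bucket_width: int,
                          decrease: int) -> float:
    """Sum the time-cost buckets covered by shrinking the cache by `decrease`
    bytes, e.g. a 20 MB shrink sums the 0-10 and 10-20 MB buckets."""
    n = min(-(-decrease // bucket_width), len(cost_buckets))
    return sum(cost_buckets[:n])
```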
  • The present invention advantageously allows cost model data for a query cache 50 to be provided for use in tuning the size of the query cache 50. This cost model data can be useful for an administrator of the system who is tuning the query cache size to achieve better performance of the database system, by showing the time cost for any desired change in size of the query cache 50, i.e., how beneficial it is to add memory to the query cache 50, and how costly it is to take memory away from the query cache 50. The present invention is able to provide accurate cost model data based on the actual operation of the database system, rather than projected operation.
  • The cost model data can also be used in an automatic system that can self-tune cache sizes. Typically, there are several different memory areas used in a database system, each memory area used for a different purpose, such as the query cache, a data page cache, etc. Each of the memory areas may be in use concurrently and therefore in competition for memory. Since each of the memory areas uses memory in different ways, it can be difficult to decide which memory area is most in need of memory.
  • The cost model data of the present invention can be used in an automatic system that looks at all the different memory areas and caches with different cost models and makes adjustments to different caches based on total time costs and overall performance. For example, a query cache cost model of the present invention can be used with a data page cache cost model for a data page cache of the database system. If the query cache model suggests that 5 seconds may be saved if 10 megabytes of memory are added to the query cache, and the data page cache cost model suggests that only 1 second is lost if 10 megabytes of memory are taken from the data page cache, then the automated system can decide to give the query cache 50 the 10 megabytes from the data page cache, for a net gain of 4 seconds less processing time. One example of a cost model for data page memory tuning is described in U.S. Pat. No. 6,961,835, which uses a simulation area that can also serve as the simulation area 52 of the query cache cost model of the present invention.
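The arbitration described above can be sketched as follows. This is an illustrative decision rule only; the tuple layout and names are hypothetical, not from the patent.

```python
def net_benefit_of_transfer(saved_by_growing: float,
                            lost_by_shrinking: float) -> float:
    """Net processing time gained by moving a block of memory from a donor
    cache to a recipient cache (e.g. 5 s saved vs. 1 s lost -> 4 s net)."""
    return saved_by_growing - lost_by_shrinking

def choose_transfer(candidates):
    """Pick the (donor, recipient, time_lost, time_saved) pair with the best
    positive net benefit; return None if no transfer is worthwhile."""
    best = max(candidates, key=lambda c: c[3] - c[2], default=None)
    if best is not None and best[3] - best[2] > 0:
        return best
    return None
```

With the figures from the text, moving 10 MB from the data page cache (1 s lost) to the query cache (5 s saved) wins over a transfer that would lose more time than it saves.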
  • Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.

Claims (20)

1. A method for providing cost data for tuning a size of a query cache in a database system, the method comprising:
receiving a query;
checking the query cache to determine whether a cache hit or a cache miss occurs; and
determining a processing time saved by an avoidance of the cache miss and a cache size needed to avoid the cache miss, when the cache miss occurs.
2. The method of claim 1, wherein when the cache miss occurs, further comprising generating a token from the received query and checking whether the generated token matches a token stored in a simulation area.
3. The method of claim 2, wherein a plurality of tokens are stored in the simulation area, each of the tokens representing a query object previously evicted from the query cache.
4. The method of claim 3, wherein the determining of the cache size needed to avoid the cache miss is performed only when the generated token matches a token stored in the simulation area.
5. The method of claim 1, further comprising compiling the received query and storing the compiled executable query in the query cache.
6. The method of claim 5, further comprising:
evicting a query object from the query cache when the query cache has insufficient space for storing the compiled executable query;
creating a token representing the evicted query object; and
storing the created token in a simulation area.
7. The method of claim 5, wherein the time required to compile the received query is used as the processing time saved.
8. The method of claim 4, wherein each token stored in the simulation area is included in a stub that also includes a size of the query object represented by the token, and an identification number indicating the order in which the token was stored in the simulation area.
9. The method of claim 4, wherein the cache size needed to avoid the cache miss is determined by summing the sizes of all the query objects represented by the tokens stored in the simulation area, the summed query objects being the same age or older than the query object represented by the matching token.
10. The method of claim 1, further comprising, when the cache hit occurs, determining a minimum cache size needed to allow the cache hit and the processing time cost when the minimum cache size is not met.
11. The method of claim 1, further comprising storing the cache size and the processing time as data points for each received query when the cache miss occurs, such that the processing time saved can be determined for any desired query cache size by consulting the data points.
12. The method of claim 11, wherein the data points are stored as a histogram.
13. A computer program product comprising a computer readable medium including program instructions for providing cost data for tuning a size of a query cache in a database system and to be implemented by a computer system, the program instructions for:
receiving a query;
checking the query cache to determine whether a cache hit or a cache miss occurs; and
determining a processing time saved by an avoidance of the cache miss and a cache size needed to avoid the cache miss, when the cache miss occurs.
14. The computer program product of claim 13, wherein when the cache miss occurs, further comprising generating a token from the received query and checking whether the generated token matches a token stored in a simulation area.
15. The computer program product of claim 13, further comprising program instructions for:
evicting a query object from the query cache when the query cache has insufficient space for storing the compiled executable query;
creating a token representing the evicted query object;
storing the created token in a simulation area; and
compiling the received query and storing the compiled executable query in the query cache.
16. The computer program product of claim 14, wherein a plurality of tokens are stored in the simulation area, each of the tokens representing a query object that was previously evicted from the query cache, and wherein the cache size needed to avoid the cache miss is determined by summing the sizes of all the query objects represented by the tokens stored in the simulation area, the summed query objects being the same age or older than the query object represented by the matching token.
17. The computer program product of claim 13, further comprising program instructions for, when the cache hit occurs, determining a minimum cache size needed to allow the cache hit and the processing time cost when the minimum cache size is not met.
18. The computer program product of claim 13, further comprising program instructions for storing the cache size and the processing time as data points for each received query when the cache miss occurs, such that the processing time saved can be determined for any desired query cache size by consulting the data points.
19. A system for providing cost data for tuning a size of a query cache in a database system, the system comprising:
a query cache provided in memory of the database system;
a simulation area provided in the memory of the database system; and
at least one processor in communication with the query cache and the simulation area, the at least one processor operative to check the query cache to determine whether a cache hit or a cache miss occurs, and determine the processing time saved by the avoidance of the cache miss and a cache size needed to avoid the cache miss when the cache miss occurs.
20. The system of claim 19, wherein when the cache miss occurs, further comprising generating a token from the received query and checking whether the generated token matches a token stored in a simulation area, wherein a plurality of tokens are stored in the simulation area, each of the tokens representing a query object previously evicted from the query cache when a new query object needed to be stored in the query cache, and wherein the cache size needed to avoid the cache miss is determined by summing the sizes of all query objects represented by the tokens stored in the simulation area, the summed query objects being the same age or older than the query object represented by the matching token.
US11/278,373 2006-03-31 2006-03-31 Providing cost model data for tuning of query cache memory in databases Expired - Fee Related US7502775B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/278,373 US7502775B2 (en) 2006-03-31 2006-03-31 Providing cost model data for tuning of query cache memory in databases

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/278,373 US7502775B2 (en) 2006-03-31 2006-03-31 Providing cost model data for tuning of query cache memory in databases

Publications (2)

Publication Number Publication Date
US20070233638A1 true US20070233638A1 (en) 2007-10-04
US7502775B2 US7502775B2 (en) 2009-03-10

Family

ID=38560584

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/278,373 Expired - Fee Related US7502775B2 (en) 2006-03-31 2006-03-31 Providing cost model data for tuning of query cache memory in databases

Country Status (1)

Country Link
US (1) US7502775B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10642735B2 (en) 2013-03-15 2020-05-05 Oracle International Corporation Statement cache auto-tuning
JP6160419B2 (en) * 2013-10-02 2017-07-12 富士通株式会社 Information processing program, information processing apparatus, and information processing method
US9558229B2 (en) * 2013-11-26 2017-01-31 Sap Se Transaction private log buffering for high performance of transaction processing
US10585707B2 (en) 2017-06-27 2020-03-10 International Business Machines Corporation Database resource scaling

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5822749A (en) * 1994-07-12 1998-10-13 Sybase, Inc. Database system with methods for improving query performance with cache optimization strategies
US5668987A (en) * 1995-08-31 1997-09-16 Sybase, Inc. Database system with subquery optimizer
US6301641B1 (en) * 1997-02-27 2001-10-09 U.S. Philips Corporation Method for reducing the frequency of cache misses in a computer
US6351724B1 (en) * 1997-12-19 2002-02-26 Advanced Micro Devices, Inc. Apparatus and method for monitoring the performance of a microprocessor
US6742084B1 (en) * 1998-05-15 2004-05-25 Storage Technology Corporation Caching method for selecting data blocks for removal from cache based on recall probability and size
US6691080B1 (en) * 1999-03-23 2004-02-10 Kabushiki Kaisha Toshiba Task execution time estimating method
US6282613B1 (en) * 1999-04-30 2001-08-28 International Business Machines Corporation Very efficient technique for dynamically tracking locality of a reference
US6446062B1 (en) * 1999-12-23 2002-09-03 Bull Hn Information Systems Inc. Method and apparatus for improving the performance of a generated code cache search operation through the use of static key values
US6493810B1 (en) * 2000-04-28 2002-12-10 Microsoft Corporation Method and system for allocating cache memory for a network database service
US20040083208A1 (en) * 2000-12-28 2004-04-29 Kroening James L. Data image cache used in testing
US6952664B1 (en) * 2001-04-13 2005-10-04 Oracle International Corp. System and method for predicting cache performance
US20040193827A1 (en) * 2003-03-31 2004-09-30 Kazuhiko Mogi Computer system for managing performances of storage apparatus and performance management method of the computer system
US20050071599A1 (en) * 2003-09-30 2005-03-31 Modha Dharmendra Shantilal Storage system and method for dynamically allocating cache space among different workload classes
US20050160225A1 (en) * 2004-01-16 2005-07-21 International Business Machines Corporation Self-tuning cache
US7284093B2 (en) * 2004-01-16 2007-10-16 International Business Machines Corporation Self-tuning cache
US20050273310A1 (en) * 2004-06-03 2005-12-08 Newburn Chris J Enhancements to performance monitoring architecture for critical path-based analysis
US20070186045A1 (en) * 2004-07-23 2007-08-09 Shannon Christopher J Cache eviction technique for inclusive cache systems
US20060098675A1 (en) * 2004-11-05 2006-05-11 Michitaka Okuno Traffic control method for network equipment
US20060294192A1 (en) * 2005-06-27 2006-12-28 Yahoo! Inc. Access control systems and methods using visibility tokens with automatic propagation
US20070203890A1 (en) * 2006-02-28 2007-08-30 Microsoft Corporation Serving cached query results based on a query portion

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7702699B2 (en) * 2006-05-31 2010-04-20 Oracle America, Inc. Dynamic data stream histograms for large ranges
US20070282888A1 (en) * 2006-05-31 2007-12-06 Sun Microsystems, Inc. Dynamic data stream histograms for large ranges
US20080126315A1 (en) * 2006-09-22 2008-05-29 Siemens Energy & Automation, Inc. System controller database manager with query stacking and caching for electrical distribution system
US7765202B2 (en) * 2006-09-22 2010-07-27 Siemens Industry, Inc. System controller database manager with query stacking and caching for electrical distribution system
US11392538B2 (en) * 2010-09-30 2022-07-19 Commvault Systems, Inc. Archiving data objects using secondary copies
US11768800B2 (en) 2010-09-30 2023-09-26 Commvault Systems, Inc. Archiving data objects using secondary copies
US20150088964A1 (en) * 2013-09-25 2015-03-26 Edgecast Networks, Inc. Instantaneous non-blocking content purging in a distributed platform
US9413842B2 (en) * 2013-09-25 2016-08-09 Verizon Digital Media Services Inc. Instantaneous non-blocking content purging in a distributed platform
US10367910B2 (en) * 2013-09-25 2019-07-30 Verizon Digital Media Services Inc. Instantaneous non-blocking content purging in a distributed platform
US10152339B1 (en) * 2014-06-25 2018-12-11 EMC IP Holding Company LLC Methods and apparatus for server caching simulator
US11282095B2 (en) * 2016-01-22 2022-03-22 Walmart Apollo, Llc Systems and methods of enabling forecasting
US11334565B1 (en) * 2016-10-28 2022-05-17 Intuit, Inc. System to convert natural-language financial questions into database queries
US11809419B2 (en) 2016-10-28 2023-11-07 Intuit, Inc. System to convert natural-language financial questions into database queries
EP4123461A1 (en) * 2021-07-20 2023-01-25 Sap Se Intelligent query plan cache size management
US20230021502A1 (en) * 2021-07-20 2023-01-26 Sap Se Intelligent query plan cache size management
US11567938B1 (en) 2021-07-20 2023-01-31 Sap Se Intelligent query plan cache size management
JP7433281B2 (en) 2021-07-20 2024-02-19 エスアーペー エスエー Intelligent query plan cache size management

Also Published As

Publication number Publication date
US7502775B2 (en) 2009-03-10

Similar Documents

Publication Publication Date Title
US7502775B2 (en) Providing cost model data for tuning of query cache memory in databases
US10372706B2 (en) Tracking and maintaining expression statistics across database queries
US7031958B2 (en) Patterned based query optimization
US7130838B2 (en) Query optimization via a partitioned environment
US7917502B2 (en) Optimized collection of just-in-time statistics for database query optimization
US9063982B2 (en) Dynamically associating different query execution strategies with selective portions of a database table
US8812481B2 (en) Management of interesting database statistics
US10387411B2 (en) Determining a density of a key value referenced in a database query over a range of rows
US11561973B2 (en) Statistics based query transformation
US8024321B2 (en) Dynamic look ahead predicate generation
US8224807B2 (en) Enhanced utilization of query optimization
US7774318B2 (en) Method and system for fast deletion of database information
Nagel et al. Recycling in pipelined query evaluation
US20060074872A1 (en) Adaptive database buffer memory management using dynamic SQL statement cache statistics
US20150317359A1 (en) Updating statistics in distributed databases
US20080183684A1 (en) Caching an Access Plan for a Query
CN107783985B (en) Distributed database query method, device and management system
US20050160093A1 (en) Generating statistics on text pattern matching predicates for access planning
US20190370235A1 (en) Information Processing Method and Apparatus
EP3940547A1 (en) Workload aware data partitioning
US20080005077A1 (en) Encoded version columns optimized for current version access
US20170262503A1 (en) Fusing global reporting aggregate computation with the underlying operation in the query tree for efficient evaluation
US20060122963A1 (en) System and method for performing a data uniqueness check in a sorted data set
US8548980B2 (en) Accelerating queries based on exact knowledge of specific rows satisfying local conditions
US11507575B2 (en) Complex query rewriting

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CARROLL, MATTHEW JAMES;GARCIA-ARELLANO, CHRISTIAN MARCELO;LIGHTSTONE, SAM SAMPSON;AND OTHERS;REEL/FRAME:017896/0597;SIGNING DATES FROM 20060310 TO 20060330

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 4

SULP Surcharge for late payment
REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20170310