US20040002954A1 - Compressing database workloads - Google Patents

Compressing database workloads

Info

Publication number
US20040002954A1
US20040002954A1 (application US10/180,667)
Authority
US
United States
Prior art keywords: workload, statements, distance, compressed, application
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/180,667
Other versions: US6912547B2
Inventor
Surajit Chaudhuri
Ashish Gupta
Vivek Narasayya
Sanjay Agrawal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Individual
Priority to US10/180,667 (granted as US6912547B2)
Assigned to MICROSOFT CORPORATION (assignors: GUPTA, ASHISH KUMAR; AGRAWAL, SANJAY; CHAUDHURI, SURAJIT; NARASAYYA, VIVEK)
Publication of US20040002954A1
Priority to US11/008,335 (US7293036B2)
Application granted
Publication of US6912547B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC (assignor: MICROSOFT CORPORATION)
Adjusted expiration
Legal status: Expired - Lifetime (current)

Classifications

    • G: Physics
      • G06: Computing; calculating or counting
        • G06F: Electric digital data processing
          • G06F 16/00: Information retrieval; database structures therefor; file system structures therefor
            • G06F 16/20: Information retrieval of structured data, e.g. relational data
              • G06F 16/21: Design, administration or maintenance of databases
                • G06F 16/217: Database tuning
              • G06F 16/22: Indexing; data structures therefor; storage structures
              • G06F 16/24: Querying
                • G06F 16/245: Query processing
                  • G06F 16/2458: Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
                    • G06F 16/2462: Approximate or statistical queries
    • Y: General tagging of new technological developments; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
      • Y10S 706/00: Data processing: artificial intelligence
        • Y10S 706/902: Application using AI with detail of the AI system
          • Y10S 706/911: Nonmedical diagnostics
            • Y10S 706/917: Communication
      • Y10S 707/00: Data processing: database and file management or data structures
        • Y10S 707/99931: Database or file accessing
          • Y10S 707/99932: Access augmentation or optimizing
        • Y10S 707/99941: Database schema or data structure
          • Y10S 707/99942: Manipulating data structure, e.g. compression, compaction, compilation
          • Y10S 707/99944: Object-oriented database structure
            • Y10S 707/99945: Object-oriented database structure processing

Definitions

  • Let A be an application that takes as input a workload W.
  • Let DistanceA(qi, qj) be a distance function for application A such that, ∀ qi, qj ∈ W, the function returns an estimate of the loss in quality for statement qi if it is discarded but qj is retained in the compressed workload.
  • Δ, which is the limit on the maximum allowable loss in quality, is specified in absolute terms and has the same units as the Distance function.
  • Variations are possible by replacing the min in the quality constraint with other functions such as max or average. For example, using max makes the worst-case assumption that the loss in quality for qi could be as large as the distance to the "furthest" statement from it in the compressed workload. Thus, if max is used, one would expect less degradation in the quality of the application, but also less compression of the workload, since the threshold Δ is presumably reached more quickly.
  • The exemplary process for compression of the workload disclosed herein makes no assumptions about properties of the Distance function. For example, it does not require that the Distance function be symmetric or obey the triangle inequality.
  • In general, DistanceA(qi, qj) is asymmetric, since it measures the distance with respect to {qi}, the discarded query. This generality is important since, for the specific applications of workload compression examined, results indicate these properties are not satisfied. As shown below, the distance-based workload compression problem is provably hard when the Distance function can return arbitrary values. It is emphasized that the exact definition of Distance(qi, qj) is application dependent, since it must capture the impact on the quality of the result produced by the application when qi is discarded and statement qj is retained in the compressed workload. Table 1 (below) summarizes the distance-based workload compression problem for the two applications of workload compression described previously.
  • The distance-based compression problem defined previously aims to minimize the size of the set W′ so that the constraint Σqi∈W−W′ minqj∈W′ (wi · Distance(qi, qj)) ≤ Δ is satisfied, restated below.
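Restated in standard notation, with wi the statement weights and Δ the quality budget, the optimization problem the bullets above describe is (a reconstruction from the surrounding definitions, not a verbatim formula from the patent):

```latex
\min_{W' \subseteq W} \; |W'|
\quad \text{subject to} \quad
\sum_{q_i \in W \setminus W'} \; \min_{q_j \in W'} \; w_i \cdot \mathrm{Distance}(q_i, q_j) \;\le\; \Delta
```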
  • Finding an optimal solution to this problem is NP-Hard.
  • The proof uses a reduction from the decision version of the Minimum k-Median problem, which is known to be NP-Complete.
  • Lemma 1: The problem of distance-based workload compression is NP-Hard if the Distance function can return arbitrary values.
  • An overview of an architecture of software components for implementing an exemplary embodiment of the invention is shown in FIG. 4.
  • The input to the workload compression module 110 is a workload W and a constraint Δ, and the output is a compressed workload W′.
  • An important part of the system is a Search module 112 (described below) that finds the compressed workload W′.
  • The exemplary embodiment finds the smallest (cardinality) subset of the given workload W that satisfies the given constraint.
  • Two alternative search processes are disclosed (besides random sampling).
  • The search process consults a Distance function module 114 (described below), but makes no assumptions about properties of the Distance function; in other words, the Distance function module 114 can return arbitrary values.
  • The Distance function serves as the basis for estimating the loss in quality due to workload compression and is application specific.
  • The quality of the compressed workload depends on the accuracy and efficiency of the Distance function. Efficiency in computing the Distance function is important since it may be invoked many times for a large workload by the Search component.
  • The accuracy of the Distance function is also important, since overestimation of the loss in quality achieves less compression of the workload than ideally possible, whereas underestimation can cause an unacceptable result when the compressed workload is used by the application.
  • An example of how to make this trade-off of efficiency vs. accuracy is presented in designing the Distance functions for each of the two applications: index selection and aggregation query processing.
  • The exemplary process can associate a weight wi with each statement qi in the workload, reflecting the relative importance of that statement.
  • The weight of a statement can significantly influence the result of the application. For example, in index selection, the higher the weight of a statement, the more likely it is that the indexes suitable for that statement are part of the final result.
  • When a statement is discarded by workload compression, one approach is to add the weight of the discarded statement to the statement that is "closest" to it (as defined by the Distance function) in the compressed workload.
  • Simply adding the weight can, however, be inappropriate.
  • The exemplary embodiment therefore includes a post-processing step, wherein an Adjust Weights module 120 (described below) uses an application specific AdjustWeight(qi, qj) function module 122 to adjust the weight of each statement in the compressed workload. Note that for certain applications, the solution of simply adding weights to the nearest retained statement may be appropriate. Finally, as an alternative embodiment, the weights are adjusted during the search step used to compress the workload.
  • A search component is responsible for finding a subset of W of smallest cardinality satisfying the constraint that the loss in quality is less than the given Δ.
  • Disclosed herein are two alternative search processes for solving distance-based workload compression. A first search process is based on the K-Medoid clustering process and the second is a new greedy search process. A comparison of these two search processes is provided below.
  • The Minimum k-Median problem is a clustering problem.
  • The first search process adapts the K-Medoid clustering process. See Han, J., and Kamber, M., Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2001. This process is a building block for constructing an algorithm for distance-based compression by performing binary search on the size of the workload W.
  • The pseudo code for this search process (Listing 1), which calls a modified K-Medoid based search process (Listing 2), is only partially preserved in this text; the sketch following the next two bullets illustrates the scheme.
  • The process of Listing 2 will terminate in a finite number of iterations of Steps 2-3. Also, the solution obtained by this process is (at least) a local optimum.
  • The running time of distance-based compression of the workload depends primarily on the number of invocations of the Distance function. As one can see from the pseudo code, the process of Listing 2 invokes the Distance function once per (statement, medoid) pair in each clustering iteration, i.e., O(|W|·k) invocations per iteration.
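The following Python sketch illustrates the scheme of Listings 1 and 2: a K-medoid style clustering adapted to an arbitrary Distance function, wrapped in a binary search on the number of retained statements. The names (`distance`, `weights`, `delta`) are illustrative assumptions, not the patent's identifiers, and statements are assumed to be hashable values.

```python
import random

def wc_kmed(workload, k, distance, weights, max_iters=50):
    """K-medoid style clustering (cf. Listing 2): pick k medoids, assign
    every statement to its nearest medoid, then re-center each cluster on
    the member minimizing the weighted within-cluster distance."""
    medoids = random.sample(workload, k)
    for _ in range(max_iters):
        clusters = {m: [] for m in medoids}
        for q in workload:
            clusters[min(medoids, key=lambda m: distance(q, m))].append(q)
        new_medoids = [
            min(c, key=lambda m: sum(weights[q] * distance(q, m) for q in c))
            for c in clusters.values() if c
        ]
        if set(new_medoids) == set(medoids):
            break  # local optimum reached
        medoids = new_medoids
    return medoids

def loss(workload, kept, distance, weights):
    """Weighted loss in quality if only `kept` is retained (the left-hand
    side of the quality constraint)."""
    return sum(
        weights[q] * min(distance(q, m) for m in kept)
        for q in workload if q not in kept
    )

def compress_by_clustering(workload, distance, weights, delta):
    """Binary search on the compressed workload size k (cf. Listing 1)."""
    lo, hi, best = 1, len(workload), list(workload)
    while lo <= hi:
        k = (lo + hi) // 2
        kept = wc_kmed(workload, k, distance, weights)
        if loss(workload, kept, distance, weights) <= delta:
            best, hi = kept, k - 1  # constraint met: try fewer statements
        else:
            lo = k + 1              # too lossy: retain more statements
    return best
```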
  • The process of Listing 3 (the ALL-PAIRS greedy search) maintains three sets: Keep, Prune and Candidates.
  • Keep consists of statements that are definitely going to be retained in the compressed workload W′.
  • Prune consists of the statements that are currently not in W′, and Candidates consists of the statements whose outcome has not yet been decided.
  • For each statement in Candidates, the process computes the distance to the closest statement (as defined by the Distance function) that has not been pruned (Step 4).
  • The statement for which this value is the smallest (Step 5) is considered next for pruning. Prior to actually pruning this statement, however, the process verifies that removal of this statement does not violate the constraint Δ, since this statement may have been the closest statement to one or more statements that were pruned previously. This check is performed in Step 6.
  • Upon termination, the statements in the sets Candidates and Keep constitute the compressed workload W′.
  • ALL-PAIRS performs O(|W|²) invocations of the Distance function, since in the worst case it compares every pair of statements in W.
  • Thus, the process outlined in Listings 1 and 2 will scale better with workload size compared to the ALL-PAIRS process.
  • Unlike the clustering process, however, the ALL-PAIRS process cannot guarantee that the solution obtained is a local optimum for the distance-based workload compression problem.
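A minimal Python sketch of the ALL-PAIRS greedy search described above, with the same illustrative names as before; a production version would cache the pairwise distances rather than recompute them (which is where the O(|W|²) Distance invocations come from):

```python
def compress_all_pairs(workload, distance, weights, delta):
    """Greedy search (cf. Listing 3): repeatedly try to prune the candidate
    whose nearest un-pruned statement is closest, keeping the total
    weighted loss of all pruned statements within delta."""
    keep, prune = set(), set()
    candidates = set(workload)
    while candidates:
        retained = keep | candidates
        # Step 4: distance from each candidate to its closest un-pruned peer.
        def nearest(q):
            return min((distance(q, r) for r in retained if r != q),
                       default=float("inf"))
        # Step 5: the candidate with the smallest such distance is the
        # next pruning victim.
        victim = min(candidates, key=nearest)
        candidates.discard(victim)
        # Step 6: verify that pruning the victim does not violate the
        # constraint, since it may have been the closest statement to
        # previously pruned statements.
        trial = retained - {victim}
        total_loss = sum(
            weights[q] * min((distance(q, r) for r in trial),
                             default=float("inf"))
            for q in prune | {victim}
        )
        if total_loss <= delta:
            prune.add(victim)
        else:
            keep.add(victim)
    return keep  # Keep (Candidates is now empty) constitutes W'
```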
  • An alternative technique for improving the scalability of an application that consumes a workload W is to use sampling to select a subset W′ of W.
  • The simplest of these schemes is uniform random sampling, where each statement in W has an equal probability of being selected.
  • It is believed this approach can result in poor quality workload compression due to the following problems: (a) uniform sampling ignores valuable information about statements in the workload and therefore misses opportunities for more compression; (b) when the sampling fraction is small, certain small "clusters" of important statements may be discarded altogether and never make it into the compressed workload, which follows from a well known statistical result. Testing of the invention therefore used a stratified sampling based process, which partitions the workload into strata and then samples uniformly within each stratum.
  • The process starts with a sampling fraction f0 and verifies whether the constraint Δ is satisfied for that sample; note that this step requires invoking the Distance function. If the constraint is not satisfied, the process continues by increasing the sampling fraction by a factor m>1 and sampling an additional set of statements. The process terminates when it finds a sample that satisfies the constraint, as sketched below.
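A sketch of that adaptive loop, under the simplifying assumption of a single stratum (the patent's variant samples within each stratum); `f0` and `m` match the quantities named above:

```python
import math
import random

def compress_by_sampling(workload, distance, weights, delta, f0=0.05, m=2.0):
    """Grow a random sample of the workload until the weighted-loss
    constraint delta is satisfied by the sampled subset."""
    shuffled = list(workload)
    random.shuffle(shuffled)  # a random prefix is then a uniform sample
    n, f = len(shuffled), f0
    while True:
        k = min(n, max(1, math.ceil(f * n)))
        sample, rest = shuffled[:k], shuffled[k:]
        current_loss = sum(
            weights[q] * min(distance(q, s) for s in sample)
            for q in rest
        )
        if current_loss <= delta or k == n:
            return sample
        f *= m  # constraint violated: enlarge the sampling fraction
```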
  • An important component of distance-based workload compression is the computation of Distance(qi, qj) for any pair of statements qi, qj in the given workload W.
  • The function Distance(qi, qj) measures the expected loss in quality of the result of the application on workload {qi} if the workload {qj} is provided as input to the application.
  • A judicious trade-off between accurate and efficient computation of the Distance function is important for ensuring the success of workload compression.
  • An exact method for computing Distance(qi, qj) is: (i) run the application on workload {qi} and compute the quality of the result for {qi}; (ii) run the application on {qj} and compute the quality of the result for {qi}; and (iii) take the difference in quality between Steps (i) and (ii).
  • Such a method is inefficient since it requires running the application, and hence negates the very purpose of workload compression.
  • The challenge in developing an appropriate Distance function for an application is to estimate this loss in quality efficiently.
  • Two guidelines are followed in developing Distance functions for the two applications described herein: index selection and approximate answering of aggregation queries (AQP). These guidelines are broadly applicable in the context of other applications as well.
  • The first guideline is driven by the requirement that computation of the Distance function be efficient.
  • A core set of information about each statement in the workload is identified that can be derived with low overhead, and this information is relied upon for computing Distance.
  • The distance function uses information that can be derived from the SQL parser and a selectivity estimation module. This information includes: (a) the type of the statement (SELECT, INSERT, UPDATE, DELETE); (b) the structure of the query, e.g., tables referenced, projection columns, selection and join predicates; (c) for each predicate, the selectivity of the predicate (computed by using a selectivity estimation module based on available statistics in the database); and (d) if available, the (estimated) cost of executing the statement. This cost information can be obtained either via one invocation of the query optimizer (e.g., in Microsoft SQL Server using the Showplan interface, or in IBM DB2 using the EXPLAIN mode) or from previously recorded information about the actual execution time of the statement.
  • A second guideline is to leverage the technique of logically partitioning the workload.
  • The idea is that for any two queries qi, qj belonging to different partitions, Distance(qi, qj) between the two queries is ∞.
  • Partitioning can be incorporated within the Distance(qi, qj) function by generating a "signature" for each statement and returning ∞ if the two signatures are not identical.
  • Each statement with a distinct signature belongs to a different logical partition.
  • In the index selection application, when two statements reference disjoint sets of tables, it is reasonable to separate them into different partitions, since indexes that are useful for one statement cannot be useful for the other.
  • There are two benefits of partitioning the workload. First, it provides a way to ensure that at least a minimum number of statements (i.e., at least one statement per partition) will be retained in the compressed workload. Second, since the signature of a query can typically be computed very efficiently compared to the more careful analysis that goes into the rest of the Distance function, partitioning serves as a "shortcut" that reduces the computational overhead of invoking the Distance function. Finally, note that the signature used to partition the workload is application dependent, and is therefore incorporated into the Distance function.
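The signature idea can be folded into the Distance function as a cheap pre-check, sketched below; `tables` and `join_columns` stand in for whatever the SQL parser exposes and are assumptions for illustration:

```python
def index_selection_signature(stmt):
    """Hypothetical signature for index selection: table IDs referenced
    plus (table, column) IDs appearing in join predicates."""
    return (frozenset(stmt.tables), frozenset(stmt.join_columns))

def partitioned_distance(qi, qj, signature, base_distance):
    """Return infinity across partitions; only statements in the same
    partition pay for the full application-specific analysis."""
    if signature(qi) != signature(qj):
        return float("inf")
    return base_distance(qi, qj)
```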
  • For index selection, the workload W consists of SELECT, INSERT, UPDATE and DELETE statements, where the SELECT statements are limited to single-block Select, Project, Join (SPJ) queries with Group-By, Aggregation and Order-By.
  • The Distance function for queries (i.e., SELECT statements) is described first.
  • For index selection, the Distance(qi, qj) function measures the expected loss of benefit for {qi} if the set of indexes recommended for {qj} were used to answer the query instead of the set of indexes recommended for {qi} itself. The goal is to estimate this expected loss of benefit efficiently without actually invoking the index selection application.
  • A first step in the exemplary Distance function is to detect whether the two queries belong to the same partition. If they do not, the Distance function returns the value ∞.
  • The purpose of partitioning is to logically place queries that are "far apart" into disjoint partitions. In the context of index selection, two queries can be considered far apart if there is little or no overlap in the set of indexes that would be chosen for each query.
  • The process partitions the workload on the basis of the tables accessed in each query and the join predicates (if any). This is done by generating a signature for each query that consists of the table IDs referenced in the query and the (table, column) IDs accessed in the join predicates.
  • The straightforward partitioning scheme outlined above may still place queries that are "far apart" into the same partition.
  • For example, both Q1 and Q2 will be assigned to the same partition because they both have the same signature.
  • The queries are nevertheless far apart from the point of view of indexes that are appropriate for each query. Due to the respective selectivities of the predicates on age, for Q1 an index on column age is likely to be very useful, whereas for Q2 an index on column age is likely to be of no use.
  • Therefore, the process further splits each partition on the basis of selectivity information. For a single-table query, it computes the joint selectivity of all the predicates.
  • In this way, the process can generate at most two partitions for single-table queries.
  • The process uses an extension of this partitioning scheme for the case of multi-table queries. Under this scheme, all queries belonging to a t-table partition (i.e., a partition with queries accessing those t tables) are split into at most 2^t partitions (some of which may be empty). Although the number of such partitions can, in principle, become large, experience indicates that over a variety of large workloads (real and synthetic), the number of partitions grew very slowly with the number of tables.
  • The distance between two queries is computed from four values: Dist-Sel(qi, qj), Dist-Reqd(qi, qj), Dist-GB(qi, qj) and Dist-OB(qi, qj).
  • Dist-Sel computes a distance by examining only the columns in the Selection category of the queries qi and qj, and thereby tries to capture the difference in performance category (a).
  • The exemplary process defines Distance(qi, qj) as the maximum of the four values. Intuitively, by considering the maximum value, the process adopts a conservative approach that considers two queries as "close" only if they are "close" in each of these categories.
  • Dist-Sel is based on the intuition that the column (or sequence of columns) in the predicate with the lowest selectivity (low selectivity means a predicate that selects few records from the table) is the one that will be picked to be indexed by the index selection tool for that query. In other words, while considering 1-column indexes, the process assumes that the column with the smallest selectivity will be picked. When considering 2-column indexes, it assumes that the sequence of two columns with the two lowest selectivities will be picked. The following example illustrates how Dist-Sel(qi, qj) is computed, given these predicate selectivities:

          c1    c2    c3
    Q1    0.1   0.3   0.2
    Q2    0.5   0.2   0.4
  • Here, Cost({Q1}, {}) corresponds to a scan of the entire table (i.e., no indexes are present), and the index selection tool would pick an index I1 on column c1 for Q1 (selectivity 0.1) and an index I2 on column c2 for Q2 (selectivity 0.2).
  • The intuition is that the presence of index I2 alone would require scanning 30% of the base relation for answering Q1, whereas the presence of index I1 would require scanning only 10% of it.
  • For multi-table queries, the process performs the same analysis as in the above example on a per-table basis and then takes a weighted average of the table-wise Dist-Sel(qi, qj) values, the weight being the size of the table in pages.
  • The process uses the size of the table as the weight because, for the same selectivity value, the amount of I/O required to answer the query is proportional to the size of the table, as sketched below.
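For 1-column indexes, Dist-Sel on a single table reduces to comparing the best selectivity available to qi against the selectivity qi gets from the column that would be indexed for qj. A sketch under that assumption (the selectivity maps and the full-scan cost are inputs the patent derives from the optimizer and database statistics):

```python
def dist_sel_single_table(sel_i, sel_j, full_scan_cost):
    """sel_i / sel_j map predicate columns to selectivities for qi / qj.
    Returns the extra fraction of the table qi must scan when only qj's
    best 1-column index exists, scaled by the full-scan cost."""
    best_own = min(sel_i.values())         # column that would be indexed for qi
    col_for_j = min(sel_j, key=sel_j.get)  # column that would be indexed for qj
    via_j = sel_i.get(col_for_j, 1.0)      # no predicate on that column: full scan
    return max(0.0, via_j - best_own) * full_scan_cost

# Worked example from the selectivity table above:
# Q1 = {"c1": 0.1, "c2": 0.3, "c3": 0.2}, Q2 = {"c1": 0.5, "c2": 0.2, "c3": 0.4}
# Q2's index falls on c2, so Dist-Sel(Q1, Q2) = (0.3 - 0.1) * Cost({Q1}, {}).
```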
  • Dist-GB & Dist-OB: First consider Dist-OB(qi, qj). The process assumes a model of no partial sort benefit, i.e., an index (c1, c2, c3) is of no use in answering a query with ORDER BY c1, c3, c2, even though there is some overlap between the prefix of the index and the prefix of the order-by clause. Under this simplifying assumption, Dist-OB(qi, qj) is 0 if the ORDER BY clause of qi is a leading prefix of the ORDER BY clause of qj.
  • Otherwise, Dist-OB(qi, qj) is equal to Cost({qi}, {}).
  • Computation of Dist-GB(qi, qj) is done similarly, except that the process requires the group-by columns of qi to be a subset (rather than a leading prefix) of the group-by columns of qj in order for the index chosen for qj to be applicable to qi.
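Both checks are simple list and set comparisons; a sketch under the stated no-partial-sort-benefit model:

```python
def dist_ob(order_by_i, order_by_j, full_scan_cost):
    """0 if qi's ORDER BY column list is a leading prefix of qj's,
    else the full-scan cost Cost({qi}, {})."""
    if order_by_i == order_by_j[:len(order_by_i)]:
        return 0.0
    return full_scan_cost

def dist_gb(group_by_i, group_by_j, full_scan_cost):
    """Analogous, but qi's GROUP BY columns need only be a subset
    (not a prefix) of qj's."""
    if set(group_by_i) <= set(group_by_j):
        return 0.0
    return full_scan_cost
```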
  • For update statements, where each statement si consists of a query part qi and an update part ui, the process defines Distance(si, sj) between two such statements as Distance(qi, qj) only if AdjustWeight(qi, qj) is (approximately) equal to AdjustWeight(ui, uj) (see below for the definition of the AdjustWeight function), and as ∞ otherwise.
  • For approximate query processing (AQP), the preprocessing step consumes a workload W and produces as output samples of one or more tables in the database.
  • The workload consists of aggregation queries containing COUNT/SUM aggregates, possibly with selections, GROUP BY and foreign-key joins.
  • The preprocessing step chooses the sample so as to minimize the average relative error of answering queries in W over the sample as compared to answering the queries on the full table. Note that for GROUP BY queries the relative error is averaged over all groups of the query; missing groups are assigned a relative error of 1.
  • Distance(qi, qj) for AQP attempts to estimate the relative error in answering qi if it is pruned but qj is part of the compressed workload.
  • For AQP, the signature of a query is defined by the subset of tables referenced in the query. If both queries belong to the same partition, they are analyzed based on whether they have selections or GROUP BY.
  • If both queries are pure selection queries, i.e., do not contain GROUP BY, the distance function Distance(qi, qj) is defined as the fraction of records selected by qi that are not selected by qj.
  • The intuition behind this definition is that the error in answering a pure selection query qi depends on the number of records in the sample that are selected by qi. If qj is used to determine the sample, then the error for qi increases as the overlap of qi with qj decreases. Note that if the database engine supports the DIFFERENCE operator, then this fraction can be estimated by invoking the query optimizer. Otherwise, this metric must be estimated based on analysis of selection predicates and using selectivity estimates.
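A sketch of that fraction computed by direct evaluation of the two selection predicates over rows; treating predicates as callables is an assumption for illustration (in practice the patent suggests the engine's DIFFERENCE support or selectivity estimates instead):

```python
def aqp_selection_distance(pred_i, pred_j, rows):
    """Fraction of rows selected by qi that qj does not also select."""
    selected_i = [r for r in rows if pred_i(r)]
    if not selected_i:
        return 0.0
    missed = sum(1 for r in selected_i if not pred_j(r))
    return missed / len(selected_i)
```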
  • Recall that a workload is a set of statements where each statement has an associated weight wi.
  • The weight of a statement signifies the importance of the statement in the workload, and plays a role in determining the optimization function of the application.
  • For example, index selection tools typically optimize a weighted function of the (estimated) execution cost of statements in the workload.
  • Thus, an index that is useful for a query with a large weight is more likely to be chosen by the tool.
  • The Adjust Weights software module 120 finds, for every pruned statement qi, the statement qj nearest to it in the compressed workload (in terms of the Distance function) and adjusts the weight of qj.
  • As illustrated in Example 3, a naive approach of simply adding the weight of the pruned statement to the nearest retained statement can result in poor quality of the compressed workload.
  • Instead, an application specific AdjustWeight(qi, qj) function serves the purpose of specifying the amount by which the weight of a retained statement qj should be incremented if qi is pruned and qj is the closest statement to qi.
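A sketch of the post-processing pass, with `adjust_weight` standing in for the application specific AdjustWeight(qi, qj) function:

```python
def adjust_weights(workload, kept, weights, distance, adjust_weight):
    """Fold every pruned statement's influence into its nearest retained
    statement via the application-specific increment."""
    new_weights = {q: weights[q] for q in kept}
    for qi in workload:
        if qi in new_weights:
            continue  # retained statements keep their own weight
        qj = min(kept, key=lambda r: distance(qi, r))  # nearest retained
        new_weights[qj] += adjust_weight(qi, qj)
    return new_weights
```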
  • An exemplary data processing system for practicing the disclosed invention includes a general purpose computing device in the form of a conventional computer 20, including one or more processing units 21, a system memory 22, and a system bus 23 that couples various system components including the system memory to the processing unit 21.
  • The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • The system memory includes read only memory (ROM) 24 and random access memory (RAM) 25.
  • The computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM or other optical media.
  • The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively.
  • The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computer 20.
  • A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38.
  • A user may enter commands and information into the computer 20 through input devices such as a keyboard 40 and pointing device 42.
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB).
  • A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48.
  • In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers.
  • The computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49.
  • The remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 20, although only a memory storage device 50 has been illustrated in FIG. 1.
  • The logical connections depicted in FIG. 1 include a local area network (LAN) 51 and a wide area network (WAN) 52.
  • When used in a LAN networking environment, the computer 20 is connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the computer 20 typically includes a modem 54 or other means for establishing communications over the wide area network 52, such as the Internet.
  • The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46.
  • In a networked environment, program modules depicted relative to the computer 20 may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • Testing of the exemplary embodiment shows that: (1) in the context of the Index Tuning Wizard for Microsoft SQL Server 2000, the disclosed Distance function for index selection produces significant compression of the workload while obeying the given quality constraint; (2) the same Distance function for index selection works well on another index selection tool, viz., IBM DB2's Index Advisor; (3) the clustering process of compressing the workload scales better than ALL-PAIRS workload compression, but the latter can achieve significantly more compression; and (4) the disclosed framework for workload compression can be applied to another application as well, viz., AQP, by providing an appropriate Distance function.
  • Testing was conducted on an x86 900 MHz dual processor computer with 512 MB RAM and an internal 30 GB hard drive running Microsoft Windows 2000 Server. This system was used to test the disclosed workload compression on several databases and workloads, including real and synthetic schemas and workloads.
  • The system was tested on two benchmark workloads (TPC-H and APB), two real workloads (Real-1 and Real-2) maintained by the assignee of the present invention, and several synthetic workloads.
  • The database for Real-1 is about 600 MB and contains about 90% update statements, whereas the Real-2 workload contains decision support queries against a 500 MB database.
  • All the synthetic databases conform to the TPC-H schema and were generated using a synthetic data generation program.
  • The size of each synthetic database was 1 GB.
  • The synthetic workloads were generated using a query generation program, which has the ability to vary a number of parameters, including number of joins, number of group-by columns, number of order-by columns, number of selection conditions in a query, and percentage of update statements in the workload.
  • Evaluation Metrics: For the index selection application, the following metrics were used to evaluate the workload compression solution: (a) percentage of queries pruned by workload compression; (b) percentage reduction in total tuning time, i.e., the sum of the running time of the index selection tool on the compressed workload and the time spent in compressing the workload, as compared to running the tool on the original workload; and (c) percentage loss in quality of the solution produced by the index selection application, using the percentage change in the optimizer-estimated cost of the original workload as the metric of quality. This was obtained by running the index selection tool on both the original workload and the compressed workload, implementing the recommendations, and calculating the optimizer-estimated running time of the original workload in both cases.
  • The constraint (Δ) on loss in quality was chosen to be 10% of the cost of the original workload W on the current database.
  • For AQP, the testing measured loss in quality due to workload compression as the difference between the average relative error of queries in W when the entire workload is used in the preprocessing phase and the average relative error of queries in W when the compressed workload is used in the preprocessing phase.
  • Detailed results of the testing are presented in a printed publication entitled "Compressing SQL Workloads" to Chaudhuri et al., published Jun. 3, 2002 in the Proceedings of ACM SIGMOD 2002, and incorporated herein by reference.

Abstract

Relational database applications such as index selection, histogram tuning, approximate query processing, and statistics selection have recognized the importance of leveraging workloads. Often these applications are presented with large workloads, i.e., a set of SQL DML statements, as input. A key factor affecting the scalability of such applications is the size of the workload. The invention concerns workload compression, which helps improve the scalability of such applications. The exemplary embodiment is broadly applicable to a variety of workload-driven applications, while allowing for incorporation of application specific knowledge. The process is described in detail in the context of two workload-driven applications: index selection and approximate query processing.

Description

    FIELD OF THE INVENTION
  • The present invention concerns a system for compressing an initial database workload into a compressed database workload in an efficient way such that the quality of the result of an application that is run using the compressed workload instead of the initial workload is not significantly affected. [0001]
  • BACKGROUND ART
  • Information on how a computer database system is used can be important in performance tuning and management of the database system. In the context of relational databases, one specific form of usage information is the workload, which is typically a set of SQL statements executed by users of the system. Database practitioners and vendors have recognized the opportunity to tune and manage various aspects of database systems by analyzing workload information. Several workload-driven tasks have emerged recently for solving problems such as histogram tuning, improving query optimization, index selection, approximate answering of aggregation queries, and statistics selection. The term application is used to generically refer to such workload-driven tasks. [0002]
  • A key factor affecting the scalability of an application is the size of the workload, i.e., the number of SQL statements in the workload. In many cases, the workload consumed by the application is gathered using mechanisms in modern DBMSs that allow recording of SQL statements that execute on the server. In order to capture a representative collection of statements that execute against the system, the user of the application, such as a database administrator, could collect as the workload a log of SQL statements over a sufficiently large window of time (e.g., a day or week). Consequently, workloads tend to be large in size. Moreover, these applications often perform detailed analysis of queries in the workload and their inter-relationships, and hence their running time can be affected significantly as the workload size increases. [0003]
  • Operation of the applications can be sped up significantly by finding a substitute workload of smaller size (which is referred to as the compressed workload) as input, while qualitatively not degrading the result of the application. In other words, the result of the application when run on the compressed workload should be identical (or close) to the result when it is run on the original workload. It is important that this compressed workload be found efficiently, since otherwise the very purpose of using a compressed workload is defeated. [0004]
  • One solution to the workload compression problem is to use uniform random sampling to pick a smaller subset of the original workload. While this strategy is efficient, it is not an effective method for workload compression. The key reason for the poor compression achieved by uniform random sampling is that it is oblivious to the application for which the workload is being used, and hence ignores potentially valuable information about the statements in the workload. The invention exploits application knowledge to obtain significantly better workload compression as compared to uniform random sampling. [0005]
  • There have been prior art publications that use sampling in the area of databases, e.g., Gibbons, P. B., Matias, Y., and Poosala, V., Fast Incremental Maintenance of Approximate Histograms, Proceedings of the 17th Intl. Conference on Very Large Databases, 1997, and Haas, P. J., Naughton, J. F., Seshadri, S., and Stokes, L., Sampling based estimation of the number of distinct values of an attribute, Proceedings of the 21st Intl. Conference on Very Large Databases, 1995. However, these publications address the problem of sampling data and not the workload. Random sampling has also been studied extensively in the statistics literature. Random sampling ignores interaction among the objects being sampled (which is the primary source of its efficiency). [0006]
  • Clustering has been studied in the context of machine learning and data mining. A class of prior art also exists wherein data points are in a metric space, i.e., each pair of points is assigned a distance that is symmetric and satisfies the triangle inequality. There is also prior art that evaluates query equivalence but not with a goal toward compressing a workload. One form of equivalence is when two queries are semantically identical, i.e., they return the same result. Determining equivalence does, of course, come at a cost since it could require significant computational effort. [0007]
  • SUMMARY OF THE INVENTION
  • The invention uses application knowledge in addressing a problem of workload compression. A distance function quantitatively measures the "difference" between pairs of statements with respect to the application. The distance function can take on arbitrary values, e.g., unlike Euclidean distances. The function does not have to be symmetric or satisfy the triangle inequality. This generality is important in dealing with the variety of applications to which workload compression is applicable. [0008]
  • A computer database system makes use of the invention in evaluating a workload having a plurality of statements based on a task that uses the workload. An initial workload W is provided as is a distance function that evaluates a distance for pairs of statements within the workload. The distance function returns an estimate of the loss in quality for an application if the first statement is discarded from the workload but the second statement of the pair of statements is retained in a compressed workload. The process finds a compressed workload W′ by evaluating the distance function for pairs of statements to produce a compressed workload having fewer statements than the initial workload W. [0009]
  • One aspect of the process is establishing a limit Δ on a loss in quality between use of the application with the compressed workload when compared with use of the application with the initial workload W. This limit can be used to determine when the workload compression is completed and provides an analytical assessment of the compression. [0010]
  • The workload compression can be implemented by choosing statements to remove from the workload W based on an all-pairs greedy search using the distance function and wherein removal of additional statements from the workload is ended when such removal results in a loss of quality that exceeds the limit Δ. An alternate statement selection process is based on clustering of statements and results in clustering until the loss in quality limit is exceeded by the compressed workload. [0011]
  • These and other objects, advantages and features of the invention will become better understood by reference to the accompanying exemplary embodiments of the invention which are described in conjunction with the accompanying drawings.[0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic illustration of a computer system for use in practicing the present invention; [0013]
  • FIG. 2 is a depiction of an application working on both an uncompressed and a compressed database workload; [0014]
  • FIG. 3 is a visualization of a workload compression criteria involving the calculation of distances between queries; and [0015]
  • FIG. 4 is a block diagram showing components for compressing a workload W to produce a compressed workload W′ in accordance with an exemplary embodiment of the invention;[0016]
  • EXEMPLARY EMBODIMENT FOR PRACTICING THE INVENTION
  • The exemplary embodiment of the invention concerns a system for compressing a workload that includes a number of statements. One example of a workload is a set of SQL DML statements. Thus, a workload W={q1, . . . , qi, . . . } where qi is a SQL DML statement (i.e., SELECT, UPDATE, INSERT, DELETE). One can associate a weight wi (a real number) with statement qi. Workload compression is a technique for improving the scalability of an application A that consumes a workload W as input and produces a result R (FIG. 2). Instead of passing W as input to the application, one goal of the invention is use of a workload compression module 110 to first perform workload compression on W to obtain a compressed workload W′, and then pass W′ as input to the application A, thereby obtaining result R′. Such use of workload compression is meaningful if the following two criteria hold: [0017]
  • Efficiency Criterion: [0018]
  • The total running time, i.e., time taken for workload compression as well as the running time of the application, is less than the running time of the application on the original workload. This condition imposes the requirements that the workload compression algorithm itself is efficient and that it finds a compressed workload W′ such that the running time of the application on W′ is less than the running time of the application on W. [0019]
  • Quality Criterion: [0020]
  • Informally, the quality condition requires that the quality of the result R′ is "close enough" to the quality of the result R. More formally, let A be an application and FA be a function that quantitatively evaluates the result of the application with respect to the given workload W, i.e., FA(W,R) returns a real number that measures the quality of result R. Then this condition ensures that the values FA(W,R) and FA(W,R′) are close enough. Note that the exact definition of the function FA is application dependent. [0021]
  • A generalized workload compression problem can be stated as: [0022]
  • Let A be an application that takes as input a workload W and produces a result R. Let FA(W,R) be a function that quantitatively evaluates R with respect to W. Given δ, 0<δ<1, find the workload W′ that minimizes the total running time of the application A (including the time for finding W′), subject to the quality constraint: [0023]
  • |FA(W,R) − FA(W,R′)| / |FA(W,R)| < δ, where R′ is the result produced by running A on input W′. [0024]
  • Note that in the above formulation, W′ need not be a subset of W, i.e., W′ may contain statements not present in W. Workload compression can be applied in the context of multiple applications that consume a workload as input. Two such applications are explained in the two examples below; for each example, a result R is produced by the application and an evaluation function F quantitatively evaluates this result. [0025]
  • EXAMPLE 1 Workload Compression for Index Selection
  • Selecting the right set of indexes is important for the performance of a database system. Automatically selecting appropriate indexes for a database is an important task since it reduces the burden on database administrators, and hence the total cost of managing the database. Recently, several major commercial database systems have developed tools to automate the task of choosing indexes. An index selection tool takes as input a workload W and a database, and produces as an output R a set of indexes appropriate for the given workload. To evaluate the quality of the result R, these tools typically use as FA(W,R) (where A=index selection) the query optimizer estimated execution time of statements in W if the result R is implemented (i.e., if the set of indexes R is materialized in the database). Thus, e.g., specifying δ=0.05 for index selection implies that an acceptable compressed workload W′ is one for which the optimizer estimated execution time of statements in W when R′ is implemented deviates by no more than 5% from the optimizer estimated execution time if R had been implemented. [0026]
  • EXAMPLE 2 Workload Compression for Approximate Answering of Aggregation Queries
  • The goal of approximate query processing (AQP) is to allow efficient but approximate answers to ad-hoc queries against large relational databases. Random sampling is an approach for approximately answering aggregation queries (e.g., queries computing SUM or COUNT aggregate expressions). In this approach, the query is executed on a sample of the data rather than on the entire data, thereby returning approximate answers but speeding up the query significantly. Recently, several printed publications have recognized the importance of using workload information to pick samples of the data and thereby improve upon the straightforward approach of uniform random sampling. Thus, the workload W is analyzed in a preprocessing step and this information is used to produce as result R an appropriate set of samples of one or more tables in the database. These samples are chosen with the objective of minimizing the average relative error in answering queries in W over the sample. The preprocessing step described above can be expensive if the workload W is large, and is therefore a suitable application for workload compression. For example, in the stratified sampling approach using a weighted sampling technique, this preprocessing step requires executing the queries in W. A commonly used definition of FA is the average relative error over all queries in W when answered using the samples. Thus, specifying δ=0.1, for example, implies that an acceptable compressed workload W′ produces a sample over which the average relative error of queries in W cannot deviate by more than 10% compared to the case when the sample is chosen based on the original workload W. [0027]
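  • For illustration only, the following Python sketch shows the basic mechanics this example relies on: a uniform random sample answers a SUM-style aggregation query approximately, and relative error measures the loss. All names (approximate_sum, value_of, etc.) are hypothetical stand-ins for the sampling machinery of an actual AQP system.

    import random

    def approximate_sum(rows, predicate, value_of, fraction=0.01, seed=42):
        # Draw a uniform random sample and scale the sample total by
        # 1/fraction to estimate the true SUM over all qualifying rows.
        rng = random.Random(seed)
        sample = [r for r in rows if rng.random() < fraction]
        sample_total = sum(value_of(r) for r in sample if predicate(r))
        return sample_total / fraction

    def relative_error(estimate, exact):
        # Relative error of an approximate answer; 1.0 when exact is 0,
        # mirroring the convention used below for missing groups.
        return abs(estimate - exact) / abs(exact) if exact else 1.0

  Averaging relative_error over all queries in W yields the evaluation function FA described in this example.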
  • Simplifying Assumptions [0028]
  • While the problem is general, it may not be amenable to efficient solutions for two reasons. First, statements in the compressed workload W′ need not be a subset of the statements in W. Thus, the space of possible statements that needs to be considered during workload compression is potentially much larger. The second reason that makes it hard to solve the general problem efficiently is that exact verification of the constraint on the loss of quality, |FA(W,R)−FA(W,R′)|/|FA(W,R)|<δ, is expensive, since computing FA requires running the application A itself. [0029]
  • The exemplary embodiment of the invention sacrifices some generality, but is more amenable to efficient solutions. In particular, the exemplary embodiment applies the following two criteria: (1) the compressed workload W′ is a subset of the original workload W, and (2) the quality constraint is defined in terms of “distances” between pairs of statements in the workload. The exemplary embodiment uses an application specific Distance function between any pair of statements in the workload. DistanceA(qi, qj) estimates the loss in quality of the result of application A for statement qi if qi is discarded but statement qj is present in the compressed workload, independent of the other statements in the workload. More precisely, if Ri is the result of the application when the workload used is {qi} and Rj is the result of the application when the workload used is {qj}, then ∀qi, qj∈W, DistanceA(qi, qj) estimates the quantity FA({qi}, Rj)−FA({qi}, Ri). Operation of the distance function is visualized with regard to FIG. 3, where W′ is the compressed workload and W−W′ is the set of statements in W that have been discarded by workload compression. For each statement qi∈W−W′, one can find the “closest” statement in W′ as determined by the Distance function. The exemplary embodiment finds the smallest W′ such that the weighted sum of the distances between each discarded statement and the retained statement closest to it does not exceed a pre-specified value. [0030]
  • The Formal Revised Definition of the Problem: [0031]
  • Let A be an application that takes as input a workload W. Let DistanceA(qi, qj) be a distance function for application A such that ∀qi, qj∈W, the distance function returns an estimate of the loss in quality for statement qi if it is discarded but qj is retained in the compressed workload. Given Δ, which is the limit on the maximum allowable loss in quality, find the smallest workload W′⊆W such that Σqi∈W−W′ minqj∈W′{wi·DistanceA(qi, qj)}<Δ. In other words, the sum, over all discarded statements, of the weighted distance from each discarded statement to the closest retained statement must be less than the specified constant Δ. [0032]
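  • The constraint above is directly computable given a Distance function. The following Python sketch (illustrative only; weight is assumed to be a dict of wi values and distance a callable) evaluates the left-hand side of the constraint for a candidate compressed workload:

    def compression_loss(W, W_prime, weight, distance):
        # Weighted sum, over each discarded statement, of the distance to
        # the closest retained statement: the left-hand side of the
        # constraint in the revised problem definition.
        discarded = [q for q in W if q not in W_prime]
        return sum(weight[qi] * min(distance(qi, qj) for qj in W_prime)
                   for qi in discarded)

    def satisfies_constraint(W, W_prime, weight, distance, delta_cap):
        return compression_loss(W, W_prime, weight, distance) < delta_cap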
  • Observe that unlike the general compression problem, where the constraint on quality δ is relative, Δ is specified in absolute terms and has the same units as the Distance function. Also note that variations are possible by replacing the min in the quality constraint with other functions such as max or average. For example, using max makes the worst-case assumption that the loss in quality for qi could be as large as the distance to the “furthest” statement from it in the compressed workload. Thus, if max is used, one would expect less degradation in the quality of the application, but also less compression of the workload, since one presumably reaches the threshold Δ more rapidly. The exemplary process for compression of the workload disclosed herein makes no assumptions about properties of the Distance function. For example, it does not require that the Distance function be symmetric or obey the triangle inequality. In fact, the very definition of DistanceA(qi, qj) is asymmetric, since it measures the distance with respect to {qi}, the discarded query. This generality is important since, for the specific examples of workload compression presented here, test results indicate these properties were not satisfied. As shown below, the distance based workload compression problem is provably hard when the Distance function can return arbitrary values. It is emphasized that the exact definition of Distance(qi, qj) is application dependent since it must capture the impact on quality of the result produced by the application when qi is discarded and statement qj is retained in the compressed workload. Table 1 (below) summarizes the distance-based workload compression problem for the two applications of workload compression described previously. [0033]
    TABLE 1
    Application             Meaning of Distance(qi, qj)          Meaning of Δ
    Index Selection         Estimated increase in the cost of    Maximum allowable increase in
                            executing statement qi if it is      the (estimated) running time of
                            discarded but qj is retained         the workload W
    Approximate Answering   Increase in the relative error of    Maximum allowable increase in
    of Aggregation Queries  answering query qi, if qi is         the average relative error of
                            discarded but qj is retained         queries in W
  • The problem of distance based compression that was defined previously aims to minimize the size of the set W′ such that the constraint Σqi∈W−W′ minqj∈W′{wi·Distance(qi, qj)}<Δ is satisfied. When the Distance function can generate arbitrary values, the problem is NP-Hard. The proof uses a reduction from the decision version of the Minimum k-Median problem, which is known to be NP-Complete. To show this, first define the decision version of the Minimum k-Median problem: [0034]
  • Problem Minimum k-Median: Given a complete graph G(V, E), costs C(u,v)∈N (the set of natural numbers) ∀u, v∈V, an integer k, and a number s: does there exist a set of medians V′⊆V of size k such that the sum of the distances from each vertex to its nearest median is less than s, i.e., Σu∈V−V′ minv∈V′{C(u,v)}<s? [0035]
  • Lemma 1. The problem of distance based workload compression is NP-Hard if the Distance function can return arbitrary values. [0036]
  • Proof: The decision problem for distance based compression is as follows: Let A be an application that takes as input a workload W. Let Distance(qi, qj) be a function that quantifies the distance between any pair of statements qi, qj∈W. Given an integer k and a number Δ, does there exist a workload W′⊆W of size k such that Σqi∈W−W′ minqj∈W′{Distance(qi, qj)}<Δ? There is a direct correspondence between the two problems as follows: V ↔ W, V′ ↔ W′, k ↔ k, Cost ↔ Distance, and s ↔ Δ. Hence, the decision problem of distance based compression is NP-Complete, and therefore the optimization problem is NP-Hard. [0037]
  • Although for metric spaces there exist constant factor approximation algorithms for the Minimum k-Median problem, it has been shown that the Minimum k-Median problem with arbitrary costs does not have a constant factor approximation algorithm. [0038]
  • Overview of Exemplary Embodiment [0039]
  • An overview of an architecture of software components for implementing an exemplary embodiment of the invention is shown in FIG. 4. The input to the workload compression module 110 is a workload W and a constraint Δ, and the output is a compressed workload W′. An important part of the system is a Search module 112 (described below) that finds the compressed workload W′. The exemplary embodiment finds the smallest (cardinality) subset of the given workload W that satisfies the given constraint. For this optimization problem, two alternative processes are disclosed (besides random sampling). As described earlier, the search process consults a Distance function module 114 (described below), but makes no assumptions about properties of the Distance function; in other words, the Distance function module 114 can return arbitrary values. The Distance function serves as the basis for estimating the loss in quality due to workload compression and is application specific. The quality of the compressed workload depends on the accuracy and efficiency of the Distance function. Efficiency in computing the Distance function is important since the distance function may be invoked many times for a large workload by the Search component. The accuracy of the Distance function is also important since overestimation of the loss in quality achieves less compression of the workload than ideally possible, whereas underestimation can cause an unacceptable result when the compressed workload is used by the application. An example of how to make this trade-off of efficiency vs. accuracy is presented in designing the Distance functions for each of the two applications: index selection and approximate answering of aggregation queries. [0040]
  • The exemplary process can associate a weight wi with each statement qi in the workload, which reflects the relative importance of that statement. The weight of a statement can significantly influence the result of the application. For example, in index selection, the higher the weight of a statement, the more likely it is that the indexes that are suitable for that statement are part of the final result. When a statement is discarded by workload compression, one approach is to add the weight of the discarded statement to the statement that is the “closest” to the discarded statement (as defined by the Distance function) in the compressed workload. However, as illustrated by the following example for index selection, simply adding the weight can be inappropriate. [0041]
  • EXAMPLE 3 Problem with Simple Addition of Weights
  • Consider a workload W with the following three queries: [0042]
  • Q1: SELECT * FROM persons WHERE age <10
  • Q2: SELECT * FROM persons WHERE age <20
  • Q3: SELECT * FROM persons WHERE income <20000
  • Assume the weights of these queries in W are all 1. Suppose the compressed workload is {Q2, Q3}. Using the obvious solution, since Q2 is the closest retained query to Q1, the adjusted weights of these queries are 2 and 1 respectively. However, from the queries, one sees that the presence of an index on column age would result in more benefit for Q1 as compared to Q2. Thus, the compressed workload has been biased against selecting an index on column age. [0046]
  • The exemplary embodiment includes a post-processing step, wherein an Adjust Weights module 120 (described below) uses an application specific AdjustWeight(qi, qj) function module 122 to adjust the weight of each statement in the compressed workload. Note that for certain applications, the solution of simply adding weights to the nearest retained statement may be appropriate. Finally, as an alternative embodiment, the adjustment of weights during the search step is used to compress the workload. [0047]
  • A search component is responsible for finding a subset of W of smallest cardinality satisfying the constraint that the loss in quality is less than the given Δ. Disclosed herein are two alternative search processes for solving distance based workload compression. The first search process is based on the K-Medoid clustering process and the second is a new greedy search process. A comparison of these two search processes is provided below. [0048]
  • Based on the hardness result above, one cannot expect a polynomial time algorithm that computes an optimal solution to the distance based compression problem when the Distance function can return arbitrary values. The disclosed solutions to distance based compression therefore leverage efficient heuristic search algorithms. It is noted, however, that for specific Distance functions, the distance based compression problem may be solvable in polynomial time, and alternative search algorithms customized for a particular application may be appropriate. [0049]
  • K-Medoid Procedure [0050]
  • The Minimum k-Median problem is a clustering problem. The first search process adapts the K-Medoid clustering process. See Han, J., and Kamber, M., Data Mining: Concepts and Techniques, Morgan Kaufmann Publishers, 2001. This process is a building block for constructing an algorithm for distance based compression by performing binary search on the size of the compressed workload. The pseudo code for this search process (Listing 1), which calls a modified K-Medoid based search process (Listing 2), is presented below. [0051]
    Listing 1
    Input: Workload W, Constraint Δ
    Output: Compressed workload W′
    1. Let Min_k = 0, Max_k = |W|, W′ = W
    2. While (Min_k ≤ Max_k)
    3.   k = (Min_k + Max_k)/2
    4.   Let WTemp = KMED(W, k)
    5.   Let D be the weighted sum of distances from each statement in W to the
         closest statement in WTemp, as determined by the Distance function.
    6.   If D ≤ Δ, W′ = WTemp, Max_k = k − 1
    7.   Else Min_k = k + 1
    8.   End If
    9. End While
    10. Return W′
    Listing 2
    Input: Workload W, k
    Output: Workload W′ of size k
    1. Pick k statements s1..sk from W at random. Each statement chosen forms
       the “seed” of a cluster.
    2. For each statement e ∈ W, assign it to the cluster that contains the seed
       closest to e, as determined by the Distance function.
    3. For each cluster C, re-compute the seed for that cluster as the “median”
       statement within the cluster, i.e., the statement e ∈ C such that
       Σv∈C Distance(v, e) is the smallest.
    4. Repeat steps 2-3 until convergence, i.e., until the same clusters are
       obtained in two iterations.
  • It can be shown that the process of Listing 2 terminates in a finite number of iterations of Steps 2-3, and that the solution obtained is (at least) a local optimum. The running time of distance based compression of the workload depends primarily on the number of invocations of the Distance function. As one can see from the pseudo code, the process of Listing 2 performs O((|W|−k)·k + k·(|W|/k)^2) invocations of the Distance function, assuming each cluster on average contains the same number of statements. Note that since the process of Listing 1 (referred to below as WC-KMED) performs a binary search over the range 0 . . . |W|, it invokes KMED at most log2 |W| times. [0052]
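  • The following is a minimal runnable Python rendering of Listings 1 and 2 (a sketch, not the implementation that was tested; it assumes distance is a callable on pairs of statements, weight is a dict, statements are hashable, and distance(q, q)=0):

    import random

    def kmed(W, k, distance, max_iter=50, seed=0):
        # K-Medoid clustering (Listing 2).
        rng = random.Random(seed)
        seeds = rng.sample(W, k)
        for _ in range(max_iter):
            # Step 2: assign each statement to its closest seed.
            clusters = {s: [] for s in seeds}
            for q in W:
                clusters[min(seeds, key=lambda s: distance(q, s))].append(q)
            # Step 3: recompute each cluster's medoid, i.e., the member e
            # minimizing the sum of distances from all members v to e.
            new_seeds = [min(c, key=lambda e: sum(distance(v, e) for v in c))
                         for c in clusters.values() if c]
            if set(new_seeds) == set(seeds):
                break  # converged (simplified successive-iteration check)
            seeds = new_seeds
        return seeds

    def wc_kmed(W, delta_cap, weight, distance):
        # Listing 1: binary search for the smallest k whose k medoids keep
        # the weighted loss of the discarded statements below delta_cap.
        lo, hi, best = 1, len(W), list(W)
        while lo <= hi:
            k = (lo + hi) // 2
            w_temp = kmed(W, k, distance)
            loss = sum(weight[q] * min(distance(q, s) for s in w_temp)
                       for q in W if q not in w_temp)
            if loss < delta_cap:
                best, hi = w_temp, k - 1
            else:
                lo = k + 1
        return best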
  • All-Pairs Greedy Search Process [0053]
  • The process outlined in Listings 1 and 2 does not invoke the Distance function on every pair of statements in W. An alternate ALL-PAIRS process does examine the Distance of each pair of statements in the workload. The goal in using this ALL-PAIRS process is to determine how computing all distances would impact the quality of workload compression. Once the pair-wise distances are computed, the ALL-PAIRS process adopts a greedy approach that discards the next “best” statement from W until it is unable to discard any more statements without violating the given constraint Δ. The pseudocode is given in Listing 3. [0054]
    Listing 3
    Input: Workload W, Constraint Δ
    Output: Compressed workload W′
    1. Let the sets Candidates = W, Keep = { }, Prune = { }
    2. Let Total-Dist = 0
    3. While (Total-Dist < Δ)
    4.   For each statement qi ∈ Candidates, compute
         di = min(i ≠ j, qj ∈ Candidates ∪ Keep) {wi · Distance(qi, qj)}
    5.   Let qmin be the statement with the minimum distance dmin computed in Step 4.
    6.   If (dmin + Total-Dist < Δ)
           Move qmin from Candidates to Prune
           Old-Dist = Total-Dist
           Total-Dist = Σi mink {wi · Distance(qi, qk)}, where
           qi ∈ Prune and qk ∈ Candidates ∪ Keep
           If (Total-Dist > Δ)
             Move qmin from Prune to Keep,
             Total-Dist = Old-Dist
           EndIf
    7.   Else Break
    8.   EndIf
    9. End While
    10. Return W′ = Candidates ∪ Keep
  • At each step the process of Listing 3 maintains three sets: Keep, Prune and Candidates. Keep consists of statements which are definitely going to be retained in the compressed workload W′. Prune consists of the statements which are currently not in W′, and Candidates consists of the statements whose outcome hasn't yet been decided. In each iteration of the While loop in Step 3, for each statement in Candidates, the process computes the distance to the closest statement (as defined by the Distance function) that hasn't been pruned (Step 4). The statement for which this value is the smallest (Step 5) is considered next for pruning. Prior to actually pruning this statement, however, the process verifies that removal of this statement does not violate the constraint Δ, since this statement may have been the closest statement to one or more statements that had been pruned previously. This check is performed in Step 6. At the end of the process, the statements in the sets Candidates and Keep constitute the compressed workload W′. [0055]
  • The ALL-PAIRS process performs O(|W|^2) computations of the Distance function, since in the first execution of Step 4 all pair-wise invocations of Distance are performed. Thus, it is expected that the process outlined in Listings 1 and 2 will scale better with workload size compared to the ALL-PAIRS process. Also, unlike the process of Listings 1 and 2, the ALL-PAIRS process cannot guarantee that the solution obtained is a local optimum for the distance based workload compression problem. [0056]
  • While the process of Listings 1 and 2 is based on a known algorithm for solving the k-Median problem, in testing it was found that the process of Listing 3 often achieves more compression for distance based workload compression. Intuitively, the process of Listing 3 performs better when the workload has many small clusters and the intra-cluster distances are small relative to inter-cluster distances. [0057]
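  • A corresponding Python sketch of the ALL-PAIRS greedy search of Listing 3 follows (illustrative only, with the same assumptions about distance and weight as the sketch above):

    def all_pairs(W, delta_cap, weight, distance):
        # Greedily discard the statement whose weighted distance to its
        # closest surviving neighbour is smallest (Listing 3).
        candidates, keep, prune = set(W), set(), set()
        total = 0.0

        def loss(survivors):
            # Re-evaluate the constraint for everything pruned so far.
            return sum(weight[qi] * min(distance(qi, qk) for qk in survivors)
                       for qi in prune)

        def closest(qi):
            others = (candidates | keep) - {qi}
            if not others:
                return float('inf')
            return weight[qi] * min(distance(qi, qj) for qj in others)

        while candidates:
            q_min = min(candidates, key=closest)
            if closest(q_min) + total >= delta_cap:
                break
            candidates.remove(q_min)
            prune.add(q_min)
            new_total = loss(candidates | keep)
            if new_total > delta_cap:
                # q_min was the closest neighbour of previously pruned
                # statements; keep it and restore the running total.
                prune.remove(q_min)
                keep.add(q_min)
            else:
                total = new_total
        return candidates | keep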
  • Random Sampling [0058]
  • An alternative technique for improving the scalability of an application that consumes a workload W is to use sampling to select a subset W′ of W. The simplest of these schemes is uniform random sampling, where each statement in W has an equal probability of being selected. However, it is believed this approach can result in poor quality workload compression due to the following problems: (a) uniform sampling ignores valuable information about statements in the workload and therefore misses opportunities for more compression; and (b) when the sampling fraction is small, certain small “clusters” of important statements may be altogether discarded and never make it into the compressed workload, which follows from a well known statistical result. Testing of the invention used a stratified sampling based process, which partitions the workload into strata and then samples uniformly within each stratum. One issue with applying sampling is how much of W (i.e., what fraction) to sample. The process starts with a sampling fraction f0 and verifies whether the constraint Δ is satisfied for that sample; note that this step requires invoking the Distance function. If the constraint is not satisfied, the process continues by increasing the sampling fraction by a factor m>1 and sampling an additional set of statements. The process terminates when it finds a sample that satisfies the constraint. [0059]
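  • A simplified Python sketch of this scheme follows (uniform rather than stratified sampling, for brevity; f0 and m are illustrative defaults):

    import random

    def sample_until_valid(W, delta_cap, weight, distance, f0=0.05, m=2.0, seed=1):
        # Grow a random sample by a factor m until the weighted loss of the
        # statements left out satisfies the constraint.
        shuffled = list(W)
        random.Random(seed).shuffle(shuffled)
        n = max(1, int(f0 * len(W)))
        while True:
            W_prime, discarded = shuffled[:n], shuffled[n:]
            loss = sum(weight[q] * min(distance(q, s) for s in W_prime)
                       for q in discarded)
            if loss < delta_cap or n >= len(W):
                return W_prime
            n = min(len(W), int(n * m))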
  • Distance Function [0060]
  • As described above, an important component of distance based workload compression is the computation of Distance(qi, qj) for any pair of statements qi, qj in the given workload W. Recall that the function Distance(qi, qj) measures the expected loss in quality of the result of the application on workload {qi} if the workload {qj} is provided as input to the application. A judicious trade-off between accurate and efficient computation of the Distance function is important for ensuring the success of workload compression. Accuracy is important since overestimation of Distance(qi, qj) results in less workload compression than possible, while underestimation of Distance(qi, qj) can result in poor quality of the result of the application when the compressed workload is used. Efficiency of computing Distance is important since the search algorithms for workload compression may need to invoke the Distance function many times for different pairs of statements. [0061]
  • An exact method for computing Distance(qi, qj) is: (i) run the application on workload {qi} and compute the quality of the result for {qi}, (ii) run the application on {qj} and compute the quality of the result for {qi}, and (iii) take the difference in quality between Steps (i) and (ii). However, for most applications such a method is inefficient since it requires running the application, and hence negates the very purpose of workload compression. Thus, the challenge in developing an appropriate Distance function for an application is to estimate this loss in quality efficiently. Two guidelines are followed in developing Distance functions for the two applications described herein: index selection and approximate answering of aggregation queries (AQP). These guidelines are broadly applicable in the context of other applications as well. [0062]
  • The first guideline is driven by the requirement that computation of the Distance function be efficient. A core set of information about each statement in the workload is identified that can be derived with low overhead, and this information is relied upon for computing Distance. For example, in the exemplary implementation of a distance function for both applications, the distance function uses information that can be derived from the SQL parser and a selectivity estimation module. This information includes: (a) the type of the statement (SELECT, INSERT, UPDATE, DELETE); (b) the structure of the query, e.g., tables referenced, projection columns, selection and join predicates, etc.; (c) for selection predicates, the selectivity of the predicate (computed by using a selectivity estimation module based on available statistics in the database); and (d) if available, the (estimated) cost of executing the statement. This cost information can be obtained either via one invocation of the query optimizer (e.g., in Microsoft SQL Server using the Showplan interface, or in IBM DB2 using the EXPLAIN mode) or from previously recorded information about the actual execution time of the statement. [0063]
  • A second guideline is to leverage the technique of logically partitioning the workload. The idea is that for any two queries qi, qj belonging to different partitions, Distance(qi, qj) between the two queries is ∞. Partitioning can be incorporated within the Distance(qi, qj) function by generating a “signature” for each statement and returning ∞ if the two signatures are not identical. Thus, each statement with a distinct signature belongs to a different logical partition. As a simple example, in the index selection application, when two statements reference disjoint sets of tables, it is reasonable to separate them into different partitions since indexes that are useful for one statement cannot be useful for the other. There are two benefits of partitioning the workload. First, it provides a way to ensure that at least a minimum number of statements (i.e., at least one statement per partition) will be retained in the compressed workload. Second, since the signature of a query can typically be computed very efficiently compared to the more careful analysis that goes into the rest of the Distance function, partitioning serves as a “shortcut” that reduces the computational overhead of invoking the Distance function. Finally, note that the signature used to partition the workload is application dependent, and is therefore incorporated into the Distance function. [0064]
  • Distance Function for Index Selection [0065]
  • Assume that the workload W consists of SELECT, INSERT, UPDATE, and DELETE statements, where the SELECT statements are limited to single-block Select, Project, Join (SPJ) queries with Group-By, Aggregation and Order-By. The Distance function for queries (i.e., SELECT statements) is discussed first. For index selection, the Distance(qi, qj) function measures the expected loss of benefit for {qi} if the set of indexes recommended for {qj} were used to answer the query instead of the set of indexes recommended for {qi} itself. Our goal is to estimate this expected loss of benefit efficiently without actually invoking the index selection application. While more sophisticated Distance functions could be designed for index selection, we believe that the disclosed process captures essential aspects of index selection without making assumptions about the specific process used by the index selection tool. This is backed by our test results (below), which show the effectiveness of the exemplary Distance function for index selection tools on two different commercial database systems. [0066]
  • Partitioning the Workload [0067]
  • A first step in the exemplary Distance function is to detect whether the two queries belong to the same partition. If they do not, the distance function returns a value of ∞. As mentioned above, the intuition behind partitioning is to logically place queries that are “far apart” into disjoint partitions. In the context of index selection, two queries can be considered far apart if there is little or no overlap in the set of indexes that would be chosen for each query. Based on this intuition, we partition the workload on the basis of the tables accessed in each query and the join predicates (if any). This is done by generating a signature for each query that consists of the table IDs referenced in the query and the (table, column) IDs accessed in the join predicates. [0068]
  • EXAMPLE 4 Motivating Example for Selectivity Based Partitioning
  • Consider the following two queries: [0069]
  • Q1: SELECT * from persons where age >80
  • Q2: SELECT * from persons where age >1
  • As Example 4 shows, the straightforward partitioning scheme outlined above may still place queries that are “far apart” into the same partition. According to the above scheme, both Q1 and Q2 will be assigned to the same partition because they both have the same signature. However, the queries are still far apart from the point of view of the indexes that are appropriate for each query. Due to the respective selectivities of the predicates on age, for Q1 an index on column age is likely to be very useful, whereas for Q2 an index on column age is likely to be of no use. Motivated by this observation, the process further splits each partition on the basis of selectivity information. For a single-table query, it computes the joint selectivity of all the predicates. All queries with joint selectivity less than or equal to a predetermined selectivity s0 (we used a value of s0=0.1) are assigned to one partition, and those with selectivity exceeding s0 are assigned to a different partition. Thus, for single-table queries, the process can generate at most two partitions. The process extends this partitioning scheme to the case of multi-table queries. Under this scheme, all queries belonging to a t-table partition (i.e., a partition with queries accessing those t tables) get split into at most 2^t partitions (some of which may be empty). Although the number of such partitions can, in principle, become large, experience indicates that over a variety of large workloads (real and synthetic), the number of partitions grew very slowly with the number of tables. [0072]
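  • The partitioning just described can be expressed as a signature computation. The Python sketch below is illustrative only: query is assumed to be a parsed representation exposing tables, join_columns, and a per-table joint predicate selectivity, attributes a real implementation would obtain from a SQL parser and selectivity estimation module.

    def signature(query, s0=0.1):
        # Partition key: tables referenced, (table, column) pairs in join
        # predicates, and a per-table flag for joint selectivity <= s0.
        sel_flags = tuple(sorted((t, query.selectivity(t) <= s0)
                                 for t in query.tables))
        return (frozenset(query.tables),
                frozenset(query.join_columns),
                sel_flags)

    def partitioned_distance(qi, qj, base_distance):
        # Infinite distance across partitions; otherwise defer to the
        # full (more expensive) distance analysis.
        if signature(qi) != signature(qj):
            return float('inf')
        return base_distance(qi, qj)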
  • Quantifying Distance [0073]
  • The disclosed approach for computing the Distance function is based on the observation that the effectiveness of an index for a query can be broadly categorized into one or more of the following performance categories: (a) it reduces the number of rows that need to be scanned from the table; (b) it eliminates the need to access the table altogether since the index contains all columns required to answer the query (i.e., the index is “covering” for the query); or (c) it reduces or eliminates the cost of sorting for an operator in the query. When computing Distance(qi, qj), the process evaluates each query and classifies the columns referenced in the query as: (1) Selection columns: all columns that occur in selection conditions; indexes on these columns fall into performance category (a) above. (2) Required columns: all columns that are referenced in any part of the query (including projection columns); indexes on these columns fall into performance category (b) above. (3) Group-By columns: all columns that occur in the GROUP BY clause of the query. (4) Order-By columns: all columns that occur in the ORDER BY clause of the query. Indexes on columns in categories (3) and (4) fall into performance category (c) above. [0074]
  • The process then computes four functions: Dist-Sel(qi, qj), Dist-Reqd(qi, qj), Dist-GB(qi, qj) and Dist-OB(qi, qj). Each of these functions captures the loss of benefit for a particular performance category. For example, Dist-Sel computes a distance by examining only the columns in the Selection category of the queries qi and qj, and thereby tries to capture the difference in performance category (a). Once each of the functions is computed, the exemplary process defines Distance(qi, qj) as the maximum of the four values. Intuitively, by considering the maximum value, the process adopts a conservative approach that considers two queries as “close” only if they are “close” in each of these categories. [0075]
  • Computing Dist-Sel: The approach is based on the intuition that the column (or sequence of columns) in the predicate with the lowest selectivity (low selectivity means a predicate that selects few records from the table) is the one that will be picked to be indexed by the index selection tool for that query. In other words, while considering 1-column indexes, the process assumes that the column with the smallest selectivity will be picked. When considering 2-column indexes, it assumes that the sequence of the two columns with the two lowest selectivities will be picked. The following example illustrates how Dist-Sel(qi, qj) is computed. [0076]
          c1    c2    c3
    Q1    0.1   0.3   0.2
    Q2    0.5   0.2   0.4
  • EXAMPLE 5
  • Computing Dist-Sel: Suppose one has two queries Q1 and Q2 referencing a single table T, with predicates on columns c1, c2, c3. The selectivities of these predicates are given by the adjoining table. The best 1-column index for Q1 is I1=(c1). On the other hand, the best 1-column index for Q2 is I2=(c2). The loss of benefit for Q1, if Q1 is pruned and query Q2 is retained, is given by the difference between the cost of evaluating Q1 in the presence of I2 and the cost of evaluating Q1 in the presence of I1, which is (0.3−0.1)*Cost({Q1}, { })=0.2*Cost({Q1}, { }). (Note that Cost({Q1}, { }) corresponds to a scan of the entire table, i.e., no indexes are present.) The intuition is that the presence of index I2 would require scanning 30% of the base relation for answering Q1, whereas the presence of index I1 would require scanning only 10% of it. Examining 2-column indexes, one sees that the best 2-column index for Q2 is (c2, c3) and the best 2-column index for Q1 is (c1, c3). Therefore, the loss of benefit is given by (0.3*0.2−0.1*0.2)*Cost({Q1}, { })=0.04*Cost({Q1}, { }). Similarly, for 3-column indexes, one sees that the loss of benefit is 0. In general, this analysis can be extended up to p-column indexes. The process takes Dist-Sel(Q1, Q2) as the maximum of the numbers computed; in this example, 0.2*Cost({Q1}, { }).
  • Note that in the case of multi-table queries, the process performs the same analysis as in the above example on a per table basis and then takes a weighted average of the table-wise Dist-Sel(qi, qj) values, the weight being the size of the table in pages. The process uses the size of the table as the weight because, for the same selectivity value, the amount of I/O required to answer the query is proportional to the size of the table. [0078]
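  • The following Python sketch reproduces the Dist-Sel computation of Example 5 for single-table queries (sel_i and sel_j map predicate columns to selectivities, cost_i is Cost({qi}, { }); all names are illustrative):

    from math import prod

    def dist_sel(sel_i, sel_j, cost_i, max_width=3):
        # For each index width p, compare qi's joint selectivity over the p
        # columns qj's best index would use against qi's own best p columns,
        # and keep the worst (largest) difference.
        worst = 0.0
        for p in range(1, max_width + 1):
            cols_j = sorted(sel_j, key=sel_j.get)[:p]   # qj's p best columns
            using_j = prod(sel_i.get(c, 1.0) for c in cols_j)
            best_i = prod(sorted(sel_i.values())[:p])
            worst = max(worst, using_j - best_i)
        return worst * cost_i

  With the selectivities of Example 5, dist_sel({'c1': 0.1, 'c2': 0.3, 'c3': 0.2}, {'c1': 0.5, 'c2': 0.2, 'c3': 0.4}, 1.0) returns 0.2, matching the 0.2*Cost({Q1}, { }) derived above.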
  • Computing Dist-Reqd: Dist-Reqd(qi, qj) tries to capture the loss of benefit for performance category (b), i.e., the use of covering indexes. The intuition behind the scheme is presented for single-table queries; the extension for the multi-table case is similar to the extension for Dist-Sel. Intuitively, if qi is pruned away and the required columns of qi are a subset of the required columns of qj, then the covering index for qj can be used to answer qi, and hence Dist-Reqd(qi, qj) is relatively small. However, if the required columns of qi are not a subset of the required columns of qj, then the covering index chosen for qj will not be useful for answering qi. Pruning away qi in this case requires scanning the entire table for answering qi, and therefore the loss of benefit (i.e., Dist-Reqd) is large. [0079]
  • Computing Dist-GB & Dist-OB: First consider Dist-OB(qi, qj). The process assumes a model of no partial sort benefits, i.e., an index (c1, c2, c3) is of no use in answering a query with ORDER BY c1, c3, c2, even though there is some overlap in the prefix of the index and the prefix of the order-by clause. Under this simplifying assumption, Dist-OB(qi, qj) is 0 if the ORDER BY clause of qi is a leading prefix of the ORDER BY clause of qj; otherwise Dist-OB(qi, qj) is equal to Cost({qi}, { }). Computation of Dist-GB(qi, qj) is done similarly, except that the process requires the group-by columns of qi to be a subset (rather than a leading prefix) of the group-by columns of qj in order for the index chosen for qj to be applicable to qi. [0080]
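  • The remaining components admit simple checks, as in the sketch below. This is a coarse rendering: for Dist-Reqd the “relatively small”/“large” behavior described above is simplified to zero-or-full-cost, and the column-list inputs are illustrative.

    def dist_reqd(required_i, required_j, cost_i):
        # If qj's covering index contains all columns qi needs, the loss is
        # taken as negligible; otherwise qi falls back to a full scan.
        return 0.0 if set(required_i) <= set(required_j) else cost_i

    def dist_ob(order_by_i, order_by_j, cost_i):
        # No partial sort benefit: qi's ORDER BY list must be a leading
        # prefix of qj's.
        n = len(order_by_i)
        return 0.0 if list(order_by_j)[:n] == list(order_by_i) else cost_i

    def dist_gb(group_by_i, group_by_j, cost_i):
        # GROUP BY is order-insensitive: a subset check suffices.
        return 0.0 if set(group_by_i) <= set(group_by_j) else cost_i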
  • Extensions for updates: If both statements are “pure” updates, e.g., of the form INSERT INTO T1 VALUES ( . . . ), then the process sets the Distance between such statements to 0, since one statement can be safely pruned against the other without any impact on the choice of indexes, so long as the weight of the pruned statement is added to the weight of the retained statement. However, in general, an UPDATE/INSERT/DELETE statement can have an “update” part and a “query” part, e.g., INSERT INTO T1 SELECT * FROM T2 WHERE <condition>. Such a statement s can be viewed as (q, u), where q is the query part and u is the update part. In this case, the process defines Distance(si, sj) between two such statements as Distance(qi, qj) only if AdjustWeight(qi, qj) is (approximately) equal to AdjustWeight(ui, uj) (see below for the definition of the AdjustWeight function), and ∞ otherwise. The reason for this is that otherwise the process would bias the compressed workload either towards the query part or towards the update part. [0081]
  • Properties of the Distance function [0082]
  • As noted above, properties of the Distance function such as symmetry or the triangle inequality can save a (potentially significant) number of explicit computations of the function. From the definition of the above Distance function, it is clear that it does not obey symmetry, i.e., Distance(qi, qj)≠Distance(qj, qi). It is also easy to construct examples showing that the triangle inequality does not hold for this Distance function, i.e., Distance(qi, qj)+Distance(qj, qk) is not necessarily greater than Distance(qi, qk). However, the disclosed Distance function for index selection (defined above) satisfies the following property: if Distance(qi, qj)=Distance(qj, qi)=0, then ∀qk, Distance(qk, qi)=Distance(qk, qj). [0083]
  • The process can exploit the above property to refine the query signature used in partitioning and hence reduce the work done when the Distance function is invoked. In testing over real as well as synthetic workloads, it was found that this optimization saved between 10% and 40% of the Distance computation overhead. [0084]
  • Distance Function for Approximate Answering of Aggregation Queries [0085]
  • For an overview of the AQP application, refer to Example 2 above. As explained in the example, the preprocessing step consumes a workload W and produces as output samples of one or more tables in the database. We assume that the workload consists of aggregation queries containing the COUNT/SUM aggregates possibly with selections, GROUP BY and foreign-key joins. The preprocessing step chooses the sample so as to minimize the average relative error of answering queries in W over the sample as compared to answering the queries on the full table. Note that for GROUP BY queries the relative error is averaged over all groups of the query—missing groups are assigned a relative error of 1. [0086]
  • Distance(qi, qj) for AQP attempts to estimate the relative error in answering qi if it is pruned but qj is part of the compressed workload. As with the Distance function for index selection, we leverage the idea of partitioning and return ∞ if qi and qj have different signatures. The signature of a query is defined by the subset of tables referenced in the query. If both queries belong to the same partition, they are analyzed based on whether they have selections or GROUP BY. If both queries are pure selection queries, i.e., do not contain GROUP BY, then the distance function Distance(qi, qj) is defined as the fraction of records selected by qi that are not selected by qj. The intuition behind this definition is that the error in answering a pure selection query qi depends on the number of records in the sample that are selected by qi. If qj is used to determine the sample, then the error for qi increases as the overlap of qi with qj decreases. Note that if the database engine supports the DIFFERENCE operator, then this function can be estimated by invoking the query optimizer. Otherwise, this metric must be estimated based on an analysis of selection predicates and using selectivity estimates. [0087]
  • When one query is a pure selection query and the other is a GROUP BY query, the distance function Distance(qi, qj)=∞. When both queries have GROUP BY columns, Distance(qi, qj) is defined as follows. Let G be the set of grouping columns that occur in query qi and let G′ be the set of grouping columns that occur in both qi and qj. Let D(X) be the number of groups in a query (without selections) that contains exactly the grouping columns X. Then Distance(qi, qj)=1−D(G′)/D(G). The intuition is that (i) the error for GROUP BY queries is dominated by missing groups, and (ii) the number of missing groups is likely to increase as the overlap between the grouping columns of qi and qj decreases. [0088]
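  • A Python sketch of the AQP Distance function follows (illustrative only: the row sets would in practice come from selectivity estimates or a DIFFERENCE query, and num_groups plays the role of D(X)):

    def aqp_distance(rows_i, rows_j, group_cols_i, group_cols_j, num_groups):
        # One pure selection query and one GROUP BY query: infinite distance.
        if bool(group_cols_i) != bool(group_cols_j):
            return float('inf')
        if not group_cols_i:
            # Pure selection: fraction of qi's records not selected by qj.
            selected_i = set(rows_i)
            if not selected_i:
                return 0.0
            return len(selected_i - set(rows_j)) / len(selected_i)
        # GROUP BY: 1 - D(G')/D(G), where G' is the common grouping columns.
        G = set(group_cols_i)
        G_common = G & set(group_cols_j)
        return 1.0 - num_groups(G_common) / num_groups(G)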
  • Adjusting Weights [0089]
  • Recall that a workload is a set of statements where each statement has an associated weight wi (see above). The weight of a statement signifies the importance of the statement in the workload, and plays a role in determining the optimization function of the application. For example, index selection tools typically optimize a weighted function of the (estimated) execution cost of statements in the workload. Thus, an index that is useful for a query with a large weight is more likely to be chosen by the tool. In order to prevent statements in the compressed workload from having unduly high or low weight relative to other statements, it is important that the weights of statements in the compressed workload be set appropriately. [0090]
  • In the exemplary architecture (see FIG. 4), this issue is addressed by the Adjust Weights module 120 as follows. At the end of the search algorithm (see above), the exemplary process finds, for every pruned statement qi, the statement qj nearest to it in the compressed workload (in terms of the Distance function) and adjusts the weight of qj. However, as illustrated by Example 3 (see above), a naive approach of simply adding the weight of the pruned statement to the nearest retained statement can result in poor quality of the compressed workload. In the exemplary embodiment, an application specific AdjustWeight(qi, qj) function serves the purpose of specifying the amount by which the weight of a retained statement qj should be incremented if qi is pruned and qj is the closest statement to qi. [0091]
  • An appropriate AdjustWeight function for the index selection problem is presented next. If qi is pruned and its nearest statement is qj, then the process sets the weight of qj in the compressed workload to wj+wi·(αij/αjj), where αij is the benefit that query qi gets from the indexes recommended for qj and αjj is the benefit that qj gets from its own recommended indexes. The scenario from Example 3 is revisited to illustrate how the approach solves the problem of biasing. [0092]
  • EXAMPLE 3
  • (Continued from Above): Suppose the benefits of an index on column age for Q1 and Q2 are 50 units and 40 units respectively. The actual total benefit from the index on column age for W is 50*1+40*1=90 units, whereas for W′ this benefit is 40*2=80 units. Therefore, as pointed out earlier, the workload has been biased away from picking an index on column age. Using the approach described above, the weight of Q2 in the compressed workload would be w2′=w2+w1·(α12/α22)=1+1*50/40=2.25. One can now easily verify that the benefit of the index on column age for the compressed workload is 2.25*40=90, which is the same as the benefit for the original workload. [0093]
  • Finally, we note that for the AQP application, the default AdjustWeight(qi, qj) function is used, which simply adds the weight of qi to qj. [0094]
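  • A sketch of the post-processing step for index selection follows; benefit(qi, qj), the estimated benefit qi receives from the indexes recommended for qj, is a hypothetical callable standing in for optimizer-derived estimates.

    def adjust_weights(W, W_prime, weight, distance, benefit):
        # For each pruned statement, credit its closest retained statement
        # with w_i * (alpha_ij / alpha_jj), per the AdjustWeight function
        # for index selection described above.
        new_weight = {q: weight[q] for q in W_prime}
        for qi in (q for q in W if q not in W_prime):
            qj = min(W_prime, key=lambda q: distance(qi, q))
            alpha_ij, alpha_jj = benefit(qi, qj), benefit(qj, qj)
            new_weight[qj] += weight[qi] * (alpha_ij / alpha_jj if alpha_jj else 1.0)
        return new_weight

  Applied to Example 3 (w1=w2=1, benefit(Q1, Q2)=50, benefit(Q2, Q2)=40), this yields a weight of 2.25 for Q2, as computed above.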
  • Computer System [0095]
  • With reference to FIG. 1, an exemplary data processing system for practicing the disclosed invention includes a general purpose computing device in the form of a conventional computer 20, including one or more processing units 21, a system memory 22, and a system bus 23 that couples various system components including the system memory to the processing unit 21. The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory includes read only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system 26 (BIOS), containing the basic routines that help to transfer information between elements within the computer 20, such as during start-up, is stored in ROM 24. [0096]
  • The computer 20 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM or other optical media. The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computer 20. Although the exemplary environment described herein employs a hard disk, a removable magnetic disk 29 and a removable optical disk 31, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, digital video disks, Bernoulli cartridges, random access memories (RAMs), read only memories (ROM), and the like, may also be used in the exemplary operating environment. [0097]
  • A number of program modules may be stored on the hard disk, magnetic disk 29, optical disk 31, ROM 24 or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into the computer 20 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus, but may be connected by other interfaces, such as a parallel port, game port or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, personal computers typically include other peripheral output devices (not shown), such as speakers and printers. [0098]
  • The computer 20 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. The remote computer 49 may be another personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 20, although only a memory storage device 50 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a local area network (LAN) 51 and a wide area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. [0099]
  • When used in a LAN networking environment, the computer 20 is connected to the local network 51 through a network interface or adapter 53. When used in a WAN networking environment, the computer 20 typically includes a modem 54 or other means for establishing communications over the wide area network 52, such as the Internet. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the computer 20, or portions thereof, may be stored in the remote memory storage device. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used. [0100]
  • Testing [0101]
  • Testing the exemplary embodiment shows that: (1) In the context of the Index Tuning Wizard for Microsoft SQL Server 2000, the disclosed Distance function for index selection produces significant compression of the workload while obeying the given quality constraint. (2) The same Distance function for index selection works well on another index selection tool, viz., IBM DB2's Index Advisor. (3) The clustering process of compressing the workload scales better than ALL-PAIRS workload compression but the latter can achieve significantly more compression. (4) The disclosed framework for workload compression can be applied to another application as well, viz. AQP, by providing an appropriate Distance function. [0102]
  • Testing was conducted on an x86 900 MHz dual processor computer with 512 MB RAM and an internal 30 GB hard drive running Microsoft Windows 2000 Server. This system was used to test the disclosed workload compression on several databases and workloads, including real and synthetic schemas and workloads. The system was tested on two benchmark workloads (TPC-H and APB), two real workloads (Real-1 and Real-2) maintained by the assignee of the present invention, and several synthetic workloads. The database for Real-1 is about 600 MB and contains about 90% update statements, whereas the Real-2 workload contains decision support queries against a 500 MB database. All the synthetic databases conform to the TPC-H schema and were generated using a synthetic data generation program. The size of each synthetic database was 1 GB. The synthetic workloads were generated using a query generation program, which has the ability to vary a number of parameters including the number of joins, number of group-by columns, number of order-by columns, number of selection conditions in a query, and the percentage of update statements in the workload. [0103]
  • Evaluation Metrics: For the index selection application, the following metrics were used to evaluate the workload compression solution: (a) the percentage of queries pruned by workload compression; (b) the percentage reduction in total tuning time, i.e., the sum of the running time of the index selection tool on the compressed workload and the time spent in compressing the workload, as compared to running the tool on the original workload; and (c) the percentage loss in quality of the solution produced by the index selection application. The percentage change in the optimizer-estimated cost of the original workload was used as the metric of quality. This was obtained by running the index selection tool on both the original workload and the compressed workload, implementing the recommendations, and calculating the optimizer estimated running time of the original workload in both cases. In the testing, the constraint (Δ) on loss in quality was chosen to be 10% of the cost of the original workload W on the current database. For the AQP application, the testing measured the loss in quality due to workload compression as the difference between the average relative error of queries in W when the entire workload is used in the preprocessing phase and the average relative error of queries in W when the compressed workload is used in the preprocessing phase. Detailed results of the testing are presented in a printed publication entitled “Compressing SQL Workloads” by Chaudhuri et al., published Jun. 3, 2002 in Proceedings of the ACM SIGMOD 2002, which is incorporated herein by reference. [0104]
  • While the present invention has been described with a degree of particularity, it is the intent that the invention include all modifications and alterations from the disclosed exemplary embodiment falling within the spirit or scope of the appended claims. [0105]

Claims (45)

1. For use in a computer database system, a method for compressing a workload used by an application and including a plurality of statements comprising:
providing a distance function for pairs of statements within a workload which returns an estimate of the loss in quality for an application if a first statement is discarded from the workload but a second statement of the pair of statements is retained in a compressed workload; and
finding a compressed workload W′ by evaluating the distance function for pairs of statements to produce a compressed workload having fewer statements than an initial workload W.
2. The method of claim 1 additionally comprising establishing a limit Δ on a loss in quality between use of the application with the compressed workload W′ when compared with use of the application with the initial workload W.
3. The method of claim 2 wherein choosing a statement to remove from the workload W is based on an all-pairs search using the distance function and wherein removal of additional statements from the workload is ended when such removal results in a loss of quality that exceeds the limit Δ.
4. The method of claim 3 wherein a distance calculation for candidate statements is determined and wherein statements are removed from the workload for statements having a maximum contribution to distance until a total of such distance contribution for statements removed from the workload equals or exceeds the limit Δ.
5. The method of claim 3 wherein a distance calculation for candidate statements is determined and wherein statements are removed from the workload for statements having a minimum contribution to distance until a total of such distance contribution for statements removed from the workload equals or exceeds the limit Δ.
6. The method of claim 1 additionally comprising partitioning the workload into groups of statements.
7. The method of claim 6 wherein the distance function assigns a large distance between statements that are categorized into different groups.
8. The method of claim 1 wherein the distance function is used to assign a number of statements to different clusters and then compress the set of statements within a cluster.
9. The method of claim 8 wherein the set of statements within a cluster is compressed into a representative statement.
10. The method of claim 9 additionally comprising establishing a limit Δ on a loss in quality between use of the application with the compressed workload W′ when compared with use of the application with the initial workload W, wherein if the loss of quality when using representative statements from the clustering does not exceed the limit, the statements are again clustered with a lesser number of clusters.
11. The method of claim 1 wherein the compressed workload is a subset of the initial workload.
12. The method of claim 11 comprising finding a smallest workload W′⊆W, such that Σqi∈W−W′ minqj∈W′{wi·DistanceA(qi, qj)}<Δ.
13. The method of claim 1 where the statements are queries and updates and the application is index selection for processing the queries and updates and further wherein the distance function is based on index choices for a given pair of queries and updates.
14. The method of claim 13 wherein the distance function categorizes queries and updates by the selection, projection, order by, and group by properties of a query and compares these four properties between pairs of queries.
15. The method of claim 14 wherein the distance function determines a distance based on a maximum distance measure for the four properties.
16. The method of claim 1 wherein the statements in the workload are aggregation queries and the application is to find a sample of the database for providing an approximate answer to aggregation queries in the workload such that the average relative error in answers to the workload is minimized.
17. The method of claim 16 further wherein the distance function provides an indication of an increase in error of an approximate answer to a specified aggregation query if said specified aggregation query is removed from the workload and a second aggregation query of said pair of queries is kept in the compressed workload.
18. The method of claim 1 wherein statements in the workload are given different weights of importance and wherein weights of statements that are removed from the workload W are combined with weights of statements that remain in the compressed workload W′.
19. The method of claim 2 wherein statements are randomly sampled from the workload W and the process evaluates the distance function for statements that are randomly sampled from the workload W and removes sampled statements from the workload until the limit on loss of quality is reached.
20. The method of claim 1 wherein statements in the workload are partitioned into groups and a distance between statements is determined only if the two statements are classified in the same group.
21. A computer system for evaluating a workload having a plurality of statements based on an application that uses the workload comprising:
a database management system including a log that contains an initial workload W containing a plurality of statements;
an evaluation component that determines a distance function for pairs of statements within the workload which returns an estimate of the loss in quality for an application if a first statement is discarded from the workload but a second statement of the pair of statements is retained in a compressed workload; and
a search component that finds a compressed workload W′ by evaluating the distance function for pairs of statements to produce a compressed workload having fewer statements than the initial workload W.
22. The system of claim 21 wherein the search component establishes a limit Δ on a loss in quality between use of the application with the compressed workload W′ when compared with use of the application with the initial workload W.
23. The system of claim 22 wherein statements in the workload have weights and additionally comprising an adjust weight component that adjusts weights of those statements that are retained in the compressed workload W′ based on the weights of statements that are discarded from the initial workload W.
24. The system of claim 21 wherein the search component uses the evaluation component to assign a number of statements to different clusters based on the distance function and then compresses statements within a cluster.
25. The system of claim 22 wherein the search component chooses a statement to remove from the workload W based on an all-pairs search using the distance function and wherein removal of additional statements from the workload is ended when such removal results in a loss of quality that exceeds the limit Δ.
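The all-pairs search of claims 22 and 25 can be sketched as a greedy loop: repeatedly drop the statement whose weighted distance to its nearest surviving neighbor is smallest, stopping when the next removal would push the loss past Δ. A minimal sketch with illustrative names:

```python
def all_pairs_compress(W, weight, distance, delta):
    kept, loss = list(W), 0.0
    while len(kept) > 1:
        def drop_cost(q):
            # cost of discarding q: weighted distance to its nearest other survivor
            return weight[q] * min(distance(q, r) for r in kept if r is not q)
        victim = min(kept, key=drop_cost)
        cost = drop_cost(victim)
        if loss + cost >= delta:
            break                    # further removal would exceed the limit
        kept.remove(victim)
        loss += cost
    return kept
```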
26. For use in a computer database system, a computer readable medium having instructions stored on said medium for:
providing a distance function for pairs of statements within a workload which returns an estimate of the loss in quality for an application if a first statement is discarded from the workload but a second statement of the pair of statements is retained in a compressed workload; and
finding a compressed workload W′ by evaluating the distance function for pairs of statements to produce a compressed workload having fewer statements than an initial workload W.
27. The computer readable medium of claim 26 additionally comprising instructions for establishing a limit Δ on a loss in quality between use of the application with the compressed workload W′ when compared with use of the application with the initial workload W.
28. The computer readable medium of claim 27 wherein the instructions choose the statement to remove from the workload W based on an all-pairs search using the distance function and wherein removal of additional statements from the workload is ended when such removal results in a loss of quality that exceeds the limit Δ.
29. The computer readable medium of claim 28 wherein the instructions determine a distance calculation for candidate statements and remove the candidate statements having maximum contribution to distance until the total of such distance contributions for statements removed from the workload equals or exceeds the limit Δ.
30. The computer readable medium of claim 28 wherein the instructions determine a distance calculation for candidate statements and remove the candidate statements having minimum contribution to distance until the total of such distance contributions for statements removed from the workload equals or exceeds the limit Δ.
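Claims 29 and 30 differ only in whether candidates are tried in order of maximum or minimum contribution to the distance total, so a single sketch with a flag covers both readings (all names illustrative, and the stopping rule "until the total equals or exceeds Δ" is taken literally):

```python
def remove_by_contribution(W, weight, distance, delta, largest_first=True):
    if len(W) < 2:
        return list(W)
    def contribution(q):
        others = [r for r in W if r is not q]
        return weight[q] * min(distance(q, r) for r in others)
    ranked = sorted(W, key=contribution, reverse=largest_first)  # claim 29 vs. claim 30
    removed, total = [], 0.0
    for q in ranked:
        if total >= delta:
            break                    # total contribution has reached the limit
        removed.append(q)
        total += contribution(q)
    return [q for q in W if q not in removed]
```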
31. The computer readable medium of claim 26 additionally comprising instructions that partition the workload into groups of statements.
32. The computer readable medium of claim 31 wherein the instructions that implement the distance function assign a large distance between statements that are categorized into different groups.
33. The computer readable medium of claim 26 wherein the instructions use the distance function to assign a number of statements to different clusters and then compress the set of statements within a cluster.
34. The computer readable medium of claim 33 wherein the instructions cause the set of statements within a cluster to be compressed into a representative statement for that cluster.
35. The computer readable medium of claim 34 wherein the instructions additionally establish a limit Δ on a loss in quality between use of the application with the compressed workload W′ when compared with use of the application with the initial workload W and, if the loss of quality when using representative statements from the clustering does not exceed the limit, again cluster the statements with a lesser number of clusters.
36. The computer readable medium of claim 26 wherein the instructions choose statements to include in the compressed workload as a subset of the initial workload.
37. The computer readable medium of claim 36 wherein the instructions find a smallest workload W′ ⊆ W such that Σ_{qᵢ ∈ W−W′} min_{qⱼ ∈ W′} {wᵢ·Distance_A(qᵢ, qⱼ)} < Δ.
38. The computer readable medium of claim 26 where the statements are queries and updates, the application is index selection for database attributes for processing the queries and updates, and wherein the distance function is based on index choices for a given pair of queries and updates.
39. The computer readable medium of claim 38 wherein the instructions that implement the distance function categorize queries and updates by the selection, projection, order by, and group by properties of a query and compare these four properties between pairs of queries.
40. The computer readable medium of claim 39 wherein the instructions that implement the distance function determine a distance based on a maximum distance measure for the four properties.
41. The computer readable medium of claim 26 wherein the statements in the workload are aggregation queries and the application finds a sample of the database for providing an approximate answer to aggregation queries in the workload such that the average relative error in answers to the workload is minimized.
42. The computer readable medium of claim 41 further wherein the instructions that implement the distance function provide an indication of an increase in error of the approximate answer to a specified aggregation query if the specified aggregation query is removed from the workload and a second aggregation query of said pair of queries is kept in the compressed workload.
43. The computer readable medium of claim 26 wherein statements in the workload are given different weights of importance and wherein the instructions combine weights of statements that are removed from the workload W with weights of statements that remain in the compressed workload W′.
44. The computer readable medium of claim 27 wherein the instructions evaluate the distance function for statements that are randomly sampled from the workload W and remove sampled statements from the workload until the limit on loss of quality is reached.
45. The computer readable medium of claim 26 wherein the instructions partition statements in the workload into groups and determine a distance between statements only if the two statements are classified in the same group.
US10/180,667 2002-06-26 2002-06-26 Compressing database workloads Expired - Lifetime US6912547B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/180,667 US6912547B2 (en) 2002-06-26 2002-06-26 Compressing database workloads
US11/008,335 US7293036B2 (en) 2002-06-26 2004-12-08 Compressing database workloads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/180,667 US6912547B2 (en) 2002-06-26 2002-06-26 Compressing database workloads

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/008,335 Continuation US7293036B2 (en) 2002-06-26 2004-12-08 Compressing database workloads

Publications (2)

Publication Number Publication Date
US20040002954A1 true US20040002954A1 (en) 2004-01-01
US6912547B2 US6912547B2 (en) 2005-06-28

Family

ID=29778972

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/180,667 Expired - Lifetime US6912547B2 (en) 2002-06-26 2002-06-26 Compressing database workloads
US11/008,335 Expired - Fee Related US7293036B2 (en) 2002-06-26 2004-12-08 Compressing database workloads

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/008,335 Expired - Fee Related US7293036B2 (en) 2002-06-26 2004-12-08 Compressing database workloads

Country Status (1)

Country Link
US (2) US6912547B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7805411B2 (en) 2003-09-06 2010-09-28 Oracle International Corporation Auto-tuning SQL statements
US20080059492A1 (en) * 2006-08-31 2008-03-06 Tarin Stephen A Systems, methods, and storage structures for cached databases
US7933932B2 (en) * 2006-11-14 2011-04-26 Microsoft Corporation Statistics based database population
FR2921783A1 (en) * 2007-09-28 2009-04-03 France Telecom METHOD FOR REDUCING SERVER LOAD, TERMINAL, DEVICE, AND CORRESPONDING COMPUTER PROGRAM PRODUCT
US20090112533A1 (en) * 2007-10-31 2009-04-30 Caterpillar Inc. Method for simplifying a mathematical model by clustering data
US8458167B2 (en) * 2009-04-01 2013-06-04 International Business Machines Corporation Client-based index advisor
US8161017B2 (en) * 2009-04-03 2012-04-17 International Business Machines Corporation Enhanced identification of relevant database indices
JP2011129978A (en) * 2009-12-15 2011-06-30 Renesas Electronics Corp Digital-to-analog converter
CN112632105B (en) * 2020-01-17 2021-09-10 华东师范大学 System and method for verifying correctness of large-scale transaction load generation and database isolation level

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761438A (en) * 1993-08-31 1998-06-02 Canon Kabushiki Kaisha Apparatus for measuring the amount of traffic of a network at a predetermined timing and compressing data in the packet without changing the size of the packet
US5974457A (en) * 1993-12-23 1999-10-26 International Business Machines Corporation Intelligent realtime monitoring of data traffic
US20020161566A1 (en) * 2001-04-30 2002-10-31 Mustafa Uysal Method and apparatus for morphological modeling of complex systems to predict performance

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5822749A (en) * 1994-07-12 1998-10-13 Sybase, Inc. Database system with methods for improving query performance with cache optimization strategies
US5960423A (en) * 1997-08-15 1999-09-28 Microsoft Corporation Database system index selection using candidate index selection for a workload
US6223171B1 (en) * 1998-08-25 2001-04-24 Microsoft Corporation What-if index analysis utility for database systems
US6529901B1 (en) * 1999-06-29 2003-03-04 Microsoft Corporation Automating statistics management for query optimizers
US6363371B1 (en) * 1999-06-29 2002-03-26 Microsoft Corporation Identifying essential statistics for query optimization for databases
US6598038B1 (en) * 1999-09-17 2003-07-22 Oracle International Corporation Workload reduction mechanism for index tuning
US8090730B2 (en) * 2001-12-04 2012-01-03 University Of Southern California Methods for fast progressive evaluation of polynomial range-sum queries on real-time datacubes

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645425B1 (en) * 2004-02-25 2014-02-04 Teradata Us, Inc. Guiding the development of workload group definition classifications
US20060099555A1 (en) * 2004-10-15 2006-05-11 Andris Talbergs Goal tracking device and method of tracking the completion of goals
US7587396B2 (en) * 2004-11-24 2009-09-08 Oracle International Corporation Encoding data to be sorted
US20090282040A1 (en) * 2004-11-24 2009-11-12 Oracle International Corporation Encoding data to be sorted
US20090282069A1 (en) * 2004-11-24 2009-11-12 Oracle International Corporation Encoding data to be sorted
US8631020B2 (en) 2004-11-24 2014-01-14 Oracle International Corporation Encoding data to be sorted
US20060112097A1 (en) * 2004-11-24 2006-05-25 Callaghan Mark D Encoding data to be sorted
US8825674B2 (en) * 2004-11-24 2014-09-02 Oracle International Corporation Encoding data to be sorted
US7680791B2 (en) * 2005-01-18 2010-03-16 Oracle International Corporation Method for sorting data using common prefix bytes
US20060161546A1 (en) * 2005-01-18 2006-07-20 Callaghan Mark D Method for sorting data
US20070083500A1 (en) * 2005-10-07 2007-04-12 Bez Systems, Inc. Method of incorporating DBMS wizards with analytical models for DBMS servers performance optimization
US8200659B2 (en) * 2005-10-07 2012-06-12 Bez Systems, Inc. Method of incorporating DBMS wizards with analytical models for DBMS servers performance optimization
US20080195447A1 (en) * 2007-02-09 2008-08-14 Eric Bouillet System and method for capacity sizing for computer systems
US20090006071A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Methods for Definition and Scalable Execution of Performance Models for Distributed Applications
US9607023B1 (en) 2012-07-20 2017-03-28 Ool Llc Insight and algorithmic clustering for automated synthesis
US9336302B1 (en) 2012-07-20 2016-05-10 Zuci Realty Llc Insight and algorithmic clustering for automated synthesis
US10318503B1 (en) 2012-07-20 2019-06-11 Ool Llc Insight and algorithmic clustering for automated synthesis
US11216428B1 (en) 2012-07-20 2022-01-04 Ool Llc Insight and algorithmic clustering for automated synthesis
US20150235160A1 (en) * 2014-02-20 2015-08-20 Xerox Corporation Generating gold questions for crowdsourcing
US10452630B2 (en) 2016-10-17 2019-10-22 Salesforce.Com, Inc. Techniques and architectures for reverse mapping of database queries
US10762085B2 (en) * 2016-10-17 2020-09-01 Salesforce.Com, Inc. Automated database query tuning
US11205103B2 (en) 2016-12-09 2021-12-21 The Research Foundation for the State University Semisupervised autoencoder for sentiment analysis
US11226963B2 (en) * 2018-10-11 2022-01-18 Varada Ltd. Method and system for executing queries on indexed views
US20220164345A1 (en) * 2018-10-11 2022-05-26 Varada Ltd. Managed query execution platform, and methods thereof
US11347740B2 (en) * 2018-10-11 2022-05-31 Varada Ltd. Managed query execution platform, and methods thereof
US20220350807A1 (en) * 2020-05-21 2022-11-03 Zhejiang Bangsun Technology Co., Ltd. Structure and method of aggregation index for improving aggregation query efficiency
US11928113B2 (en) * 2020-05-21 2024-03-12 Zhejiang Bangsun Technology Co., Ltd. Structure and method of aggregation index for improving aggregation query efficiency

Also Published As

Publication number Publication date
US20050102305A1 (en) 2005-05-12
US7293036B2 (en) 2007-11-06
US6912547B2 (en) 2005-06-28

Similar Documents

Publication Publication Date Title
US6912547B2 (en) Compressing database workloads
Chaudhuri et al. Compressing SQL workloads
US7472107B2 (en) Integrating horizontal partitioning into physical database design
US20040249810A1 (en) Small group sampling of data for use in query processing
Siddiqui et al. Cost models for big data query processing: Learning, retrofitting, and our findings
US6480836B1 (en) System and method for determining and generating candidate views for a database
US6850925B2 (en) Query optimization by sub-plan memoization
US6529901B1 (en) Automating statistics management for query optimizers
Morfonios et al. ROLAP implementations of the data cube
US7366716B2 (en) Integrating vertical partitioning into physical database design
US7958114B2 (en) Detecting estimation errors in distinct page counts
US20050223019A1 (en) Block-level sampling in statistics estimation
US7778996B2 (en) Sampling statistics in a database system
US20060212429A1 (en) Answering top-K selection queries in a relational engine
US8745037B2 (en) Exploiting partitioning, grouping, and sorting in query optimization
US20090094191A1 (en) Exploiting execution feedback for optimizing choice of access methods
Aboulnaga et al. Automated statistics collection in DB2 UDB
US7120624B2 (en) Optimization based method for estimating the results of aggregate queries
US20040002956A1 (en) Approximate query processing using multiple samples
US6714938B1 (en) Query planning using a maxdiff histogram
Chaudhuri et al. A pay-as-you-go framework for query execution feedback
Gibbons et al. Aqua project white paper
Paradies et al. How to juggle columns: an entropy-based approach for table compression
Spiegel et al. TuG synopses for approximate query answering
Wang et al. Automatic Storage Structure Selection for hybrid Workload

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHAUDHURI, SURAJIT;GUPTA, ASHISH KUMAR;NARASAYYA, VIVEK;AND OTHERS;REEL/FRAME:013183/0338;SIGNING DATES FROM 20020724 TO 20020727

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0477

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 12