Generic Database Cost Models for Hierarchical Memory Systems


Author: Stefan Manegold

Overview

A query is given to a query optimiser, which must choose the best algorithms and corresponding implementations to execute it. These choices are based on the physical costs of the implementations. The paper outlines a method for calculating the cost of a query: it gives a cost analysis of the various relational operators and shows how these cost functions can be combined into the cost of a whole query.

Memory and its Cost of Usage

Relative to current CPU speeds, a significant amount of time elapses between requesting and receiving data from the various caches, and the wait is especially long when the data must come from main memory. Memory access therefore plays a significant part in the cost of an operation, all the more so for main-memory databases.

Memory is organised as a hierarchy, starting from the L1 cache on the CPU and ending at main memory.

The higher a level sits in the memory hierarchy, the cheaper it is to request data from it. The main cost in requesting data arises when a cache miss occurs, that is, when a given logical address has no data associated with it in the cache. The data must then be fetched from a lower level, ultimately main memory, which takes considerably longer than if it had been present in the cache.
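To make the effect concrete, here is a minimal Python sketch of a two-level hierarchy. All the numbers are invented for illustration, not taken from the paper: the average cost of an access is the cache latency plus the miss rate times the extra cost of going to memory.

 # Illustrative two-level example; all numbers are assumptions, not measurements.
 l_cache = 1        # cycles to read data that is in the cache (assumed)
 l_mem = 100        # extra cycles when a miss forces a memory access (assumed)
 miss_rate = 0.05   # fraction of accesses that miss the cache (assumed)

 avg_cost = l_cache + miss_rate * l_mem
 print(avg_cost)    # 6.0 cycles per access on average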

How the Cache Works

line
Data in a cache is stored in lines: data at a given address is transferred from main memory to the cache together with the data immediately following it, the whole transfer being the size of one line. The hope is that subsequent requests will fall within this line (see the sketch below).
bandwidth
Amount of data transferred per unit of time
latency
Time to fetch one cache line

Both bandwidth and latency differ depending on whether accesses are sequential or random.

Translation Lookaside Buffer (TLB)
Buffer that caches translations from logical (virtual) addresses to physical memory addresses
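As a rough illustration of why lines matter, the following Python sketch counts how many cache lines a dense sequential scan pulls into the cache. The 64-byte line size is a typical but assumed value; the paper treats it as a parameter of each level.

 import math

 LINE_SIZE = 64  # bytes per cache line (an assumption)

 def lines_touched(n_items, item_width):
     # A dense sequential scan loads each touched cache line exactly once.
     total_bytes = n_items * item_width
     return math.ceil(total_bytes / LINE_SIZE)

 print(lines_touched(1_000_000, 8))  # scanning 1M 8-byte values touches 125000 lines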

Calculation of Cost

The model predicts the number and kinds of cache misses at all levels of the memory hierarchy, since cache misses are the main factor in the time it takes to return data.
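A hedged sketch of this idea in Python: if M_i is the predicted number of misses at level i and l_i is that level's miss latency, the memory cost is roughly the sum of M_i * l_i over all levels. The concrete miss counts and latencies below are invented.

 def memory_cost(misses_per_level, latency_per_level):
     # T_mem ~ sum_i M_i * l_i over all levels of the hierarchy.
     return sum(m * l for m, l in zip(misses_per_level, latency_per_level))

 # Hypothetical miss counts for L1, L2 and TLB, with assumed latencies in cycles.
 print(memory_cost([10_000, 2_000, 500], [10, 100, 50]))  # 325000 cycles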

Model

Regions
D is a set of "regions". A region R consists of R.n, the number of data items it contains, and R.w, the width in bytes of each item. A region is analogous to a table, with n the number of rows. Its measure is ||R|| = R.n · R.w, the product of the two (see the sketch after this list).
Access Patterns
Each relational operator, such as select and join, has its own access pattern. Select, for instance, can be seen as a single sequential traversal of the input data combined with a single sequential traversal of the output data. The paper provides a list of traversal patterns, each taking as parameters at least a region R, along with the number of bytes of each data item that is actually used; this number of bytes corresponds to a particular field in a tuple.
Example Access Patterns
  1. single sequential traversal: s_tra(R[,u]), traverse each item of R in sequence
  2. single random traversal: r_tra(R[,u]), traverse each item of R exactly once, in random order
  3. interleaved multi-cursor access: the region is divided into partitions, each traversed sequentially by its own cursor, with accesses interleaved across the cursors
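A minimal Python sketch of regions and the two single-traversal patterns. Only the names R.n, R.w, ||R||, s_tra and r_tra come from the summary above; the tuple encoding of patterns is an invention for illustration.

 from dataclasses import dataclass

 @dataclass
 class Region:
     n: int  # number of data items (rows)
     w: int  # width of one item in bytes

     def size(self):
         # ||R|| = R.n * R.w, the region's total size in bytes
         return self.n * self.w

 def s_tra(R, u=None):
     # Single sequential traversal, using u bytes of each item (default: all of it).
     return ("s_tra", R, u if u is not None else R.w)

 def r_tra(R, u=None):
     # Single random traversal: each item visited exactly once, in random order.
     return ("r_tra", R, u if u is not None else R.w)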

Combining Access Patterns

Access patterns are combined to represent single operators.

Access patterns are executed either sequentially, combined with (+), or concurrently, combined with (.).

Example access pattern of select: s_tra(U) (.) s_tra(W), where U is the input region and W is the output region.
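Continuing the Region/s_tra sketch above, the two combinators might be encoded as follows. Again, the encoding is hypothetical; only (+), (.) and the select example come from the paper.

 # Builds on the Region / s_tra sketch above.
 def seq(*patterns):
     # p1 (+) p2: patterns executed one after the other.
     return ("seq", patterns)

 def conc(*patterns):
     # p1 (.) p2: patterns executed concurrently.
     return ("conc", patterns)

 U = Region(n=1_000_000, w=8)  # input region (hypothetical sizes)
 W = Region(n=100_000, w=8)    # output region
 select_pattern = conc(s_tra(U), s_tra(W))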

Costs of the Access Patterns

The paper calculates the cost of each pattern, taking into account the cache sizes and the width of the regions in relation to the line sizes.
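As a hedged sketch of what such a calculation looks like (a simplification, not the paper's exact formulas): a sequential traversal misses roughly once per cache line touched, while a random traversal of a region larger than the cache misses on roughly every item.

 import math

 def s_tra_misses(n, w, line_size=64):
     # Sequential traversal: about one miss per cache line touched, ||R|| / Z.
     return math.ceil((n * w) / line_size)

 def r_tra_misses(n, w, cache_size, line_size=64):
     if n * w <= cache_size:
         # Region fits in the cache: at most one miss per line, as when sequential.
         return math.ceil((n * w) / line_size)
     # Region exceeds the cache: nearly every random access misses.
     return n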

Combination of the cost functions

Cost functions are combined either sequentially or concurrently. In the sequential case, the second function may reuse data that the first function has already brought into the cache; the model accounts for this.

In concurrent execution, two functions compete for the same cache. Each interferes with the other's cached data, resulting in a higher number of cache misses.

The concurrent execution of a pattern is modelled by its "footprint" F, the number of cache lines it potentially revisits. A single sequential traversal clearly does not revisit any cache lines.
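A sketch of the interference idea in Python: each concurrent pattern effectively sees only a share of the cache. Dividing the cache proportionally to the footprints is an illustrative assumption, not necessarily the paper's exact rule.

 def effective_cache_sizes(footprints, cache_size):
     # Split the cache among concurrent patterns in proportion to their footprints.
     total = sum(footprints)
     return [cache_size * f / total for f in footprints]

 # Two concurrent patterns with footprints of 300 and 100 lines in a 32 KB cache:
 print(effective_cache_sizes([300, 100], 32 * 1024))  # [24576.0, 8192.0]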

Questions

Is latency defined only between one level and the next?

What is partitioning in this context?

Where do different cache levels come into the calculations?