There are many different ways to influence performance in MonetDB. Some of them are generic SQL features; others are specific to MonetDB. Below is an overview of features and tips that can be used to improve performance.
Define a primary key for each table and foreign keys in tables referencing primary keys. These primary key definitions implicitly create an internal hash index and speed up joins between two tables when they are joined on their primary and foreign key columns.
Specify NOT NULL for columns which may not contain NULL values.
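A minimal sketch of these two tips, using hypothetical customers and orders tables:

    CREATE TABLE customers (
        id   INT          NOT NULL PRIMARY KEY,  -- PRIMARY KEY implicitly creates an internal hash index
        name VARCHAR(100) NOT NULL               -- NOT NULL declared for a column that may not be NULL
    );
    CREATE TABLE orders (
        id          INT NOT NULL PRIMARY KEY,
        customer_id INT NOT NULL REFERENCES customers (id)  -- foreign key speeds up joins on this column
    );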
When a table is very large and your queries often need only part of it, consider partitioning the table into multiple partition tables (plus one merge table to logically combine the partition tables). See: Data Partitioning, Updatable Merge Tables, CREATE MERGE TABLE.
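A sketch of this setup, assuming hypothetical yearly partition tables combined by one merge table:

    CREATE MERGE TABLE sales_all  (id INT, sale_date DATE, amount DECIMAL(10,2));
    CREATE TABLE       sales_2022 (id INT, sale_date DATE, amount DECIMAL(10,2));
    CREATE TABLE       sales_2023 (id INT, sale_date DATE, amount DECIMAL(10,2));
    ALTER TABLE sales_all ADD TABLE sales_2022;
    ALTER TABLE sales_all ADD TABLE sales_2023;
    -- queries on the merge table transparently combine the partition tables
    SELECT COUNT(*) FROM sales_all;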
If your queries perform some complex or expensive computation multiple times (or in multiple queries), it may be faster to first store the computation result in an extra column of a table and then query that column, eliminating the computation overhead from the queries. Use ALTER TABLE statements to add the extra column and UPDATE statements to fill it with the computed values. You may also add triggers to keep the extra column values up to date when rows are added or updated.
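For instance (table, column, and expression names are hypothetical):

    ALTER TABLE measurements ADD COLUMN derived_value DOUBLE;
    UPDATE measurements SET derived_value = SQRT(raw_value) * LOG(raw_value + 1);  -- computed once
    -- later queries read the precomputed column instead of repeating the computation
    SELECT COUNT(*) FROM measurements WHERE derived_value > 10;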
Run the ANALYZE statement to update a table's statistics after you have changed the table's data; see also Table Statistics.
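For example (schema, table, and column names are hypothetical):

    ANALYZE sys.measurements;                  -- refresh statistics for the whole table
    ANALYZE sys.measurements (derived_value);  -- or only for specific columns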
To view the relational execution plan of a specific query, run it with the PLAN statement modifier.
To view the MAL execution program of a specific query, run it with the EXPLAIN statement modifier.
To view the performance trace of the MAL statements of a specific query, run it with the TRACE statement modifier.
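These modifiers are simply prefixed to the query, as in this sketch with a hypothetical table:

    PLAN    SELECT COUNT(*) FROM measurements WHERE derived_value > 10;  -- relational execution plan
    EXPLAIN SELECT COUNT(*) FROM measurements WHERE derived_value > 10;  -- MAL execution program
    TRACE   SELECT COUNT(*) FROM measurements WHERE derived_value > 10;  -- performance trace of the MAL statements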
You may change the Optimizer pipeline to experiment with different sequences of optimizer steps.
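For example, assuming the predefined pipelines sequential_pipe and default_pipe are available in your installation:

    SET sys.optimizer = 'sequential_pipe';  -- run subsequent queries with an alternative pipeline
    SELECT COUNT(*) FROM measurements WHERE derived_value > 10;
    SET sys.optimizer = 'default_pipe';     -- restore the default pipeline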
For loading large data sets, use COPY INTO ... FROM file_name instead of INSERT INTO ... VALUES commands. See also table updates and Loading Bulk Data.
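A minimal sketch, assuming a CSV file on the server at a hypothetical path:

    COPY INTO measurements FROM '/path/to/measurements.csv'
    USING DELIMITERS ',', E'\n', '"'
    NULL AS '';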
If you need to load large data sets via an API such as ODBC, JDBC, or DBI, use prepared insert statements instead of regular insert statements; see the PREPARE and EXECUTE statements.
It is important to execute SET sys.optimizer = 'minimal_pipe'; before loading the data via the (prepared) insert statements.
Also disable autocommit mode and commit manually after many (> 10000) rows, or after all rows, have been inserted into the table.
Also disable all existing FOREIGN KEY table constraints by temporarily dropping them before the data load and adding them back again after the data load via the proper ALTER TABLE DROP/ADD CONSTRAINT statements; a combined sketch of this workflow is shown below.
See: Loading Bulk Data.
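A combined sketch of this bulk-load workflow at the SQL level (table, constraint name, and parameter values are hypothetical; via ODBC, JDBC, or DBI the same statements are issued through the API's prepared-statement interface, and the prepared-statement id returned by PREPARE may differ from the 0 used here):

    SET sys.optimizer = 'minimal_pipe';                     -- cheap optimizer pipeline for simple inserts
    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;  -- temporarily drop the FOREIGN KEY constraint
    START TRANSACTION;                                      -- manual commit instead of autocommit
    PREPARE INSERT INTO orders (id, customer_id) VALUES (?, ?);
    EXECUTE 0 (1, 42);                                      -- repeat with new parameter values for every row
    EXECUTE 0 (2, 43);
    COMMIT;
    ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);  -- restore the constraint
    SET sys.optimizer = 'default_pipe';                     -- restore the default optimizer pipeline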