SQL Reference Manual
The de facto language for database applications is SQL. It evolved through several phases of standardization to the version currently known as SQL:2008. The SQL standard provides an ideal language framework from the viewpoint of a standardization committee. It is, however, hardly met by any of the existing (commercial) implementations. This is largely due to software legacy, backward-compatibility requirements from their client bases, and the introduction of new features. See for instance the Wikipedia on-line article on SQL standards and an overview of SQL features of mainstream systems.
In 2002 the first version of the SQL front end for MonetDB emerged. This late start made it possible to begin immediately from the SQL'99 definition. As soon as the SQL'03 specifications became available, they were taken as the frame of reference. The SQL development strategy is driven by the immediate needs of the user base, so less-frequently used features end up low on the development stack. Features beyond the standard are marked as non-standard SQL in their description.
The architecture is based on a compiler, which translates SQL statements into MAL (MonetDB Assembly Language) statements. In this process, common optimization heuristics specific to the relational algebra are applied. There are bindings for SQL with e.g. JDBC, PHP, Ruby on Rails, Python and C, to integrate seamlessly into existing development environments.
The remainder of this manual presents a synopsis of the language features. It provides a quick introduction to the scope of the current implementation. Proficiency in elementary use of SQL is assumed. If you are new to this world, pick up any of the introductory books and study it carefully, e.g. J. Melton and A.R. Simon, SQL:1999 Understanding Relational Language Components, ISBN 1558604561.
The language features recognized are presented in simplified BNF notation. All reserved words are depicted in capitals. Language (non)terminals are depicted in lower case italics. Optional constructs are enclosed by square brackets ('[',']') with alternative choices separated by a bar ('|'); at most one can be chosen. A set of options is enclosed by curly braces '{','}' and exactly one must be chosen. Repetition of language structures is depicted by list structures, e.g. A ',' ... denotes a sequence of at least one A separated by commas.
Identifiers are optionally qualified with a semantic tag, e.g. authid means a name for an authorised user.
SQL Syntax Overview
Data Definition Language (DDL)
Data Manipulation Language (DML)
SELECT | SELECT INTO | WITH SELECT
INSERT VALUES | INSERT SELECT
UPDATE
DELETE
TRUNCATE
PREPARE | EXECUTE
CALL
SET | SET TIME ZONE | SET CURRENT_TIMEZONE
Bulk Data Import Commands
COPY INTO FROM
COPY INTO FROM STDIN
COPY BINARY INTO FROM
COPY LOADER INTO FROM
Bulk Data Export Commands
COPY INTO
COPY INTO STDOUT
Authorisation, Data Access Control Language
Transaction Control Commands
START TRANSACTION | COMMIT | ROLLBACK
SET TRANSACTION | SET LOCAL TRANSACTION
SAVEPOINT | RELEASE SAVEPOINT | ROLLBACK TO SAVEPOINT
Commands to Analyse / Optimize SQL Execution
EXPLAIN
PLAN
TRACE
DEBUG
SET OPTIMIZER
ANALYZE
Pseudo Columns
CURRENT_SCHEMA
CURRENT_USER | USER | SESSION_USER
CURRENT_ROLE
CURRENT_DATE | NOW
CURRENT_TIME
CURRENT_TIMESTAMP
CURRENT_TIMEZONE
LOCALTIME
LOCALTIMESTAMP
NEXT VALUE FOR
Details
CREATE SCHEMA
CREATE SCHEMA [ IF NOT EXISTS ] <name> [ AUTHORISATION <auth name> ]
Define a new schema
Example:
CREATE SCHEMA tst;
SET SCHEMA tst;
SELECT CURRENT_SCHEMA;
Note: The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/SQLreference/Schema
See also: SET SCHEMA CURRENT_SCHEMA COMMENT ON SCHEMA DROP SCHEMA
Associated system table: sys.schemas
CREATE SEQUENCE
CREATE SEQUENCE [ <schema name> . ] <name> [ AS datatype ] [ START WITH bigint ] [ INCREMENT BY bigint ] [ MINVALUE bigint | NO MINVALUE ] [ MAXVALUE bigint | NO MAXVALUE ] [ CACHE bigint ] [ [ NO ] CYCLE ]
Define a new integer number sequence generator
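Example (an illustrative sketch; the schema and sequence names are made up):
CREATE SEQUENCE mysch.article_id_seq AS INTEGER START WITH 100 INCREMENT BY 1 NO CYCLE;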
For details see: Documentation/Manuals/SQLreference/SerialTypes
See also: ALTER SEQUENCE COMMENT ON SEQUENCE NEXT VALUE FOR DROP SEQUENCE
Associated system table: sys.sequences
CREATE TABLE
CREATE TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> ( <column definition(s) and optional table-constraints list> )
Define a new table including data integrity constraints
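Example (a minimal sketch; schema, table and column names are hypothetical):
CREATE TABLE mysch.article (
  id INT NOT NULL PRIMARY KEY,
  name VARCHAR(100) NOT NULL UNIQUE,
  price DECIMAL(8,2) NOT NULL DEFAULT 0.00
);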
Note: WARNING: Column CHECK constraint definitions are accepted but not enforced! They are also not stored in the data dictionary, so they will be lost when using msqldump.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Tables
and: Documentation/SQLreference/TableIdentityColumn
See also: ALTER TABLE ADD COLUMN ALTER TABLE ADD CONSTRAINT COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 0
CREATE TABLE AS
CREATE TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> [ ( <column name(s) list> ) ] AS <SELECT query> [ WITH [NO] DATA ]
Define a new table from the results of a query. By default the table will be populated with the data of the query. Specify WITH NO DATA to only create the table.
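Example (illustrative; assumes the mysch.article table sketched under CREATE TABLE exists):
CREATE TABLE mysch.article_copy AS SELECT id, name FROM mysch.article WITH DATA;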
Note: Default behavior is WITH DATA.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Tables
See also: ALTER TABLE ADD COLUMN ALTER TABLE ADD CONSTRAINT COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 0
CREATE TABLE FROM LOADER
CREATE TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> FROM LOADER <loader function name> ( [ <arguments> ] ) [ WITH [NO] DATA ]
Define a new table from the results of a (Python) loader function. By default the table will be populated with the data of the loader function. Specify WITH NO DATA to only create the table.
Note: This command is MonetDB specific SQL. It was introduced in the Dec2016 release.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
For details see: blog/monetdbpython-loader-functions
See also: CREATE LOADER COPY LOADER INTO FROM ALTER TABLE ADD COLUMN ALTER TABLE ADD CONSTRAINT COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 0
CREATE MERGE TABLE
CREATE MERGE TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> ( <column definition(s) and optional table-constraints list> )
Define a new merge table. A merge table combines the data of multiple other tables (partitions, added via ALTER TABLE union_t1 ADD TABLE part_n), which all must have exactly the same table definition.
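Example (a hedged sketch of a typical partitioning setup; all names are made up):
CREATE MERGE TABLE measurements (ts TIMESTAMP, sensor INT, val DECIMAL(8,2));
CREATE TABLE measurements_2017 (ts TIMESTAMP, sensor INT, val DECIMAL(8,2));
CREATE TABLE measurements_2018 (ts TIMESTAMP, sensor INT, val DECIMAL(8,2));
ALTER TABLE measurements ADD TABLE measurements_2017;
ALTER TABLE measurements ADD TABLE measurements_2018;
SELECT COUNT(*) FROM measurements; -- queries all partitions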
Note: This command is MonetDB specific SQL.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
A merge table is removed using the ordinary DROP TABLE statement. There is no DROP MERGE TABLE statement.
For details see: Documentation/Cookbooks/SQLrecipes/DataPartitioning
and: Documentation/Manuals/SQLreference/Tables
See also: ALTER TABLE ADD TABLE ALTER TABLE DROP TABLE COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 3
CREATE REMOTE TABLE
CREATE REMOTE TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> ( <column definition(s)> ) ON <remote table URL> [ WITH [USER 'username'] [ [ENCRYPTED] PASSWORD 'password'] ]
Define an alias for a remote table. The remote table must be an existing table on a running MonetDB server
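Example (illustrative; host, port, database and credentials are placeholders, and the mapi URL scheme is assumed):
CREATE REMOTE TABLE mysch.orders (id INT, total DECIMAL(10,2))
ON 'mapi:monetdb://db2.example.com:50000/proddb'
WITH USER 'monetdb' PASSWORD 'monetdb';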
Note: This command is MonetDB specific SQL.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
The "WITH USER ... PASSWORD ..." option is supported from release Aug2018 onwards. See this blog post for more information.
A remote table is removed using the ordinary DROP TABLE statement. There is no DROP REMOTE TABLE statement.
For details see: Documentation/Cookbooks/SQLrecipes/DistributedQueryProcessing
and: Documentation/Manuals/SQLreference/Tables
See also: COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 5
CREATE REPLICA TABLE
CREATE REPLICA TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> ( <column definition(s) and optional table-constraints list> )
Define a new replica table
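Example (illustrative; table and column names are made up):
CREATE REPLICA TABLE mysch.dim_country (code CHAR(2) PRIMARY KEY, name VARCHAR(100));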
Note: This command is MonetDB specific SQL.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
A replica table is removed using the ordinary DROP TABLE statement. There is no DROP REPLICA TABLE statement.
For details see: Documentation/Manuals/SQLreference/Tables
and: Documentation/SQLreference/TableIdentityColumn
See also: COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 6
CREATE STREAM TABLE
CREATE STREAM TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> ( <column definition(s) and optional table-constraints list> )
Define a new stream table
Example:
CREATE STREAM TABLE room_event (ts timestamp, room integer, payload decimal(8,2));
Note: This command is MonetDB specific SQL.
The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
A stream table is removed using the ordinary DROP TABLE statement. There is no DROP STREAM TABLE statement.
For details see: Documentation/Extensions/Streams/Catalog
See also: COMMENT ON TABLE COMMENT ON COLUMN DROP TABLE
Associated system table: sys.tables where type = 4
CREATE TEMPORARY TABLE
CREATE [ LOCAL | GLOBAL ] { TEMPORARY | TEMP } TABLE [ IF NOT EXISTS ] [ <schema name> . ] <name> { ( <column definition(s) and optional table-constraints list> ) | [ ( <column name(s)> ) ] AS <SELECT query> [ WITH [NO] DATA ] } [ ON COMMIT { DELETE ROWS | PRESERVE ROWS | DROP } ]
Define a new temporary table. The visibility of the table to other session users can be controlled by using GLOBAL. Default is LOCAL. A temporary table will be removed automatically after the user session is terminated.
When the ON COMMIT clause is not specified, the default behavior is ON COMMIT DELETE ROWS, complying with the SQL standard.
When using AS SELECT ... the default is WITH DATA.
Example:
CREATE TEMP TABLE names (id int NOT NULL PRIMARY KEY, name VARCHAR(99) NOT NULL UNIQUE) ON COMMIT PRESERVE ROWS;
-- Note that temporary tables are implicitly assigned to schema: tmp
INSERT INTO tmp.names VALUES (1, 'one');
INSERT INTO tmp.names VALUES (2, 'two');
INSERT INTO tmp.names VALUES (2, 'one'); -- Error: INSERT INTO: PRIMARY KEY constraint 'names.names_id_pkey' violated
INSERT INTO tmp.names VALUES (3, 'two'); -- Error: INSERT INTO: UNIQUE constraint 'names.names_name_unique' violated
INSERT INTO tmp.names VALUES (3, 'drie');
SELECT * FROM tmp.names; -- shows 3 rows
DROP TABLE tmp.names;
CREATE GLOBAL TEMP TABLE tmp.name_lengths AS
  SELECT DISTINCT name, LENGTH(name) AS length FROM sys.ids ORDER BY 1
  WITH DATA ON COMMIT PRESERVE ROWS;
SELECT * FROM tmp.name_lengths WHERE name ILIKE '%id%';
SELECT COUNT(*) AS count_names, AVG(length) AS avg_length FROM tmp.name_lengths;
DROP TABLE tmp.name_lengths;
IMPORTANT: specify ON COMMIT PRESERVE ROWS to keep rows when working in autocommit mode, because default behavior is ON COMMIT DELETE ROWS.
Note: The "IF NOT EXISTS" option is supported from release Jul2017 onwards.
It is not possible to add comments on temporary tables or columns thereof.
A temporary table is removed using the ordinary DROP TABLE statement. There is no DROP TEMPORARY TABLE statement.
For details see: Documentation/Manuals/SQLreference/Tables
and: Documentation/SQLreference/TableIdentityColumn
See also: ALTER TABLE ADD COLUMN ALTER TABLE ADD CONSTRAINT DROP TABLE
Associated system table: sys.tables where type in (20, 30)
CREATE INDEX
CREATE [ UNIQUE ] INDEX <index name> ON [ <schema name> . ] <table name> ( <column name list> )
Define a new secondary index on one or more columns of a specific table
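Example (illustrative; table and column names are hypothetical):
CREATE INDEX my_idx ON myschema.mytable ( my_column, my_other_column );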
Note: Although CREATE INDEX commands are accepted by the MonetDB parser for SQL compliance purposes, it currently does not create a physical secondary index via this SQL command. Instead MonetDB internally decides which column search accelerator(s) to create, persist and use during SQL query execution.
The index name must be unique within the schema of the table. You cannot specify a schema for the index; it will be placed in the same schema as the table.
WARNING: the UNIQUE keyword does NOT enforce a uniqueness constraint. To create a unique constraint use ALTER TABLE ADD CONSTRAINT UNIQUE instead.
For details see: Documentation/Manuals/SQLreference/Indices
See also: CREATE ORDERED INDEX ALTER TABLE ADD CONSTRAINT COMMENT ON INDEX DROP INDEX
Associated system table: sys.idxs
CREATE IMPRINTS INDEX
CREATE IMPRINTS INDEX <index name> ON [ <schema name> . ] <table name> ( <column name> )
Define a new imprints index (a simple but efficient cache conscious secondary index) on one numerical column of a specific table. This index is a special single column index (for numeric columns only) which stores meta data (min, max, nulls) on segments of column data. An imprint is used during query evaluation to limit data access and thus minimize memory traffic. It can speedup queries which have a column selection condition (such as: AGE IS NULL OR AGE BETWEEN 25 AND 65). The storage overhead is just a few percent over the size of the columns being indexed.
Example:
CREATE IMPRINTS INDEX my_impr_idx ON myschema.mytable ( my_num_column );
Note: This command is MonetDB specific SQL. It was introduced in the Dec2016 feature release (v11.25.3).
The imprints index is a new type of experimental column index.
Limitations: only one column can be indexed per index, and only columns of a fixed-size data type (so not: char, varchar, clob, blob, url, json, inet and uuid) can be indexed.
The index name must be unique within the schema of the table. You cannot specify a schema for the index; it will be placed in the same schema as the table.
Warning: The index is not maintained automatically and will become inactive after inserts, deletes or updates are done on the column data.
An imprints index is removed using the ordinary DROP INDEX statement. There is no DROP IMPRINTS INDEX statement.
For details see: Documentation/Manuals/SQLreference/Indices
and: Home/ScienceLibrary
See also: CREATE ORDERED INDEX CREATE INDEX COMMENT ON INDEX DROP INDEX
Associated system table: sys.idxs sys.storage
CREATE ORDERED INDEX
CREATE ORDERED INDEX <index name> ON [ <schema name> . ] <table name> ( <column name> )
Define a new ordered index on one column of a specific table. This index is a special single column index where the values are stored in ascending order. It can speedup queries which have a column selection condition (such as: AGE >=18 or AGE BETWEEN 18 AND 30) or need sorting such as when GROUP BY is used.
Example:
CREATE ORDERED INDEX my_ord_idx ON myschema.mytable ( my_column );
Note: This command is MonetDB specific SQL. It was introduced in the Dec2016 feature release (v11.25.3).
The ordered index is a new type of experimental column index.
Limitations: Only 1 column can be indexed per index.
The index name must be unique within the schema of the table. You cannot specify a schema for the index; it will be placed in the same schema as the table.
Before the Mar2018 release only columns of a fixed-size data type (so not: char, varchar, clob, blob, url, json, inet and uuid) could be indexed. This limitation has been removed in the Mar2018 release.
Warning: The index is not maintained automatically and will become inactive after inserts, deletes or updates are done on the column data.
An ordered index is removed using the ordinary DROP INDEX statement. There is no DROP ORDERED INDEX statement.
For details see: Documentation/Manuals/SQLreference/Indices
See also: CREATE IMPRINTS INDEX CREATE INDEX COMMENT ON INDEX DROP INDEX
Associated system table: sys.idxs sys.storage
CREATE VIEW
CREATE [ OR REPLACE ] VIEW [ <schema name> . ] <view name> [ ( <column name list> ) ] AS <SELECT query without LIMIT> [ WITH CHECK OPTION ]
Define a new SQL view. A view can be seen as a stored SELECT query. It is useful to reduce query complexity as it can include joins, unions and aggregations, increase standardisation and reusability and support server-based reporting.
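Example (a minimal sketch; view, table and column names are hypothetical):
CREATE VIEW mysch.article_prices AS
SELECT name, price FROM mysch.article WHERE price > 0;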
Note: Limitations: Views are NOT updatable.
An ORDER BY clause is supported from release Jul2017 onwards.
The "OR REPLACE" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/Views
See also: COMMENT ON VIEW COMMENT ON COLUMN DROP VIEW
Associated system table: sys.tables where type = 1
CREATE AGGREGATE EXTERNAL
CREATE [ OR REPLACE ] AGGREGATE [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> EXTERNAL NAME <MAL name>
Define a new user-defined aggregation function for which the implementation is done externally
Example:
CREATE AGGREGATE quantile(val bigint, q double) RETURNS bigint EXTERNAL NAME "aggr"."quantile";
Note: This command is MonetDB specific SQL. External implies language MAL.
To allow other users to invoke a user-defined aggregate function, you must grant the other users (or PUBLIC) EXECUTE privilege for the aggregate function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Functions
See also: COMMENT ON AGGREGATE GRANT PRIVILEGES DROP AGGREGATE DROP ALL AGGREGATE
Associated system table: sys.functions where type = 3 and language = 1
CREATE AGGREGATE LANGUAGE
CREATE [ OR REPLACE ] AGGREGATE [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> LANGUAGE [ C | CPP | R | PYTHON | PYTHON_MAP | PYTHON2 | PYTHON2_MAP | PYTHON3 | PYTHON3_MAP ] { <function body> }
Define a new user-defined aggregation function for which the body implementation is specified in the specific script language
Example:
CREATE AGGREGATE python_aggregate(val INTEGER)
RETURNS INTEGER
LANGUAGE PYTHON {
    unique = numpy.unique(aggr_group)
    x = numpy.zeros(shape=(unique.size))
    for i in range(0, unique.size):
        x[i] = numpy.sum(val[aggr_group == unique[i]])
    return(x)
};
Note: This command is MonetDB specific SQL. Supported languages are: C, C++, R and Python.
If your Python code requires Python 2 to work correctly use PYTHON2 instead of PYTHON. If your Python code requires Python 3 to work correctly use PYTHON3 instead of PYTHON. If your Python code can be executed in parallel (using multiple threads) without side effects, use PYTHON_MAP instead of PYTHON. Idem for PYTHON2_MAP and PYTHON3_MAP.
For languages C and CPP a C/C++ compiler must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_c=true.
For language R the R script interpreter software must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_r=true.
For language PYTHON the Python script interpreter software must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_py=true.
To allow other users to invoke a user-defined aggregate function, you must grant the other users (or PUBLIC) EXECUTE privilege for the aggregate function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: JIT C/C++ UDFs and embedded-pythonnumpy-monetdb and embedded-r-monetdb
See also: COMMENT ON AGGREGATE GRANT PRIVILEGES DROP AGGREGATE CREATE FUNCTION LANGUAGE
Associated system table: sys.functions where type = 3 and language > 2
CREATE FUNCTION
CREATE [ OR REPLACE ] FUNCTION [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> BEGIN <function logic> END
Define a new user-defined function
Example:
CREATE FUNCTION heapspace(tpe string, i bigint, w int) RETURNS bigint
BEGIN
    IF tpe <> 'varchar' AND tpe <> 'clob' THEN
        RETURN 0;
    END IF;
    RETURN 10240 + i * w;
END;
Note: Besides returning a scalar value (type = 1), functions can also be defined to return a Table as data type (type = 5).
To allow other users to invoke a user-defined function, you must grant the other users (or PUBLIC) EXECUTE privilege for the function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Functions
See also: COMMENT ON FUNCTION GRANT PRIVILEGES DROP FUNCTION DROP ALL FUNCTION
Associated system table: sys.functions where type in (1,5) and language = 2
CREATE FUNCTION EXTERNAL
CREATE [ OR REPLACE ] FUNCTION [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> EXTERNAL NAME <MAL name>
Define a new user-defined function for which the implementation is done externally
Example:
CREATE FUNCTION isa_uuid(s string) returns boolean external name uuid."isaUUID";
Note: This command is MonetDB specific SQL. External implies language MAL.
Besides returning a scalar value (type = 1), functions can also be defined to return a Table as data type (type = 5).
To allow other users to invoke a user-defined function, you must grant the other users (or PUBLIC) EXECUTE privilege for the function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Functions
and: Documentation/Cookbooks/SQLrecipes/UserDefinedFunction
See also: COMMENT ON FUNCTION GRANT PRIVILEGES DROP FUNCTION CREATE FUNCTION CREATE FUNCTION LANGUAGE
Associated system table: sys.functions where type in (1,5) and language = 1
CREATE FUNCTION LANGUAGE
CREATE [ OR REPLACE ] FUNCTION [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> LANGUAGE [ C | CPP | R | PYTHON | PYTHON_MAP | PYTHON2 | PYTHON2_MAP | PYTHON3 | PYTHON3_MAP ] { <function body> }
Define a new user-defined function for which the implementation is specified in a specific script language
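Example (a hedged sketch of an embedded Python function; the function name is made up and the argument arrives as a NumPy-style array):
CREATE FUNCTION times_two(i INTEGER) RETURNS INTEGER LANGUAGE PYTHON {
    return i * 2
};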
Note: This command is MonetDB specific SQL. Supported languages are: C, C++, R and Python.
If your Python code requires Python 2 to work correctly use PYTHON2 instead of PYTHON. If your Python code requires Python 3 to work correctly use PYTHON3 instead of PYTHON. If your Python code can be executed in parallel (using multiple threads) without side effects, use PYTHON_MAP instead of PYTHON. Idem for PYTHON2_MAP and PYTHON3_MAP.
For languages C and CPP a C/C++ compiler must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_c=true.
For language R the R script interpreter software must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_r=true.
For language PYTHON the Python script interpreter software must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_py=true.
Besides returning a scalar value (type = 1), functions can also be defined to return a Table as data type (type = 5).
To allow other users to invoke a user-defined function, you must grant the other users (or PUBLIC) EXECUTE privilege for the function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: JIT C/C++ UDFs and embedded-pythonnumpy-monetdb and embedded-r-monetdb and voter-classification-using-monetdbpython
See also: COMMENT ON FUNCTION GRANT PRIVILEGES DROP FUNCTION CREATE AGGREGATE LANGUAGE CREATE LOADER
Associated system table: sys.functions where type in (1,5) and language > 2
CREATE FILTER FUNCTION EXTERNAL
CREATE [ OR REPLACE ] FILTER FUNCTION [ <schema name> . ] <name> ( [ <arguments> ] ) RETURNS <data type> EXTERNAL NAME <MAL name>
Define a new user-defined filter function for which the implementation is done externally
Example:
CREATE FILTER FUNCTION "re_like"(val string, pat string) external name algebra."rexpr_like";
Note: This command is MonetDB specific SQL. External implies language MAL.
To allow other users to invoke a user-defined filter function, you must grant the other users (or PUBLIC) EXECUTE privilege for the filter function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
See also: COMMENT ON FILTER FUNCTION GRANT PRIVILEGES DROP FILTER FUNCTION DROP ALL FILTER FUNCTION
Associated system table: sys.functions where type = 4 and language = 1
CREATE LOADER
CREATE [ OR REPLACE ] LOADER [ <schema name> . ] <name> ( [ <arguments> ] ) LANGUAGE PYTHON { <python _emit.emit() function code> }
Define a new user-defined loader function for which the implementation is done in the Python language. A loader function can, for instance, read data from an external file in a specific format such as XML, JSON, BSON, ODS, XLSX, etc.
Example:
CREATE LOADER json_loader(filename STRING) LANGUAGE PYTHON {
    import json
    f = open(filename)
    _emit.emit(json.load(f))
    f.close()
};
Note: This command is MonetDB specific SQL. It was introduced in the Dec2016 release.
For language PYTHON the Python script interpreter software must be available on the deployment server and the MonetDB server must have been started with the option: --set embedded_py=true.
To allow other users to invoke a user-defined loader function, you must grant the other users (or PUBLIC) EXECUTE privilege for the loader function.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: blog/monetdbpython-loader-functions
See also: COMMENT ON LOADER GRANT PRIVILEGES DROP LOADER CREATE TABLE FROM LOADER COPY LOADER INTO FROM
Associated system table: sys.functions where type = 7 and language > 2
CREATE PROCEDURE
CREATE [ OR REPLACE ] PROCEDURE [ <schema name> . ] <name> ( [ <arguments> ] ) BEGIN <procedure logic> END
Define a new user-defined procedure
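Example (an illustrative sketch; table and procedure names are hypothetical):
CREATE PROCEDURE mysch.raise_prices(pct INT)
BEGIN
    UPDATE mysch.article SET price = price + (price * pct / 100);
END;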
Note: To allow other users to call and execute a user-defined procedure, you must grant the other users (or PUBLIC) EXECUTE privilege for the procedure.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Procedures
See also: COMMENT ON PROCEDURE DECLARE CALL GRANT PRIVILEGES DROP PROCEDURE DROP ALL PROCEDURE
Associated system table: sys.functions where type = 2 and language = 2
CREATE PROCEDURE EXTERNAL
CREATE [ OR REPLACE ] PROCEDURE [ <schema name> . ] <name> ( [ <arguments> ] ) EXTERNAL NAME <MAL name>
Define a new user-defined procedure for which the implementation is done externally
Example:
CREATE PROCEDURE sys.createorderindex(sys string, tab string, col string) external name sql.createorderindex;
Note: This command is MonetDB specific SQL. External implies language MAL.
To allow other users to call and execute a user-defined procedure, you must grant the other users (or PUBLIC) EXECUTE privilege for the procedure.
The "OR REPLACE" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/Procedures
See also: COMMENT ON PROCEDURE CALL GRANT PRIVILEGES DROP PROCEDURE DROP ALL PROCEDURE
Associated system table: sys.functions where type = 2 and language = 1
CREATE TYPE EXTERNAL
CREATE TYPE [ <schema name> . ] <type name> EXTERNAL NAME <MAL name>
Declare a new user-defined data type. The implementation (structure, operators & functions both scalar and bulk) must be done externally in compiled C code and MAL script.
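Example (a hedged sketch, assuming a type implementation with the external MAL name url has been compiled in, as is done for the built-in types mentioned below):
CREATE TYPE url EXTERNAL NAME url;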
Note: This command is MonetDB specific SQL. External implies language MAL. It adds a new user-defined type which is implemented in external C code. For examples see the C implementation of types: inet, json, url and uuid.
For details see: Documentation/Manuals/SQLreference/Userdefinedtypes
See also: DROP TYPE CREATE TABLE
Associated system table: sys.types where eclass = 16
CREATE TRIGGER
CREATE [ OR REPLACE ] TRIGGER <trigger name> { BEFORE | AFTER } [ INSERT ] [ UPDATE [ OF <column name list> ] ] [ DELETE ] [ TRUNCATE ] ON [ <schema name> . ] <table name> [ FOR [ EACH ] { ROW | STATEMENT } ] BEGIN <trigger action logic> END
Define a new trigger
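Example (a minimal sketch; the trigger, the table and the log table are hypothetical, and the log table is assumed to exist):
CREATE TRIGGER article_update_log AFTER UPDATE ON mysch.article FOR EACH STATEMENT
BEGIN
    INSERT INTO mysch.article_log VALUES (CURRENT_TIMESTAMP);
END;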
Note: The "OR REPLACE" option is supported from release Mar2018 onwards.
The TRUNCATE event is supported from release Mar2018 onwards.
For details see: Documentation/SQLreference/Triggers
See also: DROP TRIGGER DECLARE GRANT PRIVILEGES
Associated system table: sys.triggers
COMMENT ON SCHEMA
COMMENT ON SCHEMA <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a schema
Example:
COMMENT ON SCHEMA prj4 IS 'schema of new project 4';
COMMENT ON SCHEMA prj0 IS '';
COMMENT ON SCHEMA prjX IS NULL;
Note: By specifying IS NULL or IS '' you remove the comment for the schema.
If the schema is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE SCHEMA DROP SCHEMA
Associated system table: sys.comments
COMMENT ON TABLE
COMMENT ON TABLE [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a table
Example:
COMMENT ON TABLE sys.comments IS 'contains comments on all db objects'
Note: By specifying IS NULL or IS '' you remove the comment for the table object.
If the table is dropped, the associated comment (including the comments for the columns) is also removed.
This command is supported from release Mar2018 onwards.
Note: it is not possible to add comments on temporary tables or on objects in schema "tmp".
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE TABLE DROP TABLE
Associated system table: sys.comments
COMMENT ON VIEW
COMMENT ON VIEW [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a view
Example:
COMMENT ON VIEW mysch.articles_aggr IS 'view lists aggregated info on articles'
Note: By specifying IS NULL or IS '' you remove the comment for the view.
If the view is dropped, the associated comment (including the comments for the columns) is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE VIEW DROP VIEW
Associated system table: sys.comments
COMMENT ON COLUMN
COMMENT ON COLUMN [ <schema name> . ] <table or view name> . <column name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a column of a table or view
Example:
COMMENT ON COLUMN sys.comments.remark IS 'contains description text'
Note: By specifying IS NULL or IS '' you remove the comment for the column.
If the column (or table or view) is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
Note: it is not possible to add comments on columns of temporary tables or of objects in schema "tmp".
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: ALTER TABLE ADD COLUMN ALTER TABLE DROP COLUMN
Associated system table: sys.comments
COMMENT ON INDEX
COMMENT ON INDEX [ <schema name> . ] <index name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for an index
Example:
COMMENT ON INDEX mysch.article_id IS 'unique index of id key column of article table'
Note: By specifying IS NULL or IS '' you remove the comment for the index.
If the index is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE INDEX DROP INDEX
Associated system table: sys.comments
COMMENT ON SEQUENCE
COMMENT ON SEQUENCE [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a sequence
Example:
COMMENT ON SEQUENCE mysch.article_id_seq IS 'sequence for article id column'
Note: By specifying IS NULL or IS '' you remove the comment for the sequence.
If the sequence is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE SEQUENCE DROP SEQUENCE
Associated system table: sys.comments
COMMENT ON FUNCTION
COMMENT ON FUNCTION [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a function
Example:
COMMENT ON FUNCTION sys.upper IS 'converts text into uppercase'
Note: By specifying IS NULL or IS '' you remove the comment for the function.
If the function is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE FUNCTION DROP FUNCTION
Associated system table: sys.comments
COMMENT ON PROCEDURE
COMMENT ON PROCEDURE [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a procedure
Example:
COMMENT ON PROCEDURE mysch.load_articles IS 'proc which reloads the articles from external file article.csv'
Note: By specifying IS NULL or IS '' you remove the comment for the procedure.
If the procedure is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE PROCEDURE DROP PROCEDURE
Associated system table: sys.comments
COMMENT ON AGGREGATE
COMMENT ON AGGREGATE [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for an aggregate function
Example:
COMMENT ON AGGREGATE sys.std_dev IS 'computes the standard deviation of a group of numeric values'
Note: By specifying IS NULL or IS '' you remove the comment for the aggregate function.
If the aggregate function is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE AGGREGATE LANGUAGE DROP AGGREGATE
Associated system table: sys.comments
COMMENT ON FILTER FUNCTION
COMMENT ON FILTER FUNCTION [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a filter function
Example:
COMMENT ON FILTER FUNCTION sys."ilike"(clob, clob) IS 'case insensitive pattern matching';
COMMENT ON FILTER FUNCTION sys."ilike"(clob, clob, clob) IS 'case insensitive pattern matching with user specified escape character';
Note: By specifying IS NULL or IS '' you remove the comment for the filter function.
If the filter function is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE FILTER FUNCTION EXTERNAL DROP FILTER FUNCTION
Associated system table: sys.comments
COMMENT ON LOADER
COMMENT ON LOADER [ <schema name> . ] <name> IS { 'comment text' | NULL | '' }
Add or update or remove a comment for a loader function
Example:
COMMENT ON LOADER mysch.load_xls_data IS 'custom loader to extract data from external xls file'
Note: By specifying IS NULL or IS '' you remove the comment for the loader function.
If the loader function is dropped, the associated comment is also removed.
This command is supported from release Mar2018 onwards.
For details see: bugzilla/show_bug.cgi?id=6109#c5
See also: CREATE LOADER DROP LOADER
Associated system table: sys.comments
DECLARE
DECLARE <variable name> <datatype>
Declare a new variable for use in the current session
Example:
DECLARE ts1 timestamp;
SET ts1 = now();
SELECT ts1;
SELECT * FROM sys.var() WHERE name NOT IN (SELECT var_name FROM sys.var_values);
Note: The declared variable is not persistent. It will be lost after closing the connection or session.
For details see: Documentation/Manuals/SQLreference/Variables
See also: SET SELECT
Associated system table: sys.var()
ALTER SEQUENCE
ALTER SEQUENCE [ <schema name> . ] <name> [ AS datatype ] [ RESTART [WITH bigint ] ] [ INCREMENT BY bigint ] [ MINVALUE bigint | NO MINVALUE ] [ MAXVALUE bigint | NO MAXVALUE ] [ CACHE bigint ] [ [ NO ] CYCLE ]
Change definition of a sequence generator
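Example (illustrative; continues the sequence sketched under CREATE SEQUENCE):
ALTER SEQUENCE mysch.article_id_seq RESTART WITH 1000 INCREMENT BY 10;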
For details see: Documentation/Manuals/SQLreference/SerialTypes
See also: DROP SEQUENCE CREATE SEQUENCE
Associated system table: sys.sequences
ALTER TABLE ADD COLUMN
ALTER TABLE [ <schema name> . ] <name> ADD [ COLUMN ] <new column name> <column spec>
Add a column to a table
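Example (illustrative; table and column names are hypothetical):
ALTER TABLE mysch.article ADD COLUMN stock INT DEFAULT 0;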
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE ADD CONSTRAINT ALTER TABLE ALTER COLUMN ALTER TABLE DROP COLUMN
Associated system table: sys.columns
ALTER TABLE ALTER COLUMN
ALTER TABLE [ <schema name> . ] <name> ALTER [ COLUMN ] <column name> { SET DEFAULT value | SET [NOT] NULL | SET STORAGE {string | NULL} | DROP DEFAULT }
Change column default value or nullability or storage
Note: Change of column name or data type is not supported. Instead use command sequence:
ALTER TABLE tbl ADD COLUMN new_column <column spec>;
UPDATE tbl SET new_column = CAST(old_column AS <new_column data type>);
ALTER TABLE tbl DROP COLUMN old_column RESTRICT;
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE ADD COLUMN ALTER TABLE DROP COLUMN ALTER TABLE DROP CONSTRAINT
Associated system table: sys.columns
ALTER TABLE DROP COLUMN
ALTER TABLE [ <schema name> . ] <name> DROP [ COLUMN ] <column name> [ RESTRICT | CASCADE ]
Remove a column from a table
Note: You cannot drop a column if it is referenced (e.g. from a view, an index, a merge table, a trigger, a foreign key constraint, a function or procedure, or another db object, except comments). Use the CASCADE option to also drop those referencing objects.
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE DROP CONSTRAINT ALTER TABLE ALTER COLUMN DROP TABLE
Associated system table: sys.columns
ALTER TABLE ADD CONSTRAINT
ALTER TABLE [ <schema name> . ] <name> ADD CONSTRAINT <constraint name> { PRIMARY KEY | UNIQUE | FOREIGN KEY } <table constraint specification>
Add a table constraint to a table
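Example (a hedged sketch; constraint, table and column names are made up):
ALTER TABLE mysch.article ADD CONSTRAINT article_pk PRIMARY KEY (id);
ALTER TABLE mysch.orderline ADD CONSTRAINT orderline_article_fk FOREIGN KEY (article_id) REFERENCES mysch.article (id);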
Note: Only one PRIMARY KEY constraint can be defined per table. When a primary key constraint is added, all the primary key columns will become NOT NULLable implicitly.
The CHECK constraint is not (yet) supported.
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE DROP CONSTRAINT ALTER TABLE ALTER COLUMN
Associated system table: sys.keys
ALTER TABLE DROP CONSTRAINT
ALTER TABLE [ <schema name> . ] <name> DROP CONSTRAINT <constraint name> [ RESTRICT | CASCADE ]
Remove a table constraint from a table
Note: You cannot drop a constraint if it is referenced (e.g. from a foreign key constraint). Use the CASCADE option to also drop those referencing objects.
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE ADD CONSTRAINT ALTER TABLE ALTER COLUMN
Associated system table: sys.keys
ALTER TABLE ADD TABLE
ALTER TABLE [ <schema name> . ] <merge table name> ADD TABLE <table name>
Add a table reference to a merge table set
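Example (illustrative; assumes a partition table with the same definition as the merge table sketched under CREATE MERGE TABLE):
ALTER TABLE measurements ADD TABLE measurements_2019;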
Note: This command is MonetDB specific SQL.
Limitations: The added table must have the same column definitions and layout as the merge table.
Only user defined tables and merge tables can be added to a merge table set.
For details see: Documentation/Cookbooks/SQLrecipes/DataPartitioning
and: Documentation/SQLreference/Alter
See also: CREATE MERGE TABLE ALTER TABLE DROP TABLE
Associated system table: sys.tables where type = 3
ALTER TABLE DROP TABLE
ALTER TABLE [ <schema name> . ] <merge table name> DROP TABLE <table name> [ RESTRICT | CASCADE ]
Remove a table reference from a merge table set
Note: This command is MonetDB specific SQL.
For details see: Documentation/Cookbooks/SQLrecipes/DataPartitioning
and: Documentation/SQLreference/Alter
See also: CREATE MERGE TABLE ALTER TABLE ADD TABLE
Associated system table: sys.tables where type = 3
ALTER TABLE SET INSERT ONLY
ALTER TABLE [ <schema name> . ] <name> SET INSERT ONLY
Change access of a table to allow only inserts
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE SET READ ONLY ALTER TABLE SET READ WRITE
Associated system table: sys.tables where type in (0, 3, 4, 5, 6)
ALTER TABLE SET READ ONLY
ALTER TABLE [ <schema name> . ] <name> SET READ ONLY
Change access of a table to allow only reads (select queries)
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE SET READ WRITE ALTER TABLE SET INSERT ONLY
Associated system table: sys.tables
ALTER TABLE SET READ WRITE
ALTER TABLE [ <schema name> . ] <name> SET READ WRITE
Restore access of a table to allow inserts, updates, deletes and reads
For details see: Documentation/SQLreference/Alter
See also: ALTER TABLE SET READ ONLY ALTER TABLE SET INSERT ONLY
Associated system table: sys.tables where type in (0, 3, 4, 5, 6)
DROP SCHEMA
DROP SCHEMA [ IF EXISTS ] <schema name> [ RESTRICT | CASCADE ]
Remove a database schema, potentially including all its dependent objects
Example:
DROP SCHEMA IF EXISTS "my_project" RESTRICT;
Note: You cannot drop a schema if it still contains objects (e.g. tables, views, functions or procedures; comments excepted). Use the CASCADE option to also drop those dependent objects. The default behavior is RESTRICT.
System schemas (sys, tmp, profiler, json) cannot be dropped as they are needed by the system.
Warning: in releases before Mar2018 the default behavior was always CASCADE; the RESTRICT option was recognised but silently ignored. This has been corrected in release Mar2018.
The "IF EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/SQLreference/Schema
See also: CREATE SCHEMA CURRENT_SCHEMA SET SCHEMA
Associated system table: sys.schemas
DROP SEQUENCE
DROP SEQUENCE [ <schema name> . ] <sequence name>
Remove an existing sequence generator
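Example (illustrative; drops the sequence sketched under CREATE SEQUENCE):
DROP SEQUENCE mysch.article_id_seq;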
Note: You cannot drop a sequence if it is referenced (e.g. from a column, a function or procedure).
For details see: Documentation/Manuals/SQLreference/SerialTypes
and: Documentation/Manuals/SQLreference/DropStatement
See also: ALTER SEQUENCE CREATE SEQUENCE
Associated system table: sys.sequences
DROP TABLE
DROP TABLE [ IF EXISTS ] [ <schema name> . ] <table name> [ RESTRICT | CASCADE ]
Remove a table, potentially including all its dependent objects
Example:
DROP TABLE IF EXISTS "my_project"."my_Table" CASCADE;
Note: You cannot drop a table if it is referenced (e.g. from a view, a merge table, a trigger, a foreign key constraint, a function, a procedure or another db object, except indices and comments). Use the CASCADE option to also drop those referencing objects.
System tables cannot be dropped as they are needed by the system.
The "IF EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/Cookbooks/SQLrecipes/DataPartitioning
and: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE TABLE
Associated system table: sys.tables
DROP INDEX
DROP INDEX [ <schema name> . ] <index name>
Remove an existing index
Example:
DROP TABLE "my_project"."my_Index";
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE INDEX CREATE ORDERED INDEX CREATE IMPRINTS INDEX
Associated system table: sys.idxs
DROP VIEW
DROP VIEW [ IF EXISTS ] [ <schema name> . ] <view name> [ RESTRICT | CASCADE ]
Remove a view
Example:
DROP VIEW IF EXISTS "my_project"."my_View" CASCADE;
Note: You cannot drop a view if it is referenced (e.g. from another view, a function or procedure, or another db object, except comments). Use the CASCADE option to also drop those referencing objects.
System views cannot be dropped as they are needed by the system.
The "IF EXISTS" option is supported from release Jul2017 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE VIEW
Associated system table: sys.tables where type = 1
DROP AGGREGATE
DROP AGGREGATE [ IF EXISTS ] [ <schema name> . ] <aggregate function name> [ ( [ arg1_type [ , argn_type ] ] ) ] [ RESTRICT | CASCADE ]
Remove a specific user-defined aggregation function. If multiple aggregate functions exist with the same name, supply the full signature.
Example:
DROP AGGREGATE testaggr(int, double);
Note: This command is MonetDB specific SQL.
Use the CASCADE option to also drop referencing objects.
System aggregates cannot be dropped as they are part of the system.
The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP ALL AGGREGATE CREATE AGGREGATE LANGUAGE CREATE AGGREGATE EXTERNAL
Associated system table: sys.functions where type = 3
DROP ALL AGGREGATE
DROP ALL AGGREGATE [ <schema name> . ] <aggregate function name> [ RESTRICT | CASCADE ]
Remove all user-defined aggregation functions which share the same name but have different signatures.
Example:
DROP ALL AGGREGATE testaggr;
Note: This command is MonetDB specific SQL.
Use the CASCADE option to also drop referencing objects.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP AGGREGATE CREATE AGGREGATE LANGUAGE CREATE AGGREGATE EXTERNAL
Associated system table: sys.functions where type = 3
DROP FUNCTION
DROP FUNCTION [ IF EXISTS ] [ <schema name> . ] <function name> [ ( [ arg1_type [ , argn_type ] ] ) ] [ RESTRICT | CASCADE ]
Remove a specific user-defined function. If multiple functions exist with the same name, supply the full signature.
Example:
DROP FUNCTION testfun(int, double);
Note: Use the CASCADE option to also drop referencing objects.
System functions cannot be dropped as they are part of the system.
The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP ALL FUNCTION CREATE FUNCTION CREATE FUNCTION LANGUAGE
Associated system table: sys.functions where type in (1,5)
DROP ALL FUNCTION
DROP ALL FUNCTION [ <schema name> . ] <function name> [ RESTRICT | CASCADE ]
Remove all user-defined functions which share the same name but have different signatures.
Example:
DROP ALL FUNCTION testfun;
Note: This command is MonetDB specific SQL.
Use the CASCADE option to also drop referencing objects.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP FUNCTION CREATE FUNCTION CREATE FUNCTION LANGUAGE
Associated system table: sys.functions where type in (1,5)
DROP FILTER FUNCTION
DROP FILTER FUNCTION [ IF EXISTS ] [ <schema name> . ] <function name> [ ( [ arg1_type [ , argn_type ] ] ) ] [ RESTRICT | CASCADE ]
Remove a specific user-defined filter function. If multiple filter functions exist with the same name, supply the full signature.
Example:
DROP FILTER FUNCTION my_like(string, varchar(100));
Note: This command is MonetDB specific SQL.
System filter functions cannot be dropped as they are part of the system.
Use the CASCADE option to also drop referencing objects.
The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP ALL FILTER FUNCTION CREATE FILTER FUNCTION EXTERNAL
Associated system table: sys.functions where type = 4
DROP ALL FILTER FUNCTION
DROP ALL FILTER FUNCTION [ <schema name> . ] <function name> [ RESTRICT | CASCADE ]
Remove all user-defined filter functions which share the same name but have different signatures.
Example:
DROP ALL FILTER FUNCTION my_like;
Note: This command is MonetDB specific SQL.
Use the CASCADE option to also drop referencing objects.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP FILTER FUNCTION CREATE FILTER FUNCTION EXTERNAL
Associated system table: sys.functions where type = 4
DROP LOADER
DROP LOADER [ IF EXISTS ] [ <schema name> . ] <loader name> [ ( [ arg1_type [ , argn_type ] ] ) ]
Remove a specific user-defined loader function. If multiple loader functions exist with the same name, supply the full signature.
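Example (illustrative; drops the json_loader function from the CREATE LOADER example):
DROP LOADER json_loader;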
Note: This command is MonetDB specific SQL. It was introduced in the Dec2016 release.
The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: blog/monetdbpython-loader-functions
See also: CREATE LOADER CREATE FUNCTION LANGUAGE
Associated system table: sys.functions where type = 7 and language > 2
DROP PROCEDURE
DROP PROCEDURE [ IF EXISTS ] [ <schema name> . ] <procedure name> [ ( [ arg1_type [ , argn_type ] ] ) ] [ RESTRICT | CASCADE ]
Remove a specific user-defined procedure. If multiple procedures exist with the same name, supply the full signature.
Example:
DROP PROCEDURE testproc(int, double);
Note: Use the CASCADE option to also drop referencing objects.
System procedures cannot be dropped as they are part of the system.
The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP ALL PROCEDURE CREATE PROCEDURE CREATE PROCEDURE EXTERNAL
Associated system table: sys.functions where type = 2
DROP ALL PROCEDURE
DROP ALL PROCEDURE [ <schema name> . ] <procedure name> [ RESTRICT | CASCADE ]
Remove all user-defined procedures which share the same name but have different signatures.
Example:
DROP ALL PROCEDURE testproc;
Note: This command is MonetDB specific SQL.
Use the CASCADE option to also drop referencing objects.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: DROP PROCEDURE CREATE PROCEDURE CREATE PROCEDURE EXTERNAL
Associated system table: sys.functions where type = 2
DROP TYPE
DROP TYPE [ <schema name> . ] <type name> [ RESTRICT | CASCADE ]
Remove a user-defined type
For details see: Documentation/Manuals/SQLreference/Userdefinedtypes
See also: CREATE TYPE EXTERNAL
Associated system table: sys.types
DROP TRIGGER
DROP TRIGGER [ IF EXISTS ] [ <schema name> . ] <trigger name>
Remove an existing trigger
Note: The "IF EXISTS" option is supported from release Mar2018 onwards.
For details see: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE TRIGGER
Associated system table: sys.triggers
SELECT
SELECT <selection list> [ FROM <table_view_subselect_function list> [<alias>] ] [ WHERE <condition list> ] [ GROUP BY <grouping list> ] [ HAVING <group condition list> ] [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <select query> ] [ ORDER BY <ordering list [ASC | DESC] > ] [ LIMIT <pos nr> ] [ OFFSET <pos nr> ] [ SAMPLE <sample size> ]
Execute a database query and return the resultset data rows
Example:
-- query to list all user created tables:
SELECT tbl.id, sch.name AS "Schema", tbl.name AS "Table",
       (SELECT table_type_name FROM sys.table_types WHERE table_type_id = type) AS table_type
FROM sys."tables" AS tbl
JOIN sys."schemas" AS sch ON sch.id = tbl.schema_id
WHERE query IS NULL AND NOT tbl.system
ORDER BY "Schema", "Table";
Note: It is also possible to select data without a FROM-clause, such as: SELECT CURRENT_DATE, (1+2) * 3, pi();
For details see: Documentation/SQLreference/TableExpressions
and: Documentation/Cookbooks/SQLrecipes/Sampling
Associated system table: sys.queue and sys.querylog_history
SELECT INTO
SELECT <selection list> INTO <variable list> [ FROM <table_view_subselect_function list> [<alias>] ] [ WHERE <condition list> ] [ GROUP BY <grouping list> ] [ HAVING <group condition list> ] [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <select query> ] [ ORDER BY <ordering list [ASC | DESC] > ] [ LIMIT <pos nr> ] [ OFFSET <pos nr> ] [ SAMPLE <sample size> ]
Execute a database query and store the resultset data in the named variables
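Example (a minimal sketch using a declared session variable):
DECLARE table_count BIGINT;
SELECT COUNT(*) INTO table_count FROM sys.tables;
SELECT table_count;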
For details see: Documentation/SQLreference/TableExpressions
and: Documentation/Cookbooks/SQLrecipes/monitor
Associated system table: sys.queue and sys.querylog_history
WITH SELECT
WITH <alias name> AS <select query without LIMIT> SELECT <selection list> [ FROM <alias_table_view_subselect_function list> [<alias>] ] [ WHERE <condition list> ] [ GROUP BY <grouping list> ] [ HAVING <group condition list> ] [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] <select query> ] [ ORDER BY <ordering list [ASC | DESC] > ] [ LIMIT <pos nr> ] [ OFFSET <pos nr> ] [ SAMPLE <sample size> ]
Execute a database query using the results of the queries defined in the WITH clause and return the resultset data rows
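Example (a minimal sketch over the system catalog):
WITH user_tables AS (SELECT id, name FROM sys.tables WHERE NOT system)
SELECT COUNT(*) FROM user_tables;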
Note: Recursive queries are NOT supported
For details see: Documentation/SQLreference/TableExpressions
Associated system table: sys.queue and sys.querylog_history
INSERT VALUES
INSERT INTO [ <schema name> . ] <table name> [ ( <column name list> ) ] VALUES ( <values list> ) [, ( <values list> ) ]
Add data row(s) to an existing table
Example:
INSERT INTO "mySch"."myTab" (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c');
Note: It is also possible to add multiple rows of values in one insert statement by adding extra ", (<values list>)" parts.
For details see: Documentation/SQLreference/Updates
and: Documentation/Cookbooks/SQLrecipes/LoadingBulkData
Associated system table: sys.querylog_history
INSERT SELECT
INSERT INTO [ <schema name> . ] <table name> [ ( <column name list> ) ] SELECT <select query>
Execute a database query and add the resultset data rows into the existing table
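Example (illustrative; both tables are hypothetical and must have compatible columns):
INSERT INTO mysch.article_archive (id, name)
SELECT id, name FROM mysch.article WHERE price = 0;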
For details see: Documentation/SQLreference/Updates
Associated system table: sys.querylog_history
UPDATE
UPDATE [ <schema name> . ] <table name> SET <column name> = <expression value> [ , <column name> = <expression value>] [ WHERE <condition expression> ]
Change values of specific column(s) and row(s) of an existing table. If the optional WHERE clause is specified only the rows which match the condition(s) are changed.
Example:
UPDATE employee SET salary = salary * 1.07, bonus = 1200 WHERE id = 12345;
For details see: Documentation/SQLreference/Updates
Associated system table: sys.querylog_history
DELETE
DELETE FROM [ <schema name> . ] <table name> [ WHERE <condition expression> ]
Remove data row(s) from an existing table. If the optional WHERE clause is specified only the rows which match the condition(s) are removed.
Example:
DELETE FROM web.log WHERE "timestamp" <= '2016-12-23 23:59:59';
For details see: Documentation/SQLreference/Updates
See also: TRUNCATE
Associated system table: sys.querylog_history
TRUNCATE
TRUNCATE [ TABLE ] [ <schema name> . ] <table name> [ CONTINUE IDENTITY | RESTART IDENTITY ] [ RESTRICT | CASCADE ]
Remove all data row(s) from an existing table quickly
Example:
TRUNCATE TABLE mysch.imp_article CONTINUE IDENTITY CASCADE;
Note: The option RESTART IDENTITY can be used to reset the sequence start value of an identity column for new data inserts after the truncate. Default behavior is to CONTINUE IDENTITY sequence numbering.
The CASCADE option instructs to truncate referencing table(s) also if the referencing table(s) have foreign key references to this table. The default behavior is RESTRICT.
This command is supported from release Mar2018 onwards.
Note: it is possible to use TRUNCATE statements in a transaction and thus to roll back the effects of a truncate.
For details see: Documentation/SQLreference/Updates
See also: DELETE
Associated system table: sys.querylog_history
CALL
CALL [ <schema name> . ] <procedure name> ( [ <argument list> ] )
Execute a stored procedure
Example:
CALL "sys"."settimeout"(3000);
Note: The user must have EXECUTE privilege for the called procedure.
For details see: Documentation/SQLreference/Flowofcontrol
See also: CREATE PROCEDURE GRANT PRIVILEGES
Associated system table: sys.querylog_calls
PREPARE
PREPARE <SQL DML statement with optional question mark parameter markers>
Compiles a SQL statement into its execution plan
Example:
PREPARE SELECT id, name, schema_id FROM tables WHERE name LIKE ?;
EXECUTE 3('%type%');
EXECUTE 3('%id%');
PREPARE INSERT INTO towns (name, country, citizens) VALUES (?, ?, ?);
EXECUTE 5('Amsterdam', 'NL', 856123);
EXECUTE 5('Berlin', 'DE', 3715930);
Note: You can prepare any DML command, such as SELECT, INSERT, UPDATE, DELETE, CALL, etc.
The prepared statement will be given a system identifier number which can be used in the EXECUTE command.
Instead of PREPARE you may also use PREP.
For details see: Documentation/Manuals/SQLreference/PrepareExec
and: Documentation/Cookbooks/SQLrecipes/LoadingBulkData
See also: EXECUTE
Associated system table: sys.querylog_catalog
EXECUTE
EXECUTE <prepared SQL nr> ( [ <parm1_val> [, ... ] ] )
Execute the prepared statement with parameter values
Example:
EXECUTE 5('c-data');
EXECUTE 7('Amsterdam', 'NL', 856123);
Note: Instead of EXECUTE you may also use EXEC.
WARNING: when an error occurs in the user session, all the prepared statement(s) are lost!
For details see: Documentation/Manuals/SQLreference/PrepareExec
See also: PREPARE
Associated system table: sys.querylog_calls
SET
SET <variable name> = <new value>
Change the value of a declared variable
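Example (illustrative; assumes the variable was first created with DECLARE):
DECLARE n INT;
SET n = 42;
SELECT n;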
For details see: Documentation/Manuals/SQLreference/Variables
See also: DECLARE SELECT
Associated system table: sys.var()
SET CURRENT_TIMEZONE
SET CURRENT_TIMEZONE = <minutes>
Change the current timezone offset from GMT
Example:
SELECT CURRENT_TIMEZONE;
SET CURRENT_TIMEZONE = 3600;
For details see: Documentation/Manuals/SQLreference/Variables
See also: CURRENT_TIMEZONE
Associated system table: sys.var_values
SET TIME ZONE
SET TIME ZONE { LOCAL | INTERVAL '<hh>:<mi>' HOUR TO MINUTE }
Change the current timezone
Example:
SELECT CURRENT_TIMEZONE;
SET TIME ZONE LOCAL;
SELECT CURRENT_TIMEZONE;
SET TIME ZONE INTERVAL '+02:00' HOUR TO MINUTE;
SET TIME ZONE INTERVAL '3' HOUR;
SET TIME ZONE INTERVAL '240' MINUTE;
SET TIME ZONE INTERVAL '-3600' SECOND;
For details see: Documentation/Manuals/SQLreference/Variables Documentation/SQLreference/Temporal
CREATE ROLE
CREATE ROLE <role name> [ WITH ADMIN <grantor name> ]
Define a new role
Example:
CREATE ROLE controller;
For details see: Documentation/SQLreference/Roles
and: Documentation/SQLreference/Permissions
See also: DROP ROLE GRANT PRIVILEGES GRANT ROLE SET ROLE
Associated system table: sys.roles and sys.auths
CREATE USER
CREATE USER <login name> WITH [ ENCRYPTED | UNENCRYPTED ] PASSWORD <login pass phrase> NAME <full name> SCHEMA <schema name>
Define a new database login account
Example:
CREATE USER "Donald" WITH UNENCRYPTED PASSWORD 'Secret' NAME 'Donald Duck' SCHEMA "app1";
For details see: Documentation/SQLreference/Users
and: Documentation/SQLreference/Permissions
See also: ALTER USER RENAME ALTER USER SET PASSWORD GRANT PRIVILEGES DROP USER
Associated system table: sys.users and sys.auths where name in (select name from sys.users)
ALTER USER RENAME
ALTER USER <login name> RENAME TO <new login name>
Change the login name of a database login account
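Example (illustrative; the login names are hypothetical):
ALTER USER "Donald" RENAME TO "Dagobert";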
For details see: Documentation/SQLreference/Users
See also: ALTER USER SET PASSWORD ALTER USER SET SCHEMA GRANT PRIVILEGES
Associated system table: sys.users
ALTER USER SET PASSWORD
ALTER USER <login name> SET [ ENCRYPTED | UNENCRYPTED ] PASSWORD <new pass phrase> USING OLD PASSWORD <old pass phrase>
Change the password of a database login account
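Example (illustrative; the pass phrases are hypothetical):
ALTER USER "Donald" SET PASSWORD 'NewSecret' USING OLD PASSWORD 'Secret';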
For details see: Documentation/SQLreference/Users
See also: ALTER USER SET SCHEMA ALTER USER RENAME GRANT PRIVILEGES
Associated system table: sys.users
ALTER USER SET SCHEMA
ALTER USER <login name> [ WITH [ ENCRYPTED | UNENCRYPTED ] PASSWORD <pass phrase> ] SET SCHEMA <new schema name>
Change the default schema of a database login account
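Example (illustrative; assumes schema "app2" exists):
ALTER USER "Donald" SET SCHEMA "app2";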
For details see: Documentation/SQLreference/Users
See also: ALTER USER SET PASSWORD ALTER USER RENAME GRANT PRIVILEGES
Associated system table: sys.users and sys.auths
DROP ROLE
DROP ROLE <role name> [ WITH ADMIN <grantor name> ]
Remove an existing role
Example:
DROP ROLE controller;
Note: Associated role privileges will also be removed
For details see: Documentation/SQLreference/Roles
and: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE ROLE GRANT ROLE SET ROLE
Associated system table: sys.roles and sys.user_role
DROP USER
DROP USER <login name>
Remove an existing database login account
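Example (illustrative; the login name is hypothetical):
DROP USER "Donald";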
Note: Associated user privileges and user roles will also be removed
For details see: Documentation/SQLreference/Users
and: Documentation/Manuals/SQLreference/DropStatement
See also: CREATE USER ALTER USER RENAME
Associated system table: sys.users and sys.user_role and sys.auths where name in (select name from sys.users)
GRANT PRIVILEGES
GRANT <ALL or specific privileges> ON [ TABLE | FUNCTION | AGGREGATE ] <table or function or procedure or aggregate name> TO <user/role name(s) or PUBLIC > [ WITH GRANT OPTION ]
Add privileges on a table or function or procedure or aggregate function for a user or role or to PUBLIC.
For tables the privileges can be: INSERT, DELETE, UPDATE, SELECT, REFERENCES, TRUNCATE.
For aggregates, functions and procedures the privilege can be: EXECUTE.
The COPY INTO and COPY FROM privileges enable execution of those commands for users other than the monetdb system user.
Example:
GRANT SELECT ON TABLE web_event TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE ON TABLE web_event TO jan, piet, controller;
GRANT EXECUTE ON FUNCTION refresh_func TO PUBLIC;
GRANT COPY INTO, COPY FROM ON TABLE new_facts TO mrdata;
Note: The current user must have admin rights.
The COPY INTO and COPY FROM privileges are MonetDB-specific SQL. They were introduced in the Jun2016 release.
For details see: Documentation/SQLreference/Permissions
See also: REVOKE PRIVILEGES GRANT ROLE CREATE USER
Associated system table: sys.privileges
REVOKE PRIVILEGES
REVOKE [ GRANT OPTION FOR ] <ALL or specific priviliges> ON [ TABLE | FUNCTION | AGGREGATE ] <table or function or procedure or aggregate name> FROM <user/role name(s) or PUBLIC>
Remove privileges on a table or function or procedure or aggregate function from a user or role or PUBLIC
Example:
REVOKE INSERT, DELETE, TRUNCATE ON TABLE web_event FROM jan; REVOKE EXECUTE ON FUNCTION refresh_func FROM piet;
Note: The current user must have admin rights.
The COPY INTO and COPY FROM privileges are MonetDB-specific SQL. They were introduced in the Jun2016 release.
For details see: Documentation/SQLreference/Permissions
See also: GRANT PRIVILEGES REVOKE ROLE DROP USER
Associated system table: sys.privileges
GRANT ROLE
GRANT <role> TO <user name> [ WITH ADMIN OPTION ] [ WITH ADMIN <grantor> ]
Add role privileges to a user
Example:
GRANT controller TO jan WITH ADMIN OPTION;
For details see: Documentation/SQLreference/Permissions
and: Documentation/SQLreference/Roles
See also: REVOKE ROLE GRANT PRIVILEGES CREATE ROLE
Associated system table: sys.user_role
REVOKE ROLE
REVOKE [ ADMIN OPTION FOR ] <role> FROM <user name(s)>
Remove role privileges from a user
Example:
REVOKE controller FROM jan;
For details see: Documentation/SQLreference/Permissions
and: Documentation/SQLreference/Roles
See also: GRANT ROLE DROP ROLE
Associated system table: sys.user_role
SET SCHEMA
SET SCHEMA <schema name>
Change the current schema
Example:
SELECT CURRENT_SCHEMA;
SET SCHEMA profiler;
SELECT CURRENT_SCHEMA;
For details see: Documentation/SQLreference/Schema
and: Documentation/Manuals/SQLreference/Variables
See also: CURRENT_SCHEMA
Associated system table: sys.var_values
SET ROLE
SET ROLE <role>
Change the current role
Example:
SELECT CURRENT_ROLE;
SET ROLE controller;
SELECT CURRENT_ROLE;
For details see: Documentation/SQLreference/Roles
and: Documentation/Manuals/SQLreference/Variables
See also: CURRENT_ROLE GRANT ROLE
Associated system table: sys.var_values
SET USER
SET USER = <login name>
Change the current user to the specified user
Example:
SELECT USER;
SET USER = jan;
SELECT USER;
Note: USER is a synonym for CURRENT_USER
For details see: Documentation/SQLreference/Users
and: Documentation/Manuals/SQLreference/Variables
See also: USER GRANT PRIVILEGES
Associated system table: sys.var_values
SET CURRENT_USER
SET CURRENT_USER = <user name>
Change the current user to the specified user
Example:
SELECT CURRENT_USER;
SET CURRENT_USER = piet;
SELECT CURRENT_USER;
For details see: Documentation/SQLreference/Users
and: Documentation/Manuals/SQLreference/Variables
See also: CURRENT_USER
Associated system table: sys.var_values
SET SESSION_USER
SET SESSION_USER = <user name>
Change the session user to the specified user
Example:
SET SESSION_USER = isabel; SELECT SESSION_USER;
For details see: Documentation/SQLreference/Users
and: Documentation/Manuals/SQLreference/Variables
See also: SESSION_USER
Associated system table: sys.var_values
SET SESSION AUTHORIZATION
SET SESSION AUTHORIZATION <role>
Change the session authorization to the specified role
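Example (illustrative; assumes the role controller exists and may be assumed by the session user):
SET SESSION AUTHORIZATION controller;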
For details see: Documentation/SQLreference/Roles
and: Documentation/Manuals/SQLreference/Variables
COPY INTO FROM
COPY [ int_val [ OFFSET int_val ] RECORDS ] INTO [ <schema name> . ] <table name> [ '(' column_name ',' ... ')' ] FROM <file name> [ ',' <file name> [ ... ] ] [ '(' header [ STRING ] ',' ... ')' ] [ [ USING ] DELIMITERS field_separator [ ',' record_separator [ ',' string_quote ] ] ] [ NULL [ AS ] null_string ] [ LOCKED ] [ BEST EFFORT ] [ NO CONSTRAINT ]
Read data from an external csv/txt file and bulk insert it into an existing table
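Example (an illustrative sketch; the table, file path and options are hypothetical):
COPY 1000 OFFSET 2 RECORDS INTO mysch.imp_article FROM '/tmp/articles.csv' USING DELIMITERS ',','\n','"' NULL AS '' BEST EFFORT;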
Note: This command is MonetDB specific SQL.
The file must be accessible from the server where the mserver5 process is running.
It can also read directly from a compressed file if the file name has the extension .bz2, .gz, .xz or .lz4. The extension .zip is not recognised; use extension .gz instead.
The default field_separator is the pipe '|' character.
The default record_separator is the newline '\n' character.
The default string_quote character is a double quote: ".
IMPORTANT: All character string data in the csv/txt file must be stored in UTF-8 character encoding. Other character encodings are not supported.
For details see: Documentation/Cookbooks/SQLrecipes/CSV_bulk_loads
and: Documentation/Cookbooks/SQLrecipes/LoadingBulkData
Associated system table: sys.querylog_history
COPY INTO FROM STDIN
COPY [ int_val [ OFFSET int_val ] RECORDS ] INTO [ <schema name> . ] <table name> [ '(' column_name ',' ... ')' ] FROM STDIN [ '(' header [ STRING ] ',' ... ')' ] [ [ USING ] DELIMITERS field_separator [ ',' record_separator [ ',' string_quote ] ] ] [ NULL [ AS ] null_string ] [ LOCKED ] [ BEST EFFORT ] [ NO CONSTRAINT ]
Read data from console (stdin) and bulk insert it into an existing table
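Example (a minimal sketch as used from the mclient console; the table and data rows are hypothetical):
COPY 2 RECORDS INTO towns FROM STDIN USING DELIMITERS ',','\n';
Leiden,NL,124899
Utrecht,NL,352941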
Note: This command is MonetDB specific SQL.
The default field_separator is the pipe '|' character.
The default record_separator is the newline '\n' character.
The default string_quote character is a double quote: ".
For details see: Documentation/Cookbooks/SQLrecipes/CSV_bulk_loads
and: Documentation/Cookbooks/SQLrecipes/LoadingBulkData
Associated system table: sys.querylog_history
COPY BINARY INTO FROM
COPY BINARY INTO [ <schema name> . ] <table name> FROM ( <file name> [ ',' <file name> [ ... ] ] )
Attach column data from external binary file(s) to make it available in the RDBMS very quickly
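Example (an illustrative sketch; the table is assumed to have exactly two columns and the file paths are hypothetical):
COPY BINARY INTO mysch.tbl FROM ('/tmp/col1.bin', '/tmp/col2.bin');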
Note: This command is MonetDB specific SQL.
The files must be accessible from the server where the mserver5 process is running.
IMPORTANT: not all datatypes are supported. Also, a NULL value must be represented as a special value, which differs per datatype.
For details see: Documentation/Cookbooks/SQLrecipes/BinaryBulkLoad
Associated system table: sys.querylog_history
COPY LOADER INTO FROM
COPY LOADER INTO [ <schema name> . ] <table name> FROM <loader function name>( [ <args> ] )
Read data via a loader function and bulk insert it into an existing table
Example:
COPY LOADER INTO tbl FROM partial_loader();
Note: This command is MonetDB specific SQL. It is introduced in Dec2016 release.
For a loader function defined in language PYTHON the Python script interpreter software must be available on the deployment server.
For details see: blog/monetdbpython-loader-functions
See also: CREATE LOADER CREATE TABLE FROM LOADER
Associated system table: sys.querylog_history
COPY INTO
COPY <data query> INTO <file name> [ [USING] DELIMITERS field_separator [ ',' record_separator [ ',' string_quote ] ] ] [ NULL AS null_string ]
Write query result data in bulk to an external csv/txt file
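Example (illustrative; the table and file path are hypothetical):
COPY SELECT * FROM web_event INTO '/tmp/web_event.csv' USING DELIMITERS ',','\n','"';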
Note: This command is MonetDB specific SQL.
The file path must be accessible from the server where the mserver5 process is running.
The default field_separator is the pipe '|' character.
The default record_separator is the newline '\n' character.
The default string_quote character is a double quote: ".
Character data will be exported in UTF-8 encoding. Other encodings are not supported.
The file can also be written directly as compressed file if the file name has extension: .bz2 or .gz or .xz or .lz4. Extension .zip is not recognised, use extension .gz instead.
For details see: Documentation/Cookbooks/SQLrecipes/ExportingBulkData
and: Documentation/Manuals/SQLreference/CopyInto
Associated system table: sys.querylog_history
COPY INTO STDOUT
COPY <data query> INTO STDOUT [ [USING] DELIMITERS field_separator [ ',' record_separator [ ',' string_quote ] ] ] [ NULL AS null_string ]
Write query result data to console (stdout)
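Example (illustrative):
COPY SELECT id, name FROM sys.tables INTO STDOUT USING DELIMITERS ',';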
Note: This command is MonetDB specific SQL.
The default field_separator is the pipe '|' character.
The default record_separator is the newline '\n' character.
The default string_quote character is a double quote: ".
For details see: Documentation/Cookbooks/SQLrecipes/ExportingBulkData
and: Documentation/Manuals/SQLreference/CopyInto
Associated system table: sys.querylog_history
SET TRANSACTION
SET TRANSACTION [ READ ONLY | READ WRITE | [ ISOLATION LEVEL [ READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SERIALIZABLE ] ] ]
Start a transaction (turn auto-commit off) with optionally a transaction mode or isolation level
Example:
SET TRANSACTION; ROLLBACK;
SET TRANSACTION READ ONLY; ROLLBACK;
SET TRANSACTION READ WRITE; ROLLBACK;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; ROLLBACK;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED; ROLLBACK;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; ROLLBACK;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; ROLLBACK;
SET LOCAL TRANSACTION
SET LOCAL TRANSACTION [ READ ONLY | READ WRITE | [ ISOLATION LEVEL [ READ UNCOMMITTED | READ COMMITTED | REPEATABLE READ | SERIALIZABLE ] ] ]
Start a local transaction (turn auto-commit off) with optionally a transaction mode or isolation level
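Example (illustrative):
SET LOCAL TRANSACTION READ ONLY;
ROLLBACK;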
START TRANSACTION
START TRANSACTION
Start a user controlled transaction
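Example (a minimal sketch; the table and data are hypothetical):
START TRANSACTION;
INSERT INTO towns VALUES ('Gouda', 'NL', 72338);
COMMIT;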
Note: A transaction can also include data definition (DDL) commands such as CREATE, ALTER, DROP.
For details see: Documentation/Manuals/SQLreference/Transactions
and: blog/monetdb-sql-transaction-management-scheme
See also: COMMIT ROLLBACK
COMMIT
COMMIT
Make all changes done since the start of the transaction persistent
For details see: Documentation/Manuals/SQLreference/Transactions
and: blog/monetdb-sql-transaction-management-scheme
See also: ROLLBACK START TRANSACTION
ROLLBACK
ROLLBACK
Undo all changes done since the start of the transaction
For details see: Documentation/Manuals/SQLreference/Transactions
and: blog/monetdb-sql-transaction-management-scheme
See also: COMMIT START TRANSACTION
SAVEPOINT
SAVEPOINT <svpt name>
Add a savepoint within a transaction
Example:
START TRANSACTION; SAVEPOINT sp1;
Note: You need to start a transaction first before you can use savepoints
See also: START TRANSACTION RELEASE SAVEPOINT ROLLBACK TO SAVEPOINT
ROLLBACK TO SAVEPOINT
ROLLBACK TO SAVEPOINT <svpt name>
Undo all changes done since the start of the savepoint
Example:
ROLLBACK TO SAVEPOINT sp1;
See also: SAVEPOINT RELEASE SAVEPOINT
RELEASE SAVEPOINT
RELEASE SAVEPOINT <svpt name>
Add all changes done since the start of the savepoint to the current transaction
Example:
RELEASE SAVEPOINT sp1;
See also: SAVEPOINT ROLLBACK TO SAVEPOINT
SET OPTIMIZER
SET optimizer = <(name of) optimizer pipeline>
Change the execution optimizer pipeline
Example:
SELECT optimizer;
SELECT * FROM sys.optimizers;
SET optimizer = 'sequential_pipe';
SELECT optimizer;
SET optimizer = 'default_pipe';
SELECT optimizer;
SET optimizer = 'ceci_nest_pas_une_pipe';
SELECT optimizer;
For details see: Documentation/Cookbooks/SQLrecipes/OptimizerPipelines
and: Documentation/Manuals/MonetDB/Optimizers
See also: PLAN EXPLAIN
Associated system table: sys.optimizers and sys.environment where name = 'sql_optimizer';
EXPLAIN
EXPLAIN <The SQL command>
Show the MAL execution plan for the SQL command
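Example (illustrative; the emitted MAL program varies per server version):
EXPLAIN SELECT name FROM sys.tables WHERE id > 2000;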
For details see: Documentation/Manuals/SQLreference/Explain
and: Documentation/Cookbooks/SQLrecipes/OptimizerPipelines
PLAN
PLAN <The SQL command>
Show the relational execution plan for the SQL command
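Example (illustrative):
PLAN SELECT name FROM sys.tables WHERE id > 2000;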
Note: This command is MonetDB specific SQL.
For details see: Documentation/Manuals/SQLreference/PlanSQL
and: Documentation/Cookbooks/SQLrecipes/OptimizerPipelines
TRACE
TRACE <The SQL command>
Execute the SQL command and return a table with the execution ticks of the MAL instructions
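Example (illustrative):
TRACE SELECT count(*) FROM sys.tables;
SELECT * FROM sys.tracelog();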
For details see: Documentation/Manuals/SQLreference/Trace
Associated system table: sys.tracelog()
DEBUG
DEBUG <The SQL command>
Execute the SQL command in DEBUG mode
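Example (illustrative; this starts an interactive debugging session):
DEBUG SELECT count(*) FROM sys.tables;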
For details see: Documentation/Manuals/SQLreference/Debug
ANALYZE
ANALYZE <schemaname> [ '.' <tablename> [ '(' <columnname> [, <columnname>] ')' ] ] [ SAMPLE size ] [ MINMAX ]
Gather statistics on column(s) of table(s) in a specific schema and store or update them in the system table sys.statistics for analysis
Example:
-- derive the statistics for table sys.statistics
ANALYZE sys.statistics;
SELECT * FROM sys.statistics;
-- update statistics of specific columns of sys.statistics
ANALYZE sys.statistics ("sample", count, "unique", minval, maxval) SAMPLE 1000;
SELECT * FROM sys.statistics WHERE column_id IN (SELECT id FROM sys.columns WHERE table_id IN (SELECT id FROM sys.tables WHERE schema_id IN (SELECT id FROM sys.schemas WHERE name = 'sys') AND name = 'statistics'));
-- remove all the statistics
DELETE FROM sys.statistics;
Note: The statistics information can be used by the query optimizer to choose the optimal execution plan. Rerun the ANALYZE command after many table manipulations (insert/update/delete).
For large tables this command can take some time. Specify a SAMPLE size to reduce the amount of data values to analyze. Add MINMAX to exclude the expensive unique count operation.
For details see: Documentation/Cookbooks/SQLrecipes/statistics
Associated system table: sys.statistics
USER
USER
Example:
SELECT USER;
Note: USER is a synonym for CURRENT_USER
See also: SET USER
Associated system table: sys.var_values
CURRENT_USER
CURRENT_USER
Example:
SELECT CURRENT_USER;
See also: SET CURRENT_USER
Associated system table: sys.var_values
SESSION_USER
SESSION_USER
Example:
SELECT SESSION_USER;
See also: SET SESSION_USER
Associated system table: sys.var_values
CURRENT_ROLE
CURRENT_ROLE
Example:
SELECT CURRENT_ROLE;
See also: SET ROLE
Associated system table: sys.var_values
CURRENT_SCHEMA
CURRENT_SCHEMA
Example:
SELECT CURRENT_SCHEMA;
See also: SET SCHEMA
Associated system table: sys.var_values
CURRENT_TIMEZONE
CURRENT_TIMEZONE
Example:
SELECT CURRENT_TIMEZONE;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
See also: SET CURRENT_TIMEZONE
Associated system table: sys.var_values
NOW
NOW
Example:
SELECT NOW;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
CURRENT_DATE
CURRENT_DATE
Example:
SELECT CURRENT_DATE;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
CURRENT_TIME
CURRENT_TIME
Example:
SELECT CURRENT_TIME;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
CURRENT_TIMESTAMP
CURRENT_TIMESTAMP
Example:
SELECT CURRENT_TIMESTAMP;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
LOCALTIME
LOCALTIME
Example:
SELECT LOCALTIME;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
LOCALTIMESTAMP
LOCALTIMESTAMP
Example:
SELECT LOCALTIMESTAMP;
For details see: Documentation/Cookbooks/SQLrecipes/DateTimeFunctionality
NEXT VALUE FOR
NEXT VALUE FOR <sequencename>
Example:
CREATE SEQUENCE "my_test_seq" as integer START WITH 2;
CREATE TABLE test (t int DEFAULT NEXT VALUE FOR "my_test_seq", v varchar);
INSERT INTO test(v) VALUES ('ab');
SELECT * FROM test;
For details see: Documentation/Manuals/SQLreference/SerialTypes
Lexical structure
Lexical structure mk Sat, 03/27/2010 - 22:27Comments
Comments can be added to query scripts for documentation purposes. MonetDB/SQL supports two forms of comments. Any text beginning with '--' and up to the end of the line is ignored. Furthermore, C-style comments ( /* this is ignored */ ) can be injected in a SQL query where a language token is expected. A language token can be a keyword, identifier, a literal constant or a special character. Tokens should be separated by white space (tab, space, newlines) unless their juxtaposition does not create an ambiguous interpretation. Comments are considered equivalent to white space.
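For example (illustrative):
SELECT 1 /* an in-line comment */ + 2; -- the rest of this line is ignored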
Identifiers and Keywords
SQL comes with a large collection of keywords, i.e. names reserved by the committee to designate language constructs. Keywords in MonetDB/SQL are case-insensitive and adhere to the general convention for identifier denotation in programming languages.
Users can overrule the interpretation of an identifier as a keyword by encapsulating it in double quotes, e.g. select denotes a keyword, while "select" denotes a user-defined name. This scheme also permits inclusion of white space in names. However, it is generally advisable to limit the number of escaped keywords.
Names are used to designate database objects. In that role, they are case-insensitive. The terminal identifier classes distinguished are denoted with the suffix _name.
Literal Constants
Numeric constants follow the convention of most programming languages. A numeric constant that contains neither a decimal point nor an exponent is considered of type integer. The underlying machine determines whether it will be interpreted as a 32- or 64-bit value, although these days it will most likely be the latter.
String constants in SQL are enclosed in single quotes ('). They may be split over multiple lines with blank space in between.
Often strings can be cast to other types using the CAST( expr AS type) expression, provided the typed coercion routine is available. For example:
CAST ( '#ccddee' AS integer)
CAST ( '0.3' AS double)
These illustrate the conversion of a color value in hexadecimal string notation into an integer, and of a string value into a double precision floating point number.
Temporal Constants
The DATE and TIMESTAMP data types come with a straightforward string literal conversion structure, e.g. SELECT DATE '2014-02-03', TIMESTAMP '2014-02-03'; produces a single properly typed row result. It is equivalent to SELECT CAST('2014-02-03' AS DATE), CAST ('2014-02-03' AS TIMESTAMP);
Special Characters
String literals may contain the traditional escape characters: '\n' for new line, '\t' for tab, '\r' for carriage return, and '\\' for backslash. The convention '\ddd', where each d is a digit, denotes a character by its octal number.
Operator Precedences
Most operators in SQL have the same precedence and are left-associative. Parentheses can be used to disambiguate the precedence order. The operator definitions cannot be overloaded, nor can you define new operators.
Operator | Associativity | Description
---|---|---
. | left | table/column name separator
- | right | unary minus
* / % | left | multiplication, division, modulo
+ - | left | addition, subtraction
IS, IS TRUE, IS FALSE, IS UNKNOWN | |
IS NULL, NOT NULL | | test for (not) null
IN | | set membership
BETWEEN | | range containment
OVERLAPS | | time interval overlap
LIKE, ILIKE | | string pattern matching
< > | | less than, greater than
= | right | equality, assignment
NOT | right | logical negation
AND | left | logical conjunction
OR | left | logical disjunction
^ | left | logical exclusive disjunction
Value Expressions
The language is built around value- and table- expressions. Value expressions encompass denotation of literal constants, type casts, column references, operator and function invocation, and scalar returning subqueries.
The column references take the form [correlation_name '.'] column_name. The correlation_name is either a table name or an alias introduced in a from clause. The correlation name may be omitted if the column name uniquely identifies a column within the scope of current query.
The table references take the form [schema_name '.'] table_or_view_name [AS alias_name]. Table expressions produce a relational table. It is internally referenced through a correlation name, which supports attribute selection using the '.' denotation.
Data types
Data types mk Sat, 03/27/2010 - 22:30SQL prescribes a large collection of built-in types, most of which are directly supported. The MonetDB database kernel can also be extended with user defined types to better support application domains.
Some domain specific data types (e.g. GeoSpatial) are not part of the MonetDB core, but are provided by extensions. For more information check the Spatial data types article.
Builtin SQL types
Builtin SQL types mk Thu, 04/01/2010 - 22:20CHAR [ '(' length ')' ] | CHARACTER [ '(' length ')' ] |
UTF-8 character string with length upperbound limit. CHAR or CHARACTER without the "( length )" specification are treated as CHAR(1). Note: currently no spaces are padded at the end |
VARCHAR '(' length ')' | CHARACTER VARYING '(' length ')' |
UTF-8 character string with length upperbound limit |
TEXT | CLOB | CHARACTER LARGE OBJECT | STRING | UTF-8 character string with unbounded length |
BLOB | BINARY LARGE OBJECT | bytes string with unbounded length |
BOOLEAN | BOOL | logic values: true or false |
TINYINT | 8 bit signed integer between -127 and 127 |
SMALLINT | 16 bit signed integer between -32767 and 32767 |
INT | INTEGER | 32 bit signed integer between -2147483647 and 2147483647 |
BIGINT | 64 bit signed integer between -9223372036854775807 and 9223372036854775807 |
HUGEINT | 128 bit signed integer between -2^127 +1 and +2^127 -1 (±170141183460469231731687303715884105727) Note: HUGEINT is only available on platforms with a C-compiler that supports the __int128 or __int128_t data type (e.g., recent gcc, clang, & icc on Linux or MacOS X) and from the Jul2015 release onwards |
DECIMAL '(' Prec ',' Scale ')' | DEC '(' Prec ',' Scale ')' | NUMERIC '(' Prec ',' Scale ')' |
Exact decimal number with precision Prec and scale Scale. Prec must be between 1 and 18 (or 38 when HUGEINT is also supported). Scale must be between 0 and Prec |
DECIMAL '(' Prec ')' | DEC '(' Prec ')' | NUMERIC '(' Prec ')' |
Exact decimal number with precision Prec and scale 0. Prec must be between 1 and 18 (or 38 when HUGEINT is also supported). |
DECIMAL | DEC | NUMERIC | Exact decimal number with precision 18 and scale 3. |
REAL | 32 bit floating point approximate number |
FLOAT | DOUBLE | DOUBLE PRECISION | 64 bit floating point approximate number |
FLOAT '(' Prec ')' | floating point approximate number with binary precision Prec. Prec must be between 1 and 53. FLOAT(24) is same as REAL, FLOAT(53) is same as DOUBLE |
All scalar types include a NULL value, which is internally represented as a valid domain value. For numerical types, this is the smallest value in the type's domain (i.e., the one omitted in the ranges given above). Arithmetic expressions that overflow may lead to returning the NULL instead.
The integer types align with the storage of 1, 2, 4, 8 and 16 bytes.
The numeric and decimal types are represented as fixed length integers, whose decimal point is produced during result rendering.
The types REAL, FLOAT and DOUBLE map to the underlying implementation system. No special attention is given to the value NaN.
Temporal types
Temporal types mk Thu, 06/30/2011 - 09:46MonetDB/SQL supports a rich set of time-related operations within the kernel. The starting point is formed by the SQL-92 and ODBC time-related data types. There are six basic types and operations on them:
DATE | a date in the Gregorian calendar YYYY-MM-DD, e.g. 1999-12-31 |
TIME [ '(' posint ')' ] | time of day (HH.MI.SS.sss) with precision (default 0) |
TIME [ '(' posint ')' ] WITH TIME ZONE | time of day (HH.MI.SS.sss) with precision (default 0) and time zone information |
TIMESTAMP [ '(' posint ')' ] | date concatenated with unique time, precision |
TIMESTAMP [ '(' posint ')' ] WITH TIME ZONE | date concatenated with unique time, precision and time zone information |
INTERVAL interval_qualifier | a temporal interval such as: INTERVAL SECOND or INTERVAL MONTH |
interval_qualifier: start_field TO end_field | single_datetime_field [ '(' time_precision ')' ]
A timestamp is a combination of date and time, indicating an exact point in time (GMT). GMT is the time at the Greenwich meridian without a daylight savings time (DST) regime. Absence of DST means that hours are consecutive (no jumps) which makes it easy to perform time difference calculations.
The local time is often different from GMT (even at Greenwich in summer, as the UK also has DST). Therefore, whenever a timestamp is composed from a local date and time, a timezone should be specified in order to translate the local time to GMT (and vice versa if a timestamp is to be decomposed into a local date and time). To adjust the local time, one can issue a command such as SET TIME ZONE INTERVAL '1' HOUR TO MINUTE.
We provide predefined timezone objects for a number of timezones (see below). Also, there is one timezone called the local timezone, which can be set to one global value in a running MonetDB server, that is used if the timezone parameter is omitted from a command that needs it (if not set, the default value of the local timezone is plain GMT).
The value ranges and lexical denotations are defined as follows:
Min and max years. The maximum and minimum dates and timestamps that can be stored are in the years 5,867,411 and -5,867,411, respectively. Interestingly, the year 0 is not a valid year. The year before 1 is called -1.
Dates. Fall in a valid year, and have a month and day that is valid in that year. The first day in the year is January 1, the last December 31. Months with 31 days are January, March, May, July, August, October, and December, while April, June, September and November have 30 days. February has 28 days, except in a leap year, when it has 29. A leap year is a year that is an exact multiple of 4. Years that are a multiple of 100 but not of 400 are an exception; they are not leap years.
Time. The smallest time is 00:00:00.000 and the largest 23:59:59.999 (the hours in a time range between [0,23], minutes and seconds between [0,59], and milliseconds between [0,999]). Time identifies a valid time-of-day, not an amount of time (for denoting amounts of time, or time differences, we use concepts like "number of days" or "number of seconds" denoted by some value of a standard integer type).
Timestamp. A valid timestamp is formed by a combination of a valid date and valid daytime. When creating a timestamp from a date and daytime, a timezone should be specified (if the timezone is omitted, the local timezone is assumed). If a timezone is specified, it is used to convert the date and time in that timezone to GMT, which is the internal timestamp representation. One problem here is that in the first hour after DST has ended (some Sunday night in autumn, generally), the time is set back one hour, so the same times occur twice. Hence two translations to a timestamp are possible for such date and time combinations. In those cases, we act as if it were the first occurrence (still within DST).
For difference calculations between dates (in numbers of days) we use signed integer. Hence, the valid range for difference calculations is between -2147483647 and 2147483647 days (which corresponds to roughly -5,867,411 and 5,867,411 years).
For difference between timestamps (in numbers of milliseconds) we use 64-bit bigint. These are large integers of maximally 19 digits, which therefore impose a limit of about 106,000,000,000 years on the maximum time difference used in computations.
datetime_field: YEAR | MONTH | DAY | HOUR | MINUTE | SECOND
Gregorian dates.
The basics of the Gregorian calendar stem from the time of Julius Caesar, when the concept of a solar year consisting of 365.25 days (365 days plus one extra day every 4 years) was introduced. However, this Julian Calendar made the year 11 minutes too long, which accumulated over the ages and caused a shift in the seasons. In medieval times this was noticed, and in 1582 Pope Gregory XIII issued a decree that skipped 10 days. This measure was not adopted in the whole of Europe immediately, however. For this reason, there were many regions in Europe that upheld different dates.
It was only on September 14, 1752 that some consensus was reached and more countries joined the Gregorian Calendar, which was also last modified at that time. The modifications were twofold: first, 11 more days were skipped. Second, it was determined that the year starts on January 1 (in England, for instance, it had previously started on March 25). Other parts of the world adopted the Gregorian Calendar even later.
MonetDB implements the Gregorian Calendar in all its regularity. This means that values before the year 1752 probably do not correspond with the dates that people really used in times before that (what they did use, however, was very vague anyway, as explained above). In solar terms, however, this calendar is reasonably accurate (see the "correction seconds" note below).
Timezones
The basic timezone regime was established on November 1, 1884 at the International Meridian Conference held in Greenwich (UK). Before that, almost every city kept its own local time. The conference established 24 time zones defined by regular longitude intervals that all differed by one hour. It was not long, however, before national and political interests started to erode this nicely regular system. Timezones now often follow country borders, and some regions (like the Guiana areas in Latin America) have times that differ from GMT by a 15-minute grain rather than an hour or even half-an-hour grain.
An extra complication was the introduction of daylight saving time (DST), which causes a time jump in spring, when the clock skips one hour, and in autumn, when the clock is set back one hour (so in a one-hour span, the same times occur twice). The DST regime is a purely political decision made on a country-by-country basis. Countries in the same timezone can have different DST regimes. Even worse, some countries have DST in some years, and not in other years.
To avoid confusion, this temporal type module stores absolute points of time in GMT only (GMT does not have a DST regime). When storing local times in the database, or retrieving local times from absolute timestamps, a correct timezone object should be used for the conversion.
Applications that do not make correct use of timezones will produce irregular results on e.g. time difference calculations.
Correction seconds
Once every few hundred years, a correction second is added on New Year's night. As this rule would seriously complicate the temporal type module (the duration of a day, which is now the fixed number of 24*60*60*1000 milliseconds, would become parametrized by the date), it is not implemented. Hence these seconds are lost, so time difference calculations in milliseconds (rather than in days) have a small error if the time difference spans many hundreds of years.
We cannot handle changes in the timezone rules well (e.g., DST has only existed for about 40 years, and some countries make frequent changes to their DST policy). To accommodate this, timezone_local should become a function with a year parameter. The tool should maintain and access a timezone database; lookup of the correct timezone would be dynamic in this structure. The timezone_setlocal would then just set the string name of the timezone.
Serial datatypes
Serial datatypes mk Thu, 04/01/2010 - 22:21As of 2003 the SQL standard supports serial types (sequences). They are of particular use in auto-generating key values. A serial type is defined as a primary database object over any of the built-in data types. The NEXT VALUE operation generates the next value and can be used anywhere a value expression is allowed. Its name should be unique within the current schema. It can only be dropped when the tables mentioning it have previously been dropped.
generated_column: sequence_commands: seq_params: alter_seq_params: |
Example. The example shown below introduces the column count, which is incremented with each row being added. It is conceptually identical to the value expression max(count)+1 in each insert. The column info is a limited range with wrap around. The serial type as found in PostgreSQL and the auto_increment flag as found in MySQL are both mapped onto a sequence type in MonetDB/SQL.
CREATE TABLE test_serial (
d DATE,
id SERIAL, -- this will implicitly create a PKey. Use BIGSERIAL if you want the id to be of type bigint instead of int.
count INT AUTO_INCREMENT,
info INT GENERATED ALWAYS AS
IDENTITY (
START WITH 100 INCREMENT BY 2
NO MINVALUE MAXVALUE 1000
CACHE 2 CYCLE
) );
Much like other primary database objects, the sequence type can be altered at any time as illustrated below.
sql>CREATE SEQUENCE "my_test_seq" as integer START WITH 2;
sql>CREATE TABLE test (t int DEFAULT NEXT VALUE FOR "my_test_seq", v char);
sql>INSERT INTO test(v) VALUES ('a');
Rows affected 1
sql>INSERT INTO test VALUES (10, 'b');
Rows affected 1
sql>ALTER SEQUENCE "my_test_seq" RESTART WITH (SELECT MAX(t) + 1 FROM test);
sql>INSERT INTO test(v) VALUES ('c');
Rows affected 1
sql>SELECT * FROM test;
+----+---+
| t | v |
+====+===+
| 2 | a |
| 10 | b |
| 11 | c |
+----+---+
The functions sys.get_value_for('<schema name>', '<sequence name>') and sys.next_value_for('<schema name>', '<sequence name>') can be used to query the current value of a sequence. The difference is that next_value_for() also advances the current value of the sequence to the next value. The SQL statement SELECT NEXT VALUE FOR <schema name>.<sequence name> is an equivalent of sys.next_value_for('<schema name>', '<sequence name>'). The following queries demonstrate how these functions and statement work:
sql>CREATE SEQUENCE "myseq" AS INTEGER;
operation successful
sql>SELECT get_value_for('sys', 'myseq');
+------+
| L2 |
+======+
| 1 |
+------+
1 tuple
sql>SELECT next_value_for('sys', 'myseq');
+------+
| L2 |
+======+
| 1 |
+------+
1 tuple
sql>SELECT NEXT VALUE FOR myseq;
+------+
| L2 |
+======+
| 2 |
+------+
1 tuple
JSON datatype
JSON datatype mk Thu, 01/23/2014 - 08:44JSON has become the de facto lightweight data interchange format for many web applications. It has a simple hierarchical structure and supports a limited set of value types. JSON is a natural representation of data for the C family of programming languages.
JSON is supported in MonetDB as a subtype over type VARCHAR, which ensures that only valid JSON strings are added to the database.
Example: CREATE TABLE json_example (c1 JSON, c2 JSON(512) NOT NULL);
MonetDB supports most of the JSON path expressions defined in [ref]. It can be used to decompose the values into regular tables, which then act as an index. A limited set of operators are predefined in the SQL catalogue.
json.filter(J, Pathexpr) | Extracts the component from J that satisfies the Pathexpr |
json.filter(J, Number) | Extracts an indexed component from J |
json.text(J, [Sep]) | Glues together the values, separated by the Sep character (default space) |
json.number(J) | Turns a number, singleton array value, or singleton object tag into a double |
json."integer"(J) | Turns a number, singleton array value, or singleton object element into an integer |
json.isvalid(StringExpr) | Checks the string for JSON compliance. Returns boolean. |
json.isobject(StringExpr) | Checks the string for JSON object compliance. Returns boolean. |
json.isarray(StringExpr) | Checks the string for JSON array compliance. Returns boolean. |
json.length(J) | Returns the number of top-level components of J. |
json.keyarray(J) | Returns a list of key tags for the top-level components of J. |
json.valuearray(J) | Returns a list of values for the top-level components of J. |
JSON path expressions always refer to a single JSON structure. The root of this structure is identified by the identifier '$', which is implicitly assumed in most expressions. Components of the JSON structure are addressed through the dot notation, i.e. representing child steps and array element access. The wild card can be used for child names and undetermined array indices.
JSON path | Description | Example
---|---|---
"$" | The root object | json.filter(v, '$') = { "store": ... }
"." childname | The child step operator | json.filter(v, '$.store.bicycle') = {"color": "red", "price": 19.95}
".." childname | Recursive child step | json.filter(v, '$..price') = [8.95,12.99,8.99,22.99,19.95]
"*" | Child name wildcard | json.filter(v, '$.store.bicycle.*') = {"color": "red", "price": 19.95}
"[" nr "]" | Array element access | json.filter(v, '$.store.book.[1]') = the second book
"[" * "]" | Any array element access |
E1 "," E2 | Union of path expressions | json.filter(v, '$.store.bicycle,$..price')
An example JSON object used for these expressions is shown below [ref]; more examples can be found in the testsuite.
{ "store": {
"book": [
{ "category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95
},
{ "category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99
}
],
"bicycle": {
"color": "red",
"price": 19.95
}
}
}
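The following minimal sketch (the table name and data are hypothetical) shows the path expressions in combination with the functions listed above:
CREATE TABLE json_demo (v JSON);
INSERT INTO json_demo VALUES ('{"price": 9, "tags": ["a", "b"]}');
SELECT json.filter(v, '$.tags') FROM json_demo;                   -- ["a","b"]
SELECT json."integer"(json.filter(v, '$.price')) FROM json_demo; -- 9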
URL datatype
URL datatype mk Wed, 02/26/2014 - 15:14The related URL data type is a subdomain over strings and represents valid Uniform Resource Locators.
Example: CREATE TABLE URL_example (c1 URL, c2 URL(512) NOT NULL);
A collection of functions is provided to create, validate and extract portions of a URL:
sys.getAnchor(theUrl url) returns string
sys.getBasename(theUrl url) returns string
sys.getContent(theUrl url) returns string
sys.getContext(theUrl url) returns string
sys.getDomain(theUrl url) returns string
sys.getExtension(theUrl url) returns string
sys.getFile(theUrl url) returns string
sys.getHost(theUrl url) returns string
sys.getPort(theUrl url) returns string
sys.getProtocol(theUrl url) returns string
sys.getQuery(theUrl url) returns string
sys.getUser(theUrl url) returns string
sys.getRobotURL(theUrl url) returns string
sys.isaURL(theUrl string) returns boolean
sys.newurl(protocol string, hostname string, "port" int, file string) returns url
sys.newurl(protocol string, hostname string, file string) returns url
UUID datatype
UUID datatype mk Sat, 01/14/2012 - 14:40The data type UUID stores a Universally Unique Identifier (UUID) as defined by RFC 4122, ISO/IEC 9834-8:2005, and related standards. It can be used as a global unique 128-bit identifier. UUIDs are generated by an algorithm that ensures an extremely low probability that two calls to the same create function will ever produce the same value. They are often used in distributed (web-based) systems for this reason.
A UUID is written as a sequence of lower-case hexadecimal digits, in several groups separated by hyphens, specifically a group of 8 digits followed by three groups of 4 digits followed by a group of 12 digits, for a total of 32 digits representing the 128 bits. An example of a UUID in this standard form is:
select uuid();
+--------------------------------------+
| uuid                                 |
+======================================+
| 65950c76-a2f6-4543-660a-b849cf5f2453 |
+--------------------------------------+
The system function sys.uuid() generates a new uuid and returns the uuid.
The system function sys.isauuid(string) checks whether the string value satisfies the grammatical UUID structure and returns a boolean value.
You can use cast() or convert() to convert a valid uuid string to a uuid type.
select cast('26d7a80b-7538-4682-a49a-9d0f9676b765' as uuid);
select convert('83886744-d558-4e41-a361-a40b2765455b', uuid);
Network Address Type
Network Address Type mk Sat, 01/14/2012 - 14:49The SQL type inet describes IPv4 network addresses. The inet module contains a collection of functions that operate on IPv4 addresses. The most relevant functions are the `containment' functions that deal with subnet masks.
sys.broadcast(p inet) returns inet
sys.host(p inet) returns clob
sys.masklen(p inet) returns int
sys.setmasklen(p inet, mask int) returns inet
sys.netmask(p inet) returns inet
sys.hostmask(p inet) returns inet
sys.network(p inet) returns inet
sys.text(p inet) returns clob
sys.abbrev(p inet) returns clob
sys."left_shift"(i1 inet, i2 inet) returns boolean
sys."right_shift"(i1 inet, i2 inet) returns boolean
sys."left_shift_assign"(i1 inet, i2 inet) returns boolean
sys."right_shift_assign"(i1 inet, i2 inet) returns boolean
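For example (an illustrative sketch; the address is hypothetical):
SELECT sys.masklen(CAST('192.168.1.5/24' AS inet));  -- expected: 24
SELECT sys.netmask(CAST('192.168.1.5/24' AS inet));  -- expected: 255.255.255.0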
The functionality of this module is greatly inspired by the PostgreSQL inet atom. It should be extended to also support IPv6.
User defined types
User defined types mk Thu, 04/01/2010 - 22:22The MonetDB kernel supports the creation of user defined types, e.g. geometric types. The relationship between the SQL and MAL worlds is expressed using an external name.
CREATE TYPE [schema_name '.'] type_name EXTERNAL NAME type_identification
DROP TYPE [schema_name '.'] type_name [RESTRICT | CASCADE]
The implementation of new atomary types is best postponed until there is no other performance-wise acceptable solution. Adding an atomary type to the kernel is beneficial only if it is also complemented with bulk operations and if type-specific optimizers can exploit its semantics.
Data definition
Data definition mk Thu, 06/30/2011 - 09:49MonetDB/SQL supports the full range of object definitions.
Table definitions
Table definitions mk Sat, 03/27/2010 - 22:34The CREATE TABLE statement conforms to the full SQL standard. Tables are assigned to the current schema unless the schema name is explicitly given as part of the table name. Table names should be unique amongst those mentioned within the same schema and distinct from view names.
table_def: table_content_source: table_element_list: |
The derived (temporary) tables are either filled upon creation or automatically upon use within queries.
Temporary tables are stored automatically under the schema 'tmp'. Temporary local tables are limited to the client session. The qualifiers denote the actions taken during transaction commit over a temporary table. If the ON COMMIT clause is omitted then all tuples are dropped while retaining the structure. In most cases you would use: ON COMMIT PRESERVE ROWS
For using Loader functions we support the MonetDB specific CREATE table FROM LOADER syntax.
For merging partitioned table data we support the MonetDB specific CREATE MERGE table syntax.
For replicating table data we support the MonetDB specific CREATE REPLICA table syntax.
For distributed query processing we support the MonetDB specific CREATE REMOTE table syntax.
For data stream processing we support the MonetDB specific CREATE STREAM table syntax.
Table elements
Table elements mk Sun, 03/28/2010 - 13:46table_element: column_name data_type [ column_option ... ] | column_name WITH OPTIONS '(' column_option ','... ')' | column_name { SERIAL | BIGSERIAL } | LIKE table_name | [ CONSTRAINT constraint_name ] table_constraint_type |
column_option: DEFAULT default_value | [ constraint_name ] column_constraint_type | GENERATED ALWAYS AS IDENTITY [ '(' serial_parameters ')' ] | AUTO_INCREMENT |
Identity columns take their values from a sequence generator. The MySQL (auto_increment) and PostgreSQL (serial data type) syntax are also supported. The SERIAL type is an identity column which is mapped to an INTEGER NOT NULL PRIMARY KEY. Similarly the BIGSERIAL type is an identity column which is mapped to a BIGINT NOT NULL PRIMARY KEY.
Column and Table constraints are both supported. Besides the simple not null check, also unique, primary key and foreign key constraints are supported. A table can have at most one primary key definition. We currently check constraints directly on insert, update and delete, so immediately. This limitation stems from the missing trigger support. The null matching on foreign keys is limited to the simple match type (null values satisfy the constraint). The full and partial match types are not supported. The referential action is currently limited to restrict, i.e. an update fails if other columns have references to it.
Note: The column CHECK constraint definitions are accepted by the parser (for ease of migration) but currently not enforced nor recorded in a data dictionary table.
column_constraint_type: NOT NULL | NULL | UNIQUE | PRIMARY KEY | CHECK '(' column_logical_expression ')' | REFERENCES table_name [ '(' column_name ',' ... ')' ] [ match_options ] [ ref_actions ] match_options: ref_actions: |
table_constraint_type: UNIQUE '(' column_name ',' ... ')' | PRIMARY KEY '(' column_name ',' ... ')' | FOREIGN KEY '(' column_name ',' ... ')' REFERENCES table_name [ '(' column_name ',' ... ')' ] [ match_options ] [ ref_actions ] |
A table constraint definition is not tied to a particular column, but rather to a column group. A column constraint is a notational convenience when the constraint only affects one column.
A constraint over a referenced value can be either a FULL, PARTIAL or SIMPLE (default) match. A full match requires all column values to be not null unless all are null. The simple match is more relaxed: any of the keys may be null. The partial match is considered noise for the time being.
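A minimal sketch (the tables are hypothetical) of a table constraint with an explicit match type and referential action:
CREATE TABLE parent (a INT, b INT, UNIQUE (a, b));
CREATE TABLE child (
  x INT,
  y INT,
  FOREIGN KEY (x, y) REFERENCES parent (a, b) MATCH SIMPLE ON DELETE RESTRICT
);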
The serial columns are a slight extension over the serial types defined earlier. In particular, one can designate a RESTART option for when the sequence is exhausted. This may involve a subquery over the database.
serial_parameters: [ START WITH nonzero-bigint ] | [ RESTART | RESTART WITH subquery | RESTART WITH nonzero-bigint ] | [INCREMENT BY nonzero-bigint ] | [MINVALUE nonzero-bigint | NOMINVALUE] | [MAXVALUE nonzero-bigint | NOMAXVALUE ] | [CACHE nonzero-bigint ] | [CYCLE | NOCYCLE] |
Index definitions
Index definitions mk Sat, 03/27/2010 - 22:46The index statements in the SQL standard are recognized, but their implementation is different from competitive products. MonetDB/SQL interprets these statements as an advice and often freely neglects it, relying on its own decision to create and maintain indexes for fast access. Also the UNIQUE qualifier is not honored or enforced. To add a uniqueness constraint for a table or column use UNIQUE in the CREATE TABLE or ALTER TABLE statement.
index_def: CREATE [ UNIQUE ] INDEX ident ON [ schema name . ] table name '(' ident ','... ')' |
MonetDB supports two special kinds of secondary indices: IMPRINTS and ORDERED INDEX, introduced in release Dec2016 (v11.25.3). These index types are experimental and have some limitations: Only 1 column can be indexed per index. Only columns of numeric data type (tinyint, smallint, integer, bigint, hugeint, float, real, double, decimal) can be indexed. The index is not maintained automatically and will become inactive when inserts, deletes or updates are done on the column data. Since creating these indices can be expensive, they are only used when explicitly created. They are useful for large static (or read only) tables with numeric columns.
index_def: CREATE IMPRINTS INDEX ident ON [ schema name . ] table name '(' numeric column name ')' |
Creates a new imprints index on one numerical column of a specific table. This index stores meta data (min, max, nulls) on segments of column data. It can speed-up queries which have a column selection condition (such as: AGE IS NULL OR AGE >= 18). It is a very compact index.
See also: Documentation/Manuals/MonetDB/Kernel/Modules/Imprints
index_def: CREATE ORDERED INDEX ident ON [ schema name . ] table name '(' numeric column name ')' |
Creates a new ordered index on one numerical column of a specific table. This index is a special single column index where the numeric values are stored in ascending order. It can speed-up queries which have a column selection condition (such as: AGE BETWEEN 18 AND 30) or need sorting such as when used in a GROUP BY clause.
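Example (illustrative; the table and column are hypothetical):
CREATE ORDERED INDEX person_age_idx ON mysch.person (age);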
Alter statement
Alter statement mk Sat, 03/27/2010 - 22:39The ALTER statement can be used to change TABLE properties; it requires authorization to do so. Adding a column follows the same syntax and functionality as the CREATE TABLE statement. This form can also be used to remove individual columns from a table. If the column is used in a foreign key relationship, the action to be taken can be further specified: RESTRICT forbids the removal if there are external references to the column values, while CASCADE nullifies all references to the values being removed.
alter_statement:
ALTER TABLE qname ADD [COLUMN] { column_def | table_constraint } |
An individual table can be protected using the READ ONLY and INSERT ONLY mode. All attempts to update are flagged as a SQL error. The reverse operation is ALTER TABLE qname READ WRITE, which makes the table accessible for all update operations.
| ALTER TABLE qname SET { { READ | INSERT } ONLY | READ WRITE } |
The ALTER statement has been extended with ADD TABLE and DROP TABLE options to allow adding/removing partition tables to/from a MERGE TABLE.
| ALTER TABLE qname ADD TABLE qname |
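For example (an illustrative sketch; the tables are hypothetical):
ALTER TABLE mysch.person ADD COLUMN age INT;
ALTER TABLE mysch.person SET READ ONLY;
ALTER TABLE mysch.all_events ADD TABLE mysch.events_2017;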
View definitions
View definitions mk Fri, 04/02/2010 - 10:39Regular view specifications are supported.
view_def: CREATE [ OR REPLACE ] VIEW view_name [ '(' column_name ',' ... ')' ] AS select_query_without_limit [ WITH CHECK OPTION ] |
A view can be seen as a stored SELECT query. It is useful to reduce query complexity (as it can include joins, unions and aggregations), to increase standardisation and reusability, and to support server-based reporting.
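Example (illustrative; the underlying table is hypothetical):
CREATE VIEW mysch.active_orders AS SELECT * FROM mysch.orders WHERE status = 'active';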
Note: Recursive views and reference-able views are not supported.
Note: You can't insert, update, delete or truncate data from a view. Updatable views are not supported.
Schema definitions
Schema definitions mk Sat, 03/27/2010 - 22:50A SCHEMA is a container for tables, views, indices, triggers, functions and procedures. Schema access and modification is strictly controlled using the user role and authorisation scheme.
Every SCHEMA belongs to a database. One reason for organizing the logically separated data in schemas rather than databases is that in this way there will still be just one MonetDB process running.
schema: |
The DEFAULT CHARACTER SET and PATH options are here for compatibility with the SQL standard; however, they are not yet implemented. The default character set is UTF-8.
schema_name_clause:
ident | [ ident ] AUTHORIZATION ident
The AUTHORIZATION option allows specifying the name of the user or the role that will own the schema. If omitted, the user who has executed the query will be the owner. The owner of the schema is allowed to create, alter and drop tables. With the AUTHORIZATION option, an explicit name for the schema is optional. If omitted, the schema automatically gets the name of the authorised user/role as its name.
Notes on schema creation:
- The ownership of a schema can be assigned to only one user/role, and it cannot be modified after its creation. Therefore, to share the ownership of a schema, one must assign the ownership to a role when the schema is created. Subsequently, the role can be granted to multiple users to own the schema.
- Only the 'monetdb' user and the 'sysadmin' role can create a new schema. Therefore, to allow other users to create schemas, the 'monetdb' user should grant the 'sysadmin' role to the intended users.
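For example (an illustrative sketch; the role, schema and user names are hypothetical):
CREATE ROLE datateam;
CREATE SCHEMA bank AUTHORIZATION datateam; -- the schema is owned by the role datateam
GRANT datateam TO "Donald";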
schema_element:
grant | revoke | create_statement | drop_statement | alter_statement
schema:
| DROP SCHEMA qname drop_action
drop_action:
RESTRICT | CASCADE
The drop_action option is supported for compatibility with the SQL standard; however, it is not yet implemented. Currently it always behaves as CASCADE, meaning that once a schema is dropped, every object inside the schema, such as tables and functions, is dropped as well. However, objects that depend on the schema, such as users, will not automatically be dropped and will prevent the schema from being dropped. One can either ALTER the user to give it a different default schema, or simply drop the user if it is no longer needed.
schema:
| SET SCHEMA ident
When opening the database, the “sys” schema is set by default. Another automatically created schema is “tmp”, used for temporary local tables. It is not possible for a user to access the “tmp” schema of another user.
One can create a new schema using the CREATE SCHEMA command and switch to it using the SET SCHEMA command. When creating a table without specifying the schema, the table will be created in the schema that is currently in use. In order to create a table in a different schema, use the schema name as a prefix.
Drop statement
Drop statement mk Mon, 05/24/2010 - 07:58The objects created can be removed provided the authorization permissions are set. Note: if you do not specify the full signature of a function, the DROP query will only succeed if there is exactly one function with that name; otherwise the query is aborted. DROP ALL is used to drop all functions with the name specified in the query.
drop_statement: DROP TABLE qname [ RESTRICT | CASCADE ] | DROP VIEW qname [ RESTRICT | CASCADE ] | DROP [ FILTER ] FUNCTION qname [ '(' [ data_type ','... ] ')' ] [ RESTRICT | CASCADE ] | DROP PROCEDURE qname [ '(' [ data_type ','... ] ')' ] [ RESTRICT | CASCADE ] | DROP AGGREGATE qname [ '(' [ data_type ','... ] ')' ] [ RESTRICT | CASCADE ] | DROP LOADER qname [ '(' [ data_type ','... ] ')' ] | DROP ALL [ FILTER ] FUNCTION qname [ RESTRICT | CASCADE ] | DROP ALL PROCEDURE qname [ RESTRICT | CASCADE ] | DROP ALL AGGREGATE qname [ RESTRICT | CASCADE ] | DROP INDEX qname | DROP TRIGGER qname | DROP SEQUENCE qname | DROP ROLE qname | DROP USER ident | DROP TYPE qname | DROP SCHEMA name [ RESTRICT | CASCADE ] |
Privileges
Privileges mk Wed, 02/26/2014 - 15:41Users
Users mk Sat, 03/27/2010 - 22:54All interactions with the database server are attributed to a particular user known to the system. Therefore, a user login name should be created upfront, much like you would have to on any computer system. Your user name determines the schemas you have permission to access and your role in managing them. Much like your computer's file system, it is essential to restrict access to your valuable database content via grant and revoke of privileges and/or roles.
To create, modify/alter or drop a user in the database you can use the following syntax:
user_stmt: |
The user name, password and default schema can be changed using the different alter user statements.
Note that for a user (including the administrator user: monetdb) to change its own password, the ALTER USER SET form has to be used where the current password is given. When the ENCRYPTED clause is given for PASSWORD, a hashed form of the password has to be supplied. The type of this hash depends on how the server was configured, but defaults to SHA512.
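For illustration, a sketch of both forms (user names and passwords are, of course, made up):
-- a user changes his/her own password
ALTER USER SET PASSWORD 'new_secret' USING OLD PASSWORD 'old_secret';
-- the administrator renames a user
ALTER USER "user1" RENAME TO "user2";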
You may view the list of users in the database via query:
SELECT * FROM sys.users;
The user name, password and database name are needed to authenticate yourself against tools such as mclient and all APIs, to gain access to the particular database. Once connected, the current schema is the one specified for the user.
Every user plays a certain role. So, when a user is created, a role with the same name is automatically created as well. The difference is that the user has all corresponding permissions, e.g. the user has all permissions on the tables it creates. However, the role with the same name remains empty until permissions have been granted to it explicitly.
Roles
Roles mk Mon, 05/24/2010 - 07:51Every user takes a role with respect to the privileges on a database schema. A user can be granted multiple roles and a role can be granted to multiple users.
When a new user is created, a role with the same name is automatically created as well; this is also the default role the user takes. However, only the role can be granted to other users/roles.
Privileges are not inherited through a chain of roles. For example, suppose role alice can only SELECT, role bob can only INSERT, and alice is granted to bob. If bob is in turn granted to the user charlie, then charlie obtains only the INSERT privilege of bob, not the SELECT privilege of alice.
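A minimal sketch of this scenario (the table library.books and all names are hypothetical):
CREATE ROLE alice;
CREATE ROLE bob;
GRANT SELECT ON library.books TO alice;
GRANT INSERT ON library.books TO bob;
GRANT alice TO bob;
-- charlie receives only bob's own INSERT privilege, not alice's SELECT
GRANT bob TO charlie;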
role_statement: grantor: |
A user can only assume a role after he/she has logged in. The user assumes a role by using the SET ROLE command; then the user can exercise all the rights granted to that role. Two roles cannot be assumed at once.
Grant and revoke
Grant and revoke mk Fri, 04/02/2010 - 12:47Qualified users can grant roles and other users various privileges on tables, execute privileges on functions and procedures, or global privileges. The table privileges are any combination of SELECT, INSERT, DELETE, TRUNCATE, UPDATE and REFERENCES. ALTER and INDEX privileges cannot be granted; only the owner has the right to alter a table definition. Any privilege can later be revoked, which again takes any combination of the previously granted privileges.
grant: revoke: privileges: object_privileges: table-operation: global_privilege: grantees: [ PUBLIC | authid ] ','... grantor: |
Bulk I/O permissions
For reasons of security, by default, MonetDB only allows the special user monetdb to execute the bulk I/O statements COPY INTO (copy data from the database into a file) and COPY FROM (copy data from a file into the database). As of the June2016 release MonetDB supports granting a user/role the privilege of using the bulk I/O statements. These privileges are global privileges, i.e., they apply to all tables on which the user/role has the INSERT permission.
Examples
1. Grant permissions per table
It is possible to grant one privilege at a time to a user, per table:
GRANT SELECT ON bookSchema.orderTable TO libraryUser;
GRANT INSERT ON bookSchema.orderTable TO libraryUser;
However, if you want the user to be able to SELECT, INSERT, UPDATE, DELETE and TRUNCATE, then it can be done in one go:
GRANT ALL ON bookSchema.orderTable TO libraryWorker;
2. Parallel bulk load in shared schema
The following queries grant both the ownership of a schema and the bulk data loading permission (i.e., COPY FROM) to multiple users. Then, those users can log in and bulk load the data into different tables (of the same schema) in parallel.
-- 'monetdb' user
CREATE ROLE copy_role;
GRANT COPY FROM TO copy_role;
CREATE SCHEMA copy_schema AUTHORIZATION copy_role;
CREATE USER user1 WITH PASSWORD 'user1' NAME 'copy user' SCHEMA copy_schema;
CREATE USER user2 WITH PASSWORD 'user2' NAME 'copy user' SCHEMA copy_schema;
GRANT copy_role TO user1;
GRANT copy_role TO user2;
CREATE TABLE copy_schema.t1 (id INT, val VARCHAR(128));
CREATE TABLE copy_schema.t2 (id INT, num INT);
-- normal user user1
SET ROLE copy_role;
COPY INTO t1 FROM '/[path-to]/input.csv' USING DELIMITERS ',';
-- normal user user2
SET ROLE copy_role;
COPY INTO t2 FROM '/[path-to]/input.csv' USING DELIMITERS ',';
Data manipulation
Data manipulation mk Sat, 03/27/2010 - 22:42Table updates
Table updates mk Thu, 06/30/2011 - 16:01Data insertions
A table can be populated using an insert statement. It takes a table name and a value expression list. The expression result types should align with the columns in the table definition. Otherwise the column-value association should be explicitly defined using a column name list. Multiple rows can be inserted in a single statement.
The result of a query can be inserted into a table, provided both source and destination are type compatible. Insertion of a table into itself effectively doubles its content, provided none of the table constraints is violated.
INSERT INTO [ schema_name '.' ] table_name [ '(' column_name [ ',' ... ] ')' ]
value_expression : { scalar_exp | DEFAULT | NULL }
You must have INSERT privilege for the table. The creator/owner of the table will have INSERT privilege automatically.
MonetDB/SQL does not support data insertions on views.
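A few illustrative insertions (the inventory table is hypothetical):
CREATE TABLE inventory (id INT, item VARCHAR(32), price DECIMAL(8,2));
-- multiple rows in a single statement
INSERT INTO inventory VALUES (1, 'pen', 1.50), (2, 'book', 9.95);
-- explicit column list; the omitted price column receives its default
INSERT INTO inventory (id, item) VALUES (3, 'gift');
-- inserting a table into itself doubles its content
INSERT INTO inventory SELECT * FROM inventory;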
Data updates
The update statement syntax follows the SQL standard, but its semantics for bulk updates on keys may be slightly different from what is expected in other systems. In particular, the update implementation ensures that you can freely update any column without the danger of run-away values.
UPDATE [ schema_name '.' ] table_name SET assignment [ ',' ... ] [ WHERE search_condition ]
assignment : column_name '=' { scalar_exp | search_condition | NULL | DEFAULT }
You must have UPDATE privilege for the table or column(s). The creator/owner of the table will have UPDATE privilege automatically.
MonetDB/SQL does not support data updates on views.
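For example, on the hypothetical inventory table introduced above:
-- raise all prices by 21%
UPDATE inventory SET price = price * 1.21;
-- reset selected rows to their declared default and to NULL
UPDATE inventory SET item = DEFAULT, price = NULL WHERE id = 3;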
Data deletions
DELETE FROM [ schema_name '.' ] table_name [ WHERE search_condition ]
You must have DELETE privilege for the table. The creator/owner of the table will have DELETE privilege automatically.
MonetDB/SQL does not support data deletions on views.
To quickly delete all rows in a table use TRUNCATE TABLE.
TRUNCATE [ TABLE ] [ schema_name '.' ] table_name [ CONTINUE IDENTITY | RESTART IDENTITY ] [ RESTRICT | CASCADE ]
You must have TRUNCATE privilege for the table. The creator/owner of the table will have TRUNCATE privilege automatically.
A 'CONTINUE IDENTITY' or 'RESTART IDENTITY' clause can be passed to control whether an identity sequence present in the table is restarted. The default is 'CONTINUE IDENTITY', i.e. sequence numbering continues.
The 'CASCADE' option instructs MonetDB to also truncate referencing tables, if any tables have foreign key references to this table. The default behavior is 'RESTRICT'.
Note: it is possible to use TRUNCATE statements in a transaction and thus to roll back the effects of a truncate.
MonetDB/SQL does not support truncation of data on views.
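A sketch of both points, using the hypothetical inventory table:
TRUNCATE TABLE inventory RESTART IDENTITY CASCADE;
-- a truncate can be undone inside a transaction
START TRANSACTION;
TRUNCATE inventory;
ROLLBACK;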
Table expressions
Table expressions mk Sat, 03/27/2010 - 22:45In a query, a table can be referenced in several ways: by its name, by a select query, or as a join result. The syntax to refer to a table is:
table_reference: simple_table
joined_table: '(' joined_table ')'
join_spec : ON search_condition | USING '(' column_name ',' ... ')'
join_type: INNER | { LEFT | RIGHT | FULL } [ OUTER ]
The SQL framework for table expressions is based on the select-from-where construct.
select_query: |
The with clause provides the mechanism to introduce in-line view definitions:
with_clause : WITH with_element ',' ... select_query
with_element: table_name '(' column_name ',' ... ')' AS '(' select_query ')'
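For instance, an in-line view over the hypothetical inventory table:
WITH cheap (id, item) AS (
    SELECT id, item FROM inventory WHERE price < 5.00
)
SELECT item FROM cheap WHERE id > 1;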
Pattern matching operations are applied to predicates. It is possible to compare predicates, inspect the differences between them, test whether one predicate is a sub-predicate of another, etc. The following syntax description covers all the operations supported by MonetDB.
predicate: comparison_predicate: test_for_null: in_predicate: existence_test: atom_expr: |
string_funcs: |
Window functions
Window functions mk Sun, 10/13/2013 - 13:52SQL provides a method to aggregate over a series of related tuples. These are called window functions and always come with an OVER() clause, which determines how tuples are split up over the window functions. The PARTITION BY clause within OVER divides the rows into groups that share the same values of the PARTITION BY expression(s). For each row, the window function is computed over all rows participating in the group. The order within a partition can be used as well.
ROW_NUMBER() returns the position of the tuple within its partition of the result set, starting at 1.
RANK() returns the rank of the current row within its partition, starting at 1; rows in the same peer group receive the same rank, leaving gaps after each peer group.
DENSE_RANK() returns the rank of the current row without gaps; it counts peer groups.
window_function: { RANK '(' ')' | DENSE_RANK '(' ')' | PERCENT_RANK '(' ')' | CUME_DIST '(' ')' } OVER window_name
window_bound_start: UNBOUNDED PRECEDING | value PRECEDING | CURRENT ROW
window_bound: window_bound_start | UNBOUNDED FOLLOWING | value FOLLOWING
The snippet below (taken from the test suite) illustrates the functionality provided.
create table ranktest ( id int, k string);
select ROW_NUMBER() over () as foo from ranktest;
select ROW_NUMBER() over (PARTITION BY id) as foo, id from ranktest;
select ROW_NUMBER() over (PARTITION BY id ORDER BY id) as foo, id from ranktest;
select ROW_NUMBER() over (ORDER BY id) as foo, id from ranktest;
select RANK() over () as foo from ranktest;
select RANK() over (PARTITION BY id) as foo, id from ranktest;
select RANK() over (PARTITION BY id ORDER BY id) as foo, id from ranktest;
select RANK() over (ORDER BY id) as foo, id from ranktest;
select RANK() over () as foo, id, k from ranktest;
select RANK() over (PARTITION BY id) as foo, id, k from ranktest;
select RANK() over (PARTITION BY id ORDER BY id, k) as foo, id, k from ranktest;
select RANK() over (ORDER BY id, k) as foo, id, k from ranktest;
select DENSE_RANK() over () as foo, id, k from ranktest order by k;
select DENSE_RANK() over (PARTITION BY id) as foo, id, k from ranktest order by k;
select DENSE_RANK() over (PARTITION BY id ORDER BY id, k) as foo, id, k from ranktest order by k;
select DENSE_RANK() over (ORDER BY id, k) as foo, id, k from ranktest order by k;
drop table ranktest;
Statistics
Statistics mk Tue, 11/19/2013 - 08:52The SQL implementation provides the well-known standard statistical aggregates COUNT, SUM, AVG, MIN and MAX over scalar types and groupings. In addition, a few important statistical aggregate functions are available: MEDIAN, QUANTILE, and the correlation function CORR. The percentile argument of QUANTILE is a float value between 0.0 and 1.0. MEDIAN(<expr>) is equivalent to QUANTILE(<expr>, 0.5).
STDDEV_SAMP computes the cumulative sample standard deviation and returns the square root of the sample variance. STDDEV_POP computes the population standard deviation and returns the square root of the population variance. Both functions take as an argument any numeric data type.
Likewise, the VAR_SAMP and VAR_POP functions return the sample variance (denominator n-1) and the biased population variance (denominator n) of a set of numbers, respectively.
stat_funcs:
      QUANTILE '(' scalar_expr ',' scalar_expr ')'
    | STDDEV_SAMP '(' scalar_expr ')'
    | STDDEV_POP '(' scalar_expr ')'
    | VAR_SAMP '(' scalar_expr ')'
    | VAR_POP '(' scalar_expr ')'
    | MEDIAN '(' scalar_expr ')'
    | CORR '(' scalar_expr ',' scalar_expr ')'
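For example, over a numeric column (the inventory table is hypothetical):
SELECT MEDIAN(price),
       QUANTILE(price, 0.9),
       STDDEV_SAMP(price),
       CORR(price, id)
FROM inventory;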
Bulk input/output
Bulk input/output mk Sun, 03/28/2010 - 00:48The COPY INTO command enables fast insertion of multiple tuples from a text file or standard input. Each tuple in the input file is terminated by a record separator (default '\n') and fields are separated by the field separator (default '|'). The field values should adhere to the syntax for value literals. Alternative separators should adhere to the SQL lexical syntax for string values. A different NULL value representation can be passed using the NULL AS null_string option. Furthermore, the fields are optionally enclosed with a user-defined quote character. The text file should use UTF-8 encoding if specified as file_name, and the same encoding as mclient is using if read using FROM STDIN.
It is strongly advised to announce the maximum number of records to be inserted. This avoids guessing by the server and subsequent re-allocation of table space, which may involve potentially expensive copying. A portion of the input file can be skipped using the offset feature. The offset value specifies the record at which loading should commence, the first record having offset 1.
As of the Aug2018-SP2 release, if using FROM STDIN and the number of records is not specified, reading the input stops at an empty line. This means that if a one-column table is being read, there may be confusion as to whether an empty line is an empty (string) value or a NULL (if NULL AS '' is specified). The end of input takes precedence here.
The files read must be accessible to the server. Therefore, they must reside on, or be accessible to, the database server machine, and they must be identified with an absolute path name. The STDIN file designator reads data streaming from the client application. An empty record determines the end of sequence.
COPY [ int_val [ OFFSET int_val ] RECORDS ] INTO table_name |
COPY [ int_val RECORDS ] OFFSET int_val INTO table_name |
The input syntax should comply with the following grammar: [ [ [quote] [[escape]char]* [quote] ] field_separator ]* record_separator. Quote characters in quoted fields may be escaped with a backslash. Field and record separators can be embedded in quoted fields.
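A sketch of a load that announces the record count, skips a header line, and maps empty fields to NULL (the file path and table are hypothetical):
COPY 1000 RECORDS OFFSET 2 INTO inventory
FROM '/tmp/inventory.csv'
USING DELIMITERS ',', '\n', '"'
NULL AS '';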
LOCKED mode
In many bulk-loading situations, the original file can be saved as a backup or recreated for disaster handling. This relieves the database system from having to prepare for recovery as well, and saves significant storage space. The LOCKED qualifier can be used in this situation (and in single-user mode!) to skip the logging operation normally performed.
WARNING. It is advised to add integrity constraints to the table after the file has been loaded. The ALTER statements perform bulk integrity checking, and often perform these checks more efficiently.
For more see the CSV bulk load recipe.
Copy into File
The COPY INTO command with a file name argument allows for fast dumping of a result set into an ASCII file. The file must be accessible by the server and a full path name may be required. The file STDOUT can be used to direct the result to the primary output channel.
The delimiters and NULL AS arguments provide control over the layout required.
COPY subquery INTO file_name [ [USING] DELIMITERS field_separator [',' record_separator [ ',' string_quote ]]] [ NULL AS null_string ] |
For both the input and output version of the COPY INTO command one can specify a file name ending with 'gz' or 'bz2' to use the appropriate compression library (if available).
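For example, dumping a result set as compressed CSV could look as follows (the path is illustrative):
COPY SELECT * FROM inventory
INTO '/tmp/inventory.csv.gz'
USING DELIMITERS ',', '\n', '"'
NULL AS '';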
For more see the Exporting bulk data recipe.
Copying binary files
Migration of tables between MonetDB/SQL instances can be sped up using the binary COPY INTO/FROM format.
For more see the Binary bulk data load recipe.
Aggregate into a single string
Aggregate into a single string pedro Fri, 10/05/2018 - 11:41In the Aug2018 release of MonetDB, we added a new aggregation function "group_concat", which concatenates an input column into a single string as output. We provide two versions of this aggregate: group_concat(string) and group_concat(string, string). In both versions, the first parameter is the input string column to be concatenated. In the former, the default delimiter is the ',' character; in the latter, the second parameter gives the delimiter to be used. If a group contains a NULL value, or the delimiter is NULL, the output will be NULL.
sql>create table demo (a int, b clob);
operation successful
sql>insert into demo values (1, 'chair'), (1, 'desk'), (2, 'room'), (1, 'decoration'), (2, 'window'), (2, 'sofa');
6 affected rows
sql>select '[' || group_concat(a) || ']' from demo;
+---------------+
| L3            |
+===============+
| [1,1,2,1,2,2] |
+---------------+
1 tuple
sql>select a, group_concat(b) from demo group by a;
+------+-----------------------+
| a    | L4                    |
+======+=======================+
|    1 | chair,desk,decoration |
|    2 | room,window,sofa      |
+------+-----------------------+
2 tuples
sql>select a, group_concat(b, '|') from demo group by a;
+------+-----------------------+
| a    | L5                    |
+======+=======================+
|    1 | chair|desk|decoration |
|    2 | room|window|sofa      |
+------+-----------------------+
2 tuples
sql>insert into demo values (3, 'car'), (3, NULL);
2 affected rows
sql>select '[' || group_concat(b, '-') || ']' from demo group by a;
+-------------------------+
| L6                      |
+=========================+
| [chair-desk-decoration] |
| [room-window-sofa]      |
| null                    |
+-------------------------+
3 tuples
Programming SQL
Programming SQL mk Thu, 06/30/2011 - 16:39SQL comes with imperative programming concepts, such as variable declarations, compound statements, while-loops, and conditional statements. They are grouped into procedures, functions, and triggers, which strongly differ in their execution paradigm and semantics.
Procedure definition
Procedure definition mk Sat, 05/28/2011 - 21:42
func_def: routine_body: procedure_statement: |
Note: CREATE ... EXTERNAL NAME ... is a MonetDB language extension.
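As a sketch, a simple SQL procedure and its invocation with CALL (the procedure name and the inventory table are hypothetical):
CREATE PROCEDURE drop_cheap(limit_price DECIMAL(8,2))
BEGIN
    DELETE FROM inventory WHERE price < limit_price;
END;

CALL drop_cheap(1.00);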
Function definitions
Function definitions mk Sat, 03/27/2010 - 22:47The SQL standard allows the creation of SQL functions. MonetDB supports this feature. The syntax to create a function is:
func_def: func_data_type: language_keyword: |
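For instance, a minimal scalar function written purely in SQL (the name is made up for illustration):
CREATE FUNCTION price_with_vat(p DECIMAL(8,2))
RETURNS DECIMAL(8,2)
BEGIN
    RETURN CAST(p * 1.21 AS DECIMAL(8,2));
END;

SELECT price_with_vat(10.00);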
External functions provide a convenient bridge between SQL and MAL. In this way a bridge can also be established with dynamically loaded functions written in C. Any SQL function signature can be bound to a MAL function or command.
The example below illustrates a binding to a function that produces a tabular overview of the BAT catalog.
CREATE FUNCTION bbp ()
RETURNS TABLE (id int, name string, htype string,
ttype string, count BIGINT, refcnt int, lrefcnt int,
location string, heat int, dirty string, status string,
kind string)
EXTERNAL NAME sql.bbp;
A table-returning function should be implemented as a MAL function that returns a series of synchronized BATs.
FUNCTION bbp():bat[:str,:bat];
b := bat.new(:str,:bat,12);
ns := bbp.getNames();
ri := algebra.markT(ns, 0:oid);
...
kind := algebra.markH(ikind, 0:oid);
bat.insert(b, "kind", kind);
RETURN b;
END bbp;
Two useful functions for compatibility with Microsoft SQL Server are MS_STUFF and MS_ROUND.
SQL> SELECT MS_ROUND(10.0, 1, 0);
The SQL definition for MS_ROUND is:
CREATE OR REPLACE FUNCTION MS_ROUND(num float, precision int, truncat int)
RETURNS float
BEGIN
IF (truncat = 0)
THEN RETURN ROUND(num, precision);
ELSE RETURN TRUNCATE(num, precision);
END IF;
END;
The SQL definition for MS_STUFF is:
CREATE FUNCTION MS_STUFF( s1 varchar(32), st int, len int, s3 varchar(32))
RETURNS varchar(32)
BEGIN
DECLARE res varchar(32), aux varchar(32);
DECLARE ofset int;
SET ofset = 0;
SET res = SUBSTRING(s1,ofset,st-1);
SET res = res || s3;
SET ofset = LENGTH(s1)-len;
SET aux = SUBSTRING(s1,ofset, len);
SET res = res || aux;
RETURN res;
END;
Note: CREATE FILTER FUNCTION is a MonetDB language extension. The SQL standard does not provide for user-defined filter functions such as my_like ... .
Note: CREATE AGGREGATE is a MonetDB language extension. The SQL standard does not provide for user-defined aggregate functions.
Note: CREATE ... EXTERNAL ... and CREATE ... LANGUAGE ... are MonetDB language extensions.
Trigger definitions
Trigger definitions mk Sun, 03/28/2010 - 23:32Triggers are a convenient programming abstraction. They are activated at transaction commit based on updates to the base tables.
trigger_def: CREATE [ OR REPLACE ] TRIGGER qname trigger_action_time trigger_event ON ident opt_referencing_list triggered_action
trigger_action_time: BEFORE | AFTER
trigger_event: INSERT | DELETE | TRUNCATE | UPDATE | UPDATE OF ident ','...
opt_referencing_list: [ REFERENCING old_or_new_values_alias ... ]
old_or_new_values_alias:
triggered_action:
opt_for_each: /* default is for each statement */ | FOR EACH ROW | FOR EACH STATEMENT
triggered_statement:
trigger_procedure_statement:
Example The following example provides a glimpse of their functionality:
create table t1 (id int, name varchar(1024));
--test FOR EACH STATEMENT (default one)
insert into t1 values(10, 'monetdb');
insert into t1 values(20, 'monet');
create trigger test5
after update on t1
for each statement
when id >0 insert into t1 values(4, 'update_when_statement_true');
All trigger definitions are considered together at transaction commit. There is no a priori defined order in which they run. Each may in turn activate new triggers, but each trigger definition is executed only once per transaction commit.
Variables
Variables mk Sat, 03/27/2010 - 22:55MonetDB/SQL supports session variables declared by the user. They are indistinguishable from table and column names and can be used anywhere a literal constant is allowed.
declare_statement: DECLARE | DECLARE table_def variable_list: set_statement: user: [ USER | SESSION_USER | CURRENT_USER ] |
Examples
sql>DECLARE high integer;
sql>DECLARE n varchar(256);
sql>SET high=4000;
sql>SET n='monetdb';
sql>SET trace = 'ticks,stmt';
sql>SELECT count(*) from tables where id > high;
+--------+
| count_ |
+========+
| 2 |
+--------+
The SQL variables (and environment variables) can be accessed through the predefined table-returning functions var() and env(). The debug variable settings are defined in the MonetDB config file. The current_* variables are SQL environment settings. The trace variable is defined by the TRACE command. History denotes a boolean to log all queries.
sql>select * from var();
+------------------+
| name |
+==================+
| debug |
| current_schema |
| current_user |
| current_role |
| optimizer |
| trace |
| current_timezone |
| cache |
| history |
+------------------+
Flow of control
Flow of control mk Fri, 04/02/2010 - 12:48The persistent stored module facility of SQL provides a method to encode complex algorithms using well-known programming features.
control_statement:
case_statement:
when_statement: WHEN scalar_exp THEN procedure_statement_list
when_search_statement: WHEN search_condition THEN procedure_statement_list
if_statement: IF search_condition THEN procedure_statement_list if_opt_else END IF
if_opt_else:
while_statement:
return_statement: RETURN return_value
return_value:
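The following sketch combines a variable declaration, a WHILE loop and an IF statement in one function (the name is made up for illustration):
CREATE FUNCTION sum_even(n INT)
RETURNS INT
BEGIN
    DECLARE total INT;
    SET total = 0;
    WHILE n > 0 DO
        IF n % 2 = 0 THEN
            SET total = total + n;
        END IF;
        SET n = n - 1;
    END WHILE;
    RETURN total;
END;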
Transactions
Transactions mk Sat, 03/27/2010 - 22:49MonetDB/SQL supports a multi-statement transaction scheme marked by START TRANSACTION and closed with either COMMIT or ROLLBACK. The session variable AUTOCOMMIT can be set to true (default) if each SQL statement should be considered an independent transaction.
In the AUTOCOMMIT mode, you can use START TRANSACTION and COMMIT/ROLLBACK to indicate transactions containing multiple SQL statements. In this case, AUTOCOMMIT is automatically disabled by a START TRANSACTION, and reenabled by a COMMIT or ROLLBACK.
If AUTOCOMMIT mode is OFF, the START TRANSACTION is implicit, and you should only use COMMIT/ROLLBACK.
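A small sketch with a savepoint (the inventory table is hypothetical):
START TRANSACTION;
INSERT INTO inventory VALUES (42, 'widget', 0.99);
SAVEPOINT sp1;
DELETE FROM inventory;
-- undo the delete, keep the insert
ROLLBACK TO SAVEPOINT sp1;
COMMIT;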
WARNING. The transaction management scheme is based on optimistic concurrency control. It provides each transaction with a consistent view on the database, but updates are collected in an addendum processed at transaction commit. If at commit time it can be assured that the tables affected by the update have not been changed in the meantime, the results are merged; otherwise the transaction is aborted.
This optimistic concurrency scheme is particularly useful for query-dominant environments. It negatively affects long-running transactions whose underlying tables are concurrently updated. The same holds for applications that try to perform concurrent updates from multiple threads in a single application; these should be serialized internally by the application to avoid unexpected transaction aborts.
WARNING. Optimistic concurrency control may be confusing for those who have built online-transaction applications, because the granularity of the concurrency control scheme will show higher than expected transaction failure rates. There is no locking scheme to avoid this. Applications may have to resort to serial execution.
WARNING. Tuples being deleted are only marked as such; they do not reduce the table size. After many updates you may even experience slower queries, because each query first has to establish a consistent private view on the database by inspecting the delete/update lists. This calls for a vacuum-cleaning algorithm in the background, which is not yet available.
Runtime features
Runtime features mk Sun, 03/28/2010 - 23:51The SQL statement modifiers provide system-specific information about, e.g., the relational query plan, the MAL execution plan, performance, and access to the debugger.
EXPLAIN SQL STMT
EXPLAIN SQL STMT mk Sun, 03/28/2010 - 23:24The intermediate code produced by the SQL compiler can be made visible using the explain statement modifier. It gives a detailed description of the actions taken to produce the answer. The example below illustrates what you can expect when a simple query is prefixed with the explain modifier. The output strongly depends on the optimizer pipeline. The details of this program are better understood after reading the MAL reference.
sql>select count(*) from tables;
+------+
| L41  |
+======+
|   93 |
+------+
1 tuple
clk: 1.993ms
sql>explain select count(*) from tables;
+------------------------------------------------------------------------------------------------------------------------+
| mal |
+==========================================================================================================================+
| function user.s6_1():void; |
| X_1:void := querylog.define("select count(*) from tables;":str, "default_pipe":str, 30:int); |
| barrier X_104:bit := language.dataflow(); |
| X_38:bat[:lng] := bat.new(nil:lng); |
| X_4:int := sql.mvc(); |
| X_18:bat[:sht] := sql.bind(X_4:int, "sys":str, "_tables":str, "type":str, 0:int); |
| C_5:bat[:oid] := sql.tid(X_4:int, "sys":str, "_tables":str); |
| C_89:bat[:oid] := algebra.thetaselect(X_18:bat[:sht], C_5:bat[:oid], 2:sht, "!=":str); |
| (X_21:bat[:oid], X_22:bat[:sht]) := sql.bind(X_4:int, "sys":str, "_tables":str, "type":str, 2:int); |
| C_90:bat[:oid] := algebra.thetaselect(X_22:bat[:sht], nil:bat[:oid], 2:sht, "!=":str); |
| X_20:bat[:sht] := sql.bind(X_4:int, "sys":str, "_tables":str, "type":str, 1:int); |
| C_92:bat[:oid] := algebra.thetaselect(X_20:bat[:sht], C_5:bat[:oid], 2:sht, "!=":str); |
| C_27:bat[:oid] := sql.subdelta(C_89:bat[:oid], C_5:bat[:oid], X_21:bat[:oid], C_90:bat[:oid], C_92:bat[:oid]); |
| X_8:bat[:int] := sql.bind(X_4:int, "sys":str, "_tables":str, "id":str, 0:int); |
| (X_13:bat[:oid], X_14:bat[:int]) := sql.bind(X_4:int, "sys":str, "_tables":str, "id":str, 2:int); |
| X_11:bat[:int] := sql.bind(X_4:int, "sys":str, "_tables":str, "id":str, 1:int); |
| X_29:bat[:int] := sql.projectdelta(C_27:bat[:oid], X_8:bat[:int], X_13:bat[:oid], X_14:bat[:int], X_11:bat[:int]); |
| X_31:lng := aggr.count(X_29:bat[:int]); |
| X_37:bat[:lng] := sql.single(X_31:lng); |
| C_32:bat[:oid] := sql.tid(X_4:int, "tmp":str, "_tables":str); |
| X_34:bat[:int] := sql.bind(X_4:int, "tmp":str, "_tables":str, "id":str, 0:int); |
| X_35:bat[:int] := algebra.projection(C_32:bat[:oid], X_34:bat[:int]); |
| X_36:lng := aggr.count(X_35:bat[:int]); |
| X_40:bat[:lng] := bat.append(X_38:bat[:lng], X_37:bat[:lng], true:bit); |
| X_42:bat[:lng] := bat.append(X_40:bat[:lng], X_36:lng, true:bit); |
| X_43:lng := aggr.sum(X_42:bat[:lng]); |
| language.pass(C_5:bat[:oid]); |
| exit X_104:bit; |
| sql.resultSet(".L41":str, "L41":str, "bigint":str, 64:int, 0:int, 7:int, X_43:lng); |
| end user.s6_1; |
...
+------------------------------------------------------------------------------------------------------------------------+
55 tuples
clk: 3.942 ms
The SQL compiler maintains a cache of compiled (or prepared) queries. Each query is looked up in this cache based on an expression pattern match where the constants may take on different values. If it doesn't exist, the query is converted into a code block and stored in the module user.s0.
The call to the cached function is included in a wrapper function main, which is the only piece of code produced if the query is used more than once. The query cache disappears when the server is brought to a halt.
+----------------------------+
| function user.main():void; |
| mdb.start();               |
| user.s3_2();               |
| mdb.stop();                |
| end main;                  |
+----------------------------+
TRACE SQL STMT
TRACE SQL STMT mk Sun, 03/28/2010 - 23:27A performance trace can be obtained using the TRACE statement modifier. It collects all trace information in a table reproduced by tracelog(), which can be queried using ordinary SQL statements. The internal trace table is reset for each query being traced. Its definition is given below:
create function sys.tracelog()
returns table (
event integer, -- event counter
clk varchar(20), -- wallclock, no mtime in kernel
pc varchar(50), -- module.function[nr]
thread int, -- thread identifier
ticks bigint, -- time in microseconds
rssMB bigint, -- resident memory in MB
vmMB bigint, -- virtual size in MB
reads bigint, -- number of blocks read
writes bigint, -- number of blocks written
minflt bigint, -- minor page faults
majflt bigint, -- major page faults
nvcsw bigint, -- non-voluntary context switch
stmt string -- actual statement executed
)
external name sql.dump_trace;
# | Field | Description | Example |
---|---|---|---|
1 | event | An event counter | 38 |
2 | clk | The wall-clock time in microseconds | "13:11:16.710180", |
3 | pc | Name of the query execution plan name; followed by the program counter in the execution plan (denoted in the square brackets ‘[‘ and ‘]’) indicating the position of the MAL instruction in its defining block; and finally a unique call identifier to distinguish recursion. | "user.s5_1[14]12", |
4 | thread | Id of the worker thread processing this MAL instruction. | 3 |
5 | usec | The actual execution time (at "done") for the MAL instruction, measured in microseconds | 207, |
6 | rssMB | Memory Resident Set Size (RSS), i.e., the portion of memory occupied by a process that is held in main memory, in MB. | 54 |
7 | vmMB | Estimated cumulative query plan footprint, in MB. For a query, the maximal value of “tmpspace” gives a fairly good estimation of the memory consumption of this query. | 0, |
8 | reads | The number of disk blocks read ** | 0, |
9 | writes | The number of disk blocks written ** | 0, |
10 | majflt | The number of major page faults | 0, |
11 | switch | The number of context switches | 0, |
12 | stmt | The MAL statement being executed, with argument statistics | "sql.exportResult(X_21=\"104d2\":streams,X_16=4:int);” |
* In addition, the “state” field denotes two special events. The “wait” event is received when the worker threads cannot find an eligible MAL instruction and have to wait for other worker threads to deliver results. The “ping” event provides a synopsis of the CPU processing loads.
** Please be aware that these values reflect system wide activities, thus they include not only MonetDB activities. Additionally, these I/O counts do not reflect the actual amount of data read/written from/to the hardware drives, as this information is generally not available to all users of a Linux system.
DEBUG SQL STMT
DEBUG SQL STMT mk Sun, 03/28/2010 - 23:26The SQL statements are translated into MAL programs, which are optimized and stored away in a user module. The generated code can be inspected with the MAL debugger.
The example below illustrates the start of such a session:
>debug select count(*) from tables;
# mdb.start()
mdb>next
# user.s1_0()
mdb>next
# _2:bat[:oid,:int] := sql.bind(_3="sys", _4="ptables", _5="id", _6=0)
mdb>next
# _8:bat[:oid,:int] := sql.bind(_3="sys", _4="ptables", _5="id", _9=1)
mdb> ...
mdb>help
next -- Advance to next statement
continue -- Continue program being debugged
catch -- Catch the next exception
break [<var>] -- set breakpoint on current instruction or <var>
delete [<var>] -- remove break/trace point <var>
debug <int> -- set kernel debugging mask
dot <obj> [<file>] -- generate the dependency graph for MAL block
step -- advance to next MAL instruction
module -- display the module signatures
atom -- show atom list
finish -- finish current call
exit -- terminate execution
quit -- turn off debugging
list <obj> -- list a MAL program block
list # [+#],-# -- list current program block slice
List <obj> [#] -- list MAL block with type information[slice]
list '['<step>']' -- list program block after optimizer step
List #[+#],-# -- list current program block slice
var <obj> -- print symbol table for module
optimizer <obj> -- display optimizer steps
print <var> -- display value of a variable
print <var> <cnt>[<first>] -- display BAT chunk
info <var> -- display bat variable properties
run -- restart current procedure
where -- print stack trace
down -- go down the stack
up -- go up the stack
trace <var> -- trace assignment to variables
help -- this message
PLAN SQL STMT
PLAN SQL STMT mk Fri, 12/26/2014 - 13:59The relational plan used in the SQL optimizer can be obtained by prepending the query with the keyword PLAN.
sql>plan select count(*) from tables;
+----------------------------------------------------------------------------+
| rel |
+============================================================================+
| project ( |
| | group by ( |
| | | union ( |
| | | | group by ( |
| | | | | project ( |
| | | | | | select ( |
| | | | | | | table(sys._tables) [ "_tables"."id", "_tables"."type" ] COUNT |
| | | | | | ) [ "_tables"."type" != smallint "2" ] |
| | | | | ) [ "_tables"."id" as "tables"."id" ] |
| | | | ) [ ] [ sys.count() NOT NULL as "L41"."L41" ], |
| | | | group by ( |
| | | | | project ( |
| | | | | | table(tmp._tables) [ "_tables"."id" ] COUNT |
| | | | | ) [ "_tables"."id" as "tables"."id" ] |
| | | | ) [ ] [ sys.count() NOT NULL as "L41"."L41" ] |
| | | ) [ "L41"."L41" ] |
| | ) [ ] [ sys.sum no nil ("L41"."L41") as "L41"."L41" ] |
| ) [ "L41"."L41" NOT NULL ] |
+----------------------------------------------------------------------------+
18 tuples
clk: 1.158 ms
PREPARE SQL STMT
PREPARE SQL STMT mk Sun, 10/13/2013 - 16:33The PREPARE statement compiles an SQL statement into its execution plan. The plan is given a name and stored in the query cache. A subsequent EXECUTE command retrieves it from the cache and executes it.
PREPARE statements can be given positional arguments to replace any literal constant in the query. Each argument is denoted by a '?'.
sql>prepare select * from tables where name = ?;
execute prepared statement using: EXEC 15(...)
clk: 4.123 ms
+----------+---------+-------+--------+--------+---------------+
| type     | digits  | scale | schema | table  | column        |
+==========+=========+=======+========+========+===============+
| int      |      32 |     0 |        | tables | id            |
| varchar  |    1024 |     0 |        | tables | name          |
| int      |      32 |     0 |        | tables | schema_id     |
| varchar  |    2048 |     0 |        | tables | query         |
| smallint |      16 |     0 |        | tables | type          |
| boolean  |       1 |     0 |        | tables | system        |
| smallint |      16 |     0 |        | tables | commit_action |
| smallint |      16 |     0 |        | tables | access        |
| tinyint  |       8 |     0 |        | tables | temporary     |
| varchar  |    1024 |     0 | null   | null   | null          |
+----------+---------+-------+--------+--------+---------------+
10 tuples
clk: 4.123 ms
sql>exec 15('_tables');
+------+---------+-----------+-------+------+--------+---------------+--------+-----------+
| id   | name    | schema_id | query | type | system | commit_action | access | temporary |
+======+=========+===========+=======+======+========+===============+========+===========+
| 2067 | _tables |      2000 | null  |   10 | true   |             0 |      0 |         0 |
| 2115 | _tables |      2114 | null  |   10 | true   |             2 |      0 |         0 |
+------+---------+-----------+-------+------+--------+---------------+--------+-----------+
2 tuples
clk: 2.692 ms
sql>exec 15('table_types');
+------+-------------+-----------+-------+------+--------+---------------+--------+-----------+
| id   | name        | schema_id | query | type | system | commit_action | access | temporary |
+======+=============+===========+=======+======+========+===============+========+===========+
| 7322 | table_types |      2000 | null  |   10 | true   |             0 |      1 |         0 |
+------+-------------+-----------+-------+------+--------+---------------+--------+-----------+
1 tuple
clk: 2.364 ms
Language bindings
Language bindings mk Sun, 03/28/2010 - 23:53MonetDB comes with JDBC, ODBC, PHP, Perl, Ruby and Python interface libraries. The JDBC, PHP, Perl, Ruby, Python and Node.js interfaces are native implementations and do not require installation of the MonetDB client/server code. The ODBC driver and setup library are available as a separate installer. The Mapi library is the lowest level C-interface to interacting with the server.
We rely on external documentation for the basics of the language bindings provided: PHP, Perl, Python, and the JDBC and ODBC drivers. The examples provided are used only to illustrate their behavior in the context of MonetDB.
JDBC | https://docs.oracle.com/javase/8/docs/technotes/guides/jdbc/ |
ODBC | https://docs.microsoft.com/en-us/sql/odbc/reference/odbc-programmer-s-reference |
Python | https://python.org |
R | http://www.r-project.org |
JavaScript Node.js | https://nodejs.org |
Ruby/Rails | http://rubyonrails.org |
Perl DBI | https://www.perl.org |
PHP | https://php.net |
Notes: The programming interface is based on a client-server architecture, where the client program connects to a server using a TCP/IP connection to exchange commands and receive answers. The underlying protocol uses plain UTF-8 data for ease of use and debugging. This leads to publicly visible information exchange over a network, which may be undesirable. Therefore, a private and secure channel can be set up with the Secure Socket Layer functionality.
In addition, a deeper integration of MonetDB with R is available on the website of the MonetDB.R connector. If you are looking for information about the embedded R support in MonetDB, check this blog post.
More information about using MonetDB with Node.js can be found here.
Mapi Library
Mapi Library mk Sun, 03/28/2010 - 23:55The Mapi Library
The easiest way to extend the functionality of MonetDB is to construct an independent application, which communicates with a running server using a database driver with a simple API and a textual protocol. The effectiveness of such an approach has been demonstrated by the wide use of database API implementations, such as Perl DBI, PHP, ODBC,...
Sample MAPI Application
The database driver implementation given in this document focuses on developing applications in C. The command collection has been chosen to align with common practice, i.e. queries follow a prepare, execute, and fetch_row paradigm. The output is considered a regular table. An example of a mini application below illustrates the main operations.
#include <mapi.h>
#include <stdio.h>
#include <stdlib.h>

/* Report the error context and terminate the application. */
void die(Mapi dbh, MapiHdl hdl)
{
    if (hdl != NULL) {
        mapi_explain_query(hdl, stderr);
        do {
            if (mapi_result_error(hdl) != NULL)
                mapi_explain_result(hdl, stderr);
        } while (mapi_next_result(hdl) == 1);
        mapi_close_handle(hdl);
        mapi_destroy(dbh);
    } else if (dbh != NULL) {
        mapi_explain(dbh, stderr);
        mapi_destroy(dbh);
    } else {
        fprintf(stderr, "command failed\n");
    }
    exit(-1);
}

/* Ship a query and verify that it was accepted. */
MapiHdl query(Mapi dbh, char *q)
{
    MapiHdl ret = NULL;

    if ((ret = mapi_query(dbh, q)) == NULL || mapi_error(dbh) != MOK)
        die(dbh, ret);
    return(ret);
}

/* Execute an update statement and release its handle. */
void update(Mapi dbh, char *q)
{
    MapiHdl ret = query(dbh, q);

    if (mapi_close_handle(ret) != MOK)
        die(dbh, ret);
}

int main(int argc, char *argv[])
{
    Mapi dbh;
    MapiHdl hdl = NULL;
    char *name;
    char *age;

    /* Connect to the "demo" database with the SQL language interface. */
    dbh = mapi_connect("localhost", 50000, "monetdb", "monetdb", "sql", "demo");
    if (mapi_error(dbh))
        die(dbh, hdl);

    /* Populate a small table. */
    update(dbh, "CREATE TABLE emp (name VARCHAR(20), age INT)");
    update(dbh, "INSERT INTO emp VALUES ('John', 23)");
    update(dbh, "INSERT INTO emp VALUES ('Mary', 22)");

    /* Fetch the result one row at a time. */
    hdl = query(dbh, "SELECT * FROM emp");
    while (mapi_fetch_row(hdl)) {
        name = mapi_fetch_field(hdl, 0);
        age = mapi_fetch_field(hdl, 1);
        printf("%s is %s\n", name, age);
    }
    mapi_close_handle(hdl);
    mapi_destroy(dbh);
    return(0);
}
The mapi_connect() operation establishes a communication channel with a running server. The query language interface is either "sql" or "mal".
Errors on the interaction can be captured using mapi_error(), possibly followed by a request to dump a short error message explanation on a standard file location. It has been abstracted away in a function.
Provided we can establish a connection, the interaction proceeds as in many similar application development packages. Queries are shipped for execution using mapi_query() and an answer table can be consumed one row at a time. In many cases these functions suffice.
The Mapi interface provides caching of rows at the client side. mapi_query() will load tuples into the cache, after which they can be read repeatedly using mapi_fetch_row() or directly accessed (mapi_seek_row()). This facility is particularly handy when small, but stable query results are repeatedly used in the client program.
To ease communication between application code and the cache entries, the user can bind the C-variables both for input and output to the query parameters, and output columns, respectively. The query parameters are indicated by '?' and may appear anywhere in the query template.
The Mapi library expects complete lines from the server as answers to query actions. Incomplete lines lead to Mapi waiting forever on the server. Thus formatted printing is discouraged in favor of tabular printing as offered by the table.print() commands.
Compilation of the application relies on the libtool and pkg-config programs, which should be shipped with your installation. The application above can be compiled and linked as follows:
% libtool --mode=compile --tag=CC gcc -c `env PKG_CONFIG_PATH=$INSTALL_DIR/lib/pkgconfig pkg-config --cflags monetdb-mapi` test.c
% libtool --mode=link --tag=CC gcc -o test `env PKG_CONFIG_PATH=$INSTALL_DIR/lib/pkgconfig pkg-config --libs monetdb-mapi` test.o
% ./test
The example assumes you have set the variable INSTALL_DIR to the prefix location given during configure of MonetDB. If you use a system installation, you can omit the 'env PKG_CONFIG_PATH=.....' part, or set INSTALL_DIR to '/usr'.
The compilation on Windows is slightly more complicated. It requires more attention towards the location of the include files and libraries.
Command Summary
The quick reference guide to the Mapi library is given below. More details on their constraints and defaults are given in the next section.
mapi_bind() | Bind string C-variable to a field |
mapi_bind_numeric() | Bind numeric C-variable to field |
mapi_bind_var() | Bind typed C-variable to a field |
mapi_cache_freeup() | Forcefully shuffle fraction for cache refreshment |
mapi_cache_limit() | Set the tuple cache limit |
mapi_cache_shuffle() | Set shuffle fraction for cache refreshment |
mapi_clear_bindings() | Clear all field bindings |
mapi_clear_params() | Clear all parameter bindings |
mapi_close_handle() | Close query handle and free resources |
mapi_connect() | Connect to a Mserver |
mapi_destroy() | Free handle resources |
mapi_disconnect() | Disconnect from server |
mapi_error() | Test for error occurrence |
mapi_execute() | Execute a query |
mapi_execute_array() | Execute a query using string arguments |
mapi_explain() | Display error message and context on stream |
mapi_explain_query() | Display error message and context on stream |
mapi_fetch_all_rows() | Fetch all answers from server into cache |
mapi_fetch_field() | Fetch a field from the current row |
mapi_fetch_field_len() | Fetch the length of a field from the current row |
mapi_fetch_field_array() | Fetch all fields from the current row |
mapi_fetch_line() | Retrieve the next line |
mapi_fetch_reset() | Set the cache reader to the beginning |
mapi_fetch_row() | Fetch row of values |
mapi_finish() | Terminate the current query |
mapi_get_dbname() | Database being served |
mapi_get_field_count() | Number of fields in current row |
mapi_get_host() | Host name of server |
mapi_get_query() | Query being executed |
mapi_get_language() | Query language name |
mapi_get_mapi_version() | Mapi version name |
mapi_get_monet_versionId() | MonetDB version identifier |
mapi_get_monet_version() | MonetDB version name |
mapi_get_motd() | Get server welcome message |
mapi_get_row_count() | Number of rows in cache or -1 |
mapi_get_last_id() | last inserted id of an auto_increment (or alike) column |
mapi_get_from() | Get the stream 'from' |
mapi_get_to() | Get the stream 'to' |
mapi_get_trace() | Get trace flag |
mapi_get_user() | Current user name |
mapi_log() | Keep log of client/server interaction |
mapi_next_result() | Go to next result set |
mapi_needmore() | Return whether more data is needed |
mapi_ping() | Test server for accessibility |
mapi_prepare() | Prepare a query for execution |
mapi_prepare_array() | Prepare a query for execution using arguments |
mapi_query() | Send a query for execution |
mapi_query_array() | Send a query for execution with arguments |
mapi_query_handle() | Send a query for execution |
mapi_quick_query_array() | Send a query for execution with arguments |
mapi_quick_query() | Send a query for execution |
mapi_quick_response() | Quick pass response to stream |
mapi_quote() | Escape characters |
mapi_reconnect() | Reconnect with a clean session context |
mapi_rows_affected() | Obtain number of rows changed |
mapi_seek_row() | Move row reader to specific location in cache |
mapi_setAutocommit() | Set auto-commit flag |
mapi_setAlgebra() | Use algebra backend |
mapi_stream_query() | Send query and prepare for reading tuple stream |
mapi_table() | Get current table name |
mapi_timeout() | Set timeout for long-running queries[TODO] |
mapi_output() | Set output format |
mapi_stream_into() | Stream document into server |
mapi_profile() | Set profile flag |
mapi_trace() | Set trace flag |
mapi_virtual_result() | Submit a virtual result set |
mapi_unquote() | Remove escaped characters
Library Synopsis
The routines to build a MonetDB application are grouped in the library MonetDB Programming Interface, or shorthand Mapi.
The protocol information is stored in a Mapi interface descriptor (mid). This descriptor can be used to ship queries, which return a MapiHdl to represent the query answer. The application can set up several channels with the same or a different mserver. It is the programmer's responsibility not to mix the descriptors in retrieving the results.
The application may be multi-threaded as long as the user respects the individual connections represented by the database handlers.
The interface assumes a cautious user, who understands and has experience with the query or programming language model. It should also be clear that references returned by the API point directly into the administrative structures of Mapi. This means that they are valid only for a short period, mostly between successive mapi_fetch_row() commands. It also means that if the values are to be retained, they have to be copied. A defensive programming style is advised.
Upon an error, the routines mapi_explain() and mapi_explain_query() give information about the context of the failed call, including the expression shipped and any response received. The side-effect is clearing the error status.
Error Message
Almost every call can fail since the connection with the database server can fail at any time. Functions that return a handle (either Mapi or MapiHdl) may return NULL on failure, or they may return the handle with the error flag set. If the function returns a non-NULL handle, always check for errors with mapi_error().
Functions that return MapiMsg indicate success and failure with the following codes.
MOK | No error |
MERROR | Mapi internal error. |
MTIMEOUT | Error communicating with the server. |
When these functions return MERROR or MTIMEOUT, an explanation of the error can be had by calling one of the functions mapi_error_str(), mapi_explain(), or mapi_explain_query().
To check for error messages from the server, call mapi_result_error(). This function returns NULL if there was no error, or the error message if there was. A user-friendly message can be printed using mapi_explain_result(). Typical usage is:
do {
    if ((error = mapi_result_error(hdl)) != NULL)
        mapi_explain_result(hdl, stderr);
    while ((line = mapi_fetch_line(hdl)) != NULL)
        /* use output */;
} while (mapi_next_result(hdl) == 1);
Mapi Function Reference
Connecting and Disconnecting
- Mapi mapi_connect(const char *host, int port, const char *username, const char *password, const char *lang, const char *dbname)
Set up a connection with an Mserver at a host:port and login with username and password. If host == NULL, the local host is accessed. If host starts with a '/' and the system supports it, host is actually the name of a UNIX domain socket, and port is ignored. If port == 0, a default port is used. If username == NULL, the username of the owner of the client application containing the Mapi code is used. If password == NULL, the password is omitted. The preferred query language is any of { sql, mil, mal, xquery }. On success, the function returns a pointer to a structure with administration about the connection.
- MapiMsg mapi_disconnect(Mapi mid)
Terminate the session described by mid. The only possible uses of the handle after this call are mapi_destroy() and mapi_reconnect(). Other uses lead to failure.
- MapiMsg mapi_destroy(Mapi mid)
Terminate the session described by mid if not already done so, and free all resources. The handle cannot be used anymore.
- MapiMsg mapi_reconnect(Mapi mid)
Close the current channel (if still open) and re-establish a fresh connection. This will remove all global session variables.
- MapiMsg mapi_ping(Mapi mid)
Test availability of the server. Returns zero upon success.
Sending Queries
- MapiHdl mapi_query(Mapi mid, const char *Command)
Send the Command to the database server represented by mid. This function returns a query handle with which the results of the query can be retrieved. The handle should be closed with mapi_close_handle(). The command response is buffered for consumption, c.f. mapi_fetch_row().
- MapiMsg mapi_query_handle(MapiHdl hdl, const char *Command)
Send the Command to the database server represented by hdl, reusing the handle from a previous query. If Command is zero it takes the last query string kept around. The command response is buffered for consumption, e.g. mapi_fetch_row().
- MapiHdl mapi_query_array(Mapi mid, const char *Command, char **argv)
Send the Command to the database server replacing the placeholders (?) by the string arguments presented.
- MapiHdl mapi_quick_query(Mapi mid, const char *Command, FILE *fd)
Similar to mapi_query(), except that the response of the server is copied immediately to the file indicated.
- MapiHdl mapi_quick_query_array(Mapi mid, const char *Command, char **argv, FILE *fd)
Similar to mapi_query_array(), except that the response of the server is not analyzed, but shipped immediately to the file indicated.
- MapiHdl mapi_stream_query(Mapi mid, const char *Command, int windowsize)
Send the request for processing and fetch a limited number of tuples (determined by the window size) to assess any erroneous situation. Thereafter, prepare for continual reading of tuples from the stream, until an error occurs. Each time a tuple arrives, the cache is shifted one.
- MapiHdl mapi_prepare(Mapi mid, const char *Command)
Move the query to a newly allocated query handle (which is returned). Possibly interact with the back-end to prepare the query for execution.
- MapiMsg mapi_execute(MapiHdl hdl)
Ship a previously prepared command to the backend for execution. A single answer is pre-fetched to detect any runtime error. MOK is returned upon success.
- MapiMsg mapi_execute_array(MapiHdl hdl, char **argv)
Similar to mapi_execute(), but replacing the placeholders with the string values provided.
- MapiMsg mapi_finish(MapiHdl hdl)
Terminate a query. This routine is used in the rare cases that consumption of the tuple stream produced should be prematurely terminated. It is automatically called when a new query using the same query handle is shipped to the database and when the query handle is closed with mapi_close_handle().
- MapiMsg mapi_virtual_result(MapiHdl hdl, int columns, const char **columnnames, const char **columntypes, const int *columnlengths, int tuplecount, const char ***tuples)
Submit a table of results to the library that can then subsequently be accessed as if it came from the server. columns is the number of columns of the result set and must be greater than zero. columnnames is a list of pointers to strings giving the names of the individual columns. Each pointer may be NULL and columnnames may be NULL if there are no names. tuplecount is the length (number of rows) of the result set. If tuplecount is less than zero, the number of rows is determined by a NULL pointer in the list of tuples pointers. tuples is a list of pointers to row values. Each row value is a list of pointers to strings giving the individual results. If one of these pointers is NULL it indicates a NULL/nil value.
Getting Results
- int mapi_get_field_count(MapiHdl mid)
Return the number of fields in the current row.
- mapi_int64 mapi_get_row_count(MapiHdl mid)
If possible, return the number of rows in the last select call. A -1 is returned if this information is not available.
- mapi_int64 mapi_get_last_id(MapiHdl mid)
If possible, return the last inserted id of an auto_increment (or alike) column. A -1 is returned if this information is not available. We restrict this to single row inserts and one auto_increment column per table. If the restrictions do not hold, the result is unspecified.
- mapi_int64 mapi_rows_affected(MapiHdl hdl)
Return the number of rows affected by a database update command such as SQL's INSERT/DELETE/UPDATE statements.
- int mapi_fetch_row(MapiHdl hdl)
Retrieve a row from the server. The text retrieved is kept around in a buffer linked with the query handle from which selective fields can be extracted. It returns the number of fields recognized. A zero is returned upon encountering end of sequence or error. This can be analyzed using mapi_error().
- mapi_int64 mapi_fetch_all_rows(MapiHdl hdl)
All rows are cached at the client side first. Subsequent calls to mapi_fetch_row() will take the row from the cache. The number of rows cached is returned.
- int mapi_quick_response(MapiHdl hdl, FILE *fd)
Read the answer to a query and pass the results verbatim to a stream. The result is not analyzed or cached.
- MapiMsg mapi_seek_row(MapiHdl hdl, mapi_int64 rownr, int whence)
Reset the row pointer to the requested row number. If whence is MAPI_SEEK_SET, rownr is the absolute row number (0 being the first row); if whence is MAPI_SEEK_CUR, rownr is relative to the current row; if whence is MAPI_SEEK_END, rownr is relative to the last row.
- MapiMsg mapi_fetch_reset(MapiHdl hdl)
Reset the row pointer to the first line in the cache. This need not be a tuple. This is mostly used in combination with fetching all tuples at once.
- char **mapi_fetch_field_array(MapiHdl hdl)
Return an array of string pointers to the individual fields. A zero is returned upon encountering end of sequence or error. This can be analyzed using mapi_error().
- char *mapi_fetch_field(MapiHdl hdl, int fnr)
Return a pointer to a C-string representation of the value returned. A zero is returned upon encountering an error or when the database value is NULL; this can be analyzed using mapi_error().
- size_t mapi_fetch_field_len(MapiHdl hdl, int fnr)
Return the length of the C-string representation, excluding the trailing NULL byte, of the value. Zero is returned upon encountering an error, when the database value is NULL, or when the string is the empty string. This can be analyzed using mapi_error() and mapi_fetch_field().
- MapiMsg mapi_next_result(MapiHdl hdl)
Go to the next result set, discarding the rest of the output of the current result set.
Errors handling
- MapiMsg mapi_error(Mapi mid)
Return the last error code or 0 if there is no error.
- char *mapi_error_str(Mapi mid)
Return a pointer to the last error message.
- char *mapi_result_error(MapiHdl hdl)
Return a pointer to the last error message from the server.
- MapiMsg mapi_explain(Mapi mid, FILE *fd)
Write the error message obtained from mserver to a file.
- MapiMsg mapi_explain_query(MapiHdl hdl, FILE *fd)
Write the error message obtained from mserver to a file.
- MapiMsg mapi_explain_result(MapiHdl hdl, FILE *fd)
Write the error message obtained from mserver to a file. A sketch of the typical error-checking pattern follows this list.
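A minimal sketch of the intended error-checking pattern, separating client-side from server-side errors. The connection mid, the deliberately failing query text, and the use of MOK as the library's "no error" code are assumptions layered on the calls listed above.

#include <stdio.h>
#include <mapi.h>

/* Sketch: check both the connection-level and the result-level error
 * state after a query; names are illustrative assumptions. */
void check_errors(Mapi mid)
{
    MapiHdl hdl = mapi_query(mid, "SELECT x FROM no_such_table;");

    if (mapi_error(mid) != MOK) {
        fprintf(stderr, "client error: %s\n", mapi_error_str(mid));
        mapi_explain(mid, stderr);            /* full client-side report */
    } else if (hdl != NULL && mapi_result_error(hdl) != NULL) {
        fprintf(stderr, "server error: %s\n", mapi_result_error(hdl));
        mapi_explain_result(hdl, stderr);     /* full server-side report */
    }
    if (hdl != NULL)
        mapi_close_handle(hdl);
}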
Parameters
- MapiMsg mapi_bind(MapiHdl hdl, int fldnr, char **val)
Bind a string variable to a field in the return table. Upon each successful subsequent mapi_fetch_row(), the indicated field is stored in the space pointed to by val. Returns an error if the identified field does not exist.
- MapiMsg mapi_bind_var(MapiHdl hdl, int fldnr, int type, void *val)
Bind a variable to a field in the return table. Upon each successful subsequent mapi_fetch_row(), the indicated field is converted to the given type and stored in the space pointed to by val. The types recognized are { MAPI_TINY, MAPI_UTINY, MAPI_SHORT, MAPI_USHORT, MAPI_INT, MAPI_UINT, MAPI_LONG, MAPI_ULONG, MAPI_LONGLONG, MAPI_ULONGLONG, MAPI_CHAR, MAPI_VARCHAR, MAPI_FLOAT, MAPI_DOUBLE, MAPI_DATE, MAPI_TIME, MAPI_DATETIME }. The binding operations should be performed after the mapi_execute command. Subsequently, all rows being fetched also involve delivery of the field values in the C variables using proper conversion. For variable-length strings a pointer is set into the cache. A sketch of output binding follows this list.
- MapiMsg mapi_bind_numeric(MapiHdl hdl, int fldnr, int scale, int precision, void *val)
Bind to a numeric variable, internally represented by MAPI_INT. Describe the location of a numeric parameter in a query template.
- MapiMsg mapi_clear_bindings(MapiHdl hdl)
Clear all field bindings.
- MapiMsg mapi_param(MapiHdl hdl, int fldnr, char **val)
Bind a string variable to the n-th placeholder in the query template. No conversion takes place.
- MapiMsg mapi_param_type(MapiHdl hdl, int fldnr, int ctype, int sqltype, void *val)
Bind a variable whose type is described by ctype to a parameter whose type is described by sqltype.
- MapiMsg mapi_param_numeric(MapiHdl hdl, int fldnr, int scale, int precision, void *val)
Bind to a numeric variable, internally represented by MAPI_INT.
- MapiMsg mapi_param_string(MapiHdl hdl, int fldnr, int sqltype, char *val, int *sizeptr)
Bind a string variable, internally represented by MAPI_VARCHAR, to a parameter. The sizeptr parameter points to the length of the string pointed to by val. If sizeptr == NULL or *sizeptr == -1, the string is NULL-terminated.
- MapiMsg mapi_clear_params(MapiHdl hdl)
Clear all parameter bindings.
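As a sketch of output binding with these calls, assuming a connection mid and a table mytable with a string column name and an integer column id (all illustrative names):

#include <stdio.h>
#include <mapi.h>

/* Sketch: bind C variables to result fields; each successful
 * mapi_fetch_row() refreshes them, so the loop body uses them directly. */
void bound_fetch(Mapi mid)
{
    MapiHdl hdl = mapi_query(mid, "SELECT name, id FROM mytable;");
    char *name;   /* receives a pointer into the row cache */
    int id;       /* receives the converted integer value */

    mapi_bind(hdl, 0, &name);              /* field 0 as a string */
    mapi_bind_var(hdl, 1, MAPI_INT, &id);  /* field 1 converted to int */
    while (mapi_fetch_row(hdl))
        printf("%s -> %d\n", name, id);
    mapi_clear_bindings(hdl);
    mapi_close_handle(hdl);
}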
Miscellaneous
- MapiMsg mapi_setAutocommit(Mapi mid, int autocommit)
Set the autocommit flag (default is on). This only has effect when the language is SQL. In that case, the server commits after each statement it receives. More information about autocommit and multi-statement transactions can be found here.
- MapiMsg mapi_setAlgebra(Mapi mid, int algebra)
Tell the backend to use or stop using the algebra-based compiler.
- MapiMsg mapi_cache_limit(Mapi mid, int maxrows)
A limited number of tuples are pre-fetched after each execute(). If maxrows is negative, all rows will be fetched before the application is permitted to continue. Once the cache is filled, a number of tuples are shuffled to make room for new ones, taking unread elements into account. Filling the cache quicker than reading leads to an error.
- MapiMsg mapi_cache_shuffle(MapiHdl hdl, int percentage)
Make room in the cache by shuffling percentage tuples out of the cache. It is sometimes handy to do so, for example, when your application is stream-based and processes each tuple as it arrives while still needing a limited look-back. The percentage can be set between 0 and 100. Setting shuffle to 100% (the default) leads to paging behavior, while shuffle set to 1 leads to a sliding window over a tuple stream with 1% refreshing.
- MapiMsg mapi_cache_freeup(MapiHdl hdl, int percentage)
Forcefully shuffle the cache making room for new rows. It ignores the read counter, so rows may be lost.
- char *mapi_quote(const char *str, int size)
Escape special characters such as \n and \t in str with backslashes. The returned value is a newly allocated string which should be freed by the caller.
- char *mapi_unquote(const char *name)
The reverse action of mapi_quote(), turning the database representation into a C representation. The storage space is dynamically created and should be freed after use. A round-trip sketch follows this list.
- MapiMsg mapi_output(Mapi mid, char *output)
Set the output format for results sent by the server.
- MapiMsg mapi_stream_into(Mapi mid, char *docname, char *colname, FILE *fp)
Stream a document into the server. The name of the document is specified in docname, the collection is optionally specified in colname (if NULL, it defaults to docname), and the content of the document comes from fp.
- MapiMsg mapi_profile(Mapi mid, int flag)
Set the profile flag to time commands sent to the server.
- MapiMsg mapi_trace(Mapi mid, int flag)
Set the trace flag to monitor interaction of the client with the library. It is primarily used for debugging Mapi applications.
- int mapi_get_trace(Mapi mid)
Return the current value of the trace flag.
- MapiMsg mapi_log(Mapi mid, const char *fname)
Log the interaction between the client and server for offline inspection. Beware that the log file overwrites any previous log. For detailed interaction trace with the Mapi library itself use mapi_trace().
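The round-trip sketch below pairs mapi_quote() with mapi_unquote(); since both return newly allocated storage, the caller frees both results. The sample string is an illustrative assumption.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mapi.h>

/* Sketch: escape a string into its database representation and turn it
 * back into its C representation. */
void quote_demo(void)
{
    const char *raw = "line1\nline2\tend";
    char *quoted = mapi_quote(raw, (int) strlen(raw));
    char *plain = mapi_unquote(quoted);

    printf("quoted: %s\n", quoted);   /* \n and \t appear backslash-escaped */
    printf("plain : %s\n", plain);
    free(quoted);
    free(plain);
}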
The remaining operations are wrappers around the data structures maintained by the library; a sketch using them follows the list below. Note that column properties are derived from the table output returned from the server.
- char *mapi_get_name(MapiHdl hdl, int fnr)
- char *mapi_get_type(MapiHdl hdl, int fnr)
- char *mapi_get_table(MapiHdl hdl, int fnr)
- int mapi_get_len(Mapi mid, int fnr)
- char *mapi_get_dbname(Mapi mid)
- char *mapi_get_host(Mapi mid)
- char *mapi_get_user(Mapi mid)
- char *mapi_get_lang(Mapi mid)
- char *mapi_get_motd(Mapi mid)
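As an illustration, the wrappers can print a result header; the connection mid and the table mytable are again illustrative assumptions:

#include <stdio.h>
#include <mapi.h>

/* Sketch: print table, column name and SQL type of each result column. */
void print_header(Mapi mid)
{
    MapiHdl hdl = mapi_query(mid, "SELECT * FROM mytable;");
    int nfields = mapi_get_field_count(hdl);

    for (int i = 0; i < nfields; i++)
        printf("%s.%s : %s\n",
               mapi_get_table(hdl, i),   /* originating table */
               mapi_get_name(hdl, i),    /* column name */
               mapi_get_type(hdl, i));   /* SQL type name */
    mapi_close_handle(hdl);
}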
JDBC Driver
JDBC Driver mk Sun, 03/28/2010 - 23:58
MonetDB JDBC Driver
The most obvious way to connect to a data source using the Java programming language is by making use of the JDBC API. MonetDB supplies a 100% pure Java JDBC driver (type 4) which allows you to connect to and work with a MonetDB database from a Java program.
This document gives a short description of how to use the MonetDB JDBC driver in Java applications. Familiarity with the Java JDBC API is required to fully understand this document. Please note that you can find the complete JDBC API on Oracle's web site http://docs.oracle.com/javase/7/docs/technotes/guides/jdbc/index.html.
The latest release of the MonetDB JDBC driver implements most of the essential JDBC API classes and methods (only CallableStatement is not yet implemented). If you make extensive use of the JDBC API and semantics and rely on its features, please report any missing functionality on our bugzilla.
In order to use the MonetDB JDBC driver in Java applications you need (of course) a running MonetDB/SQL server instance, preferably via monetdbd.
Getting the JDBC driver Jar
The easiest way to acquire the driver is to download it from our MonetDB Java Download Area. You will find a jar file called monetdb-jdbc-X.Y.jar, where X and Y are the major and minor version numbers. The other two listed jar files (jdbcclient.jar and monetdb-mcl-*.jar) are optional utility jars; jdbcclient.jar contains a Java command line program similar to mclient, see below.
Compiling the driver (using ant, optional)
If you prefer to build the driver yourself, make sure you acquire the MonetDB Java repository, e.g. as part of the source downloads. The Java sources are built using Apache's Ant tool and a make file. Simply issuing the command ant distjdbc should be sufficient to build the driver jar-archive in the subdirectory jars. See the ant web site for more documentation on the ant build-tool: http://ant.apache.org/. The Java sources currently require at least a Java 7 compatible compiler.
Using the JDBC driver in your Java programs
To use the MonetDB JDBC driver, the monetdb-jdbc-X.Y.jar jar-archive has to be in the Java classpath. Make sure this is actually the case. Note: as of the Jul2015 release (monetdb-jdbc-2.17.jar) the MonetDB JDBC driver only works with Java 7 (or higher) JVMs.
Using the MonetDB JDBC driver in your Java program:
// request a Connection to a MonetDB server running on 'localhost'
// (with default port 50000) for database demo, for user and password monetdb
Connection con = DriverManager.getConnection("jdbc:monetdb://localhost/demo", "monetdb", "monetdb");
The MonetDB JDBC Connection URL string passed to the getConnection() method is defined as
jdbc:monetdb://<hostname>[:<portnr>]/<databasename>[?<property>=<value>[&<property>=<value>]]
where elements between "<" and ">" are required and elements between "[" and "]" are optional.
The following optional connection properties are allowed:
so_timeout=<time in milliseconds>
language=mal
language=sql
treat_blob_as_binary=true
treat_clob_as_varchar=true
hash=<sha512, sha384>
user=<login name>
password=<secret value>
debug=true
logfile=<name of logfile>
A sample Java JDBC program
import java.sql.*;

/**
 * This example assumes there exist tables a and b filled with some data.
 * On these tables some queries are executed and the JDBC driver is tested
 * on its accuracy and robustness against 'users'.
 *
 * @author Fabian Groffen
 */
public class MJDBCTest {
    public static void main(String[] args) throws Exception {
        Connection con = null;
        Statement st = null;
        ResultSet rs = null;
        try {
            String con_url = "jdbc:monetdb://localhost:50000/mydb?so_timeout=10000&treat_clob_as_varchar=true";
            // make a connection to the MonetDB server using JDBC URL starting with: jdbc:monetdb://
            con = DriverManager.getConnection(con_url, "monetdb", "monetdb");
            // make a statement object
            st = con.createStatement();
            // execute SQL query which returns a ResultSet object
            String qry = "SELECT a.var1, COUNT(b.id) as total"
                + " FROM a, b WHERE a.var1 = b.id AND a.var1 = 'andb'"
                + " GROUP BY a.var1 ORDER BY a.var1, total;";
            rs = st.executeQuery(qry);
            // get meta data and print column names with their type
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.print(md.getColumnName(i) + ":" + md.getColumnTypeName(i) + "\t");
            }
            System.out.println("");

            // now print the data: only the first 5 rows, while there probably are
            // a lot more. This shouldn't cause any problems afterwards since the
            // result should get properly discarded when we close it
            for (int i = 0; rs.next() && i < 5; i++) {
                for (int j = 1; j <= md.getColumnCount(); j++) {
                    System.out.print(rs.getString(j) + "\t");
                }
                System.out.println("");
            }

            // close (server) resource as soon as we are done processing resultset data
            rs.close();
            rs = null;

            // tell the driver to only return 5 rows for the next execution;
            // it can optimize on this value, and will not fetch any more than 5 rows.
            st.setMaxRows(5);
            // we ask the database for 22 rows, while we set the JDBC driver to
            // 5 rows; this shouldn't be a problem at all...
            rs = st.executeQuery("select * from a limit 22");
            int var1_cnr = rs.findColumn("var1");
            int var2_cnr = rs.findColumn("var2");
            int var3_cnr = rs.findColumn("var3");
            int var4_cnr = rs.findColumn("var4");
            // read till the driver says there are no rows left
            for (int i = 0; rs.next(); i++) {
                System.out.println(
                    "[" + rs.getString(var1_cnr) + "]" +
                    "[" + rs.getString(var2_cnr) + "]" +
                    "[" + rs.getInt(var3_cnr) + "]" +
                    "[" + rs.getString(var4_cnr) + "]");
            }

            // close (server) resource as soon as we are done processing resultset data
            rs.close();
            rs = null;

            // unset the row limit; 0 means as much as the database sends us
            st.setMaxRows(0);
            // we only ask 10 rows
            rs = st.executeQuery("select * from b limit 10;");
            int rowid_cnr = rs.findColumn("rowid");
            int id_cnr = rs.findColumn("id");
            var1_cnr = rs.findColumn("var1");
            var2_cnr = rs.findColumn("var2");
            var3_cnr = rs.findColumn("var3");
            var4_cnr = rs.findColumn("var4");
            // and simply print them
            while (rs.next()) {
                System.out.println(
                    rs.getInt(rowid_cnr) + ", " +
                    rs.getString(id_cnr) + ", " +
                    rs.getInt(var1_cnr) + ", " +
                    rs.getInt(var2_cnr) + ", " +
                    rs.getString(var3_cnr) + ", " +
                    rs.getString(var4_cnr));
            }

            // close (server) resource as soon as we are done processing resultset data
            rs.close();
            rs = null;

            // perform a ResultSet-less query (with no trailing ; since that should
            // be possible as well and is JDBC standard)
            int updCount = st.executeUpdate("delete from a where var1 = 'zzzz'");
            System.out.println("executeUpdate() returned: " + updCount);
        } catch (SQLException se) {
            System.out.println(se.getMessage());
        } finally {
            // when done, close all (server) resources
            if (rs != null) rs.close();
            if (st != null) st.close();
            if (con != null) con.close();
        }
    }
}
Note: it is no longer required (or recommended) to include the code line:
Class.forName("nl.cwi.monetdb.jdbc.MonetDriver");
as the MonetDriver registers itself with the JDBC DriverManager automatically when the monetdb-jdbc-X.Y.jar file is loaded.
Using the JdbcClient utility program
We have created an example Java SQL command line program similar to (but not compatible with) the mclient program. The jdbcclient.jar program can be downloaded from the MonetDB Java Download Area. It includes and uses the MonetDB JDBC driver internally. As it already includes the JDBC driver classes, it is very easy to start it (assuming you have a MonetDB/SQL server running) via:
% java -jar jdbcclient.jar -p50000 -ddemo -umonetdb
password:
Welcome to the MonetDB interactive JDBC terminal!
Database Server: MonetDB v11.23.14
JDBC Driver: MonetDB Native Driver v2.23 (Liberica 20161117 based on MCL v1.13)
Current Schema: sys
Type \q to quit, \h for a list of available commands
auto commit mode: on
sql>
From this sql> prompt you can enter an SQL query (include a ; as end-of-statement) and execute it by pressing the enter key.
If the connection fails, observe the error messages from JdbcClient (and/or the merovingian logs) for clues.
Use \q to quit the program.
To see all jdbcclient startup options just run:
% java -jar jdbcclient.jar --help
Tip: if you do not want to enter the password each time, use a .monetdb file (which contains the user and password settings), just as for mclient.
Python Library
Python Library mk Sun, 03/28/2010 - 23:58
The MonetDB MAPI and SQL client Python API
Introduction
This is the native Python client API. The API is cross-platform and does not depend on any MonetDB libraries. It supports Python 2.7 and 3.3+ and is Python DBAPI 2.0 compatible.
Installation
To install the MonetDB python API run the following command:
# pip install pymonetdb
That's all, now you are ready to start using the API. We recommend that you use virtual environments to avoid polluting your global python installation.
Documentation
The Python code is well documented, so if you need more detail, have a look at the source code. Below is an interactive example of how to use the MonetDB SQL API, which should get you started quite fast.
Examples
There are some examples in the 'examples' folder, but here is a line-by-line example of the SQL API:
> # import the SQL module
> import pymonetdb
>
> # set up a connection. arguments below are the defaults
> connection = pymonetdb.connect(username="monetdb", password="monetdb", hostname="localhost", database="demo")
>
> # create a cursor
> cursor = connection.cursor()
>
> # increase the rows fetched to increase performance (optional)
> cursor.arraysize = 100
>
> # execute a query (return the number of rows to fetch)
> cursor.execute('SELECT * FROM tables')
26
>
> # fetch only one row
> cursor.fetchone()
(1062, 'schemas', 1061, None, 0, True, 0, 0)
>
> # fetch the remaining rows
> cursor.fetchall()
[(1067, 'types', 1061, None, 0, True, 0, 0),
 (1076, 'functions', 1061, None, 0, True, 0, 0),
 (1085, 'args', 1061, None, 0, True, 0, 0),
 (1093, 'sequences', 1061, None, 0, True, 0, 0),
 (1103, 'dependencies', 1061, None, 0, True, 0, 0),
 (1107, 'connections', 1061, None, 0, True, 0, 0),
 (1116, '_tables', 1061, None, 0, True, 0, 0),
 ...
 (4141, 'user_role', 1061, None, 0, True, 0, 0),
 (4144, 'auths', 1061, None, 0, True, 0, 0),
 (4148, 'privileges', 1061, None, 0, True, 0, 0)]
>
> # Show the table meta data
> cursor.description
[('id', 'int', 4, 4, None, None, None),
 ('name', 'varchar', 12, 12, None, None, None),
 ('schema_id', 'int', 4, 4, None, None, None),
 ('query', 'varchar', 168, 168, None, None, None),
 ('type', 'smallint', 1, 1, None, None, None),
 ('system', 'boolean', 5, 5, None, None, None),
 ('commit_action', 'smallint', 1, 1, None, None, None),
 ('temporary', 'tinyint', 1, 1, None, None, None)]
If you would like to communicate with the database at a lower level you can use the MAPI library:
> from pymonetdb import mapi
> mapi_connection = mapi.Connection()
> mapi_connection.connect(hostname="localhost", port=50000, username="monetdb", password="monetdb", database="demo", language="sql", unix_socket=None, connect_timeout=-1)
> mapi_connection.cmd("sSELECT * FROM tables;")
...
Perl Library
Perl Library mk Sun, 03/28/2010 - 23:56
MonetDB Perl Library
Perl is one of the more common scripting languages for which a 'standard' database application programming interface is defined. It is called DBI and it was designed to protect you from the API library details of multiple DBMS vendors. It has a very simple interface to execute SQL queries and to process the results sent back. DBI doesn't know how to talk to any particular database, but it does know how to locate and load DBD ('Database Driver') modules. The DBD modules encapsulate the interface library's intricacies and know how to talk to the real databases.
MonetDB comes with its own DBD module which is included in both the source and binary distribution packages. The module is also available via CPAN.
Two sample Perl applications are included in the source distribution: a MIL session and a simple client to interact with a running server.
For further documentation we refer to the Perl community home page.
A Simple Perl Example
use strict;
use warnings;
use DBI();

print "\nStart a simple Monet MIL interaction\n\n";

# determine the data sources:
my @ds = DBI->data_sources('monetdb');
print "data sources: @ds\n";

# connect to the database:
my $dsn = 'dbi:monetdb:database=test;host=localhost;port=50000;language=mil';
my $dbh = DBI->connect(
  $dsn,
  undef, undef,                        # no authentication in MIL
  { PrintError => 0, RaiseError => 1 } # turn on exception handling
);
{
  # simple MIL statement:
  my $sth = $dbh->prepare('print(2);');
  $sth->execute;
  my @row = $sth->fetchrow_array;
  print "field[0]: $row[0], last index: $#row\n";
}
{
  my $sth = $dbh->prepare('print(3);');
  $sth->execute;
  my @row = $sth->fetchrow_array;
  print "field[0]: $row[0], last index: $#row\n";
}
{
  # deliberately executing a wrong MIL statement:
  my $sth = $dbh->prepare('( xyz 1);');
  eval { $sth->execute };
  print "ERROR REPORTED: $@" if $@;
}
$dbh->do('var b:=new(int,str);');
$dbh->do('insert(b,3,"three");');
{
  # variable binding stuff:
  my $sth = $dbh->prepare('insert(b,?,?);');
  $sth->bind_param( 1, 7, DBI::SQL_INTEGER() );
  $sth->bind_param( 2, 'seven' );
  $sth->execute;
}
{
  my $sth = $dbh->prepare('print(b);');
  # get all rows one at a time:
  $sth->execute;
  while ( my $row = $sth->fetch ) {
    print "bun: $row->[0], $row->[1]\n";
  }
  # get all rows at once:
  $sth->execute;
  my $t = $sth->fetchall_arrayref;
  my $r = @$t;          # row count
  my $f = @{$t->[0]};   # field count
  print "rows: $r, fields: $f\n";
  for my $i ( 0 .. $r-1 ) {
    for my $j ( 0 .. $f-1 ) {
      print "field[$i,$j]: $t->[$i][$j]\n";
    }
  }
}
{
  # get values of the first column from each row:
  my $row = $dbh->selectcol_arrayref('print(b);');
  print "head[$_]: $row->[$_]\n" for 0 .. 1;
}
{
  my @row = $dbh->selectrow_array('print(b);');
  print "field[0]: $row[0]\n";
  print "field[1]: $row[1]\n";
}
{
  my $row = $dbh->selectrow_arrayref('print(b);');
  print "field[0]: $row->[0]\n";
  print "field[1]: $row->[1]\n";
}
$dbh->disconnect;
print "\nFinished\n";
ODBC Driver
ODBC Driver mk Sun, 03/28/2010 - 23:59
MonetDB ODBC Driver
ODBC, short for Open DataBase Connectivity, is a standard database access method developed by the SQL Access group in 1992. The goal of ODBC is to make it possible to access any data from any application, regardless of which database management system (DBMS) is handling the data. ODBC manages this by inserting a middle layer, called a database driver, between an application and the DBMS. The purpose of this layer is to translate the application's data queries into commands that the DBMS understands. For this to work, both the application and the DBMS must be ODBC-compliant: that is, the application must be capable of issuing ODBC commands and the DBMS must be capable of responding to them.
The ODBC driver for MonetDB is included in the Windows installer and Linux RPMs. The source can be found in the SQL CVS tree.
To help you setup your system to use the ODBC driver with MonetDB, two how-tos are available, one for Windows users and one for Linux/UNIX users.
Microsoft Excel demo
A little demo showing how to import data from a MonetDB server into Microsoft Excel.
Using Excel with the MonetDB ODBC Driver
Start up the MonetDB SQL Server and Excel.
In Excel, select from the drop down menu, first Data, then Get External Data, and finally New Database Query...

If MonetDB was installed correctly, there should be an entry MonetDB in the dialog box that opens. Select it and click on OK.

In the wizard that opens, scroll down in the list on the left hand side and select voyages. Then click on the button labeled > and then on Next >.

In the next page of the wizard, click on Next >.

In the next page of the wizard, click on Next >.

In the final page of the wizard, click on Finish.

A new dialog window opens. Click on OK to insert the data into the current Excel worksheet.

That's all.

Installing the MonetDB ODBC Driver for unixODBC
Configuring the MonetDB Driver
As Superuser, start the unixODBC configuration program ODBCConfig and select the Drivers tab.

On this tab, click on the button labeled Add... and fill in the fields as follows.

Name
- MonetDB
Description
- ODBC Driver for MonetDB SQL Server
Driver
- <path-to-MonetDB>/lib(64)/libMonetODBC.so
Setup
- <path-to-MonetDB>/lib(64)/libMonetODBCs.so
Don't change the other fields. When done, click on the check mark in the top left corner of the window. The first window should now contain an entry for MonetDB. Click on OK.
Configuring a Data Source
Now as normal user start ODBCConfig again.

On the User DSN tab click on the Add... button. A new window pops up in which you have to select the ODBC driver. Click on the entry for MonetDB and click on OK.

A new window pops up. Fill in the fields as follows.

Name
- MonetDB
Description
- Default MonetDB Data Source
Host
- localhost
Port
- 50000
User
- monetdb
Password
- monetdb
Don't change the other fields. When done, click on the check mark in the top left corner of the window. The first window should now contain an entry for MonetDB. Click on OK.
Feature overview
Feature overview mk Sat, 03/27/2010 - 23:29
The SQL language standard has grown from a modest SQL'92 document of less than 100 pages to the latest SQL:2008 description of more than 4000 pages, comprised of the following volumes:
ISO/IEC 9075-1 Framework (SQL/Framework)
ISO/IEC 9075-2 Foundation (SQL/Foundation)
ISO/IEC 9075-3 Call Level Interface (SQL/CLI)
ISO/IEC 9075-4 Persistent Stored Modules (SQL/PSM)
ISO/IEC 9075-9 Management of External Data (SQL/MED)
ISO/IEC 9075-10 Object Language Bindings (SQL/OLB)
ISO/IEC 9075-11 Information and Definition Schemas (SQL/Schemata)
ISO/IEC 9075-13 Routines and Types using the Java Language (SQL/JRT)
ISO/IEC 9075-14 XML-related specifications (SQL/XML)
The MonetDB SQL implementation is based on SQL:2003 with limited extensions to features found elsewhere. The tables below illustrate the (un)supported features in the current distribution. Beware: some of the features are technically impossible to support due to the design of MonetDB, or would incur excessive performance consequences.
Migration from your favored database system may require adjustment of the schema and query formulation. A comparison table for most actively used features is given in the SQL compatibility wikibook.
SQL compatibility
SQL compatibility mk Sun, 10/13/2013 - 23:31
Every SQL implementation differs in minor or major areas from the SQL standard. This calls for using middleware to translate queries, or even rewriting them manually according to the specifics of the DBMS.
A synopsis of the built-in functionality and a comparison with related products, such as PostgreSQL, MySQL, and several commercial systems, can be found in the SQL wikibook.
Supported
Supported mk Thu, 06/02/2011 - 20:14
Feature ID | Feature name |
E011 | Numeric data types (FULL support) |
E011-01 | INTEGER and SMALLINT data types (including all spellings) |
E011-02 | REAL, DOUBLE PRECISION, and FLOAT data types |
E011-03 | DECIMAL and NUMERIC data types |
E011-04 | Arithmetic operators |
E011-05 | Numeric comparison |
E011-06 | Implicit casting among the numeric data types |
E021 | Character data types (PARTIAL support) |
E021-01 | CHARACTER data type (including all its spellings) |
E021-02 | CHARACTER VARYING data type (including all its spellings) |
E021-03 | Character literals |
E021-04 | CHARACTER_LENGTH function |
E021-05 | OCTET_LENGTH function |
E021-06 | SUBSTRING function |
E021-07 | Character concatenation |
E021-08 | UPPER and LOWER functions |
E021-09 | TRIM function |
E021-10 | Implicit casting among the character data types |
E021-11 | POSITION function |
E021-12 | Character comparison |
E031 | Identifiers (FULL support) |
E031-01 | Delimited identifiers |
E031-02 | Lower case identifiers |
E031-03 | Trailing underscore |
E051 | Basic query specification (FULL support) |
E051-01 | SELECT DISTINCT |
E051-02 | GROUP BY clause |
E051-04 | GROUP BY can contain columns not in select-list |
E051-05 | Select list items can be renamed |
E051-06 | HAVING clause |
E051-07 | Qualified * in select list |
E051-08 | Correlation names in the FROM clause |
E051-09 | Rename columns in the FROM clause |
E061 | Basic predicates and search conditions (FULL support) |
E061-01 | Comparison predicate |
E061-02 | BETWEEN predicate |
E061-03 | IN predicate with list of values |
E061-04 | LIKE predicate |
E061-05 | LIKE predicate: ESCAPE clause |
E061-06 | NULL predicate |
E061-07 | Quantified comparison predicate |
E061-08 | EXISTS predicate |
E061-09 | Subqueries in comparison predicate |
E061-11 | Subqueries in IN predicate |
E061-12 | Subqueries in quantified comparison predicate |
E061-13 | Correlated subqueries |
E061-14 | Search condition |
E071 | Basic query expressions (FULL support) |
E071-01 | UNION DISTINCT table operator |
E071-02 | UNION ALL table operator |
E071-03 | EXCEPT DISTINCT table operator |
E071-05 | Columns combined via table operators need not have exactly the same data type |
E071-06 | Table operators in subqueries |
E081 | Basic Privileges |
E081-01 | SELECT privilege at the table level |
E081-02 | DELETE privilege |
E081-03 | INSERT privilege at the table level |
E081-04 | UPDATE privilege at the table level |
E081-05 | UPDATE privilege at the column level |
E081-06 | REFERENCES privilege at the table level (SELECT privilege is used) |
E081-07 | REFERENCES privilege at the column level (SELECT privilege is used) |
E081-08 | WITH GRANT OPTION |
E091 | Set functions (FULL support) |
E091-01 | AVG |
E091-02 | COUNT |
E091-03 | MAX |
E091-04 | MIN |
E091-05 | SUM |
E091-06 | ALL quantifier |
E091-07 | DISTINCT quantifier |
E101 | Basic data manipulation (FULL support) |
E101-01 | INSERT statement |
E101-03 | Searched UPDATE statement |
E101-04 | Searched DELETE statement |
E111 | Single row SELECT statement |
E121-01 | DECLARE CURSOR |
E121-02 | ORDER BY columns need not be in select list |
E131 | Null value support (nulls in lieu of values) |
E141 | Basic integrity constraints |
E141-01 | NOT NULL constraints |
E141-02 | UNIQUE constraints of NOT NULL columns |
E141-03 | PRIMARY KEY constraints |
E141-04 | Basic FOREIGN KEY constraint with the NO ACTION default |
E141-07 | Column defaults |
E141-08 | NOT NULL inferred on PRIMARY KEY |
E151 | Transaction support |
E151-01 | COMMIT statement |
E151-02 | ROLLBACK statement |
E152 | Basic SET TRANSACTION statement |
E152-01 | SET TRANSACTION statement: ISOLATION LEVEL SERIALIZABLE clause |
E153 | Updatable queries with subqueries |
E161 | SQL comments using leading double minus |
E182 | Module language |
F021-01 | COLUMNS view |
F021-02 | TABLES view |
F021-03 | VIEWS view |
F021-04 | TABLE_CONSTRAINTS view |
F021-05 | REFERENTIAL_CONSTRAINTS view |
F021-06 | CHECK_CONSTRAINTS view |
F031 | Basic schema manipulation |
F031-01 | CREATE TABLE statement to create persistent base tables |
F031-02 | CREATE VIEW statement |
F031-03 | GRANT statement |
F031-04 | ALTER TABLE statement: ADD COLUMN clause |
F031-13 | DROP TABLE statement: RESTRICT clause |
F031-16 | DROP VIEW statement: RESTRICT clause |
F031-19 | REVOKE statement: RESTRICT clause |
F041 | Basic joined table |
F041-01 | Inner join (but not necessarily the INNER keyword) |
F041-02 | INNER keyword |
F041-03 | LEFT OUTER JOIN |
F041-04 | RIGHT OUTER JOIN |
F041-05 | Outer joins can be nested |
F041-07 | The inner table in a left or right outer join can also be used in an inner join |
F041-08 | All comparison operators are supported (rather than just =) |
F051 | Basic date and time |
F051-01 | DATE data type (including DATE literal) |
F051-02 | TIME data type (including TIME literal) with fractional seconds precision of 0 |
F051-03 | TIMESTAMP data type (including TIMESTAMP literal) with fractional seconds precision of 0 and 6 |
F051-04 | Comparison predicate on DATE, TIME, and TIMESTAMP data types |
F051-05 | Explicit CAST between datetime types and character types |
F051-06 | CURRENT_DATE |
F051-07 | LOCALTIME |
F051-08 | LOCALTIMESTAMP |
F081 | UNION and EXCEPT in views |
F131 | Grouped operations |
F131-01 | WHERE, GROUP BY, and HAVING clauses supported in queries with grouped views |
F131-02 | Multiple tables supported in queries with grouped views |
F131-03 | Set functions supported in queries with grouped views |
F131-04 | Subqueries with GROUP BY and HAVING clauses and grouped views |
F131-05 | Single row SELECT with GROUP BY and HAVING clauses and grouped views |
F181 | Multiple module support (LIMITED support) |
F201 | CAST function |
F221 | Explicit defaults |
F261 | CASE expression |
F261-01 | Simple CASE |
F261-02 | Searched CASE |
F261-03 | NULLIF |
F261-04 | COALESCE |
F311 | Schema definition statement |
F311-01 | CREATE SCHEMA |
F311-02 | CREATE TABLE for persistent base tables |
F311-03 | CREATE VIEW |
F311-05 | GRANT statement |
F471 | Scalar subquery values |
F481 | Expanded NULL predicate |
F501-01 | SQL_FEATURES view |
F501-02 | SQL_SIZING view |
F501-03 | SQL_LANGUAGES view |
F521 | Assertions |
S011 | Distinct data types |
T321 | Basic SQL-invoked routines |
T321-01 | User-defined functions with no overloading |
T321-02 | User-defined stored procedures with no overloading |
T321-03 | Function invocation |
T321-04 | CALL statement |
T321-05 | RETURN statement |
Unsupported
Unsupported mk Thu, 06/02/2011 - 20:12
Feature ID | Feature name |
B011-B017 | Embedded language support. Core SQL:1999 says that at least one of Embedded Ada, Embedded C, Embedded Cobol, Embedded Fortran, Embedded MUMPS, Embedded Pascal or Embedded PL/I should be supported. |
B111-B117 | Module language Ada, COBOL, Fortran, MUMPS, Pascal, PL/I |
B121-B127 | Routine language Ada, COBOL, Fortran, MUMPS, Pascal, PL/I |
E081-09 | USAGE privilege |
E081-10 | EXECUTE privilege |
E153 | Updatable queries with subqueries |
E121 | Basic cursor support (CURSORS are not supported) |
E121-01 | DECLARE CURSOR |
E121-03 | Value expressions in ORDER BY clause |
E121-04 | OPEN statement |
E121-06 | Positioned UPDATE statement |
E121-07 | Positioned DELETE statement |
E121-08 | CLOSE statement |
E121-10 | FETCH statement: implicit NEXT |
E121-17 | WITH HOLD cursors |
E141-06 | CHECK constraints |
E141-10 | Names in a foreign key can be specified in any order (columns should be in the proper order) |
E152-02 | SET TRANSACTION statement: READ ONLY and READ WRITE clauses |
E171 | SQLSTATE support |
F021 | Basic information schema |
F021-01 | COLUMNS view |
F021-02 | TABLES view |
F021-03 | VIEWS view |
F021-04 | TABLE_CONSTRAINTS view |
F021-05 | REFERENTIAL_CONSTRAINTS view |
F021-06 | CHECK_CONSTRAINTS view |
F121 | Basic diagnostic management |
F121-01 | GET DIAGNOSTICS statement |
F121-02 | SET TRANSACTION statement: DIAGNOSTICS SIZE clause |
F122 | Enhanced diagnostics statement |
F123 | All diagnostics |
F181 | Multiple module support |
F202 | TRUNCATE TABLE: identify column restart option |
F263 | Comma-separated predicates in simple CASE expression |
F291 | UNIQUE predicate |
F301 | CORRESPONDING in query expressions |
F311-04 | CREATE VIEW: WITH CHECK OPTION |
F312 | MERGE statement |
F313 | Enhanced MERGE statement |
F341 | Usage tables (ROUTINE_*_USAGE) |
F394 | Optional normal form specification |
F403 | Partitioned joined tables |
F451 | Character set definition |
F461 | Named character sets |
F501 | Features and conformance views |
F501-01 | SQL_FEATURES view |
F501-02 | SQL_SIZING view |
F501-03 | SQL_LANGUAGES view |
F671 | Subqueries in CHECK |
F693 | SQL-session and client module collations |
F695 | Translation support |
F696 | Additional translation documentation |
F721 | Deferrable constraints |
F751 | View CHECK enhancements |
F812 | Basic flagging |
F831 - 831-02 | Cursor update, scroll, and order |
F841 | LIKE_REGEX |
F842 | OCCURRENCES_REGEX function |
F843 | POSITION_REGEX function |
F844 | SUBSTRING_REGEX function |
F845 | TRANSLATE_REGEX function |
F846 | Octet support in regular expression |
F847 | Nonconstant regular expression |
S011-01 | USER_DEFINED_TYPES view |
S023 | Basic structured types |
S024 | Enhanced structured types |
S025 | Final structured types |
S026 | Self-reference structured types |
S027 | Create method by specific method name |
S028 | Permutable UDT options list |
S051 | Create table of type |
S081 | Subtables |
S091 | Basic array support |
S091-01 | Arrays of built-in data types |
S091-02 | Arrays of distinct types |
S091-03 | Array expressions |
S094 | Arrays of reference types |
S097 | Array element assignment |
S151 - S404 | Object support features |
T321-06 | ROUTINES view |
T321-07 | PARAMETERS view |
T011 | Timestamp in Information Schema |
T021 | BINARY and VARBINARY data types |
T022 | Advanced support for BINARY and VARBINARY data types |
T023 | Compound binary literal |
T024 | Spaces in binary literals |
T041 | Basic LOB data type support |
T041-01 | BLOB data type |
T041-02 | CLOB data type |
T041-03 | POSITION, LENGTH, LOWER, TRIM, UPPER, and SUBSTRING functions for LOB data types |
T041-04 | Concatenation of LOB data types |
T041-05 | LOB locator: non-holdable |
T042 | Extended LOB data type support |
T043 | Multiplier T |
T044 | Multiplier P |
T051 | Row types |
T052 | MAX and MIN for row types |
T053 | Explicit aliases for all-fields reference |
T061 | UCS support |
T101 | Enhanced nullability determination |
T111 | Updatable joins, unions, and columns |
T174 | Identity columns |
T175 | Generated columns |
T176 | Sequence generator support |
T177 | Sequence generator support: simple restart option |
T178 | Identity columns: simple restart option |
T211 | Basic trigger capability |
T211-06 | Support for run-time rules for the interaction of triggers and constraints |
T211-08 | Multiple triggers for the same event are executed in the order in which they were created in the catalog (intentionally omitted) |
T251 | SET TRANSACTION statement: LOCAL option |
T261 | Chained transactions |
T272 | Enhanced savepoint management |
T285 | Enhanced derived column names |
T301 | Functional dependencies (partially supported) |
T321 | Basic SQL-invoked routines |
T321-02 | User-defined stored procedures with no overloading |
T321-04 | CALL statement |
T321-05 | RETURN statement |
T324 | Explicit security for SQL routines |
T325 | Qualified SQL parameter references |
T326 | Table functions |
T332 | Extended roles (mostly supported) |
T431 | Extended grouping capabilities |
T432 | Nested and concatenated GROUPING SETS |
T433 | Multiargument GROUPING function |
T434 | GROUP BY DISTINCT |
T471 | Result sets return value |
T491 | LATERAL derived table |
T511 | Transaction counts |
T541 | Updatable table references |
T561 | Holdable locators |
T571 | Array-returning external SQL-invoked functions |
T572 | Multiset-returning external SQL-invoked functions |
T601 | Local cursor references |
T611 | Elementary OLAP operations (most forms supported) |
T612 | Advanced OLAP operations (some forms supported) |
T613 | Sampling |
T616 | Null treatment option for LEAD and LAG functions |
T618 | NTH_VALUE function (function exists, but some options missing) |
T641 | Multiple column assignment (only some syntax variants supported) |
T652 | SQL-dynamic statements in SQL routines |
T653 | SQL-schema statements in external routines |
T654 | SQL-dynamic statements in external routines |
M001 | Datalinks |
M002 | Datalinks via SQL/CLI |
M003 | Datalinks via Embedded SQL |
M004 | Foreign data support (partially supported) |
M005 | Foreign schema support |
M006 | GetSQLString routine |
M007 | TransmitRequest |
M009 | GetOpts and GetStatistics routines |
M010 | Foreign data wrapper support |
M011 | Datalinks via Ada, C, COBOL, Fortran, MUMPS, Pascal, PL/I |
M018 - M024 | Foreign data wrapper interface routines in Ada, C, COBOL, Fortran, MUMPS, Pascal, PL/I |
M030 | SQL-server foreign data support |
M031 | Foreign data wrapper general routines |
X012 | Multisets of XML type |
X013 | Distinct types of XML type |
X014 | Attributes of XML type |
X015 | Fields of XML type |
X025 | XMLCast |
X030 | XMLDocument |
X038 | XMLText |
X065 | XMLParse: BLOB input and CONTENT option |
X066 | XMLParse: BLOB input and DOCUMENT option |
X068 | XMLSerialize: BOM |
X069 | XMLSerialize: INDENT |
X073 | XMLSerialize: BLOB serialization and CONTENT option |
X074 | XMLSerialize: BLOB serialization and DOCUMENT option |
X075 | XMLSerialize: BLOB serialization |
X076 | XMLSerialize: VERSION |
X077 | XMLSerialize: explicit ENCODING option |
X078 | XMLSerialize: explicit XML declaration |
X080 | Namespaces in XML publishing |
X081 | Query-level XML namespace declarations |
X082 | XML namespace declarations in DML |
X083 | XML namespace declarations in DDL |
X084 | XML namespace declarations in compound statements |
X085 | Predefined namespace prefixes |
X086 | XML namespace declarations in XMLTable |
X091 | XML content predicate |
X096 | XMLExists |
X100 | Host language support for XML: CONTENT option |
X101 | Host language support for XML: DOCUMENT option |
X110 | Host language support for XML: VARCHAR mapping |
X111 | Host language support for XML: CLOB mapping |
X112 | Host language support for XML: BLOB mapping |
X113 | Host language support for XML: STRIP WHITESPACE option |
X114 | Host language support for XML: PRESERVE WHITESPACE option |
X131 | Query-level XMLBINARY clause |
X132 | XMLBINARY clause in DML |
X133 | XMLBINARY clause in DDL |
X134 | XMLBINARY clause in compound statements |
X135 | XMLBINARY clause in subqueries |
X141 | IS VALID predicate: data-driven case |
X142 | IS VALID predicate: ACCORDING TO clause |
X143 | IS VALID predicate: ELEMENT clause |
X144 | IS VALID predicate: schema location |
X145 | IS VALID predicate outside check constraints |
X151 | IS VALID predicate with DOCUMENT option |
X152 | IS VALID predicate with CONTENT option |
X153 | IS VALID predicate with SEQUENCE option |
X155 | IS VALID predicate: NAMESPACE without ELEMENT clause |
X157 | IS VALID predicate: NO NAMESPACE with ELEMENT clause |
X160 | Basic Information Schema for registered XML Schemas |
X161 | Advanced Information Schema for registered XML Schemas |
X170 | XML null handling options |
X171 | NIL ON NO CONTENT option |
X181 | XML(DOCUMENT(UNTYPED)) type |
X182 | XML(DOCUMENT(ANY)) type |
X190 | XML(SEQUENCE) type |
X191 | XML(DOCUMENT(XMLSCHEMA)) type |
X192 | XML(CONTENT(XMLSCHEMA)) type |
X200 | XMLQuery |
X201 | XMLQuery: RETURNING CONTENT |
X202 | XMLQuery: RETURNING SEQUENCE |
X203 | XMLQuery: passing a context item |
X204 | XMLQuery: initializing an XQuery variable |
X205 | XMLQuery: EMPTY ON EMPTY option |
X206 | XMLQuery: NULL ON EMPTY option |
X211 | XML 1.1 support |
X221 | XML passing mechanism BY VALUE |
X222 | XML passing mechanism BY REF |
X231 | XML(CONTENT(UNTYPED)) type |
X232 | XML(CONTENT(ANY)) type |
X241 | RETURNING CONTENT in XML publishing |
X242 | RETURNING SEQUENCE in XML publishing |
X251 | Persistent XML values of XML(DOCUMENT(UNTYPED)) type |
X252 | Persistent XML values of XML(DOCUMENT(ANY)) type |
X253 | Persistent XML values of XML(CONTENT(UNTYPED)) type |
X254 | Persistent XML values of XML(CONTENT(ANY)) type |
X255 | Persistent XML values of XML(SEQUENCE) type |
X256 | Persistent XML values of XML(DOCUMENT(XMLSCHEMA)) type |
X257 | Persistent XML values of XML(CONTENT(XMLSCHEMA)) type |
X260 | XML type: ELEMENT clause |
X261 | XML type: NAMESPACE without ELEMENT clause |
X263 | XML type: NO NAMESPACE with ELEMENT clause |
X264 | XML type: schema location |
X271 | XMLValidate: data-driven case |
X272 | XMLValidate: ACCORDING TO clause |
X273 | XMLValidate: ELEMENT clause |
X274 | XMLValidate: schema location |
X281 | XMLValidate: with DOCUMENT option |
X282 | XMLValidate with CONTENT option |
X283 | XMLValidate with SEQUENCE option |
X284 | XMLValidate NAMESPACE without ELEMENT clause |
X286 | XMLValidate: NO NAMESPACE with ELEMENT clause |
X300 | XMLTable |
X301 | XMLTable: derived column list option |
X302 | XMLTable: ordinality column option |
X303 | XMLTable: column default option |
X304 | XMLTable: passing a context item |
X305 | XMLTable: initializing an XQuery variable |