Extensions

SQL is the primary language for interacting with MonetDB. However, it is continually being extended with both small and major features that go beyond the functionality described in the SQL:2003 standard.

The geospatial component wraps the well-known geos library to support GIS applications.

Complex Event Processing (CEP) is supported by the DataCell component.

A preliminary approach to address the needs of NoSQL applications is encapsulated in Jacqueline.

GeoSpatial

Spatial support

MonetDB/SQL comes with an interface to the Simple Feature Specification of the Open Geospatial Consortium (formerly known as the Open GIS Consortium), which opens the route to developing GIS applications.

The MonetDB/SQL/GIS module supports all objects and functions specified in the OGC "Simple Features for SQL" specification. For the time being, however, spatial objects can only be expressed in the Well-Known Text (WKT) format. WKT includes information about the type of the object and the object's coordinates.
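For illustration, a few WKT literals as they might appear in SQL. The coordinates are made up for the example, and the SRID value 4326 (WGS 84) is merely a common choice, not a requirement:

```sql
-- illustrative WKT literals wrapped in the conversion functions
SELECT PointFromText('POINT(10 20)', 4326);
SELECT LineFromText('LINESTRING(0 0, 10 10, 20 25)', 4326);
SELECT PolyFromText('POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))', 4326);
```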

Installation

The GIS functionality is packaged as a separate MonetDB module called geom. To benefit from the geometry functionality you first have to download and install geos, a well-known and sound library to build upon. The next step is to (re-)build MonetDB with the --enable-geom configure argument. This will build the necessary extension modules and activate them upon the first start of the server. Note that databases created before you configured with geom support will lack the geom functions in SQL. We recommend starting with a new database.

Get Going

The spatial extension of MonetDB simply requires the user to use the geom data types from SQL.

Example: The script below creates and populates a 'forests' table and a 'buildings' table, followed by a spatial query over this fictional landscape.

CREATE TABLE forests(id INT,name TEXT,shape MULTIPOLYGON);
CREATE TABLE buildings(id INT,name TEXT,location POINT,outline POLYGON);

INSERT INTO forests VALUES(109, 'Green Forest',
'MULTIPOLYGON( ((28 26,28 0,84 0,84 42,28 26), (52 18,66 23,73 9,48 6,52 18)), ((59 18,67 18,67 13,59 13,59 18)))');

INSERT INTO buildings VALUES(113, '123 Main Street',
	'POINT( 52 30 )',
	'POLYGON( ( 50 31, 54 31, 54 29, 50 29, 50 31) )');
INSERT INTO buildings VALUES(114, '215 Main Street',
	'POINT( 64 33 )',
	'POLYGON( ( 66 34, 62 34, 62 32, 66 32, 66 34) )');

SELECT forests.name,buildings.name
FROM forests,buildings
WHERE forests.name = 'Green Forest' and
    Overlaps(forests.shape, buildings.outline) = true;

Accelerating Spatial Operations

There are no special accelerators to speed up access to spatial objects yet. However, the Minimum Bounding Rectangle (mbr) datatype can be used to accelerate operations considerably. This requires a small query rewrite. In the example above, the performance of the query can be improved as follows:

ALTER TABLE forests ADD bbox mbr;
UPDATE forests SET bbox = mbr(shape);
ALTER TABLE buildings ADD bbox mbr;
UPDATE buildings SET bbox = mbr(outline);

SELECT forests.name,buildings.name
FROM forests,buildings
WHERE forests.name = 'Green Forest' AND
    mbroverlaps(forests.bbox,buildings.bbox) = TRUE AND
    Overlaps(forests.shape, buildings.outline) = TRUE;

In this way the mbr operation acts as a cheap pre-filter. Upon request, and given availability of resources, we will develop MAL optimizers to automate this process.

 

Limitations

This is the first implementation of OGC functionality in MonetDB. It is based on libgeos 3.3.0. Further development will be based on concrete external requests and availability of manpower. The shortlist of open issues is:

 

Spatial data types

Spatial Types

MonetDB supports the OpenGIS types Point, Curve, LineString, Surface, Polygon, MultiPoint, MultiCurve, MultiLineString, MultiSurface, MultiPolygon, Geometry and GeomCollection. In addition, one non-OpenGIS type, 'mbr', is provided for fast access using pre-filtering; it stores a 2D bounding box. Functions to create these boxes are specified in the following sections.

Conversion from and to Well-Known Text

Convert a Well-Known Text string to a spatial object. The SRID parameter is a reference to the Spatial Reference System in which the coordinates are expressed.

CREATE FUNCTION GeomFromText(wkt string, srid SMALLINT) RETURNS Geometry
CREATE FUNCTION PointFromText(wkt string, srid SMALLINT) RETURNS Point
CREATE FUNCTION LineFromText(wkt string, srid SMALLINT) RETURNS LineString
CREATE FUNCTION PolyFromText(wkt string, srid SMALLINT) RETURNS Polygon
CREATE FUNCTION MPointFromText(wkt string, srid SMALLINT) RETURNS MultiPoint
CREATE FUNCTION MLineFromText(wkt string, srid SMALLINT) RETURNS MultiLineString
CREATE FUNCTION MPolyFromText(wkt string, srid SMALLINT) RETURNS MultiPolygon
CREATE FUNCTION GeomCollectionFromText(wkt string, srid SMALLINT) RETURNS MultiPolygon
-- alias
CREATE FUNCTION PolygonFromText(wkt string, srid SMALLINT) RETURNS Polygon
 
CREATE FUNCTION AsText(p Point) RETURNS STRING
CREATE FUNCTION AsText(c Curve) RETURNS STRING
CREATE FUNCTION AsText(l LineString) RETURNS STRING
CREATE FUNCTION AsText(s Surface) RETURNS STRING
CREATE FUNCTION AsText(p Polygon) RETURNS STRING
CREATE FUNCTION AsText(p MultiPoint) RETURNS STRING
CREATE FUNCTION AsText(c MultiCurve) RETURNS STRING
CREATE FUNCTION AsText(l MultiLineString) RETURNS STRING
CREATE FUNCTION AsText(s MultiSurface) RETURNS STRING
CREATE FUNCTION AsText(p MultiPolygon) RETURNS STRING
CREATE FUNCTION AsText(g Geometry) RETURNS STRING
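As a sketch of how the conversion functions combine, a round trip from WKT to a geometry object and back might look as follows. The SRID value 0 is just a placeholder here:

```sql
-- parse a WKT string into a Point, then render it back as text
SELECT AsText(PointFromText('POINT(52 30)', 0));

-- a generic geometry works the same way
SELECT AsText(GeomFromText('POLYGON((50 31, 54 31, 54 29, 50 29, 50 31))', 0));
```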

Analysis functions on Geometry

The following functions perform analysis operations on geometries:

CREATE FUNCTION Area(g Geometry) RETURNS FLOAT
CREATE FUNCTION Length(g Geometry) RETURNS FLOAT
CREATE FUNCTION Distance(a Geometry, b Geometry) RETURNS FLOAT
CREATE FUNCTION Buffer(a Geometry, distance FLOAT) RETURNS Geometry
CREATE FUNCTION ConvexHull(a Geometry) RETURNS Geometry
CREATE FUNCTION Intersection(a Geometry, b Geometry) RETURNS Geometry
CREATE FUNCTION "Union"(a Geometry, b Geometry) RETURNS Geometry
CREATE FUNCTION Difference(a Geometry, b Geometry) RETURNS Geometry
CREATE FUNCTION SymDifference(a Geometry, b Geometry) RETURNS Geometry
 
CREATE FUNCTION Dimension(g Geometry) RETURNS integer
CREATE FUNCTION GeometryTypeId(g Geometry) RETURNS integer
CREATE FUNCTION SRID(g Geometry) RETURNS integer
CREATE FUNCTION Envelope(g Geometry) RETURNS Geometry
CREATE FUNCTION IsEmpty(g Geometry) RETURNS BOOLEAN
CREATE FUNCTION IsSimple(g Geometry) RETURNS BOOLEAN
CREATE FUNCTION Boundary(g Geometry) RETURNS Geometry
 
CREATE FUNCTION Equals(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Disjoint(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION "Intersect"(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Touches(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Crosses(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Within(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Contains(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Overlaps(a Geometry, b Geometry) RETURNS BOOLEAN
CREATE FUNCTION Relate(a Geometry, b Geometry, pattern STRING) RETURNS BOOLEAN
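To illustrate the analysis functions, a hedged sketch using the forests and buildings tables from the earlier example: find buildings whose outline lies within 10 units of the forest shape. The distance cutoff is invented for the example, and the functions are assumed to behave as specified above:

```sql
-- buildings close to the Green Forest shape, with their distance
SELECT b.name, Distance(f.shape, b.outline) AS dist
FROM forests f, buildings b
WHERE f.name = 'Green Forest'
  AND Distance(f.shape, b.outline) < 10;
```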

SQL functions on Point

CREATE FUNCTION X(g Geometry) RETURNS double
CREATE FUNCTION Y(g Geometry) RETURNS double
 
CREATE FUNCTION Point(x double,y double) RETURNS Point
 
SQL functions on Curve
CREATE FUNCTION IsRing(l LineString) RETURNS BOOLEAN
CREATE FUNCTION StartPoint(l LineString) RETURNS Point -- not yet implemented
CREATE FUNCTION EndPoint(l LineString) RETURNS Point -- not yet implemented
 
SQL functions on LineString
CREATE FUNCTION NumPoints(l LineString) RETURNS integer -- not yet implemented
CREATE FUNCTION PointN(l LineString,i integer) RETURNS Point -- not yet implemented
 
SQL functions on Surface
CREATE FUNCTION PointOnSurface(s Surface) RETURNS Point -- not yet implemented
CREATE FUNCTION Centroid(s Surface) RETURNS Point -- not yet implemented
 
SQL functions on Polygon
CREATE FUNCTION ExteriorRing(s Surface) RETURNS LineString -- not yet implemented
CREATE FUNCTION NumInteriorRing(s Surface) RETURNS integer -- not yet implemented
CREATE FUNCTION InteriorRingN(s Surface,n integer) RETURNS LineString -- not yet implemented
 
SQL functions on GeomCollection

CREATE FUNCTION NumGeometries(c GeomCollection) RETURNS integer -- not yet implemented
CREATE FUNCTION GeometryN(c GeomCollection, n integer) RETURNS Geometry -- not yet implemented

SQL functions on spatial objects

The following functions return the minimum bounding rectangle of a given geometry, or a boolean for the overlap test:

CREATE FUNCTION mbr (p Point) RETURNS mbr
CREATE FUNCTION mbr (c Curve) RETURNS mbr
CREATE FUNCTION mbr (l LineString) RETURNS mbr
CREATE FUNCTION mbr (s Surface) RETURNS mbr
CREATE FUNCTION mbr (p Polygon) RETURNS mbr
CREATE FUNCTION mbr (m multipoint) RETURNS mbr
CREATE FUNCTION mbr (m multicurve) RETURNS mbr
CREATE FUNCTION mbr (m multilinestring) RETURNS mbr
CREATE FUNCTION mbr (m multisurface) RETURNS mbr
CREATE FUNCTION mbr (m multipolygon) RETURNS mbr
CREATE FUNCTION mbr (g Geometry) RETURNS mbr
CREATE FUNCTION mbr (g GeomCollection) RETURNS mbr
CREATE FUNCTION mbroverlaps(a mbr, b mbr) RETURNS BOOLEAN

Streaming

The DataCell stream processing facilities of MonetDB are best illustrated using a minimalist example, where a sensor sends events to the database, which are picked up by a continuous query and sent out over a stream towards an actuator. To run the example, you should have a MonetDB binary with DataCell functionality enabled. This will ensure that the required libraries are loaded and the SQL catalog is informed about the stream specific functions/operators. It will also create the DataCell schema, which is used to collect compiled continuous queries. The final step in the startup is to enable the DataCell optimizer pipeline.

sql> set optimizer = 'datacell_pipe';
sql> create table datacell.bsktin (id integer, tag timestamp, payload integer);
sql> create table datacell.bsktout (like datacell.bsktin);

sql> call datacell.receptor('datacell.bsktin', 'localhost', 50500);
sql> call datacell.emitter('datacell.bsktout', 'localhost', 50600);
sql> call datacell.query('datacell.pass', 'insert into datacell.bsktout select * from datacell.bsktin;');
sql> call datacell.resume();

After these simple steps, it suffices to hook up a sensor to send events to the DataCell and to hook up an actuator to listen for response events. The result of this experiment will be a large number of randomly generated events passing through the stream engine in a bulk fashion.

$ nc -l -u localhost 50600 &
$ sensor --host=localhost --port=50500 --events=1000 --columns=3 &

The Linux netcat (nc) tool can be used as a strawman actuator to monitor the output of the DataCell. The distribution comes with a sensor and an actuator simulator. The DataCell source code contains a fire-detection scenario that exercises the DataCell and serves as a basis for cloning your own application.

The example reconsidered
The DataCell operates on relational tables. The first action is to identify all such tables and redefine them as baskets by attaching them to receptors, emitters, or intermittent baskets.

sql> call datacell.receptor('datacell.bsktin', 'localhost', 50500);
sql> call datacell.emitter('datacell.bsktout', 'localhost', 50600);

A receptor thread is attached to the 'bsktin' basket on a TCP stream, on port 50500 by default, over which we receive tuples in CSV format. The number of fields and their lexical conventions should comply with the corresponding table definition. The same semantics apply to the format as when a COPY INTO command over a CSV file is given. The receptor mode is either active or passive. In passive mode, the default setting, it is the sensor that takes the initiative in contacting the streaming engine to deposit events. In active mode, it is the streaming engine that contacts the sensor for more events. Note that the receptor becomes active only after you issue the datacell.resume('bsktin') or datacell.resume() operation. The calls shown are actually a shorthand for the more verbose version, where protocol and mode are made explicit.

sql> call datacell.receptor('datacell.bsktin', 'localhost', 50500,'tcp','active');
sql> call datacell.emitter('datacell.bsktout', 'localhost', 50600, 'udp', 'passive');

The sensor simulator is geared toward testing the infrastructure and takes shortcuts on the event formats sent. Currently, it primarily generates event records starting with an optional event identifier, followed by an optional timestamp, and a payload of random integer values. To generate a test file with 100 events for the example, you can rely mostly on the default settings. Hooking up the sensor to the stream engine merely requires a hostname and port instead of a file argument. A glimpse of the sensor interaction can be obtained using --trace, which writes the events to standard output or a specific file. The sensor simulator asks for user input before it exits. This way, the receiving side can pick up the events and is not confronted with a possibly broken UDP channel.

$ sensor --events=100 --protocol=debug --columns=3
1,306478,1804289383
... 98 more ...
100,137483,1956297539
$ sensor --host=localhost --port=50500 --events=100 --columns=3

An alternative scheme is to replay an event log using the --file and --replay options. The simulator reads an event, possibly with a fixed delay (--delay=<milliseconds>), and sends it over the receptor. An exact (time) replay calls for identifying the column with the temporal information, i.e. using the option --time=<field index>.

After this step, the events have been picked up by the receptor and added to the basket datacell.bsktin. This table can be queried like any other table, but be aware that it may be emptied concurrently. The next step is to define a continuous query, which in this case passes the input received on to the output channel. Reception and emission can be temporarily interrupted using the datacell.pause(objectname) operation.

After registration of the query, the datacell module contains the necessary optimized code for continuous query processing. The scheduler is subsequently restarted using datacell.resume(), which moves the data from bsktin into bsktout as it arrives. You can check the result using ordinary SQL queries over the table producing functions datacell.receptors(), datacell.emitters(), datacell.baskets() and datacell.queries().

sql> call datacell.query('datacell.pass', 'insert into datacell.bsktout select * from datacell.bsktin;');

 

Architecture Overview

The DataCell Architecture
The DataCell approach is most easily understood using the previously mentioned example. The sensor program is a simulator of a real-world sensor which emits events at a regular interval, e.g. a temperature, humidity, or noise reading. The actuator is a device simulator that is controlled using the events received, e.g. a fire alarm. The sensors and actuators work independently. They are typically proprietary devices that communicate with a controlling station over a wired network. The only requirement of the DataCell is that devices can communicate using the UDP protocol, delivering events by default in the most efficient event message format, CSV. Alternative message format handlers can readily be included by extending the formats recognized by the adapters, or as a simple filter between the device and the DataCell.

Baskets
The basket is the key data structure of the streaming engine. Its role is to hold a portion of an event stream, also denoted as an event window. It is represented as a temporary main-memory table. Unlike other stream systems, there is no a priori order or fixed window size. The basket is simply a (multi-)set of event records received from an adapter, or events ready to be shipped to an actuator. There is no persistency and no transaction management over the baskets. If a basket should survive session boundaries, its content should be inserted into a normal table. The baskets can be queried with SQL like any other table, but concurrent actions may leave you with a mostly empty table to look at.
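Since baskets are volatile, making their content survive a session is a plain INSERT into a persistent table. A minimal sketch, where the archive table name is invented for the example and the schema follows the earlier bsktin definition:

```sql
-- persist a snapshot of a basket before it is consumed
CREATE TABLE bsktin_archive (id integer, tag timestamp, payload integer);
INSERT INTO bsktin_archive SELECT * FROM datacell.bsktin;
```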

Adapters
The receptor and emitter adapters are the interface units in the DataCell to interact with sensors and actuators. Both communicate with their environment through a channel. The default channel is a UDP connection, for speed. By default the receptor is a passive thread, opening a channel and awaiting events to arrive. The emitter, in contrast, is an active thread, which immediately throws the events onto the identified channel. Hooks have been created to reverse these roles, e.g. the receptor polling a device, and the emitter waiting for polling actuators.

Events that cannot be parsed are added to the corresponding basket as an error. All errors collected can be inspected using the table producing function datacell.errors().

Continuous queries
The continuous queries are expressed as ordinary SQL queries, in which previously declared basket tables are recognized by the DataCell optimizer. For convenience they can be packed in a procedure, so that the events from a basket can be delivered to multiple baskets. Access to these tables is rewritten by the optimizer, and interaction with the adapters is regulated with a locking scheme. Mixing basket tables and persistent tables is allowed. An SQL procedure can be used to encapsulate multiple SQL statements and deliver the derived events to multiple destinations.
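A hedged sketch of such a procedure, splitting one input stream over two output baskets. The second basket bsktout2, the payload threshold, and the exact form of registering a procedure call as a continuous query are all assumptions for illustration:

```sql
-- hypothetical: route events to two destinations based on payload
CREATE PROCEDURE datacell.split()
BEGIN
    INSERT INTO datacell.bsktout  SELECT * FROM datacell.bsktin WHERE payload < 100;
    INSERT INTO datacell.bsktout2 SELECT * FROM datacell.bsktin WHERE payload >= 100;
END;

CALL datacell.query('datacell.split', 'call datacell.split();');
```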

Continuous queries often rely on control over the minimum/maximum number of events to consider when the query is executed. This information is expressed as an ordinary predicate in the WHERE clause. The following pre-defined predicates are supported. They inform the DataCell scheduler when the next action should be taken. They do not affect the current query, which allows for dynamic behavior. The window slide size can even be calculated by a query. It also means that a startup query is needed to inform the scheduler the first time, or the properties must be set explicitly using the datacell.basket() and datacell.beat() calls.

datacell.threshold(B,N)  the query is only executed when basket B holds at least N events
datacell.window(B,M,S)   extract a window of at most M events and slide by S afterwards
datacell.window(B,T,Ts)  extract a window based on a temporal interval of size T, followed by a stride Ts
datacell.beat(B,T)       the next query is executed after a delay of T milliseconds (excluding query execution time)
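For instance, a continuous query that only fires once at least 10 events have accumulated could be registered as follows. This is a sketch; the query name and the threshold of 10 are chosen for the example, and the basket names follow the earlier setup:

```sql
CALL datacell.query('datacell.batch10',
    'insert into datacell.bsktout
     select * from datacell.bsktin
     where datacell.threshold(datacell.bsktin, 10);');
```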

The sliding window constraints are mutually exclusive: a slide is based either on the number of events consumed or on the time window. For time slicing, the first timestamp column in the basket is used as the frame of reference, leaving all other temporal columns as ordinary attributes.

Stream catalog

The status of the DataCell is mapped onto a series of table producing SQL functions: datacell.baskets(), datacell.receptors(), datacell.emitters() and datacell.queries().

sql>select * from datacell.receptors();
+-----------------+-----------+-------+----------+---------+---------+----------------------------+--------+----------+---------+
| nme             | host      | port  | protocol | mode    | status  | lastseen                   | cycles | received | pending |
+=================+===========+=======+==========+=========+=========+============================+========+==========+=========+
| datacell.bsktin | localhost | 50500 | TCP      | passive | running | 2012-08-15 19:31:28.000000 |      2 |       20 |       0 |
+-----------------+-----------+-------+----------+---------+---------+----------------------------+--------+----------+---------+
1 tuple (1.800ms)
sql>select * from datacell.emitters();
+------------------+-----------+-------+----------+--------+---------+----------------------------+--------+------+---------+
| nme              | host      | port  | protocol | mode   | status  | lastsent                   | cycles | sent | pending |
+==================+===========+=======+==========+========+=========+============================+========+======+=========+
| datacell.bsktout | localhost | 50600 | UDP      | active | running | 2012-08-15 19:31:28.000000 |      2 |   10 |       0 |
+------------------+-----------+-------+----------+--------+---------+----------------------------+--------+------+---------+
1 tuple (1.725ms)

The receptors and emitters are qualified by their communication protocol and mode. The last time they received/sent events is shown. The events not yet handled by a continuous query are denoted as pending.


sql>select * from datacell.baskets();
+---------------------+-----------+---------+-----------+-----------+------------+------+----------------------------+--------+
| nme                 | threshold | winsize | winstride | timeslice | timestride | beat | seen                       | events |
+=====================+===========+=========+===========+===========+============+======+============================+========+
| datacell.bsktmiddle |         0 |       0 |         0 |         0 |          0 |    0 | 2012-08-15 19:31:28.000000 |      0 |
| datacell.bsktin     |         0 |       0 |         0 |         0 |          0 |    0 | 2012-08-15 19:31:28.000000 |      0 |
| datacell.bsktout    |         0 |       0 |         0 |         0 |          0 |    0 | 2012-08-15 19:31:28.000000 |      0 |
+---------------------+-----------+---------+-----------+-----------+------------+------+----------------------------+--------+
3 tuples (1.639ms)
sql>select * from datacell.queries();
+-----------------+---------+----------------------------+--------+--------+------+-------+---------------------------------------------------------------------------------+
| nme             | status  | lastrun                    | cycles | events | time | error | def                                                                             |
+=================+=========+============================+========+========+======+=======+=================================================================================+
| datacell.pass   | running | 2012-08-15 19:31:28.000000 |      6 |     20 |  613 |       | insert into datacell.bsktmiddle select * from datacell.bsktin;                  |
| datacell.filter | running | 2012-08-15 19:31:28.000000 |      4 |      7 |  653 |       | insert into datacell.bsktout select * from datacell.bsktmiddle where id %2 = 0; |
+-----------------+---------+----------------------------+--------+--------+------+-------+---------------------------------------------------------------------------------+

The baskets have properties used by the scheduler for emptying them. The pending events are shown. The continuous queries are marked with how often they have been selected for execution, the total number of events taken from all input baskets, the total execution time, and their definition.

Sensor simulator

The sensor simulator is geared toward testing the complete infrastructure and takes shortcuts on the event formats sent. Currently, it primarily generates event records starting with an optional event identifier, followed by an optional timestamp, and a payload of random integer values.

sensor [options]
--host=<host name>, default=localhost
--port=<portnr>, default=50500
--sensor=<name>
--protocol=<name> udp or tcp(default)
--increment=<number>, default=1
--timestamp, default=on
--columns=<number>, default=1
--events=<events length>, (-1=forever,>0), default=1
--file=<data file>
--replay use file or standard input
--time=<column> where to find the exact time

--batch=<batchsize> , default=1
--delay=<ticks> interbatch delay in ms, default=1
--trace=<trace> interaction
--server run as a server
--client run as a client


To generate a test file with 100 events for the example, you can rely mostly on the default settings. Hooking up the sensor to the stream engine merely requires a hostname and port instead of a file argument. A glimpse of the sensor interaction can be obtained using the 'debug' protocol, which writes the events to standard output or a specific file. The status of the DataCell can be checked with datacell.dump(), which now shows a hundred events gathered in the basket bsktin. The sensor simulator asks for user input before it exits. This way, the receiving side can pick up the events and is not confronted with a broken UDP channel.

$ sensor --events=100 --protocol=debug --columns=3
1,306478,1804289383
... 98 more ...
100,137483,1956297539
$ sensor --host=localhost --port=50500 --events=100 --columns=3

Actuator simulator

To test the DataCell, the distribution contains a simple event simulator. It generates a stream of MonetDB tuple values containing only random integers. Each tuple starts with the timestamp at which it was created.

The actuator simulator provides the following options:

actuator [options]
--host=<host name>, default localhost
--port=<portnr>, default 50600
--protocol=<name>  either tcp/udp, default tcp
--actuator=<actuator name> to identify the event received
--server run as a server (default)
--client run as a client
--events=<number>  number of events to receive
--statistics=<number>  show statistics after a series of events

Jacqueline

Jacqueline: A JSON Query Language for MonetDB

MonetDB/JAQL is an implementation of the Query Language for JavaScript Object Notation (JSON) on top of MonetDB's relational column-store engine. It implements JAQL Core from its specifications, thereby ignoring the Hadoop-centric view that flows through the various examples. The result is a pure column-store JSON query processing system, benefitting from the power of the MonetDB engine.

Note: MonetDB/JAQL was first released as a beta in the Jul2012 release. It is, however, a work in progress! It requires major changes to comply with the single columnar approach under development. It is likely to be dropped in an upcoming 2014 release.

JAQL is a query language for the JSON data format. JSON itself is a free-form format that allows one, for example, to express hierarchical data or mix datatypes. It is similar to XML in this respect, albeit a whole lot more limited, which greatly simplifies working with JSON data. The increasing popularity of the JSON format has stimulated support for JSON in many popular programming languages, as well as the development of query languages. We have chosen to implement JAQL Core, for it appears to be designed with the same simplicity in mind as JSON itself, unlike XQuery-like alternatives.

The driving force behind JAQL is the use of pipes to create a flow of JSON data between operations. Typically, a pipeline starts with a data source to operate on. Operations are chained, each operating on the output of the previous operation. This results in a logical flow per operator, suitable for MapReduce-like parallelisation. The core operations of JAQL are similar to SQL's: they include a selection operation, filter, and a general purpose projection operation, transform.

To quickly give an impression of JAQL, here is an example query that selects data from an input and projects it into another shape:

[
  {"name": "Fabian", "data": [1, 3, 4]},
  {"name": "Niels", "data": [3, 5]},
  {"name": "Martin"}
]
   -> filter 3 in $.data
   -> transform $.name;

would result in an array with two members: [ "Fabian", "Niels" ].
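For readers more familiar with SQL, the filter/transform pipeline above corresponds roughly to a SELECT with a WHERE clause, assuming the JSON records were shredded into relational tables. The schema below is entirely hypothetical, purely to show the correspondence:

```sql
-- rough relational analogue of:  filter 3 in $.data -> transform $.name
-- persons(id, name) and person_data(person_id, value) are invented tables,
-- where person_data holds the unnested "data" arrays
SELECT p.name
FROM persons p, person_data d
WHERE p.id = d.person_id
  AND d.value = 3;
```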

MonetDB/JAQL currently implements JAQL Core; the core operators can be found in the JAQL documentation. It is important to note that MonetDB/JAQL differs from the original JAQL specification in many subtle ways.