Hello,

Is there (or is there a plan for) an efficient bulk-load mechanism via JDBC? (I'm currently running Oct2014 with JDBC driver 2.13.)

I looked at: https://www.monetdb.org/book/export/html/340

As far as I understand, the most efficient method currently available through JDBC is batches of inserts via prepared statements (with autocommit off).
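For reference, the batched-insert pattern I'm describing looks roughly like this (a minimal sketch; the class, table, and helper names are my own placeholders, not anything from the driver):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsert {
    // Build an INSERT with nCols '?' placeholders, e.g.
    // insertSql("t", 3) returns "INSERT INTO t VALUES (?, ?, ?)".
    static String insertSql(String table, int nCols) {
        StringBuilder sb = new StringBuilder("INSERT INTO ")
                .append(table).append(" VALUES (");
        for (int i = 0; i < nCols; i++) {
            if (i > 0) sb.append(", ");
            sb.append('?');
        }
        return sb.append(')').toString();
    }

    // Insert rows in batches of batchSize, committing once at the end,
    // with autocommit off for the duration of the load.
    static void bulkInsert(Connection conn, String table,
                           Object[][] rows, int batchSize) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps =
                 conn.prepareStatement(insertSql(table, rows[0].length))) {
            int pending = 0;
            for (Object[] row : rows) {
                for (int c = 0; c < row.length; c++) {
                    ps.setObject(c + 1, row[c]);  // JDBC parameters are 1-based
                }
                ps.addBatch();
                if (++pending == batchSize) {     // flush a full batch
                    ps.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) ps.executeBatch();   // flush the remainder
            conn.commit();
        }
    }
}
```

Even with large batch sizes, this still goes through per-row parameter binding and the statement protocol, which is where the time goes.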

Unfortunately, this does not come even close to being fast enough.

The speed of a COPY INTO is what I am looking for. However, this is not supported via JDBC. Is there a specific reason, or has it simply not been implemented?
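(To be precise, my understanding is that the server-side-file variant of COPY INTO is just SQL and can be issued through a plain JDBC Statement, provided the file is readable by the server process; it is the streaming FROM STDIN variant that has no JDBC counterpart. A sketch, with a hypothetical table name and server path:)

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ServerSideCopy {
    // Build a COPY INTO statement for a CSV file that lives on the
    // *server's* filesystem. Table name and path are placeholders.
    // Note: "\\n" in Java source yields the two characters \n, which
    // is what the SQL text needs as the record delimiter.
    static String copySql(String table, String serverPath) {
        return "COPY INTO " + table + " FROM '" + serverPath
             + "' USING DELIMITERS ',', '\\n'";
    }

    static void load(Connection conn, String table,
                     String serverPath) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute(copySql(table, serverPath));
        }
    }
}
```

This of course only helps when the client can drop files where the server can see them, which is not my situation.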

I do know about the workaround of performing the COPY INTO via the MAPI protocol (as in http://dev.monetdb.org/hg/MonetDB/file/tip/java/example/SQLcopyinto.java). That does indeed give good speed.

However, it is not transaction-safe. When I use this method and some SQL transaction happens to read (only read!) from the same tables at the same time, I often get data corruption: the COPY INTO appears to complete successfully (and the data does end up in place), but subsequent SQL queries on those tables fail with a "BATproject: does not match always" GDK error (inspecting with gdb, I see that the right side of a fetchjoin has count 0).

Any ideas?