Hi,
I am getting some strange exceptions using MonetDB JDBC 1.8/1.9, so I
decided to look into the code. While inspecting it, I came across
these puzzling lines in the MonetConnection.executeQuery(String[] templ,
String query) method
(http://dev.monetdb.org/hg/MonetDB/file/718da8ca0a1a/java/src/nl/cwi/monetdb…).
In my case, these lines are executed for sure when doing a batch
insert. So, suppose that the batch contains the following commands (as
constructed by the MonetStatement.executeBatch() method):
exec 1(1, 957339737330229055);
exec 1(2, 278262503670654331);
exec 1(805306369, 3763943296910752235)
The lines in question take the above commands as a single string, prepend
an 's' character, and append a ';' at the end. The resulting commands,
which are written to the server's socket, are the following:
sexec 1(1, 957339737330229055);
exec 1(2, 278262503670654331);
exec 1(805306369, 3763943296910752235);
First of all, I am not familiar with the internals of JDBC drivers.
With that in mind, is this how it should be? From a symmetry point of
view, I would have assumed the correct output to be the following:
sexec 1(1, 957339737330229055);
sexec 1(2, 278262503670654331);
sexec 1(805306369, 3763943296910752235);
That is, I would expect an 's' character to be prepended to each exec
command. Lastly, are these exec commands (with or without the prepended
's') specific to MonetDB? Either way, is there any documentation that
explains their meaning?
Thanks a lot,
Babis
Hi,
I get an error message when I retrieve a MAL plan over JDBC after the 250th
call to ResultSet.next(). The relevant part of the stack trace is:
java.lang.AssertionError: block 0 should have been fetched by now :(
at nl.cwi.monetdb.jdbc.MonetConnection$ResultSetResponse.getLine(MonetConnection.java:1585)
at nl.cwi.monetdb.jdbc.MonetResultSet.absolute(MonetResultSet.java:194)
at nl.cwi.monetdb.jdbc.MonetResultSet.relative(MonetResultSet.java:2149)
at nl.cwi.monetdb.jdbc.MonetResultSet.next(MonetResultSet.java:2116)
The call to EXPLAIN works in mclient and returns a MAL plan that is 402 lines
long. I can reproduce the error when I try to create the plan via jdbcclient,
except that it says that block 1 should have been fetched and the exporter
stops after line 160 or so.
However, other SQL queries that also produce more than 250 results work
over JDBC. I am not sure what the difference is, but those queries only
return columns of integers.
I'm using the Aug2011-SP3 release on OS X 10.6.8. Please let me know if I
should provide other information like the SQL query or the MAL plan.
Cheers,
Viktor
Hi,
the example at the top of the documentation of the JDBC client at
http://www.monetdb.org/Documentation/Cookbooks/SQLrecipies/Clients/JDBC
uses the -d switch to specify the table and the -b switch to specify the
database. However, the current version does not recognize the -b switch,
and the -d switch is used to select the database, as documented further
down the page.
Also, the page says that TRACE is not possible via JDBC. However, as
jdbcclient.jar demonstrates, it is indeed possible to use TRACE on a
query by checking the additional result sets.
Cheers,
Viktor
The MonetDB team at CWI/MonetDB BV is pleased to announce the
Aug2011-SP3 bugfix release of the MonetDB suite of programs.
More information about MonetDB can be found on our website at
http://www.monetdb.org/.
For details on this release, please see the release notes at
<http://www.monetdb.org/Downloads/ReleaseNotes>.
As usual, the download location is <http://dev.monetdb.org/downloads/>.
Note that this is the last release of the Aug2011 branch. From now on, no
further development will take place on this branch.
Aug2011-SP3 bugfix release

SQL
* Added a fix for bug #2834, which caused weird (failing) behaviour
  with PreparedStatements.

Merovingian
* Fixed a bug where monetdbd's socket files were removed from /tmp
  when a second monetdbd was started using the same port.

Bug Fixes
* 2875: incomplete database initialization
* 2912: Function body disappears after loading data - error message:
  SQLException:sql.resultSet:Cannot access descriptor
* 2919: Add column and select in same transaction
* 2920: Updated database from Aug2011 to Dec2011 makes queries fail
  with "Subquery result missing"
* 2921: complex where clause broken
* 2923: Multiplication and cast from decimal to integer leads to 0 value
* 2924: use LIKE with ESCAPE ignores OR operator
* 2928: Cannot update a field with a count query based on another table
Hi,
I'm evaluating MonetDB, and the following is based on reading the docs
and some of the monetdb-users email list archive.
I wonder if the following setup+workflow is correct, and if so, is it
monetdb best practice, given this use case:
We will have read-only tables, each distributed across several
machines/servers; they will be updated daily.
There will be no cross-table queries, i.e. only one table is touched by
each query.
We would like to be able to update each table while not affecting the
availability of any other table.
The table_db's are not horizontally sharded, i.e. all the data for a
query will always come from that one table.
We cannot use UDP multicast/broadcast, so a MonetDB cluster is not
possible (unless a localhost cluster is possible/sensible?).
The setup+workflow:
- one table per database (this allows for independent table updates);
call these <table_db>.
- update a master (writable) instance of the table on a 'special'
monetdbd/machine.
- copy the updated table to each machine.
- To update the table on a machine:
$> monetdb lock <mytable_db>
$> monetdb stop <mytable_db>
$> mclient -u monetdb -d <mytable_db> updatefile
$> cat updatefile
copy into MyTable from ('path_to_mytable_col_file_i',
'path_to_mytable_col_file_f', 'path_to_mytable_col_file_s');
$> monetdb release <mytable_db>
$> monetdb start <mytable_db>
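For what it's worth, a small Python sketch of that per-machine refresh step (the monetdb/mclient invocations mirror the commands above but are my assumptions, not a verified recipe; the function and its dry_run flag are just for illustration):

```python
import subprocess

def refresh_table_db(db, updatefile, dry_run=True):
    """Sketch of the per-machine update step: put the database in
    maintenance mode, stop it, load the new column files, then release
    and restart it.  The exact monetdb/mclient flags are assumptions
    based on the commands above, not a tested recipe."""
    steps = [
        ["monetdb", "lock", db],
        ["monetdb", "stop", db],
        ["mclient", "-u", "monetdb", "-d", db, updatefile],
        ["monetdb", "release", db],
        ["monetdb", "start", db],
    ]
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))      # only show what would run
        else:
            subprocess.check_call(cmd)
    return steps
```

With dry_run=True it only prints the command sequence, so it can be reviewed before wiring it into a daily job.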
Is the above the best pattern/architecture for MonetDB given such a use case?
I would appreciate any insights people can offer.
TIA
Hedge
--
πόλλ' οἶδ ἀλώπηξ, ἀλλ' ἐχῖνος ἓν μέγα
[The fox knows many things, but the hedgehog knows one big thing.]
Archilochus, Greek poet (c. 680 BC – c. 645 BC)
http://hedgehogshiatus.com
Hi there,
I'm a new user of both Tableau and MonetDB.
I run under Windows; my MonetDB database is up and running, and a table "matable" was created and populated.
I can connect Tableau to the MonetDB database (though I get some warnings), but as soon as Tableau sends a request to MonetDB, the ODBC driver returns the following:
"Database error (0x80004005): [MonetDB][ODBC Driver 11.5.7]Invalid identifier 'none:name:nk'"
As can be seen here: http://puu.sh/8STw
Have any of you already used Tableau and solved this issue?
Should I consider using Pentaho instead?
Thank you !
Regards
hi,
I tried to compile the latest MonetDB (11.5.3) on CentOS 6 x64, but the pcre library is not found during configure. I'm sure the pcre library is installed (I compiled it from source); libpcre.la, libpcre.a, and libpcre.so are all in /usr/local/lib. I also tried configuring MonetDB like this:
pcre_LIBS=/usr/local/lib pcre_CFLAGS=/usr/local/include ./configure
but it doesn't work. How can I fix this problem?
Any reply would be appreciated.
Hi,
I get an error message when executing the following query:
SELECT
count(*)
FROM
_rank AS _rank1,
_rank AS _rank2
WHERE
_rank1.pre = 23 AND
_rank2.pre = 33 AND
EXISTS (SELECT 1 FROM _rank AS ancestor WHERE
ancestor.pre < _rank1.pre AND _rank1.pre < ancestor.post AND
ancestor.pre < _rank2.pre AND _rank2.pre < ancestor.post);
The error message is:
TypeException:user.s0_1[99]:'algebra.join' undefined in: _129:any := algebra.join(_128:bat[:oid,:oid], _108:bat[:int,:oid])
SQLException:SQLengine:Program contains errors
A minimal example of test data to run this query follows:
CREATE TABLE _rank (pre INTEGER, post INTEGER);
INSERT INTO _rank VALUES (22, 37);
INSERT INTO _rank VALUES (23, 24);
INSERT INTO _rank VALUES (33, 34);
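For reference, running the same query over this test data in SQLite (Python sketch below, just as a cross-check, not MonetDB) returns a count of 1, which is what I would expect: the single ancestor row (22, 37) encloses both pre values.

```python
import sqlite3

# The query and test data from above, run against SQLite to document
# the expected result.  This is only a cross-check of the semantics.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE _rank (pre INTEGER, post INTEGER)")
conn.executemany("INSERT INTO _rank VALUES (?, ?)",
                 [(22, 37), (23, 24), (33, 34)])

(count,) = conn.execute("""
    SELECT count(*)
    FROM _rank AS _rank1, _rank AS _rank2
    WHERE _rank1.pre = 23 AND _rank2.pre = 33
      AND EXISTS (SELECT 1 FROM _rank AS ancestor
                  WHERE ancestor.pre < _rank1.pre AND _rank1.pre < ancestor.post
                    AND ancestor.pre < _rank2.pre AND _rank2.pre < ancestor.post)
""").fetchone()
```

So the TypeException seems to come from MonetDB's plan generation rather than from the query itself.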
Cheers,
Viktor
Hi all,
I've managed to get MonetDB running and loaded with a chunk of data
(thanks to some great help from the list) and am now trying to figure
out the "right" way to make use of it. I don't have a whole lot of
exposure to columnar stores, and am therefore not sure if I'm doing
things in an idiomatically correct way. I have a couple of general
questions about columnar DBs, and hope it's appropriate to ask them here
even though they aren't specific to MonetDB. If these questions aren't
considered appropriate for this list, I apologize in advance and will
happily move them to a different forum (but would appreciate pointers to
where such a forum might be).
Two questions are general, and one is fairly specific. First the general:
1. Does working with a columnar store fundamentally change the way one
would structure the tables? I'm somewhat familiar with star-schema
designs where a central fact table holds the entire attribute set in
as rows of indexes with supporting lookup tables for each column type,
but it seems like columnar stores achieve the same thing without
actually requiring the developer to specifically build those
structures. Tables are constructed in a de-normalized RDBMS manner
and the underlying DB takes care of the vectorization. Is this a
valid description, or am I fundamentally misunderstanding the basics?
2. If there are significant differences in the way columnar stores
should be designed, can folks point me to any resources (web, book,
article, etc.) to help educate me on the topic?
Now the specific:
I have several sets of time-series data modeled as follows:
create table sensordata (
sensor varchar(32),
epoch_ts bigint,
reading double,
CONSTRAINT SENSORDATA_PKEY PRIMARY KEY (sensor, epoch_ts)
);
I've populated that data, and now want to query data for a subset of
sensors and generate a table that shows the value of the sensors at
each point in time. I'm not sure what the right way to do this is.
My desired output would be:
epoch_ts sensor1 sensor2 delta
-------------- ------------ ------------ ------
<time1> 23.5 24.1 0.6
<time2> 19.3 25.2 5.9
...
<timeN> 25.1 19.3 -5.8
I tried to build that query as follows but end up with NULL results
(note: I'm in no way a SQL pro, so it might be really ugly):
select a.epoch_ts as epoch,
a.reading as sensor1_reading,
b.reading as sensor2_reading,
b.reading - a.reading as delta
from
(select epoch_ts, reading from sensordata where sensor='sensor1') a,
(select epoch_ts, reading from sensordata where sensor='sensor2') b
where b.epoch_ts = a.epoch_ts
order by epoch asc;
Is this a sane way to approach the problem, or am I completely off
track here? Each subselect returns results, but they obviously aren't
joining correctly, and it feels like this might simply be the wrong way
to build the result set.
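Another formulation I have been considering is a conditional-aggregation pivot, which avoids the self-join entirely. Here is a sketch (standard SQL, demonstrated via Python's sqlite3 so it is self-contained and uses made-up sample readings; I have not verified it against MonetDB):

```python
import sqlite3

# Sketch: pivot (sensor, epoch_ts, reading) rows into one row per
# timestamp using conditional aggregation instead of a self-join.
# Demonstrated with sqlite3 and invented sample values; the SQL itself
# is standard and should carry over, but that is unverified.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sensordata (
    sensor TEXT, epoch_ts INTEGER, reading REAL,
    PRIMARY KEY (sensor, epoch_ts))""")
conn.executemany(
    "INSERT INTO sensordata VALUES (?, ?, ?)",
    [("sensor1", 1, 23.5), ("sensor2", 1, 24.1),
     ("sensor1", 2, 19.3), ("sensor2", 2, 25.2)])

rows = conn.execute("""
    SELECT epoch_ts,
           MAX(CASE WHEN sensor = 'sensor1' THEN reading END) AS sensor1,
           MAX(CASE WHEN sensor = 'sensor2' THEN reading END) AS sensor2,
           MAX(CASE WHEN sensor = 'sensor2' THEN reading END)
         - MAX(CASE WHEN sensor = 'sensor1' THEN reading END) AS delta
    FROM sensordata
    GROUP BY epoch_ts
    ORDER BY epoch_ts""").fetchall()
```

The CASE expression picks out one sensor's reading per row and MAX collapses each timestamp's group to a single row, which matches the desired output shape above.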
I appreciate any direction you might provide.
Thanks!