Hi there,
We have an issue with remote tables on the MonetDB 2016 releases (SP1
and SP2), on Ubuntu 14.04 and on Windows Server 2012 R2.
The scenario to reproduce the issue uses two nodes.
On node1:
Create a table and fill it:
CREATE TABLE test(id int not null, name text, valid boolean);
INSERT INTO test (id, name) VALUES (1, '1');
INSERT INTO test (id, name) VALUES (2, '2');
INSERT INTO test (id) VALUES (3);
On node2, create a matching remote table:
CREATE REMOTE TABLE test(id int not null, name text, valid boolean) ON
'mapi:monetdb://node1:50000/dbfarm';
Then on node2:
select * from test;
+------+------+-------+
| id   | name | valid |
+======+======+=======+
|    1 | 1    | null  |
|    2 | 2    | null  |
|    3 | null | null  |
+------+------+-------+
It works fine, but:
select * from test where name is null;
+------+------+-------+
| id   | name | valid |
+======+======+=======+
+------+------+-------+
The row with id 3 should appear here. Furthermore:
select * from test where name is not null;
(mapi:monetdb://monetdb@192.168.254.31/reports2) Cannot register
project (
    select (
        table(sys.test) [ test.id NOT NULL, test.name, test.valid ] COUNT
    ) [ clob "NULL" ! <= test.name ! <= clob "NULL" ]
) [ test.id NOT NULL, test.name, test.valid ] REMOTE mapi:monetdb://.../...
select * from test where valid is null;
illegal input, not a JSON header (got '')
and node1 crashes (we then need to run 'monetdb start farm' to bring it back).
After downgrading to the 2015 release (SP4) on Ubuntu 14.04, this
scenario works fine.
Thanks,
SG
Hi all,
Do you have any idea of the query throughput of a MonetDB cluster? Is
it slow?
I created a MonetDB cluster using remote tables; when I run a query, it's
really slow, about 3 seconds per query.
Do you know how to improve this? Or can MonetDB actually do better
than this? And how?
Thank you
Hi all,
Do you know how to fix this source compilation error? The error is as follows:
fatal error: geos_c.h: No such file or directory
 #include <geos_c.h>
          ^
compilation terminated.
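In case it helps: geos_c.h is the C API header of the GEOS library, so the usual fix is to install the GEOS development package, or to build without the geom module. The package and flag names below are assumptions for Debian/Ubuntu and a standard autoconf-style build; check your distro and ./configure --help:

```shell
# geos_c.h comes from the GEOS C API; install its development headers.
# Debian/Ubuntu package name; other distros differ (e.g. geos-devel).
sudo apt-get install libgeos-dev

# Or, if the geom module is not needed, configure MonetDB without it
# (autoconf-style flag; confirm with ./configure --help):
./configure --disable-geom
```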
Thanks,
Hi,
does MonetDB support any efficient/optimized functionality to compute running aggregates?
A quick check/test suggests that aggregation functions with window functions like the one below
do not seem to be supported. Is that right?
========
SELECT somedate, somevalue,
SUM(somevalue) OVER(ORDER BY somedate
ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
AS RunningTotal
FROM Table
========
Is there anything else that would provide running aggregates?
Or am I (for now?) bound to calculating running aggregates
using the obvious theta-self-join (expected to be "non-optimal"
due to redundant work and large (huge) intermediate results),
or "hijacking"/"mis-using" the bulk-version of a (to be implemented)
scalar function?
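For concreteness, the theta-self-join workaround mentioned above might look like this (reusing the names from the window-function example, and assuming somedate is unique per row):

```sql
-- Running total via a theta self-join: for each row, sum the values of
-- all rows with an earlier-or-equal date. Correct, but the join
-- materializes a quadratic intermediate result.
SELECT t1.somedate, t1.somevalue,
       SUM(t2.somevalue) AS RunningTotal
FROM "Table" t1
JOIN "Table" t2 ON t2.somedate <= t1.somedate
GROUP BY t1.somedate, t1.somevalue
ORDER BY t1.somedate;
```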
Thanks!
Stefan
--
| Stefan.Manegold(a)CWI.nl | DB Architectures (DA) |
| www.CWI.nl/~manegold/ | Science Park 123 (L321) |
| +31 (0)20 592-4212 | 1098 XG Amsterdam (NL) |
Hi, I upgraded MonetDB and am hitting this error with code that previously
worked. My (giant) query has lots of CAST and IS NULL expressions in it.
I'm trying to narrow it down now, but does anyone have ideas about what
might have introduced this? It looks like something is calling IS NULL where
it shouldn't be allowed to? Thanks
Hi,
I'm confused about the single-user mode (working with latest Dec2016 from
hg).
*Question/suggestion 1*
'mserver5 --single-user' can also be set with 'monetdb set singleuser=yes'.
However, 'monetdb get all' doesn't show 'singleuser' as one of the options;
I had to guess it myself.
It would be nice if monetdb could show an actual list of available options,
instead of a manually compiled one.
*Question 2*
Is 'mserver5 --single-user' expected to put the database in a locked state,
equivalent to 'monetdb lock'?
If the answer is yes, then it doesn't do it consistently:
- it does allow the use of COPY INTO ... LOCKED (from which you would think the
database is locked)
but:
- it doesn't show the status of the database as locked
- it still allows new client connections
Could you clarify how 'single-user' and 'locked' relate?
*Question 3* (long-ish)
I am pushing, via JDBC, the entire English DBpedia via COPY INTO (without
'LOCKED') into a dbfarm on a local SSD.
It is about 200 GB of raw data and takes ~9 hours.
The data goes through a pipeline of light pre-processing (feeding thread) and
is then inserted into the COPY INTO stream (output thread).
What I observe is that, at the beginning, the feeding thread is mostly
active and the output thread very little.
After a couple of hours, the output thread is mostly active and the
feeding thread mostly idle.
My guess was that MonetDB takes more and more time to flush appends to
disk.
That's where I thought using 'COPY INTO .. LOCKED' would help.
Not only did it not help, it made the symptoms worse. The output thread
is almost 100% active, and the feeding thread almost 100% idle.
Indeed, it has already been running for 12 hours and is still going.
Was I wrong to expect that 'LOCKED' would help?
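Unrelated to the LOCKED question, but in case it's useful: giving COPY INTO an upper bound on the record count up front lets MonetDB preallocate space, which can speed up large loads. A sketch with made-up table/file names and delimiters:

```sql
-- Hypothetical names; 250000000 is an upper bound on the row count,
-- declared so MonetDB can preallocate storage for the load.
COPY 250000000 RECORDS INTO triples
FROM '/data/dbpedia.csv'
USING DELIMITERS ',', '\n', '"';
```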
Thanks,
Roberto
All,
Is there a way to suppress the output of status messages like "3000 affected rows" when using mclient and "-f csv"? I have an sql script where I keep reducing a table and inserting the rows into another table. The final sql command extracts the final result. Would be nice if the status messages could be suppressed completely or sent to stderr (a --quiet option). I could also just run two scripts, or pipe the output through grep to remove the offending lines.
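For what it's worth, the grep workaround could look like the following; mclient's output is simulated with printf here, and the database/script names in the comment are placeholders:

```shell
# Real invocation would be roughly:
#   mclient -d mydb -f csv script.sql | grep -v 'affected rows$'
# Simulated with printf standing in for mclient's output:
printf '3000 affected rows\na,1\nb,2\n' | grep -v 'affected rows$'
```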
Thanks,
Dave
All,
Is there a preferred method to move an entire table into the database hot-set? I guess I can have a query which computes across all of the columns in the table, but wondered if there was something better/cleaner.
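Lacking a dedicated preload command (as far as I know), a query that is forced to read every column should do it; a sketch for a hypothetical table t(a, b, c):

```sql
-- COUNT(col) has to inspect every value (to skip NULLs), so each
-- column of t is scanned end to end and pulled into memory.
SELECT COUNT(a), COUNT(b), COUNT(c) FROM t;
```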
Thanks,
Dave
I have implemented a custom join as a filter function.
By design, it cannot work correctly when the input tables are partitioned
by mitosis.
How can I prevent mitosis from partitioning its inputs?
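One option (assuming the stock optimizer pipelines are present in your build) is to run the session on a pipeline that omits mitosis:

```sql
-- Built-in pipeline without the mitosis optimizer; the list of
-- available pipelines can be inspected via sys.optimizers.
SET optimizer = 'no_mitosis_pipe';
```

Note that this disables partitioning for the whole session, not just for the one filter function.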
Roberto
Hi all,
We recently upgraded to the Dec2016 release but had to downgrade
immediately after experiencing a severe degradation in performance. I have
attached 4 graphs of our performance monitoring (of the last 3 days). It
should be fairly obvious when we upgraded and when we downgraded again.
We're still looking into the root cause but it appears that it has
something to do with memory leaks when using aggregations on varchar columns
with a high cardinality.
Has anyone else experienced the same performance degradation? Is this a
known (regression) bug?
Best regards,
Dennis Pallett