Hi There,
I have a problem with the row_number() over() function.
Consider the following table:
create table "table1" ("customer" varchar(40), "product" varchar(40),
"price" double)
insert into "table1" values ('cust1', 'p1', 100)
insert into "table1" values ('cust1', 'p2', 200)
insert into "table1" values ('cust1', 'p3', 150)
insert into "table1" values ('cust2', 'p1', 300)
insert into "table1" values ('cust2', 'p3', 200)
The following query over this table:
SELECT "customer",
"product",
"sumprice",
(Row_number() OVER(PARTITION BY "customer" ORDER BY "sumprice")) as
"rank"
FROM ( SELECT "customer",
"product",
(Sum("price")) AS "sumprice"
FROM "table1"
GROUP BY "customer",
"product") AS "temp"
Returns:
Customer product sumprice rank
Cust1 p1 100 1
Cust1 p2 200 2
Cust1 p3 150 3
Cust2 p1 300 1
Cust2 p3 200 2
But isn't it supposed to return the following result set?
Customer product sumprice rank
Cust1 p1 100 1
Cust1 p3 150 2
Cust1 p2 200 3
Cust2 p3 200 1
Cust2 p1 300 2
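To check what the standard window-function semantics require, the same query can be reproduced in an in-memory SQLite database (SQLite >= 3.25 also implements ROW_NUMBER(); it stands in for MonetDB here, so this is a sketch of the expected behaviour, not a MonetDB test). Schema and data follow the example above:

```python
import sqlite3

# In-memory SQLite stand-in; table and query mirror the original post.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (customer VARCHAR(40), product VARCHAR(40), price DOUBLE);
    INSERT INTO table1 VALUES
        ('cust1', 'p1', 100), ('cust1', 'p2', 200), ('cust1', 'p3', 150),
        ('cust2', 'p1', 300), ('cust2', 'p3', 200);
""")
rows = con.execute("""
    SELECT customer, product, sumprice,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY sumprice) AS "rank"
    FROM (SELECT customer, product, SUM(price) AS sumprice
          FROM table1 GROUP BY customer, product) AS temp
    ORDER BY customer, "rank"
""").fetchall()
for row in rows:
    print(row)
```

Within each customer partition the rows are numbered in ascending sumprice order, which is what the ORDER BY inside the OVER() clause demands.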
Thanks
--
Leonard Forgge
--
View this message in context: http://www.nabble.com/Possible-wrong-results-when-Querying-tp22821802p22821…
Sent from the monetdb-users mailing list archive at Nabble.com.
Hi,
This is on CentOS-5/x86_64 running:
[kbsingh@koala data]$ rpm -qa | grep -i Monet
MonetDB-1.22.0-1.el5.kb.oid64
MonetDB5-server-5.4.0-1.el5.kb.oid64
MonetDB-SQL-2.22.0-1.el5.kb.oid64
MonetDB-client-1.22.0-1.el5.kb.oid64
MonetDB-SQL-server5-2.22.0-1.el5.kb.oid64
server started with:
[root@koala ~]# mserver5 --dbinit="include sql;"
on the client:
[kbsingh@koala data]$ echo "\d sample" | mclient -lsql -ukb -Pkb
CREATE TABLE "kb"."sample" (
"id" char(33),
"sdate" char(20),
"key" varchar(255),
"country" char(2),
"region" char(2),
"city" varchar(50),
"pid" int
);
[kbsingh@koala data]$ echo "select 'hello';" | mclient -lsql -ukb -Pkb
% . # table_name
% single_value # name
% char # type
% 5 # length
[ "hello" ]
[kbsingh@koala data]$ echo "copy 1000 records into sample from
'/home/kbsingh/data/data/1000.utf8' using delimiters '\t','\n','';" |
mclient -lsql -ukb -Pkb
[ 1000 ]
also:
[kbsingh@koala data]$ mclient -lsql -ukb -Pkb -s "copy 1000 records into
sample from '/home/kbsingh/data/data/1000.utf8' using delimiters
'\t','\n','';"
[ 1000 ]
so, works so far. However:
[kbsingh@koala data]$ cat 1000.utf8 | mclient -lsql -ukb -Pkb -s "copy
1000 records into sample from STDIN using delimiters '\t','\n','';"
MAPI = kb@localhost:50000
QUERY = copy 1000 records into sample from STDIN using delimiters
'\t','\n','';
ERROR = !SQLException:sql:missing sep line 0 field 0
!SQLException:importTable:failed to import table
given that mclient is on the right side of the pipe, the data is surely
being made available on stdin, but why does mclient fail like this?
also, adding an OFFSET makes mclient just die quietly, like this:
[kbsingh@koala data]$ cat 1000.utf8 | mclient -lsql -ukb -Pkb -s "copy 5
offset 1000 records into sample from STDIN using delimiters '\t','\n','';"
[kbsingh@koala data]$
so, what am I getting so wrong here? the load chain I need to set up
will look like this:
zcat andy1.gz | iconv -f latin1 -t utf8 | mclient ......
that compressed file is 62 GB; the chances of me being able to expand
it and run the load from a specific filename are zero.
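One possible workaround (an assumption, not tested against mclient): instead of reading from STDIN, point COPY INTO at a named pipe and let the decompression pipeline write into it, so the server-side "read from a filename" path is used without materializing the 62 GB file. The snippet below demonstrates only the FIFO mechanics, with `cat` standing in for the mclient reader:

```python
import os
import subprocess
import tempfile
import threading

# Create a FIFO; in the real chain this path would be handed to
# "copy ... into sample from '<fifo>' ..." while zcat|iconv writes to it.
fifo = os.path.join(tempfile.mkdtemp(), "load.fifo")
os.mkfifo(fifo)

def feed():
    # Opening for write blocks until a reader opens the FIFO.
    with open(fifo, "w") as f:
        f.write("row1\tvalue\n")

t = threading.Thread(target=feed)
t.start()
# `cat` stands in for mclient reading the FIFO as an ordinary filename.
out = subprocess.run(["cat", fifo], capture_output=True, text=True).stdout
t.join()
print(out)
```

With the real chain, something like `zcat andy1.gz | iconv -f latin1 -t utf8 > /path/to/load.fifo &` would feed the FIFO while mclient runs the COPY against that path; whether MonetDB's COPY INTO accepts a FIFO as its source file is an assumption worth verifying.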
--
Karanbir Singh : http://www.karan.org/ : 2522219@icq
Hi,
I have been compiling MonetDB successfully under Windows so far.
This time I compiled as I have done before, but with "optimize=yes".
Now when I start the server, I get this message:
!FATAL: BBPinit: BBP.dir is empty
What does this mean? Am I doing something wrong?
Thanks.
--
View this message in context: http://www.nabble.com/Compiling-with-optimizer-tp23022199p23022199.html
Hi,
I'm trying to run the TPC-H benchmark on MonetDB. I downloaded the package
from the TPC-H site, but it needs some configuration before the DBGEN and
QGEN tools will build. I was wondering if anyone can advise me on what
needs to be changed in the makefile and tpcd.h, and what is needed for
64-bit support. I'm running this on a Dell Intel 32-bit machine.
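For what it's worth, the usual first step with the dbgen kit is to copy makefile.suite to makefile and set a handful of variables. The names below are from the standard dbgen distribution; this is a sketch, so verify them against the version you downloaded:

```makefile
# Copied from makefile.suite; typical settings for a Linux/gcc build.
CC       = gcc
# DATABASE selects the query-template dialect qgen emits (options are in tpcd.h)
DATABASE = DB2
MACHINE  = LINUX
WORKLOAD = TPCH
```

Some dbgen versions also take an extra define (e.g. SUPPORT_64BITS) in CFLAGS for 64-bit row counts; check config.h in your copy, as the exact flag varies between releases.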
Thanks.
Mayank
hi,
I suddenly started having the following problem with the M5 server. I can
start the server and get the following screen, which seems fine:
# MonetDB server v5.10.2, based on kernel v1.28.2
# Serving database 'demo', using 1 thread
# Compiled for i686-pc-win32/32bit with 32bit OIDs dynamically linked
# Copyright (c) 1993-July 2008 CWI.
# Copyright (c) August 2008- MonetDB B.V., all rights reserved
# Visit http://monetdb.cwi.nl/ for further information
#warning: please don't forget to set your vault key!
#(see C:\Program Files\MonetDB\MonetDB5\etc\monetdb5.conf)
# Listening for connection requests on mapi:monetdb://127.0.0.1:50000/
# MonetDB/SQL module v2.28.2 loaded
When I start the client I get the following message and the server shuts down:
Connection terminated
Press any key to continue . . .
I have tried reinstalling MonetDB but it does not help. Can anyone point me
in the right direction?
Thanks
--
View this message in context: http://www.nabble.com/connection-terminated-tp23260811p23260811.html
Hi all,
My monetdb server (MonetDB server v5.10.2, based on kernel v1.28.2) crashes
on me in a reproducible fashion. I'm generally (but not always) accessing
it via JDBC. I have 100 tables, each containing just less than 4 million
records.
Crash 1:
- I'm testing retrieval speeds. When I hit table 17 or 18 the server
crashes.
Crash 2:
- I'm testing insertion speeds. Again, when I hit table 17 or 18 the server
crashes. I'm only inserting about 1000 records per table. I've tried it
via "COPY INTO" and via batch INSERT statements.
Crash 3:
- Using the mclient.exe application I manually selected 5000 rows from each
table. When I hit the 17th table the server crashed again. I accessed the
tables in a different order from the programmatic runs above, to rule out
the crash being tied to a particular table; this crash also rules out a
JDBC-related issue.
So, I'm wondering if this may be a known windows issue, or perhaps a Vista
issue? Could it be due to some server internal limit being hit as it loads
and caches tables? Are there any server logs I can examine?
Any ideas?
Thanks,
Scott
--
View this message in context: http://www.nabble.com/Vista-M5Server-Crashes-tp23218455p23218455.html
Hello (again!),
I have MonetDB running on a 32-bit Windows Vista box. When I start the
server it says it's "MonetDB server v5.10.2, based on kernel v1.28.2". I'm
using JDBC batch updates to populate about 100 tables with approximately 4
million records per table. I've a couple of issues:
1) My batch updates fail from time to time. On average this might be every
10,000 batch updates, with each batch containing 100 insertions. I get an
exception with the following message:
"Error(s) occurred while executing the batch, see next SQLExceptions for
details"
If I close the connection then open another one I can continue quite
happily. If I try to reuse the existing connection with a cleared batch I
get the same error every time until I do close the connection.
I think I may not be the only person with this issue:
http://wiki.pentaho.com/display/EAI/MonetDB
2) After a very large number of insertions (more like 100+ million records,
across numerous tables) the insertion simply hangs (a commit usually takes
about a second and I've left it for 30+ minutes). After killing the client the
server becomes impossible to connect to. Even after a reboot I get the
following messages:
SERVER:
>!ERROR: Incompatible database version 000000, this server supports version
050000
!ERROR: Please move away c:\data\monetdbdata\MonetDB5\sql_logs\demo\sql\ and
its corresponding dbfarm.
!mvc_init: unable to create system tables
CLIENT:
!SQLException:SQLinit:Catalogue initialization failed
JDBC CLIENT:
Connection reset (Mserver still alive?)
If, however, I delete/move the sql_logs directory and restart, I can connect
and my data is still there. The last time I did this, one of the log files
(log.97) was ~250 MB in size.
Any help, advice or comments would be much appreciated. I'm going to spend
this afternoon trying out the bulk COPY INTO functionality.
Thanks,
Scott
--
View this message in context: http://www.nabble.com/Windows-JDBC-Issues-tp23196456p23196456.html
----- "Martin Kersten" <Martin.Kersten(a)cwi.nl> wrote:
> Garrett Smith wrote:
>> Does anyone have some guidance on approaches for optimizing database
>> performance?
>>
>> I realize this is a pretty broad question :)
>>
>> More specifically, I'm running a simple query
>>
>> select max(date) from metric
>>
>> on a table with 21 million rows (or whatever you call them in
>> Monet).
>> It's taking quite a bit longer than I'd expect, given a typically
>> indexed column.
> Given the fact that there is no (b-tree) index, the underlying
> engine has to perform a scan to find the maximum.
> This entails a linear scan over 21M elements in the current
> implementation.
Thanks! After going back and reading the docs on MonetDB, I realized
my question was pretty naive. The SQL interface triggered a Pavlovian
response to think in terms of a relational database with b-tree indexes.
It's pretty amazing how fast that linear scan is. Still, my application
calls for something else. I took a look at FastBit, which is insanely
fast at selects over huge data sets, but requires a relatively
expensive index update to get that performance. As I have routine
updates to deal with, that doesn't seem a good fit.
We'll probably stick with MySQL+MyISAM for this particular component.
Garrett