Hi,
I've installed monetdb on *Ubuntu 9.04 Server* with apt-get. These are
the contents of the PHP client package:
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/php5-monetdb-client
/usr/share/doc/php5-monetdb-client/changelog.Debian.gz
/usr/share/doc/php5-monetdb-client/copyright
I need the modules; where are they? Thanks. Dariusz.
Hi There,
I have a problem with the row_number() over() function.
Consider the following table:
create table "table1" ("customer" varchar(40), "product" varchar(40),
"price" double)
insert into "table1" values ('cust1', 'p1', 100)
insert into "table1" values ('cust1', 'p2', 200)
insert into "table1" values ('cust1', 'p3', 150)
insert into "table1" values ('cust2', 'p1', 300)
insert into "table1" values ('cust2', 'p3', 200)
The following query over this table:
SELECT "customer",
"product",
"sumprice",
(Row_number() OVER(PARTITION BY "customer" ORDER BY "sumprice")) as
"rank"
FROM ( SELECT "customer",
"product",
(Sum("price")) AS "sumprice"
FROM "table1"
GROUP BY "customer",
"product") AS "temp"
Returns:
Customer product sumprice rank
Cust1 p1 100 1
Cust1 p2 200 2
Cust1 p3 150 3
Cust2 p1 300 1
Cust2 p3 200 2
But isn't it supposed to return the following result set?
Customer product sumprice rank
Cust1 p1 100 1
Cust1 p3 150 2
Cust1 p2 200 3
Cust2 p3 200 1
Cust2 p1 300 2
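For reference, the expected ordering is what any engine with a correct ROW_NUMBER() implementation should produce; here is a small reproduction of the post's schema and query, sketched with SQLite via Python purely for illustration:

```python
import sqlite3

# Toy reproduction of the schema and data from the post.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (customer VARCHAR(40), product VARCHAR(40), price DOUBLE);
    INSERT INTO table1 VALUES
        ('cust1', 'p1', 100), ('cust1', 'p2', 200), ('cust1', 'p3', 150),
        ('cust2', 'p1', 300), ('cust2', 'p3', 200);
""")

# ROW_NUMBER() should number rows per customer in ascending sumprice order.
rows = con.execute("""
    SELECT customer, product, sumprice,
           ROW_NUMBER() OVER (PARTITION BY customer ORDER BY sumprice) AS rn
    FROM (SELECT customer, product, SUM(price) AS sumprice
          FROM table1
          GROUP BY customer, product) AS temp
    ORDER BY customer, rn
""").fetchall()
for row in rows:
    print(row)
```

With a correct implementation, p3 (150) ranks before p2 (200) for cust1, matching the expected result set above.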
Thanks
--
Leonard Forgge
--
View this message in context: http://www.nabble.com/Possible-wrong-results-when-Querying-tp22821802p22821…
Sent from the monetdb-users mailing list archive at Nabble.com.
Hi,
I have compiled the latest August build from CVS on Windows.
When I run mclient, I get this:
server requires 'SHA512' hash, but client support was not compiled in
I saw another thread regarding this before, but couldn't resolve the issue
based on it.
Please let me know what I might be doing wrong.
Here is my server output:
# MonetDB server v5.14.0, based on kernel v1.32.0
# Serving database 'demo', using 1 thread
# Compiled for i686-pc-win32/32bit with 32bit OIDs dynamically linked
# Copyright (c) 1993-July 2008 CWI.
# Copyright (c) August 2008-2009 MonetDB B.V., all rights reserved
# Visit http://monetdb.cwi.nl/ for further information
#warning: please don't forget to set your vault key!
#(see
C:\MonetCVS_base_Aug09\MonetCVS32_cleanBuild\Install\etc\monetdb5.conf)
# Listening for connection requests on mapi:monetdb://127.0.0.1:50000/
>
Thanks.
--
View this message in context: http://www.nabble.com/OpenSSL-and-SHA512-tp25223356p25223356.html
Sent from the monetdb-users mailing list archive at Nabble.com.
Hi,
I'm trying to create a benchmark of MonetDB vs. Postgres, and for many types of
queries MonetDB is quite a bit faster than Postgres. But there is an interesting
result when I try a query which contains an 'in' clause. The response times are
MonetDB - 58 sec
Postgres - 3 sec
for the query :
select RFOADV_10.rfoadvsup as c0, sum((case when cabact___rfountide = 'NB_E'
then cabactqte else 0 end)) as m0 from RFOADV as RFOADV_10, CABACT as CABACT
where (cabact___rforefide = 'FHSJ' and
cabact___rteprcide = 'CPTANA' and
cabact___rtestdide = '100' and
cabact___rfovsnide = '200805_001') and CABACT.cabact_c2rfodstide
= RFOADV_10.rfoadvinf and RFOADV_10.rfoadvsup in ('5030', '5031', '5032',
'5033', '5034', '5035', '5036', '5037', '5038') group by RFOADV_10.rfoadvsup
I rewrote the query so that it gives the same result without 'in':
select RFOADV_10.rfoadvsup as c0, sum((case when cabact___rfountide = 'NB_E'
then cabactqte else 0 end)) as m0 from RFOADV as RFOADV_10
join CABACT on (CABACT.cabact_c2rfodstide = RFOADV_10.rfoadvinf)
join (select '5031' as pole
UNION
select '5032' as pole
UNION
select '5033' as pole
UNION
select '5034' as pole
UNION
select '5035' as pole
UNION
select '5036' as pole
UNION
select '5037' as pole
UNION
select '5038' as pole
) sub ON (RFOADV_10.rfoadvsup = sub.pole)
where (cabact___rforefide = 'FHSJ' and
cabact___rteprcide = 'CPTANA' and
cabact___rtestdide = '100' and
cabact___rfovsnide = '200805_001')
group by RFOADV_10.rfoadvsup
the response time for MonetDB drops by a factor of 100 (0.6 sec).
Do you have any idea why?
I run both MonetDB and Postgres on a server with 10 GB RAM, and the tables have
CABACT : 677,000 rows
RFOADV : 140,000 rows.
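The equivalence of the two query forms is easy to check on a toy dataset. Below is a sketch with SQLite via Python (invented table and values, purely to illustrate that the IN filter and the UNION-of-constants join return identical results):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t (sup VARCHAR(10), qty INTEGER);
    INSERT INTO t VALUES ('5030', 1), ('5031', 2), ('5099', 5), ('5031', 3);
""")

# Variant 1: filter with IN.
q_in = con.execute("""
    SELECT sup, SUM(qty) FROM t
    WHERE sup IN ('5030', '5031')
    GROUP BY sup ORDER BY sup
""").fetchall()

# Variant 2: the same filter expressed as a join against a derived table,
# mirroring the UNION-of-constants rewrite in the post.
q_join = con.execute("""
    SELECT t.sup, SUM(t.qty) FROM t
    JOIN (SELECT '5030' AS pole UNION SELECT '5031' AS pole) AS sub
      ON t.sup = sub.pole
    GROUP BY t.sup ORDER BY t.sup
""").fetchall()

print(q_in == q_join)
```

Since both forms are semantically identical, the 100x gap points at how the optimizer plans the IN predicate, not at the result being computed differently.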
Thanks,
Mehmet
--
Open WebMail Project (http://openwebmail.org)
I noticed that COPY...INTO for bulk loading requires that all fields be
present in the data file. Is there any way to use bulk loading and have it
honor the auto increment column or at least not have to specify every column
in the data file?
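One common workaround when a bulk loader insists on every field being present is to load into a staging table that mirrors only the data-file columns, then INSERT ... SELECT into the real table so the database fills in the auto-increment column. The sketch below uses SQLite via Python with made-up table names; it is not MonetDB COPY INTO syntax, just the staging-table pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name VARCHAR(40),
        val INTEGER
    );
    CREATE TABLE staging (name VARCHAR(40), val INTEGER);
""")

# Stand-in for the bulk load: pipe-separated lines go into the staging table.
for line in ["alpha|10", "beta|20"]:
    name, val = line.split("|")
    con.execute("INSERT INTO staging VALUES (?, ?)", (name, int(val)))

# Copy across with an explicit column list; id is generated by the database.
con.execute("INSERT INTO target (name, val) SELECT name, val FROM staging")
rows = con.execute("SELECT * FROM target ORDER BY id").fetchall()
```

The INSERT ... SELECT step names only the data columns, so the auto-increment column is honored without the data file having to carry it.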
73,
Matthew W. Jones (KI4ZIB)
http://matburt.net
Hi,
I am trying to create an overloaded iif function, so I created these custom
functions:
1. create function iifnew(tv timestamp, fv timestamp, cond boolean) returns
timestamp begin if cond then return tv; else return fv; end if; end
2. create function iifnew(tv int, fv int, cond boolean) returns int begin if
cond then return tv; else return fv; end if; end
3. create function iifnew(tv string, fv string, cond boolean) returns string
begin if cond then return tv; else return fv; end if; end
But now, if I do something like this:
update table1 set "Column1" = iifnew('hello1', 'hello2', ("Column2" =
"Column3"))
I get this:
!SQLException:int:conversion of string 'hello1' failed
Is this a bug in function overloading?
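Whatever the overload-resolution behavior turns out to be, a plain CASE expression sidesteps it entirely, since no user-defined function is involved. A sketch with SQLite via Python, reusing the column names from the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (Column1 VARCHAR(40), Column2 INTEGER, Column3 INTEGER);
    INSERT INTO table1 VALUES (NULL, 1, 1), (NULL, 1, 2);
""")

# CASE picks the value inline, so no function overload resolution happens.
con.execute("""
    UPDATE table1
    SET Column1 = CASE WHEN Column2 = Column3 THEN 'hello1' ELSE 'hello2' END
""")
results = [r[0] for r in con.execute(
    "SELECT Column1 FROM table1 ORDER BY Column2, Column3")]
```

Rows where Column2 = Column3 get 'hello1', the rest 'hello2', which is exactly what the iifnew call was meant to do.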
I am running Feb2009 from CVS on Windows.
Thanks.
--
View this message in context: http://www.nabble.com/Function-overloading-tp25210862p25210862.html
Sent from the monetdb-users mailing list archive at Nabble.com.
The following query:
tijah:queryall("//p[about(., 'drug treatment')]")
returns a number of results from my sample document. Some of these
results contain the phrase "drug misuse". The following query:
tijah:queryall("//p[about(., 'drug misuse')]")
returns zero results from the sample document, which is clearly
incorrect since some results returned by the first query should be
returned by the second.
I have deleted and reloaded the sample document and I have recreated the
tijah index and the result is consistently incorrect. Is this a bug?
-- Roy
Hi, I am in the process of evaluating two major databases, MonetDB
and PostgreSQL, for building a backend database system for backtesting
on huge historical data sets of around 100 GB. I copied some test data (a
250 MB table) using bulk copy in both, and MonetDB was way faster. But
when I ran a simple query "SELECT * FROM tablename", it took around 3
minutes for 1 lakh (100,000) rows via mclient, and a GUI interface (aqua
studio, based on the JVM) simply said out of memory. The bulk copy file
was a text file with columns separated by pipes.
While using PostgreSQL, at first it was slow, but once I ran
VACUUM (a function in PostgreSQL), the SELECT * FROM tablename was much
faster than MonetDB.
So, what am I missing here? Do I need to do something to enhance the
performance in MonetDB?
I have high hopes for MonetDB, so please help.
Thanks
--
Jatin Patni
Tel: 91 9911 649 657
jatinpatni(a)gmail.com
www.jatinpatni.co.nr
Hi Stefan,
Thanks for your reply.
Well, I understand that a simple dump of the table wouldn't be that fast in
MonetDB, because it was not built for that purpose.
I am evaluating MonetDB 5 and PostgreSQL Plus Standard Server from
EnterpriseDB.
I have not yet tested performance of running complex queries on both
systems.
I am testing on Windows 32-bit, with a Core 2 Duo processor at 2.8 GHz
and 3 GB RAM.
I hope this config would work fine for backtesting with around 100 gb
databases.
Will post a detailed result of using both postgresql and monetdb later.
Let me go into some detail: my database would have 4 columns (sno, price,
name, volume), and my backtesting would generally involve querying this
dataset and checking for moving average crossovers of price against the
last n days' moving averages; it may grow to more complex querying
involving volumes. My concern is: if I need all the columns for the
queries, will MonetDB perform better, or should I stick with PostgreSQL
Plus Standard?
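A moving-average query of that shape can be expressed with SQL window functions, which touch only the price column rather than the whole row. The sketch below uses SQLite via Python with invented values; MonetDB of that era may have required a self-join instead, so this is only an illustration of the access pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE quotes (sno INTEGER, name VARCHAR(40), price DOUBLE, volume INTEGER);
    INSERT INTO quotes VALUES
        (1, 'ACME', 10, 100), (2, 'ACME', 12, 100), (3, 'ACME', 11, 100),
        (4, 'ACME', 15, 100), (5, 'ACME', 14, 100);
""")

# 3-period moving average of price per name, ordered by sequence number.
rows = con.execute("""
    SELECT sno, price,
           AVG(price) OVER (PARTITION BY name ORDER BY sno
                            ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS ma3
    FROM quotes ORDER BY sno
""").fetchall()
```

Because only sno, name, and price are scanned, a column store can skip the volume column entirely for this query, which is where the architectural advantage shows up.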
I'll post the performance results once I have completed populating the
databases in PostgreSQL. But I want to try out MonetDB also; I know that
the PostgreSQL interpreter may perform very slowly.
Anyway, thanks for this amazing product. Also, when will X100 be
available as an extension in MonetDB?
---------------------------------------------------------
Hi Jatin,
thank you very much for your interest in MonetDB!
We are glad to hear that you found MonetDB "way faster" than PostgreSQL with
bulk loading your 250 MB dataset --- btw, which versions of MonetDB and
PostgreSQL are you comparing on what kind of system (OS, hardware (CPU, RAM,
I/O system))?
With a simple select * from tablename, you basically evaluate the ability of
both server and client to pump your 250 MB of data from the server to the
client. That is probably an interesting measurement in case you mostly do
select * from table queries. However, we do not consider such queries the
"major challenges" for DBMSs and designed MonetDB not for plainly pumping
all data from the server to the client in the most efficient way. Rather, we
assume that the major part of data management, i.e., processing (more)
complex queries including selections, joins, aggregations, etc., is done by
the DBMS (server) and only the rather small results of such (analytical)
queries are then sent to the client.
Moreover, as opposed to "classical" row-stores, column-stores in general are
not necessarily designed with a major focus on most efficiently
re-constructing the whole table. They rather assume that most (analytical)
queries usually access only a subset of all columns of a table. Being able
to access only the requested data, and hence not having to carry around the
excess luggage of unused columns during query processing, is one of the
key design differences of column-stores over row-stores.
Hence, to speed up simple select * from tablename queries in MonetDB, you'd
probably (at least) need to re-design and -implement MonetDB's client-server
protocol (MAPI).
We are curious, though, to also hear about your experiences with more
complex queries that return much smaller (i.e., "more reasonable"?) result
sets.
Stefan
ps: It is interesting to hear that the "vacuum" command in PostgreSQL had a
significant impact on a simple select * from table query over
bulk-loaded data, i.e., without any updates being performed --- I
wouldn't know what kind of "garbage" bulk-loading leaves behind that
needs to be cleaned up ...
On Fri, Aug 21, 2009 at 07:18:46PM +0530, jatin patni wrote:
> Hi, I am in the process of evaluating two major databases....monetdb
> and postgresql for building a backend database system for backtesting
> on huge historical data sets around 100GB. I copied some test data(a
> 250mb table) using bulk copy in both and monetdb was way faster. But
> when I did a simple query "SELECT * from tablename", it took around 3
> minutes for 1 lakh rows via mclient and a gui interface (aqua studio
> based on JVM simply said out of memory). The bulk copy file was in the
> format collumns seperated by pipes in a text file.
> While using postgresql, at first it was slow but once I did a
> vacuum(function in postgresql), the (select * from tablename) was much
> faster than monetdb.
>
> So, what am I missing here, Do I need to do something to enhance the
> performance in Monetdb.......
> I have high hopes from monetdb, so please help.
> Thanks
>
> --
> Jatin Patni
> Tel: 91 9911 649 657
> jatinpatni(a)gmail.com
> www.jatinpatni.co.nr
>
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079 |
http://www.cwi.nl/~manegold/ |
| 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 |
| The Netherlands | Fax : +31 (20) 592-4312 |
--
Jatin Patni
Tel: 91 9911 649 657
jatinpatni(a)gmail.com
www.jatinpatni.co.nr