Is there anyone that can think of a solution for this problem?
cc1: warnings being treated as errors
/Volumes/Scratch/monetdb/stable/monetdb/src/modules/plain/bat.mx: In function 'local_itoa':
/Volumes/Scratch/monetdb/stable/monetdb/src/modules/plain/bat.mx:1253: warning: format '%zd' expects type 'signed size_t', but argument 4 has type 'ssize_t'
I mean... is there a difference between "signed size_t" and "ssize_t"?
Ehrm.
[Orion:src/modules/plain] fabian% uname -a
Darwin Orion.local 7.9.0 Darwin Kernel Version 7.9.0: Wed Mar 30 20:11:17 PST 2005; root:xnu/xnu-517.12.7.obj~1/RELEASE_PPC Power Macintosh powerpc PowerBook4,3 Darwin
[Orion:src/modules/plain] fabian% gcc --version
gcc (GCC) 4.0.1 (Apple Computer, Inc. build 5341)
Copyright (C) 2005 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Probably just need to compile without -Werror.
Dear all,
For demo purposes we need implementations of some XQuery
standard functions, especially fn:tokenize() and fn:base-uri().
Are there plans for implementing more (or all) standard functions?
Best, Djoerd.
Hi monetdb-developers,
I just built MonetDB-Venus-SuperBall-SR1. The build of the C code
went OK (nice!), except for the Java JDBC driver: in the build log I do
find the check for javac itself, but not its actual use. Did I build
correctly?
/var/tmp/MonetDB-1182546749/log :
Fri Jun 22 23:12:29 CEST 2007
called with arguments: --prefix=/tmp/frans/MonetDB --enable-sql
--enable-xquery
Calling javac (1.6.0) with all/most files does not help:
javac nl/cwi/monetdb/jdbc/*.java
nl/cwi/monetdb/jdbc/MonetDataSource.java:57: cannot find symbol
symbol : class MonetDriver
location: class nl.cwi.monetdb.jdbc.MonetDataSource
private MonetDriver driver;
^
etc...
The only file with that name is
/var/tmp/MonetDB-1182546749/MonetDB-client/clients/src/java/src/nl/cwi/monetdb/jdbc/MonetDriver.java.in
but renaming it does not help.
And for another file:
nl/cwi/monetdb/jdbc/MonetClob.java:35: nl.cwi.monetdb.jdbc.MonetClob is
not abstract and does not override abstract method
getCharacterStream(long,long) in java.sql.Clob
public class MonetClob implements Clob {
^
Did JDBC change with Java 1.6? Is the code compatible?
How should I proceed?
Thanks in advance
Frans Verster
I've read some recent emails about stable having
issues so forgive me if this is expected behaviour at
the moment.
When I run monetdb-install.sh --enable-sql
--prefix=/var/lib/MonetDB --enable-optimise
--cvs=stable
I receive the following cvs error:
cvs [export aborted]: no such tag stable
Please refer to the full log at
/var/tmp/MonetDB-1182455148/log
If you believe this is an error in the script or the
software,
file a bug and attach the logfile.
exporting MonetDB from CVS failed!
monetdb-install.sh: line 204: glibtool: command not
found
Am I doing something wrong?
Dear James,
Can you specify the system you are working on, e.g., hardware/OS?
You are really pushing the system to its limits. I don't know of a system
that can handle TPC-H SF100 out of the box. This size often
requires a multistage process and careful tuning of the system
parameters.
But, let's see what you have done and learn from your experience.
James Laken wrote:
> Dear MonetDB Developers and Users,
>
> My TPC-H SF100 test is still producing interesting problems. After
> several hours of work I have managed to import nearly 420 million records
> into the lineitem table (7x 60-million-record slices). Accidentally I
OK, you performed a sliced base load, i.e., 7 x an SF-10 slice.
> killed the import process, and stopped the server process. I have tried
Did you stop and restart the server between the loads? If not, then
from a recovery point of view all 420M are stored in a single log file
and become the target of a single reload. It behaves as if you loaded
the 7x60M as a single batch.
Killing a database process is of course hard on it. In that case, the
recovery process has to reload the data and enters a really expensive
part of TPC-H: ensuring the correctness of the integrity relationships.
Protecting against this is hard, because it would require that
integrity-rule enforcement be disabled during the reload (the method
pursued in MySQL).
> to restart the server process, but after three hours of intensive
> processing the SQL module still had not started. Please note that the
> initialization process allocated nearly all memory and swap.
This is what we expect. Your tables require a lot of space, because
MonetDB does not automatically partition them. (That's scheduled
for an upcoming release ;-))
>
> I have attached a gdb to the server process and the execution stacks
> looks like this:
>
> Program received signal SIGINT, Interrupt.
> 0x00002b240cb1c2a4 in file_read () from /usr/lib/libstream.so.0
> (gdb) where
> #0 0x00002b240cb1c2a4 in file_read () from /usr/lib/libstream.so.0
> #1 0x00002b240cb1b744 in stream_readLngArray () from
> /usr/lib/libstream.so.0
> #2 0x00002b240c67a27f in lngRead () from /usr/lib/libbat.so.0
> #3 0x00002b240c788e90 in logger_readlog () from /usr/lib/libbat.so.0
> #4 0x00002b240c789b0c in logger_create () from /usr/lib/libbat.so.0
> #5 0x00002aaaaab42658 in store_init () from
> /usr/lib/MonetDB5/lib/lib_sql.so
> #6 0x00002aaaaab179b3 in mvc_init () from /usr/lib/MonetDB5/lib/lib_sql.so
> #7 0x00002aaaaaacf9f4 in SQLinit () from /usr/lib/MonetDB5/lib/lib_sql.so
> #8 0x00002b240bf89e3a in initScenario () from /usr/lib/libmal.so.0
> #9 0x00002aaaaaacf968 in SQLsession () from
> /usr/lib/MonetDB5/lib/lib_sql.so
> #10 0x00002b240bf6536e in runMALsequence () from /usr/lib/libmal.so.0
> #11 0x00002b240bf6697b in runMAL () from /usr/lib/libmal.so.0
> #12 0x00002b240bf5f3f3 in MALengine () from /usr/lib/libmal.so.0
> #13 0x00002b240bf5e3ab in callString () from /usr/lib/libmal.so.0
> #14 0x0000000000402a65 in main ()
> (gdb) c
> Continuing.
>
> Program received signal SIGINT, Interrupt.
> 0x00002b240d6c31a0 in malloc () from /lib/libc.so.6
> (gdb) where
> #0 0x00002b240d6c31a0 in malloc () from /lib/libc.so.6
> #1 0x00002b240c675c04 in GDKmallocmax () from /usr/lib/libbat.so.0
> #2 0x00002b240c675da9 in GDKmalloc () from /usr/lib/libbat.so.0
> #3 0x00002b240c6787ee in strRead () from /usr/lib/libbat.so.0
> #4 0x00002b240c788e90 in logger_readlog () from /usr/lib/libbat.so.0
> #5 0x00002b240c789b0c in logger_create () from /usr/lib/libbat.so.0
> #6 0x00002aaaaab42658 in store_init () from
> /usr/lib/MonetDB5/lib/lib_sql.so
> #7 0x00002aaaaab179b3 in mvc_init () from /usr/lib/MonetDB5/lib/lib_sql.so
> #8 0x00002aaaaaacf9f4 in SQLinit () from /usr/lib/MonetDB5/lib/lib_sql.so
> #9 0x00002b240bf89e3a in initScenario () from /usr/lib/libmal.so.0
> #10 0x00002aaaaaacf968 in SQLsession () from
> /usr/lib/MonetDB5/lib/lib_sql.so
> #11 0x00002b240bf6536e in runMALsequence () from /usr/lib/libmal.so.0
> #12 0x00002b240bf6697b in runMAL () from /usr/lib/libmal.so.0
> #13 0x00002b240bf5f3f3 in MALengine () from /usr/lib/libmal.so.0
> #14 0x00002b240bf5e3ab in callString () from /usr/lib/libmal.so.0
> #15 0x0000000000402a65 in main ()
>
> Any idea?
>
> Regards,
> J.
>
>
>
> -------------------------------------------------------------------------
> This SF.net email is sponsored by DB2 Express
> Download DB2 Express C - the FREE version of DB2 express and take
> control of your XML. No limits. Just data. Click to get it now.
> http://sourceforge.net/powerbar/db2/
> _______________________________________________
> MonetDB-users mailing list
> MonetDB-users(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/monetdb-users
A bug-fix release for MonetDB XQuery 0.18.0 is now available. Two smallish
bugs have been fixed in this release:
- compiling an XQuery query took much longer than warranted due to a
configuration error;
- pftijah was accidentally omitted from the Windows builds.
The promised MonetDB/SQL release will remain a promise for now: the current
head of the Stable branch is unfortunately still not stable.
--
Sjoerd Mullender
Martin Kersten wrote:
> Update of /cvsroot/monetdb/MonetDB5/src/modules/mal
> In directory sc8-pr-cvs16.sourceforge.net:/tmp/cvs-serv14370
>
> Modified Files:
> tablet.mx
> Log Message:
> There is no guarantee that the table descriptor has a BAT
> assigned to the first field. So you may not perform a BATcount.
After this checkin (I still have to verify that I did not change the code
myself), the crash in mserver with the SkyServer data is gone. I will run
more experiments to confirm it.
Regards,
Romulo
>
>
> Index: tablet.mx
> ===================================================================
> RCS file: /cvsroot/monetdb/MonetDB5/src/modules/mal/tablet.mx,v
> retrieving revision 1.93
> retrieving revision 1.94
> diff -u -d -r1.93 -r1.94
> --- tablet.mx 16 Jun 2007 07:54:02 -0000 1.93
> +++ tablet.mx 19 Jun 2007 05:54:56 -0000 1.94
> @@ -1495,7 +1495,7 @@
> TABLETload_file(Tablet * as, bstream *b, stream *out)
> {
> int res = 0, done = 0;
> - size_t i = 0;
> + size_t i = 0, tuples=0;
> char *sep = as->format[as->nr_attrs - 1].sep;
> int seplen = as->format[as->nr_attrs - 1].seplen;
>
> @@ -1551,9 +1551,10 @@
> as->error=0;
> GDKerror("TABLETload_file: read error "
> "(after loading %d records)\n",
> - BATcount(as->format[0].c));
> + tuples);
> res = -1;
> }
> + tuples++;
> break;
> }
> end = b->buf + b->len;
>
>
Dear All,
The problems reported on loading have our full attention. They are hard
to reproduce: it takes a long time to reach the point where
it breaks. BUT, we can (hopefully) reproduce the bug
(it seems to be a memory overwrite). Due to a local science meeting
and priorities (they pay for the development of MonetDB),
the concerted frontal attack on the bug will start next Wednesday.
Please stay with us.
James Laken wrote:
> I forgot to mention that the load script uses the following formula:
> COPY N RECORDS INTO table from 'path/to/importfile' ... where N is the
> number of records in the input file.
>
> Regards,
> Zoltan
>
>
>
> Colin Foss wrote:
>> James,
>>
>> I've had success avoiding "swap-of-death" with COPY
>> INTO by specifying the number of records on each
>> statement.
>
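A minimal sketch of the form being recommended, based only on the syntax quoted above (table name, count, and path are placeholders; the elided options after the path are omitted here as well):

```sql
-- Telling the server up front how many records to expect lets it
-- allocate storage once instead of growing it incrementally.
COPY 60000000 RECORDS INTO lineitem FROM '/path/to/importfile';
```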
Hi,
I am trying to import the TPC-H dataset (SF100) into the database without
success. The import method is the same as in benchmark/tpch, except for
the number of expected records in the load script and the load script
execution (executed line by line in the console). The machine is a
dual-core AMD64 (64-bit OS), with 4 GB RAM and an 8-disk RAID0 for storage.
The import process consumes nearly all memory and nearly all swap
(please note that the number of expected records is specified after the
COPY command). In fact, I have to restart the server process after each
table import to prevent the "swap to death" state.
However, it seems that the lineitem table import fails no matter
what I do. I have tried it with a different client (e.g. mjclient with
-Xbatching mode) and with a sliced lineitem.tbl file, with the same result.
I have noticed that two or three hours after issuing the lineitem COPY
command, the mserver5 process no longer consumes any CPU. The attached
strace shows me the following:
[pid 4843] select(6, [5], NULL, NULL, {0, 500}) = 0 (Timeout)
[pid 4843] select(6, [5], NULL, NULL, {0, 500}) = 0 (Timeout)
[pid 4843] select(6, [5], NULL, NULL, {0, 500}) = 0 (Timeout)
Any idea? Is there any way to bulk-import huge datasets (TB scale) into
the database? For example, PostgreSQL can import data
without write-ahead logging, nearly at disk speed.
The dataset can be sliced up per column, so a direct column copy would
be possible.
Regards,
J.