Hi,
Is there a special reason there are no i686 RPMs anymore, like there
were until 4.6.0_rc2?
I think it's a bit odd that there are RPMs for the hobby OS 'FedoraCore'
but not for the widely used server OS 'RedHat Enterprise 3 (and 4)'.
The FedoraCore build (which Core?) needs libstdc++.so.6, whereas RHEL 3
only has libstdc++.so.5 ...
There also seem to be no SRPMs.
Cheers,
--
Henk van Lingen, Systems & Network Administrator (o- -+
Dept. of Computer Science, Utrecht University. /\ |
phone: +31-30-2535278 v_/_
http://henk.vanlingen.net/
http://www.tuxtown.net/netiquette/
Hello,
I am trying to pipe output from my application into the bulk loader.
If the application produced output like this:
CREATE TABLE test (col1 int);
COPY 3 RECORDS INTO "test" FROM stdin USING DELIMITERS '\t';
1
2
NULL
COMMIT;
I could pipe it to MapiClient -lsql -umonetdb -Pmonetdb, and it worked OK.
But the application does not know how many rows it will produce until it
has dumped out all the records.
When I tried to omit the record count, I got a failed-assertion error (a
bug?). In any case, it would fail when it encountered the COMMIT; at the end.
At the same time, COPY without a record count can load data from a file,
but I do not want to put the data into a temporary table first and then do
the load, because I want them to run in parallel.
Could you suggest a way to do the loading?
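One client-side workaround (a sketch, not MonetDB-specific advice; the
function name and the single-column row format are assumptions) is a small
filter that spools the rows, counts them, and only then emits the COPY
header with the exact count, so the loader always sees a record count:

```python
import sys
import tempfile

def emit_copy(rows_in, out, table="test", delim="\\t"):
    """Buffer rows from rows_in, count them, then write a COPY
    statement with the exact record count followed by the rows."""
    count = 0
    with tempfile.TemporaryFile(mode="w+") as buf:
        for line in rows_in:           # spool rows to a temp file
            buf.write(line)
            count += 1
        buf.seek(0)                    # replay from the start
        out.write('COPY %d RECORDS INTO "%s" FROM stdin '
                  "USING DELIMITERS '%s';\n" % (count, table, delim))
        for line in buf:
            out.write(line)
    out.write("COMMIT;\n")

if __name__ == "__main__":
    emit_copy(sys.stdin, sys.stdout)
```

You would pipe the application through this filter before MapiClient. It
does buffer the data once on the client side, so the producer and the
filter still run in parallel, though the server-side load only starts once
the producer is done.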
--
Best regards,
Andrei Martsinchyk mailto:andrei.martsinchyk@gmail.com
Hello all,
I was wondering if there is any documentation about the API
for XQuery (i.e., how to write XQuery queries from a Java
program).
Also, I am interested in using some features of XLink and
XUpdate on top of my XML database. Is support for any of
these in progress?
Thanks in advance,
Alexandra
Hello All,
Does the SQL frontend support updating one table with data from another table?
More formally, if I have two tables
create table t1 (id1 int, val1 varchar(255)); and
create table t2 (id2 int, val2 varchar(255)); ,
can I execute a statement like this (PostgreSQL syntax):
update t1 set val1=val2 from t2 where id1=id2; ?
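In case UPDATE ... FROM turns out not to be supported, the standard-SQL
equivalent is a correlated subquery. A quick sketch of the rewrite, run
against SQLite purely for illustration (table and column names taken from
the mail, sample data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table t1 (id1 int, val1 varchar(255))")
cur.execute("create table t2 (id2 int, val2 varchar(255))")
cur.executemany("insert into t1 values (?, ?)", [(1, "old"), (2, "keep")])
cur.execute("insert into t2 values (1, 'new')")

# Standard-SQL rewrite of: update t1 set val1=val2 from t2 where id1=id2;
cur.execute("""
    update t1
       set val1 = (select val2 from t2 where id2 = id1)
     where exists (select 1 from t2 where id2 = id1)
""")
conn.commit()
print(cur.execute("select id1, val1 from t1 order by id1").fetchall())
# -> [(1, 'new'), (2, 'keep')]
```

The EXISTS guard matters: without it, rows of t1 with no match in t2 would
have val1 overwritten with NULL.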
Thanks in advance.
--
Best regards,
Andrei Martsinchyk mailto:andrei.martsinchyk@gmail.com
We have been experimenting with MonetDB/XQuery at our institute and we
have some issues.
We have loaded 727 XML documents into MonetDB with the shred_doc() command:
shred_doc("{xml file 1}", "1")
shred_doc("{xml file 2}", "2")
etc...
These files are small (4 to 5 kilobytes).
When we query these files with the following query, the result takes a
long time to complete (2 minutes):
for $i in ("0", "1", ..., "727")
return $i
Can anyone explain why looping over 727 documents is so slow? We have
collected all 727 XML documents into one XML file, loaded that into
MonetDB, and this is a lot faster.
The test machine is a 2.8 GHz P4 with 1 GB of memory.
Another issue is related to the size of the database on the hard disk.
When we first load the 727 XML documents, the database directory contains
about 47,000 files and is 50 megabytes in size. When we have executed a
number of queries, the size of this directory increases to 1.7 gigabytes!!
Can anyone explain this behaviour? Is MonetDB generating some kind of
dynamic indices?
Thanks for your replies,
Bastiaan Naber