We are Chinese users, and we hope MonetDB will support multi-byte character encodings such as UTF-8 and GBK.
Hi there,
I'm trying to use COPY INTO with a CSV file (UTF-8 encoded) and I'm
encountering a problem.
The create statement is:
CREATE TABLE "sa3e7e9f4d666429ea2c5bb61a532f5b9" ("A" DATE ,"B" VARCHAR
(22) ,"C" DECIMAL (12, 5) ,"D" DECIMAL (12, 5) ,"E" DECIMAL (12, 5) ,"F"
DECIMAL (12, 5) )
The copy into statement is:
COPY INTO "sa3e7e9f4d666429ea2c5bb61a532f5b9"
FROM
'T:\\Prism\\Applications\\Desktop\\PrismDesktop\\bin\\Debug\\LocalRepository\\ImportedCSV\\ImportedCSV0.csv'
USING DELIMITERS '|' NULL AS 'A56A9FC261A143a48F6019F602ABF409';
And csv is attached.
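For anyone reproducing this without the attachment, here is a hypothetical Python sketch that writes a file shaped like the one described. The row values (including the UTF-8 text) are invented; only the column order, the '|' delimiter, and the NULL sentinel are taken from the statements above.

```python
# Sketch: write a few rows matching the table layout
# ("A" DATE, "B" VARCHAR(22), "C".."F" DECIMAL(12,5)), '|'-delimited,
# using the NULL sentinel from the COPY INTO statement. Values are made up.
NULL_TOKEN = "A56A9FC261A143a48F6019F602ABF409"

rows = [
    ("2008-12-01", "café", "1.50000", "2.25000", NULL_TOKEN, "0.00001"),
    ("2008-12-02", "北京", NULL_TOKEN, "3.14159", "2.71828", "1.00000"),
]

with open("ImportedCSV0.csv", "w", encoding="utf-8", newline="") as f:
    for row in rows:
        f.write("|".join(row) + "\n")
```

If a file like this fails to load, the encoding of the non-ASCII values is the first thing to check.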
Any advice?
tnx,
Alfred.
http://www.nabble.com/file/p21200441/ImportedCSV0.csv ImportedCSV0.csv
--
View this message in context: http://www.nabble.com/Copy-Into-problem-tp21200441p21200441.html
Sent from the monetdb-users mailing list archive at Nabble.com.
Dear developers,
I'm currently running the CVS head of Dec 11, 2008. I apologize in advance
for any missing details. If needed, I'll provide more.
Thank you, and
lots of wonderful wishes for Xmas and for the New Year!!!
(this is the last email you'll get from me this year, I promise :))
l.
Observation 1:
When trying to run a large number of clients simultaneously, the server
crashes. For me it crashed with 70 and 100 clients fired at the same time.
Observation 2:
After running one query in a multi-user scenario (N clients/threads
simultaneously), the memory footprint of mserver (the % of memory the
server occupies) grew. After repeating the experiment M times, the memory
grew by a roughly constant amount each time. Running the same query
sequentially the same number of times leaves the footprint of mserver
constant. Could it be faulty memory cleanup?
Query: q3.xq
let $col := fn:collection("MotiesTweedeKamer")
let $years := fn:distinct-values(
for $date in $col//hiddendatum
return fn:substring(fn:string($date),1,4))
for $y in $years
order by $y ascending
return <result year="{$y}" count="{
count($col//document[fn:substring(fn:string(.//hiddendatum),1,4) = $y])
}"/>
N=50
M=1
$ perl runNclients.pl N=50
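For reference, a minimal Python sketch of what a script like runNclients.pl presumably does: fire N client processes at (nearly) the same time and wait for them all. The client command itself is a parameter here, not a real MonetDB invocation.

```python
# Launch N concurrent client subprocesses and collect their exit codes.
# `cmd` is whatever client you use (e.g. ["mclient", "-lx", "q3.xq"]).
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_n_clients(cmd, n):
    """Run `cmd` in n concurrent subprocesses; return their exit codes."""
    with ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda _: subprocess.run(cmd).returncode,
                             range(n)))
```

Usage would be something like `run_n_clients(["mclient", "-lx", "q3.xq"], 50)`.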
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28563 lafanasi 20 0 6266m 5.1g 100m S 0 26.0 10:50.71
Mserver
M=2
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28563 lafanasi 20 0 7546m 5.9g 100m S 0 30.1 21:42.67
Mserver
...
N=60
M=10
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31182 lafanasi 20 0 17.3g 8.3g 100m S 0 42.3 263:45.37 Mserver
...
M=30
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
31182 lafanasi 20 0 22.7g 9.6g 100m S 0 48.8 395:08.87
Mserver
Sequential run:
N=1
M=50 times
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28372 lafanasi 20 0 839m 430m 100m S 1 2.1 6:26.65
Mserver
M=100 times
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
28372 lafanasi 20 0 839m 430m 100m S 1 2.1 6:26.65
Mserver
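To put numbers on the growth shown in the top listings above, here is a small helper of my own (not part of MonetDB or top) that converts top's RES/VIRT column values to bytes so footprints across runs can be compared directly.

```python
# Convert top's memory column values ("430m", "5.1g", bare KiB numbers)
# to bytes. top uses binary suffixes; unsuffixed values are KiB.
def res_to_bytes(res):
    units = {"k": 2**10, "m": 2**20, "g": 2**30, "t": 2**40}
    suffix = res[-1].lower()
    if suffix in units:
        return int(float(res[:-1]) * units[suffix])
    return int(res) * 2**10  # bare numbers in top are KiB

# e.g. growth from M=1 to M=2 in the N=50 run above:
growth = res_to_bytes("5.9g") - res_to_bytes("5.1g")
```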
Observation 3:
The time it takes to run 10 Tijah search queries simultaneously is the same
as the time it takes to run them sequentially. The percentage of CPU used in
both cases by mserver is also similar. Does Tijah support multiple users?
Query: q2.xq
let $opt := <TijahOptions ft-index="polietiekedata" ir-model="NLLR"/>
let $c := collection("HAN")
let $qid := tijah:query-id($c, "//spreker[about(.,%KEYWORD%)]", $opt)
for $res in tijah:nodes($qid)
return <pair>{( string($res/@naam), tijah:score($qid, $res))}</pair>
20 users/threads
stopMserver,startMserver (to make sure that the server freed the memory)
$ time Run20Threads(mclient -lx q2.xq)
0.134u 0.118s 0:20.01 1.1% 0+0k 0+8io 0pf+0w
sequential run, 20 times
stopMserver, startMserver
$ time for i in `seq 1 10`; do mclient -lx q2.xq; done
0.060u 0.067s 0:19.74 0.6% 0+0k 0+8io 0pf+0w
Observation 4:
The mserver memory footprint grows very fast when running Tijah queries.
When the footprint reaches 98%, query processing gets really slow or
the server crashes.
Query: the same as above
Sequential run, M=90 times
stopMserver, startMserver
$ for i in `seq 1 10`; do mclient -lx q2.xq; done
$ top
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24527 lafanasi 20 0 26.2g 19g 80m S 66 98.0 4:24.57 Mserver
Hello
MonetDB Nov2008 release, built with all standard options on RedHat EL 5
(2.6.18-92.el5 #1 SMP), on HP Proliant DL380 8-way, 16GB RAM / 16GB swap,
with dbfarm on a RAID0 internal 7-disk array.
I'm doing bulk load tests (using mclient to run COPY INTO table FROM file),
repeatedly loading 500,000 row (68M) CSV files into a fact table, while
periodically running a suite of test queries in a separate process to test
query times as the fact table grows.
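For concreteness, the repeated load described above might be driven by a small script like the following sketch. The table name, the ',' delimiter, and the use of mclient's -s flag to pass a statement are assumptions about the local setup, not details from the report.

```python
# Build the mclient command line for one COPY INTO load of a CSV file.
# Table name, delimiter, and the -s flag are assumptions about the setup.
def copy_into_cmd(table, csv_path):
    stmt = f"COPY INTO \"{table}\" FROM '{csv_path}' USING DELIMITERS ',';"
    return ["mclient", "-s", stmt]

# The test loop would then run this repeatedly, e.g. via subprocess.run,
# once per 500,000-row file, while a second process runs the query suite.
```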
The problem is that as the fact table grows to around 80 million records, the
database starts to consume all available system memory and swap, and the
system thrashes and grinds to a halt. I also get database corruption causing
foreign-key violations. (More details below.)
Has anyone else seen this? Any idea whether it's a bug or a memory leak? Or
could this be caused by operator error? Suggestions?
Cheers
Bob
PS More detailed account of the problem below:
Everything starts off fine, files load in 8-10 seconds, queries quite fast.
As the table starts to fill, the queries (not unexpectedly) start to slow.
But (unexpectedly) the memory consumption of the mserver5 process steadily
increases.
At some point, around 80 million records, system performance rapidly
starts to degrade. Query times increase dramatically, load time also
increases dramatically, and the entire system thrashes and becomes unusable.
Running 'top' shows that mserver5 has consumed virtually all physical
memory, and most of the swap too.
top - 16:33:34 up 28 days, 23:52, 8 users, load average: 1.68, 1.51, 1.96
Tasks: 224 total, 1 running, 223 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 1.5%sy, 0.0%ni, 85.9%id, 12.4%wa, 0.0%hi, 0.0%si,
0.0%st
Mem: 16439196k total, 16298404k used, 140792k free, 2300k buffers
Swap: 18876364k total, 13315960k used, 5560404k free, 9824352k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
32380 monetdb 18 0 54.6g 14g 9.4g S 12 94.9 107:19.46 mserver5
29217 bostr 15 0 290m 5552 2264 S 1 0.0 12:01.37 gnome-terminal
The first time this happened, I stopped the load scripts, restarted the
database, and that helped things for a little while, but it wasn't long
before the memory was all consumed again, and performance went to the dogs.
Then loads started failing with foreign-key violation errors. Inspection
showed a corrupted dimension table that somehow ended up with a bunch of bad
values in it.
So I blew away, and recreated the database.
This time I tried adding the line 'gdk_nr_threads=1' to the monetdb5.conf
file, having seen a defect report related to 'COPY INTO' being buggy and
this being the workaround.
But the same thing happened again: it ran fine until I had about
100 million rows, then the memory problems/thrashing returned. I restarted
the server again, and shortly after, the data loads failed again with
foreign-key violations and a corrupted dimension table.
--
View this message in context: http://www.nabble.com/mserver-memory-grows%2C-system-thrashes-and-hangs-tp2…
Sent from the monetdb-users mailing list archive at Nabble.com.
Hi MonetDB users,
I downloaded Monet Fedora RPMs from:
http://monetdb.cwi.nl/testing/projects/monetdb/Stable/.DailyBuilds./20081209/RPMs/Fedora9.i386.oid32/
I installed these with yum on my Fedora fc9 installation, running within
VMWare on my Windows XP 32 bit machine.
When I startup Mserver everything seems to work well. I start with an
empty dbfarm.
Then I run the following scenario:
- Add 3 XML documents with pf:add-doc() (some of them updatable)
- Add 2 more XML documents with pf:add-doc()
- Update one of these docs: do insert <something/> into
doc("a.xml")/rootelement
- Send a sync() command with mclient mil.
- Check the content of the database with pf:documents() (is OK)
- Knock down Mserver: kill <pid>
- Startup Mserver again (don't ask me why I do it like this)
After this scenario I cannot gain access to Mserver with mclient:
!xchange_challenge: frontend xquery not found
If I wait half an hour before killing Mserver this works fine.
Questions:
- Does sync() do what it is supposed to do?
- Why is shutdown() not implemented?
- What is the right way to shutdown Mserver when it is started as a
daemon in the background?
- How do I guarantee that the database remains intact in the
above scenario?
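On the shutdown question, one generic Unix pattern (a sketch, not MonetDB-specific advice) is to send SIGTERM and then wait until the process is actually gone before restarting, so that any pending flush has a chance to finish. Whether Mserver of this vintage flushes everything on SIGTERM is something the developers would have to confirm.

```python
# Send SIGTERM to a daemon and poll until its pid disappears (or timeout).
# Restarting only after the old process is gone avoids racing a restart
# against buffers that are still being written out.
import os
import signal
import time

def terminate_and_wait(pid, timeout=60.0):
    """SIGTERM the process, then wait for it to exit; True if it did."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.kill(pid, 0)        # signal 0 = existence probe
        except ProcessLookupError:
            return True            # pid is gone; safe to restart
        time.sleep(0.2)
    return False
```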
Regards,
Hans van Rijswijk
Netherlands Forensic Institute