Hi there,
I asked something like this question last year and I'm basically
checking up on the progress since then. I hope this mail is not
read as demanding anything; I can see you guys are doing
impressive work. I'm just trying to give feedback from the perspective
of an application developer drooling over the potential of XML databases.
I can see on your website that collections() support is not there yet
(marked as "will", though).
I also looked at this page:
http://monetdb.cwi.nl/XQuery/Overview/Roadmap/index.html
'many small documents' is not listed on this roadmap.
On another roadmap page however:
http://monetdb.cwi.nl/Development/Roadmap/index.html
I do see the following entry (for the September release this year):
XQuery: support for large numbers of small documents
This gives me hope. :) Is that still the plan? If so, it might be useful
to update the XQuery roadmap with that tidbit from the main MonetDB
roadmap, as I might not be the only one who is looking for such features.
My use case is that of a CMS: large numbers of smaller XML documents
that it would be nice to query efficiently. Updating small parts
of documents (XUpdate) would be nice, but whole-document updates would
already be okay, though that might actually amount to something like XUpdate
on your side, as you might deal with collections internally as something
close to one giant document.
Monet is tantalizing because it promises supreme XQuery performance, but
the applications I can imagine do need support for large quantities of
smaller documents. If there's a version under development somewhere that
supports this feature, I'd be happy to play with it and give some
feedback.
Thanks for the attention!
Regards,
Martijn
Hi:
I have installed MonetDB (operating system: Windows XP) and the ODBC
driver. I want to connect to the MonetDB server through the ODBC driver,
so I need to do some configuration work (for example the database name,
username, and so on). But the configuration does not work: when I click
"MonetDB" in the Windows ODBC Manager, nothing happens to let me do the
configuration, and there is no error message. Could you tell me why, and
how to configure the ODBC driver?
--
View this message in context: http://www.nabble.com/Question-about-MonetDB-ODBC-driver-configuration-tf19…
Sent from the monetdb-users forum at Nabble.com.
Hi Peter Chen,
I'm not very familiar with ODBC. I'm forwarding your question to the
monetdb-users mailing list; I expect someone there will be better able to
answer it.
Regards
On 12-07-2006 09:08:59 +0800, liudawei wrote:
> Hi Groffen:
>
> I have installed MonetDB on my computer (operating system: Windows) and the ODBC driver. I want to connect to the MonetDB server through the ODBC driver, so I need to do some configuration work. But the configuration does not work; could you tell me why, and how to configure the ODBC driver?
>
> Fabian <Fabian.Groffen(a)cwi.nl> wrote: Hi,
>
> Thanks for your interest in MonetDB. The technical documentation of the
> MonetDB kernel can be found on our webpage, http://monetdb.cwi.nl/ for
> MIL: http://monetdb.cwi.nl/TechDocs/FrontEnds/mil/index.html
>
> The SQL documentation is limited to how to get yourself going:
> http://monetdb.cwi.nl/TechDocs/FrontEnds/SQL/index.html
> Our SQL syntax conforms to the SQL/99 standard. We have not yet written
> a manual for the SQL/99 standard. One hint I can give you
> is that PostgreSQL is fairly SQL/99-conformant as well, and as such
> PostgreSQL queries are a good starting point for MonetDB/SQL.
>
> Regards,
> Fabian
>
>
>
> liudawei wrote:
> > Hi:
> > I am new to MonetDB, and I am interested in this excellent database. My
> > question is:
> > 1) Where can I get the manuals for the Monet server and the SQL front end?
> >
> >
> > best regards
> > Peter Chen
Hi all,
> I read the discussion, and I may have experienced the same kind of problem
> before. I think there is some problem with naming (when a document was
> previously shredded under the same name as a 'cached' document). At the time
> I experienced the problem I couldn't figure out what the exact cause was,
> but removing the dbfarm worked for me:
>
> So perhaps you could try to:
> - stop Mserver
> - completely remove the dbfarm folder (the default path is
>   "$PREFIX/var/MonetDB/dbfarm") (hopefully you don't have important XML
>   documents stored in the database ;)
> - start Mserver
> - and then run the large query (without running any other queries first)?
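For reference, the quoted steps can be sketched as a small shell script. The prefix below is a hypothetical default; substitute your own installation prefix. Because removing the dbfarm deletes every document stored in the database, the destructive step is only echoed here, never executed:

```shell
# Sketch of the dbfarm reset procedure described above.
# PREFIX is hypothetical -- substitute your actual installation prefix.
PREFIX=/usr/local
DBFARM="$PREFIX/var/MonetDB/dbfarm"

# Echo the steps instead of running them, so this is safe anywhere.
echo "1. stop Mserver"
echo "2. rm -rf $DBFARM"
echo "3. start Mserver and run only the large query"
```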
Thank you very much, everything works fine now! I just removed all dbfarm subdirectories and the error message disappeared. And no, there were no important XML documents stored in the database, only some XMark-generated streams ;-)!
Nevertheless: I had never shredded any document before encountering the problem the first time; until then I /only/ used direct loading from the query. I only tried shredding to solve the problem...
Hope this helps to locate the bug!
Cheers
Michael
Hello Stefan,
thank you very much for your engagement!
> In the meantime, I have some more questions for you:
>
> Which version of SuSE Linux and which version of gcc do you use?
I am running SuSE Linux 10.0 with gcc (GCC) 4.0.2 20050901 (prerelease) (SUSE Linux).
> Which exact version of the MonetDB source do you use?
> - The released tar-balls from SourceForge
> (MonetDB-4.12.0.tar.gz & MonetDB-XQuery-0.12.0.tar.gz)?
Yes, I used the tarballs. During installation I ran into some problems with the bzip2 library, but after manually reinstalling that package from source everything worked fine.
If I can help you debug, e.g. by running tests, please let me know.
Kind Regards
Michael
On Sun, Jul 09, 2006 at 05:27:17PM +0200, Michael Schmidt wrote:
> Hi Stefan,
>
> 2 more things:
>
> - I suppose you already posted my mail to MonetDB-users. I can't see it in the archives, but I guess they are updated each night, am I right?
I honestly don't know when SF updates the mailing-list archives, but my mail
(quoting your original mail) is now in there:
http://sourceforge.net/mailarchive/forum.php?thread_id=21306336&forum_id=42…
> - I just tried loading the XML document using "shred_doc". It's the same error message, so the crash is independent of the query.
Hm, strange --- as I said, it works fine for me on my Athlon 64 X2
running FC4...
I'll check another Athlon 64 with SuSE 9.3 later this week...
Stefan
> Thanks again for your help!
> Michael
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079 | http://www.cwi.nl/~manegold/ |
| 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 |
| The Netherlands | Fax : +31 (20) 592-4312 |
Hi Michael,
I just double-checked your problem on my AMD Athlon 64 X2 running 64-bit
Fedora Core 4, and everything works fine. I can shred and query XMark
documents of up to 1.1 GB without problems (I didn't try larger ones ;-)).
We also have an Athlon 64 running 64-bit SuSE Linux 9.3 --- but I haven't
had time to check that one yet; I'll try to check it in the coming days
(a 500kB document works fine on that machine in our nightly testing...).
In the meantime, I have some more questions for you:
Which version of SuSE Linux and which version of gcc do you use?
Which exact version of the MonetDB source do you use?
- The released tar-balls from SourceForge
(MonetDB-4.12.0.tar.gz & MonetDB-XQuery-0.12.0.tar.gz)?
- The release source RPMs from SourceForge
(MonetDB-4.12.0-1.src.rpm & MonetDB-XQuery-0.12.0-1.src.rpm)?
- Source tar-balls or source RPMs from our daily-builds site
(http://monetdb.cwi.nl/testing/projects/monetdb/Stable/.DailyBuilds./)?
- A CVS checkout?
(Which branch/tag?)
Cheers,
Stefan
PS: I took the liberty of letting the Monetdb-users mailing list participate
in the discussion again...
On Sun, Jul 09, 2006 at 05:14:23PM +0200, Michael Schmidt wrote:
> Hi Stefan,
>
> > Sorry for the inconvenience!
> > We'll update the info on the web site.
> > And I feel free to propagate this discussion to the Monetdb-users list.
>
> Thank you very much, I will also register there...
>
> > Wrt. your actual problem, I have a couple of questions, maily requesting
> > more information to analyse the problem:
> >
> > What kind of system are you running MonetDB/XQuery on?
> > CPU (type, 32- or 64-bit)?
> > amount of memory?
> > free disk space?
> > operating system (type, 32- or 64-bit)?
>
> I'm running 64-bit SuSE Linux; here is my /proc/cpuinfo. If there are any more details you need to know, feel free to ask me again.
>
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 15
> model : 4
> model name : Intel(R) Pentium(R) 4 CPU 3.00GHz
> stepping : 3
> cpu MHz : 2992.716
> cache size : 2048 KB
> physical id : 0
> siblings : 2
> core id : 0
> cpu cores : 1
> fpu : yes
> fpu_exception : yes
> cpuid level : 5
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl est cid cx16 xtpr
> bogomips : 5994.87
> clflush size : 64
> cache_alignment : 128
> address sizes : 36 bits physical, 48 bits virtual
> power management:
>
>
> > How did you install MonetDB/XQuery? Using binary (.rpm or .msi) packages or
> > compiling from source (tarball or from CVS)?
>
> I compiled them from source.
>
> > Which queries are you running? Which of them are working with up to 50MB but
> > fail with 100MB or more?
>
> It's the same for all queries. I'm running XMark test queries #1, #6, #8, #13 and #20. Here is one example query (XMark #1):
>
> <query1>
> { for $b in doc("/home/.../xmark.xml")/site/people/person
>   where $b/person_id = 'person0'
>   return
>     <result> {$b/name} </result> }
> </query1>
>
> > Did you load/shred your document explicitely via the "shred_doc" MIL command
> > on the Mserver console, or did you have them loaded/shreded implicitely on
> > the fly via the fn:doc() XQuery function?
>
> Since I am running benchmarks in which loading time should be included, explicit console loading is unsuitable. As seen above, I load the documents from the query.
>
> > In the latter case, you could try whether the failing queries work on
> > larger documents once you increase the XML document cache limit by starting
> > Mserver with "--set xquery_cacheMB=<size>", where <size> is larger than the
> > default 100, e.g., just larger than your largest document.
> > (Admittedly, there is only very little, well-hidden documentation about the
> > "--set" option(s) of Mserver; we're working on that...)
>
> I tested "--set xquery_cacheMB=500" without success.
>
> Kind regards
> Michael
>
>
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079 | http://www.cwi.nl/~manegold/ |
| 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 |
| The Netherlands | Fax : +31 (20) 592-4312 |
Hi Michael,
thank you very much for your interest in and use of MonetDB/XQuery,
as well as for reporting your problem!
Apparently, the mailing-list information on the FAQ page of the
MonetDB/XQuery web site is indeed a bit misleading.
There are actually no specific MonetDB/*XQuery* mailing lists,
but "only" general mailing lists for MonetDB that are supposed
to be used for all communication about/around MonetDB and all
of its companions and front-ends, among others MonetDB/SQL & MonetDB/XQuery;
see http://monetdb.cwi.nl/News/MailChannels/ for details.
Sorry for the inconvenience!
We'll update the info on the web site.
And I feel free to propagate this discussion to the Monetdb-users list.
Wrt. your actual problem, I have a couple of questions, mainly requesting
more information to analyse the problem:
What kind of system are you running MonetDB/XQuery on?
CPU (type, 32- or 64-bit)?
amount of memory?
free disk space?
operating system (type, 32- or 64-bit)?
How did you install MonetDB/XQuery? Using binary (.rpm or .msi) packages or
compiling from source (tarball or from CVS)?
Which queries are you running? Which of them are working with up to 50MB but
fail with 100MB or more?
Did you load/shred your document explicitly via the "shred_doc" MIL command
on the Mserver console, or did you have it loaded/shredded implicitly on
the fly via the fn:doc() XQuery function?
In the latter case, you could try whether the failing queries work on
larger documents once you increase the XML document cache limit by starting
Mserver with "--set xquery_cacheMB=<size>", where <size> is larger than the
default 100, e.g., just larger than your largest document.
(Admittedly, there is only very little, well-hidden documentation about the
"--set" option(s) of Mserver; we're working on that...)
If this helps, then there might be a bug somewhere in our document caching
code...
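To make the suggestion concrete, here is a minimal shell sketch of picking a cache size relative to the document and starting Mserver with it. The 200MB document size is a hypothetical value, and the command is only echoed rather than executed, so the sketch is safe to run on a machine without Mserver installed:

```shell
# Hypothetical sizing: xquery_cacheMB should exceed the largest document.
DOC_MB=200                   # size of the largest document, in MB
CACHE_MB=$((DOC_MB + 50))    # add some headroom above the document size

# Echo rather than exec, so this runs anywhere.
echo "Mserver --set xquery_cacheMB=$CACHE_MB"
```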
Kind regards,
Stefan
On Sun, Jul 09, 2006 at 03:54:30PM +0200, Michael Schmidt wrote:
> Hi all,
>
> I'm confused, as I expected a MonetDB XQuery users mailing list instead of
> a developer list. However, this list was linked in the MonetDB/XQuery FAQ
> section, so I thought it would be worth trying...
>
> Here is my problem: I'm running some XQuery benchmarks using MonetDB
> together with the XQuery module. Installation went fine, and I have chosen a
> set of queries to be evaluated against data generated with the xmlgen
> tool. Query processing works fine for 10MB and 50MB documents, but when
> processing larger documents (100MB and 200MB) the client reports an error
> message. I tried to figure out the source of the problem using the trace
> option -t, but I do not understand the listed error messages. Here is the
> final error message reported by the client:
>
> ERROR: [rename]: 29 times inserted nil due to errors at tuples "pre_size", "pre_level", "pre_prop", "pre_kind", "qn_uri", "qn_prefix".
> ERROR: [rename]: first error was:
> ERROR: rename(<tmp_351>,pre_size2): operation failed
> ERROR: interpret_unpin: [rename] bat=171,stamp=-1285 OVERWRITTEN
> ERROR: BBPdecref: tmp_253 does not have pointer fixes.
> ERROR: interpret_params: +(param 2): evaluation error.
>
> Once again: evaluation of the same queries works fine on small streams but
> fails on large streams. By the way, I've installed the latest official
> release of both the server and the XQuery module.
>
> Is this a bug, or are there default memory limits causing the crash? I
> tried to figure out what the --set option is good for, but was not able to
> find any documentation on the supported options.
>
> Kind regards
> Michael
>
>
>
> _______________________________________________
> Monetdb-developers mailing list
> Monetdb-developers(a)lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/monetdb-developers
>
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079 | http://www.cwi.nl/~manegold/ |
| 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 |
| The Netherlands | Fax : +31 (20) 592-4312 |
Hello,
Is my conclusion correct that I should not (yet) use MonetDB as a
transaction-processing XML database? (Inserts and deletes of lots of XML
snippets, XQueries over the complete database content, by multiple
processes.)
Could you perhaps suggest an alternative solution? Any familiarity with
Berkeley DB XML by Sleepycat, for example?
By the way, we need a C++ API for this.
Thanks in advance.
Sander