Hi,
Sometimes I get !MALException:setScenario:Scenario not initialized 'sql'
from the server when I try to start it in setups that usually work
without any problems. I can't reproduce it consistently. What does this
exception mean?
Thanks.
--
View this message in context: http://www.nabble.com/Meaning-of%3A-Scenario-not-initialized-%27sql%27-tp26…
Sent from the monetdb-users mailing list archive at Nabble.com.
Hi,
I've installed MonetDB on *Ubuntu 9.04 Server* with apt-get. These are
the contents of the PHP client package:
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/php5-monetdb-client
/usr/share/doc/php5-monetdb-client/changelog.Debian.gz
/usr/share/doc/php5-monetdb-client/copyright
I need the modules; where are they? Thanks, Dariusz.
I have installed MonetDB/XQuery for Linux and want to bulk-load data as
described here:
http://monetdb.cwi.nl/XQuery/Documentation/Bulk-Loading-a-Collection.html#B…
For that purpose, I ran the following query inside the client:
for $d in doc("/monetdbfiles.xml")//doc
return
pf:add-doc(fn:concat($d/@path,$d/@name), fn:string($d/@name), "abap10k", 0)
<>
It references the following, definitely existing XML file:
<dir>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CCDEF"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CCIMP"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CCMAC"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CI"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CM001"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CM002"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CM003"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CM004"/>
<doc path="/home/epic/tarari_svn/benchmarks/abap/ABAP/"
name="ACL_AUNIT_RESULT==============CM005"/>
</dir>
But unfortunately an error is thrown (I also tried it under Windows and it
was the same):
MAPI = root@localhost:50000
QUERY = for $d in doc("/monetdbfiles.xml")//doc
ERROR = !ERROR: err:FODC0002, Error retrieving resource (no such
document "/monetdbfiles.xml").
What could be the reason for that?
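As an aside: an index file in this shape can be generated with a short script. Below is a sketch in Python (the function name and paths are placeholders, nothing MonetDB-specific). Note that doc() is resolved on the server side, so emitting absolute paths that are valid on the server machine is the safe choice; a path that only exists on the client is one plausible cause of an err:FODC0002.

```python
import os
from xml.sax.saxutils import quoteattr

def write_doc_index(src_dir, out_path):
    """Write a <dir>/<doc> index of the files in src_dir, using the
    same path/name attribute convention as the file quoted above."""
    # Emit absolute paths, since doc() paths are resolved server-side.
    src_dir = os.path.abspath(src_dir) + os.sep
    names = sorted(os.listdir(src_dir))
    with open(out_path, "w") as out:
        out.write("<dir>\n")
        for name in names:
            # quoteattr adds the surrounding quotes and escapes the value.
            out.write("<doc path=%s\n     name=%s/>\n"
                      % (quoteattr(src_dir), quoteattr(name)))
        out.write("</dir>\n")
```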
Hi,
I've been trying to build the MonetDB5 module from the Feb2010 branch from
CVS on Windows.
However, I am getting this:
.\..\..\..\..\src\modules\kernel\group.mx(1330) : warning C4267: '=' :
conversion from 'size_t' to 'var_t', possible loss of data
This did not happen in the November releases on Windows.
Any ideas on how to approach this? Do I need to use some special flags?
Thanks.
Hi list,
I'm using MonetDB/XQuery to run queries on sets of XML documents of all
kinds (build files, process descriptions). These sets can range from a
few dozen to a few thousand documents, so the speed of MonetDB can be a
real boon.
However, when shredding these documents there may be a few errors.
Sometimes a namespace URI is reported as not well formed, sometimes
shredding 'hangs' (esp. when using the batch-import method from the
manual), sometimes I can't find a reason.
After such an error, the database seems to be corrupted. Sometimes
queries for which collections are in the database give errors,
sometimes an XPath query doesn't finish. In most cases, the ERROR=....
from mclient doesn't give a proper error (ascii garbage).
I realize these reports are vague. However, even though this happens
a lot, reproducing the exact sequence can be hard (importing a few
hundred documents, waiting for the import error to occur, using the right
query to trigger the error...). But as it is now, I don't even know if
these things are known, have work-arounds, or require fixing the xml
documents upfront.
Are there people on the list with experience with this that are willing
to help me out?
Regards,
Xander.
The MonetDB team at CWI/MonetDB BV is pleased to announce the
Nov2009-SP2 bug fix release of the MonetDB suite of programs.
More information on this release will be available at
<http://monetdb.cwi.nl/Development/Releases/Nov2009/>.
Fixes include:
- No need to require bash for running the *-config scripts. (SF bug
#2914563)
- MonetDB5 development package now contains all development include
files. (SF RFE #2929558)
- Added support for compiling on OpenBSD (contributed, i.e. not tested
by us).
- General code improvements, partly inspired by running Coverity Scan.
- mclient -D will exit with a non-zero exit code if dumping failed.
(SF bug #2925674)
- The Python interface now uses the same socket options as the C
version, giving a boost in performance. (SF bug #2925750)
- Fixed a bug in the Python interface to deal with fields with
embedded newline characters. (SF bug #2917219)
- Added initial support to the Python client interface for specifying
the encoding.
- Fixed a bug in execute argument parsing in Python interface.
- Added missing .sql files to the Windows installer.
- In MonetDB/SQL, fixed bug in not(invalidtable). (SF bug #2927174)
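(Regarding the socket-options item above: the classic option for speeding up small request/response round-trips is TCP_NODELAY, which disables Nagle's algorithm. Whether that is exactly the option the fix touched is an assumption; the sketch below only illustrates the idea, it is not the client code itself.)

```python
import socket

def connect_nodelay(host, port):
    """Open a TCP connection with Nagle's algorithm disabled, so that
    small writes (e.g. short protocol requests) go out immediately
    instead of waiting to be coalesced."""
    s = socket.create_connection((host, port))
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```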
--
Sjoerd Mullender
Hello.
I've managed to pull together, in my scarce free time, an OCaml binding
for MonetDB. It is now working and I have written a very simple MAL
client for an embedded database:
> yziquel@seldon:~/git/ocaml-monetdb5$ utils/malclient.native
> io.print(1);
> 1
> io.print(2);
> 2
I've got a small question concerning MAL. As I see it, MAL is currently
a human-readable language. Is there somewhere in the MonetDB system a
place where it is compiled to a more space-efficient representation, or
is it interpreted as such?
All the best,
--
Guillaume Yziquel
http://yziquel.homelinux.org/
Hi,
Are there any plans on the horizon to roll compression into Monet? The X-100
project looked really interesting in this regard, but as I understand it,
that work has been transferred into VectorWise.
If there are no plans, is this because compression is completely antithetical
to the MonetDB architecture (from the papers it seems X-100 was, to some
degree at least, integrated into it), or is it more due to lack of resources?
My motivating example here is OLAP: I frequently have 1 relatively large
fact table and then many much smaller dimensional tables.
If optional compression were available, it would be nice to compress all or
some of the BATs for the fact table columns and then have the others work as
usual.
(Well, at least this sounds good; maybe it makes no sense.) Another
motivation is that there seems to be a lot of anecdotal evidence of companies
moving from larger big-iron servers to more numerous, smaller machines, so
it would be really nice to have this capability in more memory-constrained
settings.
I understand on a basic level how compression conflicts with the relatively
simple approach MonetDB uses to load BATs (e.g., memory mapping), but, dwelling in
ignorance, I blithely assume there could be some solution not as complex as
X-100 if one were to accept a significant performance cost. For example:
decompressing BAT data on the fly as part of a BATiterator. I probably don't
have the skills to implement even a basic on-the-fly decompression approach
like this, but just wondering aloud: how hard a problem is this?
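(The "decompress on the fly inside the iterator" idea can be sketched in a few lines. The Python below is only an illustration of the access pattern, not MonetDB's actual BAT layout; zlib and the chunk size are arbitrary choices. The point is that only one chunk of the column is ever materialized at a time.)

```python
import struct
import zlib

CHUNK = 1024  # values per compressed chunk; arbitrary for this sketch

def compress_column(values):
    """Compress a column of 32-bit integers chunk by chunk."""
    chunks = []
    for i in range(0, len(values), CHUNK):
        part = values[i:i + CHUNK]
        raw = struct.pack("<%di" % len(part), *part)
        chunks.append(zlib.compress(raw))
    return chunks

def iter_column(chunks):
    """Iterate over the column, decompressing one chunk at a time, so
    only a small window of the data lives in memory at any moment."""
    for c in chunks:
        raw = zlib.decompress(c)
        for (v,) in struct.iter_unpack("<i", raw):
            yield v
```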
Thanks,
Jason