Hi,
I'm a newcomer with some questions about basic concepts of MonetDB.
1. The executor
I found the executor of PostgreSQL at src\backend\executor,
but I cannot figure out where the equivalent lives in the MonetDB source.
Is it MonetDB\MonetDB\src\gdk or MonetDB5-server\MonetDB5\src\mal?
2. MAL
Is MAL (MonetDB Assembly Language) a scripting language only for MonetDB, or
a general scripting language that can be applied to other software?
Why is it a must in MonetDB?
3. .mx files
In MonetDB, .mx files are compiled to generate the .h and .c files (is that
how it works?).
Why this routine, instead of writing the .h and .c files directly?
Yel
----
Best Regards
richardroky(a)gmail.com
Dear all,
I downloaded the Windows installer of MonetDB, installed it, and then
started the MonetDB server. My computer has 4 CPUs and runs 32-bit Windows
XP, but the MonetDB server only starts one thread. When I run MonetDB on
Linux with 2 CPUs, it starts 2 threads. Why does it create only 1 thread on
Windows XP with 4 CPUs?
Hi all, I'm a new user of MonetDB. I have worked with MonetDB/SQL and can
use it well. Now I need to use MonetDB directly with MAL, but I'm not quite
able to do that. I run the client with this command on Linux:
mclient -lmal -d database
So I can write MAL instructions. But a MAL procedure that doesn't access my
data is not what I need.
What I need to do is this:
I need to take the BAT data of my database and put all the data into my
application, for example a C++ or Java array; after that I need to
manipulate this BAT in my application and update the original BAT.
How can I do that?
Summarizing, I need to:
- Access the BATs of the tables of my database.
- Print them on the monitor.
- Put them in an array, file, stack, or any data structure in any
programming language.
- Set my manipulated BAT back in MonetDB.
Is all of this possible?
Can someone help me?
I'm sorry if I don't know the netiquette of your mailing list, but I need to
do this and I don't know where else I could get help.
Thanks.
Gaetano.
On Thu, Jul 22, 2010 at 11:53:22AM +0200, Romulo Goncalves wrote:
> Changeset: 7a88e64c34b1 for MonetDB
> URL: http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=7a88e64c34b1
> Modified Files:
> sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.err
> sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.out
> Branch: Jun2010
> Log Message:
>
> It seems the err output for the test was not complete.
> In the approval of the test, two SQL statements were missing; therefore, the err output was created without the errors related to those two statements.
Thanks for taking care of this.
My reason for not approving the error messages in the first place was that I
was not sure about the intention of the test, and hence, whether and which
error messages are to be expected.
I now understand from your approval that the error messages indeed match the
intention of the test.
I wonder, though, whether the error message is indeed what it should be.
Saying "failed: Success" (see below) seems at least "odd" to me ...
Stefan
>
> diffs (32 lines):
>
> diff -r 6908fb1d95e4 -r 7a88e64c34b1 sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.err
> --- a/sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.err Thu Jul 22 11:02:07 2010 +0200
> +++ b/sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.err Thu Jul 22 11:53:14 2010 +0200
> @@ -71,9 +71,15 @@
> # 13:12:55 > mclient -lsql -ftest -i -e --host=rig --port=39884
> # 13:12:55 >
>
> -MAPI = monetdb@rig:39884
> +MAPI = monetdb@alviss:36310
> QUERY = connect to default;
> ERROR = !CONNECT TO: DEFAULT is not supported!
> +MAPI = monetdb@alviss:36310
> +QUERY = connect to 'whatever' port 50001 database 'nonexisting' USER 'monetdb' PASSWORD 'monetdb' LANGUAGE 'mal';
> +ERROR = !IOException:mapi.connect:Could not connect: gethostbyname failed: Success
^^^^^^ ^^^^^^^
!!!!!! ???????
> +MAPI = monetdb@alviss:36310
> +QUERY = disconnect 'whatever';
> +ERROR = !DISCONNECT CATALOG: no such db_alias 'whatever'
>
> # 13:12:55 >
> # 13:12:55 > Done.
> diff -r 6908fb1d95e4 -r 7a88e64c34b1 sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.out
> --- a/sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.out Thu Jul 22 11:02:07 2010 +0200
> +++ b/sql/src/test/BugTracker-2010/Tests/connectto.Bug-2548.stable.out Thu Jul 22 11:53:14 2010 +0200
> @@ -23,8 +23,6 @@
> # 13:12:55 > mclient -lsql -ftest -i -e --host=rig --port=39884
> # 13:12:55 >
>
> -! to be provided / approved !
> -
> # 13:12:55 >
> # 13:12:55 > Done.
> # 13:12:55 >
> _______________________________________________
> Checkin-list mailing list
> Checkin-list(a)monetdb.org
> http://mail.monetdb.org/mailman/listinfo/checkin-list
>
--
| Dr. Stefan Manegold | mailto:Stefan.Manegold@cwi.nl |
| CWI, P.O.Box 94079 | http://www.cwi.nl/~manegold/ |
| 1090 GB Amsterdam | Tel.: +31 (20) 592-4212 |
| The Netherlands | Fax : +31 (20) 592-4199 |
Dear everyone,
A few days ago I debugged MonetDB on Linux using Eclipse + gdb, but now I
want to compile and debug MonetDB on Windows. Please tell me how to do
that.
Best regards!
Dear all,
Recently I have been testing MonetDB with TPC-H on Windows, so I use
"COPY ... RECORDS INTO ... FROM ..." to load data into MonetDB. When the
file is about 700 MB it is OK, but when it is 1.4 GB the MonetDB server
stops on its own while loading. On Linux there is no problem. So I want to
ask why it goes wrong on Windows when the file is so large?
Best regards,
The MonetDB team at CWI/MonetDB BV is pleased to announce the
Jun2010-SP1 bug fix release of the MonetDB suite of programs.
Lots of problems have been fixed, the most important one being the fix
in the handling of database upgrades of databases created with the
Feb2010 release to the current version.
More information (including release notes) on this release is available
at <http://monetdb.cwi.nl/Development/Releases/Jun2010/>.
The download location has changed to
<http://dev.monetdb.org/downloads/>. Please fix any bookmarks you may have.
--
Sjoerd Mullender
Dear Lefteris,
Thank you for your reply. Your explanation will help me much.
In addition, would you give me some guidelines on how to make the
6 different indices of the triple table understood by the SQL/SPARQL
query optimizer and used accordingly? If I want to implement this
feature myself, in which files should the code be added or modified?
BTW, in the future, will you implement MonetDB/SPARQL as a new
supported front-end language? Will you consider translating SPARQL to MAL
directly, instead of going through SQL to MAL (though that way you would
have to develop a SPARQL-specific optimizer, instead of making use of the
SQL optimizer)?
Thank you so much.
Best regards,
Xin Wang
>From: Lefteris <lsidir(a)gmail.com>
>Reply-To:
>To: wangx <wangx(a)tju.edu.cn>
>Subject: Re: [Monetdb-developers] RDF data management in MonetDB/SQL
>Date: Fri, 16 Jul 2010 10:29:53 +0200
>
>Hi,
>
>your understanding is in general correct. The availability of the six
>different indices of the triple table should be understood by the
>SQL/SPARQL query optimizer and used accordingly. However, the
>MonetDB/RDF module has not been announced yet, and that is because it
>has not finished yet. You are using a piece of code which is
>experimental and unfinished. As such, the only way to test Monet and
>its capabilities on RDF data is to manually write the SQL query in
>such a way that you use the correct order of the triple table on the
>correct join.
>
>In the future of course, this will be done by the optimizer and the
>user will only have to write simple SPARQL queries referring to the
>name of the graph, instead of the underlying storage schema, but until
>then I am afraid you will have to do it by hand. The good news is that
>if you are using RDF data and testing queries that have been published
>previously in papers as benchmarks, most likely someone else will have
>already done the translation to a correct SQL query (since most
>experiments on newly built engines that do not support SPARQL use this
>method).
>
>Hope this helps you a bit,
>
>lefteris
>
>2010/7/15 wangx:
>> Hi MonetDB developers,
>> I have a question about RDF data management in MonetDB/SQL. The comment of
>> sql.rdfshred says "shredding an RDF data file from location results in 7 new
>> tables (6 permutations of SPO and a mapping) ... We can then query with SQL
>> queries the RDF triple store by quering tables gid_spo, gid_pso etc., ...".
>> In my opinion, if the spo table is considered the triples table, the other 5
>> tables (sop, pso, pos, osp, ops) (except the mapping table) can be viewed as
>> indexes of the triples table spo.
>> When I write SQL to query the shredded RDF data in the triples table, I
>> have two ways. The first way is to only use the spo table to make self-joins.
>> The second way is to use all 6 tables to make joins. I noticed that
>> "MonetDB/SQL Reference Manual" says that "The heart is the MonetDB server,
>> which comes with the following innovative features. ... Index selection,
>> creation and maintenance is automatic". If I use 6 tables (as indexes)
>> explicitly to make joins, it seems that I write the query plan by myself.
>> However, I think this work should be done by the SQL optimizer using
>> statistics from the system catalog. I wondered if these tables have
>> already been specified as indexes in the internal code, or if there is a way
>> to specify it so that the optimizer can use them as indexes to generate
>> query plans. I am not sure if my understanding is correct. I will appreciate
>> any help from developers. Thank you in advance.
>>
>> Best regards,
>> Xin Wang
>> _______________________________________________
>> Monetdb-developers mailing list
>> Monetdb-developers(a)lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/monetdb-developers
>>
>>
>
On Thu, Jul 15, 2010 at 03:25:39PM +0200, Fabian Groffen wrote:
> Changeset: dfcab86b0cb0 for MonetDB
> URL: http://dev.monetdb.org/hg/MonetDB?cmd=changeset;node=dfcab86b0cb0
> Modified Files:
> clients/ChangeLog
> clients/src/mapiclient/mclient.mx
> Branch: default
> Log Message:
>
> implement timing per server response for results and updates for SQL formatter
>
[...]
> diff -r df828b8fc84c -r dfcab86b0cb0 clients/src/mapiclient/mclient.mx
> --- a/clients/src/mapiclient/mclient.mx Thu Jul 15 15:09:22 2010 +0200
> +++ b/clients/src/mapiclient/mclient.mx Thu Jul 15 15:11:07 2010 +0200
> @@ -342,6 +342,26 @@
> }
> }
>
> +static char htimbuf[32];
> +static char *
> +timerHuman()
> +{
> + long t = t1 - t0;
> +
> + if (t / 1000 < 950) {
> + snprintf(htimbuf, 32, "%ld.%03ldms", t / 1000, t % 1000);
> + return(htimbuf);
> + }
> + t /= 1000;
> + if (t / 1000 < 60) {
> + snprintf(htimbuf, 32, "%ld.%02lds", t / 1000, t % 1000);
> + return(htimbuf);
> + }
> + t /= 1000;
> + snprintf(htimbuf, 32, "%ldm %lds", t / 60, t % 60);
> + return(htimbuf);
> +}
> +
> /* The Mapi library eats away the comment lines, which we need to
> * detect end of debugging. We overload the routine to our liking. */
>
[...]
While I like the new timing feature as well as the convenient formatting for
human readability, I am wondering about two potential changes / extensions:
(a) While the time formatting is convenient for human readability, it might
be less convenient for automatic (post-)processing, e.g., creating
performance figures. For that, plain unformatted microseconds (in
addition to the human-readable format) might be more convenient.
(b) The "old" (still existing) "Timer" information is sent to stderr;
thus, one can see the Timer output without the need to grep through
(potentially large) output, e.g., by simply redirecting the output to a
file or even /dev/null --- obviously, aligning stderr & stdout in case
of large multi-query scripts is then left to the user.
The new timing output goes (together with the query status summary,
like how many result tuples were produced) to stdout (only).
(c) As the announcement (changelog) says, the new timing output is only
produced with the "sql" formatter (i.e., not with, e.g., "raw" or
"test", and hence, also not by default with `-s <sql-statement>` (unless
one explicitly requests `-f sql`).
For (a) & (b) would it be an option to consider to also send the plain
unformatted times in microseconds to stderr to (better) support automatic
post-processing of the timings?
To cover also (c), we could consider doing that with all formatters, not
only with "sql".
... just some ideas to think about and discuss ...
Stefan
Hi MonetDB developers,
I have a question about RDF data management in MonetDB/SQL. The comment of sql.rdfshred says "shredding an RDF data file from location results in 7 new tables (6 permutations of SPO and a mapping) ... We can then query with SQL queries the RDF triple store by quering tables gid_spo, gid_pso etc., ...". In my opinion, if the spo table is considered the triples table, the other 5 tables (sop, pso, pos, osp, ops) (except the mapping table) can be viewed as indexes of the triples table spo.
When I write SQL to query the shredded RDF data in the triples table, I have two ways. The first way is to only use spo table to make self-joins. The second way is to use all 6 tables to make joins. I noticed that "MonetDB/SQL Reference Manual" says that "The heart is the MonetDB server, which comes with the following innovative features. ... Index selection, creation and maintenance is automatic". If I use 6 tables (as indexes) explicitly to make joins, it seems that I write the query plan by myself. However, I think this work should be done by the SQL optimizer using statistics from the system catalog. I wondered if these tables have already been specified as indexes in the internal code, or if there is a way to specify it so that the optimizer can use them as indexes to generate query plans. I am not sure if my understanding is correct. I will appreciate any help from developers. Thank you in advance.
Best regards,
Xin Wang