If there's any chance you might release SP1 soon, I'd appreciate it :)
This user is hitting the (resolved) bug
3443 <http://bugs.monetdb.org/show_bug.cgi?id=3443>.
Thanks!
---------- Forwarded message ----------
From: lightest <noreply-comment(a)blogger.com>
Date: Sun, Feb 23, 2014 at 7:53 PM
Subject: [asdfree by anthony damico] New comment on analyze the program for
international student asse....
To: ajdamico(a)gmail.com
lightest <http://www.blogger.com/profile/12197082513968468249> has left a
new comment on your post "analyze the program for international student
asse...<http://www.asdfree.com/2013/12/analyze-program-for-international.html>":
Hi Anthony,
I've tried to run the code, and when I run lines such as those with
MICombine() functions, sometimes there is an error like the following:
In R:
Error in .mapiRequest(conn, paste0("s", statement, ";"), async = async) :
error writing to socket (10054)
Error in .mapiRequest(conn, paste0("s", statement, ";"), async = async) :
error writing to socket (10054)
Error in .mapiRequest(conn, paste0("s", statement, ";"), async = async) :
error writing to socket (10054)
>
========================================
In the console:
# MonetDB 5 server v11.17.9 "Jan2014"
# Serving database 'pisa', using 4 threads
# Compiled for x86_64-pc-winnt/64bit with 64bit OIDs dynamically linked
# Found 7.913 GiB available main-memory.
# Copyright (c) 1993-July 2008 CWI.
# Copyright (c) August 2008-2014 MonetDB B.V., all rights reserved
# Visit http://www.monetdb.org/ for further information
# Listening for connection requests on mapi:monetdb://127.0.0.1:50007/
# MonetDB/JAQL module loaded
# MonetDB/SQL module loaded
>!FATAL: 40000!COMMIT: transation commit failed (perhaps your disk is
full?) exiting (kernel error: !ERROR: BATsubselect: invalid argument:
b must have a dense head.)
Press any key to continue . . .
=======================================
Posted by lightest to asdfree by anthony damico
<http://www.asdfree.com/> at February 23, 2014 at 7:53 PM
On 02/25/2014 05:56 PM, Mrunal Gawade wrote:
> Also try to tune your virtual memory settings by playing around with the following parameters.
> This changes the flushing behavior of memory pages.
>
> Do not use the parameters below directly as they are.
>
> Look at what your existing parameters are and tune them to delay flushing. The parameters below
> will delay flushing for an extremely long time, which might not be good in your case, as the file
> system might get out of sync. But as a test you can try these parameters to see if they solve the
> problem, and then pick correct values after experimenting with them.
>
> sysctl -w vm.swappiness=0
> sysctl -w vm.dirty_expire_centisecs=6000000
> sysctl -w vm.dirty_background_ratio=90
> sysctl -w vm.dirty_writeback_centisecs=6000000
> sysctl -w vm.dirty_ratio=90
The current value for these parameters are:
vm.swappiness = 60
vm.dirty_expire_centisecs = 3000
vm.dirty_background_ratio = 10
vm.dirty_writeback_centisecs = 500
vm.dirty_ratio = 20
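Before experimenting, it may help to record the current values. A small Python sketch (a hypothetical helper, not part of MonetDB) that reads these tunables from the proc filesystem:

```python
from pathlib import Path

def read_vm_settings(procdir="/proc/sys/vm",
                     keys=("swappiness", "dirty_expire_centisecs",
                           "dirty_background_ratio",
                           "dirty_writeback_centisecs", "dirty_ratio")):
    """Return the current integer value of each VM tunable found under procdir."""
    settings = {}
    for key in keys:
        path = Path(procdir) / key
        if path.is_file():
            settings[key] = int(path.read_text().strip())
    return settings

if __name__ == "__main__":
    # On a Linux host this prints the same values shown above.
    for name, value in read_vm_settings().items():
        print(f"vm.{name} = {value}")
```

Saving the output before any `sysctl -w` changes makes it easy to restore the original configuration afterwards.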
I will read more about the effects of these settings and try to tune the
system.
Also... about the hugepages, these are Debian 6.0.4 servers and the only
hugepages parameters I can find are these:
/proc/sys/vm/nr_hugepages
/proc/sys/vm/hugetlb_shm_group
/proc/sys/vm/hugepages_treat_as_movable
/proc/sys/vm/nr_overcommit_hugepages
and all of them are set to "0".
Thanks for your help.
--
Luis Neves
Hi all,
I've run into a behaviour that I would like to understand and possibly
disable.
Twice a day the mserver daemon starts thrashing the disk
furiously. I see that the on-disk size of the data directory goes up to 115
GB from 85 GB and then comes back to the same initial size; during
this process disk utilization goes through the roof. Can anyone tell me
what MonetDB is doing? There are no import/export processes running at
these times; it seems that MonetDB is doing some kind of data
compaction... is there any way to disable this?
$ mserver5 --version
MonetDB 5 server v11.15.7 "Feb2013-SP2" (64-bit, 64-bit oids)
Thanks!
--
Luis Neves
Hi Imad,
On Feb 20, 2014, at 10:16 , imad hajj chahine <imad.hajj.chahine(a)gmail.com> wrote:
> Hi Ying,
>
> Thank you for your update. I will try to set up an instance of MonetDB
> on a Linux machine and check the difference. Can you please specify the
> optimal machine configuration for deployment (since we are using a key-value
> store model, target number of rows per table > 1 billion): CPU, RAM, SSD?
This largely depends on the size of your hot data set, the type of queries, and the query load.
It's important to have sufficient RAM to hold the whole hot data set.
SSDs are always nice to have, but probably don't justify the cost if you can have sufficient RAM.
> Also, does running MonetDB in a cluster help to optimize query
> response, or is it just for fail-over purposes? Does MonetDB support
> physical table data partitioning? Any reference on how MonetDB
> operates in cluster mode, e.g. data partitioning, query plans, merging
> results, ...?
For this, you probably want to have a look at the possibilities to query "REMOTE DATABASES" and to create "MULTIPLEX-FUNNELS" here: http://www.monetdb.org/Documentation/monetdbd-man-page
With kind regards,
Jennie
>
> Thank you
>
>>>>
>>>> On 17/02/14 11:44, imad hajj chahine wrote:
>>>>
>>>> Hi,
>>>>
>>>> I have the following query that does a pivot of the data stored in different tables. The query time is acceptable as long as the number of tables to join is up to 10; from 10 to 15 the query takes more than 90 seconds to return, and above 15 it does not return.
>>>> Is there something I am missing, or do I have to slice my query into smaller joins and then join the results back together?
>>>>
>>>> PS: the max number of records in each table is < 200k
>>>>
>>>> select tval1.value as countall, tval2.value as ana1, tval3.value as ana2, tval4.value as count5, tval5.value as min5, tval6.value as max5, tval7.value as sum5, tval8.value as avg5, tval9.value as count6, tval10.value as min6, tval11.value as max6, tval12.value as sum6, tval13.value as avg6, tval14.value as count7, tval15.value as min7, tval16.value as max7, tval17.value as sum7, tval18.value as avg7
>>>> from "RPME".t_entity_cache tec join
>>>> "RPME".t_value_int_cache tval1 on tec.id_schema=tval1.id_schema and tec.id=tval1.id_entity and tval1.id_attribute=1 join
>>>> "RPME".t_value_date_cache tval2 on tec.id_schema=tval2.id_schema and tec.id=tval2.id_entity and tval2.id_attribute=2 join
>>>> "RPME".t_value_string_cache tval3 on tec.id_schema=tval3.id_schema and tec.id=tval3.id_entity and tval3.id_attribute=3 join
>>>> "RPME".t_value_int_cache tval4 on tec.id_schema=tval4.id_schema and tec.id=tval4.id_entity and tval4.id_attribute=4 join
>>>> "RPME".t_value_numeric_cache tval5 on tec.id_schema=tval5.id_schema and tec.id=tval5.id_entity and tval5.id_attribute=5 join
>>>> "RPME".t_value_numeric_cache tval6 on tec.id_schema=tval6.id_schema and tec.id=tval6.id_entity and tval6.id_attribute=6 join
>>>> "RPME".t_value_numeric_cache tval7 on tec.id_schema=tval7.id_schema and tec.id=tval7.id_entity and tval7.id_attribute=7 join
>>>> "RPME".t_value_numeric_cache tval8 on tec.id_schema=tval8.id_schema and tec.id=tval8.id_entity and tval8.id_attribute=8 join
>>>> "RPME".t_value_int_cache tval9 on tec.id_schema=tval9.id_schema and tec.id=tval9.id_entity and tval9.id_attribute=9 join
>>>> "RPME".t_value_numeric_cache tval10 on tec.id_schema=tval10.id_schema and tec.id=tval10.id_entity and tval10.id_attribute=10 join
>>>> "RPME".t_value_numeric_cache tval11 on tec.id_schema=tval11.id_schema and tec.id=tval11.id_entity and tval11.id_attribute=11 join
>>>> "RPME".t_value_numeric_cache tval12 on tec.id_schema=tval12.id_schema and tec.id=tval12.id_entity and tval12.id_attribute=12 join
>>>> "RPME".t_value_numeric_cache tval13 on tec.id_schema=tval13.id_schema and tec.id=tval13.id_entity and tval13.id_attribute=13 join
>>>> "RPME".t_value_int_cache tval14 on tec.id_schema=tval14.id_schema and tec.id=tval14.id_entity and tval14.id_attribute=14 join
>>>> "RPME".t_value_numeric_cache tval15 on tec.id_schema=tval15.id_schema and tec.id=tval15.id_entity and tval15.id_attribute=15 join
>>>> "RPME".t_value_numeric_cache tval16 on tec.id_schema=tval16.id_schema and tec.id=tval16.id_entity and tval16.id_attribute=16 join
>>>> "RPME".t_value_numeric_cache tval17 on tec.id_schema=tval17.id_schema and tec.id=tval17.id_entity and tval17.id_attribute=17 join
>>>> "RPME".t_value_numeric_cache tval18 on tec.id_schema=tval18.id_schema and tec.id=tval18.id_entity and tval18.id_attribute=18
>>>> where tec.id_schema=3
>>>>
>>>>
>>>> ___________________________________________________
>>>> users-list mailing list
>>>> users-list(a)monetdb.org
>>>> https://www.monetdb.org/mailman/listinfo/users-list
Hi,
I followed the description in "load bulk data" to load a 200 MB CSV file into
MonetDB.
It worked on my desktop (64-bit Windows 7 system, 32 GB memory), but not on
my laptop (64-bit Windows 8 system, 2 GB memory). I also tried a
Windows 7 computer with 8 GB of memory; it didn't work either. I followed the same
procedure on all computers.
There is no error other than "failed to import data". I'm wondering
what the cause of the loading error is.
BTW, the fields in this data include timestamp, char, text, char...
Thanks in advance.
Best regards,
Summer
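On memory-constrained machines, one thing to try is loading the file in smaller batches rather than in a single bulk load. A minimal Python sketch (hypothetical file names) that splits a CSV into chunks, repeating the header in each chunk so every piece can be loaded independently:

```python
import csv

def split_csv(src_path, rows_per_chunk, dst_template="chunk_{:03d}.csv"):
    """Split a large CSV into files of at most rows_per_chunk data rows,
    repeating the header in each chunk; return the chunk file names."""
    chunks = []
    with open(src_path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        buf, idx = [], 0
        for row in reader:
            buf.append(row)
            if len(buf) == rows_per_chunk:
                chunks.append(_write_chunk(dst_template.format(idx), header, buf))
                buf, idx = [], idx + 1
        if buf:  # final, possibly short, chunk
            chunks.append(_write_chunk(dst_template.format(idx), header, buf))
    return chunks

def _write_chunk(path, header, rows):
    with open(path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(header)
        writer.writerows(rows)
    return path
```

Each resulting chunk can then be loaded with a separate COPY INTO statement, which keeps the per-load working set small; whether this avoids the "failed to import data" error on the 2 GB machine would need to be tested.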
Hello, I'm doing a comparison of a couple of databases, including MonetDB. But
to maintain uniformity, I wanted to either turn off the indexing feature
in MonetDB or to know which indices are being maintained. Thanks in advance.
The MonetDB team at CWI/MonetDB BV is pleased to announce the
Jan2014 feature release of the MonetDB suite of programs.
More information about MonetDB can be found on our website at
<http://www.monetdb.org/>.
For details on this release, please see the release notes at
<http://www.monetdb.org/Downloads/ReleaseNotes>.
As usual, the download location is <http://dev.monetdb.org/downloads/>.
Jan 2014 feature release
Build Environment
* Created development packages for RPM based systems and
Debian/Ubuntu containing the files needed to create extensions to
the SQL front end.
* Removed Mx, the literate programming tool. All code for the server
is now pure C.
* Created packages for RPM based systems and Debian/Ubuntu containing
the MonetDB interface to the GNU Scientific Library (gsl).
* We no longer install the .la files in our Fedora/Debian/Ubuntu
packages.
Client Package
* ODBC: Implemented {fn scalar()} and {interval ...} escapes.
python2
* Changed defaults for connecting (defaults to unix socket now).
* Unix sockets partially working for control protocol.
* Add support for unix socket.
python3
* Changed defaults for connecting (defaults to unix socket now).
* Unix sockets partially working for control protocol.
* Add support for unix socket.
R
* The R connector is now distributed in the source code packages.
MonetDB Common
* The join code has been completely rewritten. It now optionally uses
candidate lists, like the select code that was introduced in the
previous release.
* A new indexing structure for range selections on unsorted data has
been added.
* The vmtrim thread is no longer started by default on 64 bit
architectures. The vmtrim thread monitors memory usage and drops
BATs from memory when memory gets tight. However, in the age of
large address spaces and virtual memory, the kernel does a good
enough job. And in addition to dropping BATs, the thread also
destroyed indexing structures which would have to be recreated the
next time they were needed.
* Cleaned up some of the parameters dealing with memory usage.
* If available on the system, we now use atomic instructions for
latching.
* Removed some unused fields in the atomDesc structure. This change
requires a complete recompilation of the whole suite.
* Replaced the mutex implementation for both GNU C and Visual Studio
with a home-grown implementation that uses atomic instructions
(__sync_*() in gcc, _Interlocked*() in VS).
SQL
* Added support for quantiles (a generalization of the median). Usage:
SELECT quantile(column_name, 0.25) FROM table_name; The quantile
argument must be in the range 0..1.
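To illustrate the 0..1 contract, here is a Python sketch of a simple element-picking quantile. Note this only mirrors the parameter range described above; the release notes do not specify MonetDB's interpolation behavior, so the exact value returned by the SQL function may differ.

```python
def quantile(values, q):
    """Return an element at quantile q (0 <= q <= 1) of values,
    picking an actual element of the sorted data (no interpolation)."""
    if not 0 <= q <= 1:
        raise ValueError("the quantile argument must be in the range 0..1")
    ordered = sorted(values)
    # Clamp the index so that q = 1.0 maps to the last element.
    idx = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[idx]
```

With this definition, quantile(data, 0.5) picks a median-like element, and 0.0 and 1.0 pick the minimum and maximum respectively.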
Bug Fixes
* 3040: Wrong NULL behavior in EXCEPT and INTERSECT
* 3092: ODBC client doesn't support scalar function escape
* 3198: SIGSEGV insert_string_bat (b=0x7fffe419d0a0,
n=0x7fffc4006010, append=0) at gdk_batop.c:196
* 3210: Unexpected concurrency conflict when inserting to 2 tables
simultaneously and querying one of them
* 3273: Add support to Python DBAPI package for timetz, inet and url
types
* 3285: no such table 'queryHistory'
* 3298: GDKmmap messages and monetdb start db takes very long
* 3354: Introduce query time-out
* 3371: (i)like generates batloop instead of algebra.likesubselect
* 3372: Large group by queries never complete - server at 100%
cpu(all cores) until MonetDB stopped
* 3383: Bad performance with DISTINCT GROUP BY
* 3391: Bad performance with GROUP BY and FK with out aggregate
function
* 3393: "COPY .. INTO ..." - escape of string quotes
* 3399: server crashed on simple (malformed) query
* 3401: inconsistent/strange handling of invalid dates (e.g.
2013-02-29) in where clause
* 3403: NOT NULL constraint can't be applied after deleting rows with
null values
* 3404: Assertion `h->storage == STORE_MMAP' failed.
* 3408: nested concat query crashed server
* 3411: (disguised) BETWEEN clause not recognised. Hence no
rangejoin.
* 3412: Boolean expressions in WHERE clause, result in incorrect
results
* 3417: Nested Common Table Expressions Crash
* 3418: Segmentation fault on a query from table expression
* 3419: Database does not start after upgrade
* 3420: Database does not start after upgrade
* 3423: Group by alias with distinct count doesn't work
* 3425: Temporal extraction glitches
* 3427: Consistent use of current_timestamp and now()
* 3428: Aggregation over two columns is broken
* 3429: SAMPLE on JOIN result crashes server
* 3430: Wrong temporary handling
* 3431: SQLGetInfo returns incorrect value for SQL_FN_NUM_TRUNCATE
* 3432: MonetDB SQL syntax incompatible with SQL-92 <delimited
identifier> syntax
* 3435: INDEX prevents JOIN from discovering matches
* 3436: COPY INTO from file containing leading Byte Order Mark (BOM)
causes corruption
Hi,
I have the following query that does a pivot of the data stored in different
tables. The query time is acceptable as long as the number of tables to join
is up to 10; from 10 to 15 the query takes more than 90 seconds to return,
and above 15 it does not return.
Is there something I am missing, or do I have to slice my query into smaller
joins and then join the results back together?
PS: the max number of records in each table is < 200k
select tval1.value as countall,tval2.value as ana1,tval3.value as
ana2,tval4.value as count5,tval5.value as min5,tval6.value as
max5,tval7.value as sum5,tval8.value as avg5,tval9.value as
count6,tval10.value as min6,tval11.value as max6,tval12.value as
sum6,tval13.value as avg6,tval14.value as count7,tval15.value as
min7,tval16.value as max7,tval17.value as sum7,tval18.value as avg7
from "RPME".t_entity_cache tec join
"RPME".t_value_int_cache tval1 on tec.id_schema=tval1.id_schema and
tec.id=tval1.id_entity
and tval1.id_attribute=1 join
"RPME".t_value_date_cache tval2 on tec.id_schema=tval2.id_schema and
tec.id=tval2.id_entity
and tval2.id_attribute=2 join
"RPME".t_value_string_cache tval3 on tec.id_schema=tval3.id_schema and
tec.id=tval3.id_entity and tval3.id_attribute=3 join
"RPME".t_value_int_cache tval4 on tec.id_schema=tval4.id_schema and
tec.id=tval4.id_entity
and tval4.id_attribute=4 join
"RPME".t_value_numeric_cache tval5 on tec.id_schema=tval5.id_schema and
tec.id=tval5.id_entity and tval5.id_attribute=5 join
"RPME".t_value_numeric_cache tval6 on tec.id_schema=tval6.id_schema and
tec.id=tval6.id_entity and tval6.id_attribute=6 join
"RPME".t_value_numeric_cache tval7 on tec.id_schema=tval7.id_schema and
tec.id=tval7.id_entity and tval7.id_attribute=7 join
"RPME".t_value_numeric_cache tval8 on tec.id_schema=tval8.id_schema and
tec.id=tval8.id_entity and tval8.id_attribute=8 join
"RPME".t_value_int_cache tval9 on tec.id_schema=tval9.id_schema and
tec.id=tval9.id_entity
and tval9.id_attribute=9 join
"RPME".t_value_numeric_cache tval10 on tec.id_schema=tval10.id_schema and
tec.id=tval10.id_entity and tval10.id_attribute=10 join
"RPME".t_value_numeric_cache tval11 on tec.id_schema=tval11.id_schema and
tec.id=tval11.id_entity and tval11.id_attribute=11 join
"RPME".t_value_numeric_cache tval12 on tec.id_schema=tval12.id_schema and
tec.id=tval12.id_entity and tval12.id_attribute=12 join
"RPME".t_value_numeric_cache tval13 on tec.id_schema=tval13.id_schema and
tec.id=tval13.id_entity and tval13.id_attribute=13 join
"RPME".t_value_int_cache tval14 on tec.id_schema=tval14.id_schema and
tec.id=tval14.id_entity
and tval14.id_attribute=14 join
"RPME".t_value_numeric_cache tval15 on tec.id_schema=tval15.id_schema and
tec.id=tval15.id_entity and tval15.id_attribute=15 join
"RPME".t_value_numeric_cache tval16 on tec.id_schema=tval16.id_schema and
tec.id=tval16.id_entity and tval16.id_attribute=16 join
"RPME".t_value_numeric_cache tval17 on tec.id_schema=tval17.id_schema and
tec.id=tval17.id_entity and tval17.id_attribute=17 join
"RPME".t_value_numeric_cache tval18 on tec.id_schema=tval18.id_schema and
tec.id=tval18.id_entity and tval18.id_attribute=18
where tec.id_schema=3
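For context, this query pivots an entity-attribute-value (EAV) layout into one wide row per entity, so every additional attribute costs one extra join. A minimal Python sketch of the same reshaping over in-memory (entity, attribute, value) triples (hypothetical data, for illustration only), mirroring the inner-join semantics of the SQL above:

```python
def pivot_eav(rows, attributes):
    """Pivot (entity_id, attribute_id, value) triples into one dict per entity.
    Only entities that have *all* requested attributes are kept, which mirrors
    the inner joins above: a missing attribute drops the whole entity row."""
    by_entity = {}
    for entity_id, attribute_id, value in rows:
        by_entity.setdefault(entity_id, {})[attribute_id] = value
    return {
        entity: {a: attrs[a] for a in attributes}
        for entity, attrs in by_entity.items()
        if all(a in attrs for a in attributes)
    }
```

Seen this way, slicing the query into smaller join groups and combining the partial results afterwards (as the poster suggests) is a reasonable workaround, since each slice pivots a subset of the attributes.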
Hi,
How do I get the last inserted id with JDBC? I see it's supported in the MAPI
library, but I am unable to get it with the JDBC interface
using st.getGeneratedKeys().
Thanks