Hi Stefan,
Thanks for your reply!
We have run the query a few times with different data sizes. We used 16G
of RAM (actually 13.5G was used) and found that 10G of data is the
critical point at which the query can still run. The sizes of all the
data files are listed below; each file name is a table name (only a few
tables are referred to -- store_sales, date_dim, item, customer,
catalog_sales, web_sales):
7.4K call_center.dat
1.6M catalog_page.dat
212M catalog_returns.dat
2.9G catalog_sales.dat
27M customer_address.dat
64M customer.dat
77M customer_demographics.dat
9.9M date_dim.dat
77B dbgen_version.dat
149K household_demographics.dat
328B income_band.dat
2.6G inventory.dat
28M item.dat
61K promotion.dat
1.7K reason.dat
1.1K ship_mode.dat
27K store.dat
323M store_returns.dat
3.8G store_sales.dat
4.9M time_dim.dat
1.2K warehouse.dat
19K web_page.dat
98M web_returns.dat
1.5G web_sales.dat
12K web_site.dat
So we guess that MonetDB has no memory management?
The output of `mserver5 --version` is:
MonetDB 5 server v11.27.13 "Jul2017-SP4" (64-bit, 128-bit integers)
Copyright (c) 1993 - July 2008 CWI
Copyright (c) August 2008 - 2018 MonetDB B.V., all rights reserved
Visit https://www.monetdb.org/ for further information
Found 17.0GiB available memory, 40 available cpu cores
Libraries:
libpcre: 8.38 2015-11-23 (compiled with 8.38)
openssl: OpenSSL 1.0.2g 1 Mar 2016 (compiled with OpenSSL 1.0.2g 1 Mar 2016)
libxml2: 2.9.3 (compiled with 2.9.3)
Compiled by: monetdb@MonetDB-0.0 (x86_64-pc-linux-gnu)
Compilation: gcc -g -O2
Linking : /usr/bin/ld -m elf_x86_64
And the size of processes is not limited.
To let you reproduce the problem conveniently, I'll provide more details
here:
You can get TPC-DS from its website (we use version 2.6.0). Install
TPC-DS, go to the directory v2.6.0/tools, and run `./dsdgen -scale 10
-dir /home/monetdb/tpc-ds_test_data10G` to generate the data. Once the
data has been generated, use the script /expe.sh/ to create the tables
and load the data. The query script is 123.tpcds.23.sql. (The syntax of
some of the other queries that TPC-DS generates is not suitable for
MonetDB; we had not modified them all when the problem occurred.)
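Roughly, the loading step in expe.sh comes down to issuing one COPY INTO
per .dat file. As a sketch (the directory, the table list, and reliance on
dsdgen's default '|' field delimiter are assumptions here, not the literal
contents of our script):

```python
import os

def copy_into_stmt(data_dir, table):
    """Build a MonetDB COPY INTO statement for one dsdgen output file.
    dsdgen writes '|'-separated .dat files, one per table."""
    path = os.path.join(data_dir, table + ".dat")
    return ("COPY INTO %s FROM '%s' USING DELIMITERS '|', '\\n';"
            % (table, path))

# Statements for the tables the query refers to:
tables = ["store_sales", "date_dim", "item", "customer",
          "catalog_sales", "web_sales"]
for t in tables:
    print(copy_into_stmt("/home/monetdb/tpc-ds_test_data10G", t))
```

Each generated statement is then executed with mclient against the freshly
created tables.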
One more question: I don't receive your reply emails, so I don't know how
to reply to you; for this reason, I can only send a new mail each time.
Thanks!
Regards,
Rancho
I'm trying to debug an issue I have with data disappearing after a database
restart.
Because part of the process for creating the table involves a mixed Mapi /
JDBC approach as described in monetdb-java/example/SQLcopyinto.java, I
started from this example.
When I run it though, I get a final count of 0 (should be 100).
Can anyone confirm?
Mar2018, JDBC 2.28
Roberto
The MonetDB team at CWI/MonetDB BV is pleased to announce the
Aug2018 feature release of the MonetDB suite of programs.
More information about MonetDB can be found on our website at
<https://www.monetdb.org/>.
For details on this release, please see the release notes at
<https://www.monetdb.org/Downloads/ReleaseNotes>.
As usual, the download location is <https://dev.monetdb.org/downloads/>.
Aug 2018 feature release (11.31.7)
MonetDB5 Server
* The lsst module was moved to a separate repository
(https://dev.monetdb.org/hg/MonetDB-lsst/).
Build Environment
* Build the MonetDB-cfitsio RPM and libmonetdb5-server-cfitsio
Debian/Ubuntu package.
* On Windows, the separate MonetDB5-Geom installer has been
incorporated into the main MonetDB5-SQL installer and is therefore
no longer available as a separate download.
Merovingian
* Added a "logrotate" configuration file. See
/etc/logrotate.d/monetdbd.
* Changed the monetdb profilerstart command to be more robust. If the
server or stethoscope crashed before, the pid file is still there,
so the next time we try to start stethoscope, it will fail. Now the
profilerstart command will check if a stethoscope process with the
recorded pid is running. If not, we start stethoscope, assuming
that something went wrong before.
* Changed the monetdb stop command to try to stop stethoscope before
stopping the server. The error conditions that can arise from
attempting to stop stethoscope are:
+ The database is not running.
+ The profilerlogpath is not set.
+ The profiler.pid file does not exist or is inaccessible.
+ The contents of the profiler.pid are not valid.
+ Shutdown of stethoscope did not succeed.
+ Removing the profiler.pid file failed.
In all these cases, the attempt to stop the server can continue
normally, so we actually ignore any errors that arise from the
attempt to stop stethoscope.
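The liveness check that profilerstart now performs can be sketched
generically (a plain POSIX illustration, not the actual merovingian code):

```python
import errno
import os

def pid_is_running(pid):
    """Return True if a process with this pid exists (POSIX).
    Signal 0 does error checking only; no signal is delivered."""
    try:
        os.kill(pid, 0)
    except OSError as e:
        if e.errno == errno.ESRCH:    # no such process: stale pid file
            return False
        if e.errno == errno.EPERM:    # exists, but owned by another user
            return True
        raise
    return True

# With a stale profiler.pid this returns False, and stethoscope
# can safely be started again.
print(pid_is_running(os.getpid()))  # True for our own pid
```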
Client Package
* ODBC: Implemented SQL_ATTR_QUERY_TIMEOUT parameter in
SQLSetStmtAttr.
* ODBC SQLGetInfo now returns a positive numeric value for InfoTypes:
SQL_MAX_COLUMN_NAME_LEN, SQL_MAX_DRIVER_CONNECTIONS,
SQL_MAX_IDENTIFIER_LEN, SQL_MAX_PROCEDURE_NAME_LEN,
SQL_MAX_SCHEMA_NAME_LEN, SQL_MAX_TABLE_NAME_LEN and
SQL_MAX_USER_NAME_LEN.
* Added a '-f rowcount' option in mclient to suppress printing the
actual data of a result set and only print the number of returned
tuples.
Stream Library
* Added support for lz4 compressed files in the stream library.
MonetDB Common
* Hash indexes are now persistent across server restarts.
* The macros bunfastapp and tfastins and variants no longer set the
dirty flag of the heap they write to. This now needs to be done
separately (and preferably outside of the inner loop).
* Removed batDirty flag from BAT record. Its function is completely
superseded by batDirtydesc and the dirty flags on the various
heaps.
* Removed "masksize" argument of function BAThash.
* A whole bunch of functions that took an int argument that was used
as a Boolean (true/false) value now take a value of type bool. The
functions BATkeyed, BATordered and BATordered_rev now return a bool
instead of an int.
* Removed the tdense property: its function is completely replaced
by whether or not tseqbase is equal to oid_nil.
Testing Environment
* Removed helper programs Mtimeout and MkillUsers: they have long
been superseded by timeout handling by Mtest.py itself.
SQL Frontend
* Removed deprecated table producing system functions:
sys.dependencies_columns_on_functions()
sys.dependencies_columns_on_indexes()
sys.dependencies_columns_on_keys()
sys.dependencies_columns_on_triggers()
sys.dependencies_columns_on_views()
sys.dependencies_functions_on_functions()
sys.dependencies_functions_on_triggers()
sys.dependencies_keys_on_foreignkeys()
sys.dependencies_owners_on_schemas()
sys.dependencies_schemas_on_users()
sys.dependencies_tables_on_foreignkeys()
sys.dependencies_tables_on_functions()
sys.dependencies_tables_on_indexes()
sys.dependencies_tables_on_triggers()
sys.dependencies_tables_on_views()
sys.dependencies_views_on_functions()
sys.dependencies_views_on_triggers()
They are replaced by new system dependency_* views:
sys.dependency_args_on_types
sys.dependency_columns_on_functions
sys.dependency_columns_on_indexes
sys.dependency_columns_on_keys
sys.dependency_columns_on_procedures
sys.dependency_columns_on_triggers
sys.dependency_columns_on_types
sys.dependency_columns_on_views
sys.dependency_functions_on_functions
sys.dependency_functions_on_procedures
sys.dependency_functions_on_triggers
sys.dependency_functions_on_types
sys.dependency_functions_on_views
sys.dependency_keys_on_foreignkeys
sys.dependency_owners_on_schemas
sys.dependency_schemas_on_users
sys.dependency_tables_on_foreignkeys
sys.dependency_tables_on_functions
sys.dependency_tables_on_indexes
sys.dependency_tables_on_procedures
sys.dependency_tables_on_triggers
sys.dependency_tables_on_views
sys.dependency_views_on_functions
sys.dependency_views_on_procedures
sys.dependency_views_on_views
* Implemented group_concat(X,Y) aggregate function which also
concatenates a column of strings X, but using a supplied string Y
as the separator. This function is also a SQL extension.
* Implemented group_concat(X) aggregate function which concatenates a
column of strings using a comma as a separator. This function is
not featured in the SQL standard.
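The semantics of the two new aggregates can be illustrated with a plain
Python analogy (this mimics the SQL behaviour; it is not the server
implementation):

```python
def group_concat(values, sep=","):
    """Mimic SQL group_concat: concatenate the non-NULL strings in a
    column, using sep as separator (',' in the one-argument form)."""
    return sep.join(v for v in values if v is not None)

print(group_concat(["red", "green", "blue"]))       # red,green,blue
print(group_concat(["red", None, "blue"], " | "))   # red | blue
```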
Bug Fixes
* 4020: Importing timestamp with zone from copy into
* 6506: Improper performance counters
* 6556: Sqlitelogictest division by zero on COALESCE call
* 6564: Changes to the Remote Table definition
* 6575: Sqlitelogictest crash on groupby query with coalesce call
* 6579: Sqlitelogictest infinite loop while compiling SQL query
* 6586: Sqlitelogictest crash on complex aggregation query
* 6593: Poor performance with like operator and escape clause
* 6596: Multicolumn aggregation very slow after ANALYZE when
persistent hashes are enabled
* 6605: Sqlitelogictest set queries with wrong results
* 6606: Misleading parameter name in generate_series function
* 6610: Sqlitelogictest algebra.rangejoin undefined
* 6611: Cannot compile with GCC 8.1 and --enable-debug=no
* 6612: Implement BLOB handling in python UDFs
* 6618: dependency column on sequence violated by DROP SEQUENCE
* 6621: SELECT FROM REMOTE TABLE WHERE <> returns wrong results
* 6624: "Cannot use non GROUP BY column in query results without an
aggregate function" when using aggregate function in both HAVING
and ORDER BY clauses.
* 6625: OR in subselect causes the server to crash with segmentation
fault
* 6627: stddev_pop inconsistent behaviour
* 6628: User cannot insert into own local temporary table
* 6629: CREATE TABLE IF NOT EXISTS returns 42000!
* 6630: Sqlitelogictest cast NULL to integer failing
* 6632: Dataflow causes crash when THRnew fails
* 6633: ILIKE clauses don't work on certain characters
* 6635: monetdbd exits due to "Too many open files" error
* 6637: Within a transaction, \d after an error causes mclient to
exit
* 6638: (sequences of) mkey.bulk_rotate_xor_hash() can generate NIL
from non-NIL making multi-col joins return wrong results
* 6639: COMMENT ON TABLE abc IS NULL invalidly sets the remark column
to null where remark column is defined as NOT NULLable
Hello,
Application teams are experiencing behavior where a few statements are skipped when executed via scripts. They are running multiple scripts concurrently on different tables.
Please let us know if there are any known issues or a fix for the reported behavior.
Database: MonetDB v11.27.13 (Jul2017-SP4)
* DROP, CREATE, INSERT is the new SQL sequence; however, it looks like the SQL statements are not executing in order in the database, according to the log time entries in the system catalog tables. We need to check on this.
* First the DROP and INSERT started, and then, after a couple of statements, the CREATE SQL started executing.
DROP TABLE IF EXISTS "KEYFOOD_IT_S_22_43321_C";
CREATE TABLE "KEYFOOD_IT_S_22_43321_C"(ATTR_NAME VARCHAR(500) NOT NULL, COL_NAME VARCHAR(30) NOT NULL, AVP_KEY INTEGER NOT NULL, ATTR_VALUE VARCHAR(3000), ATTR_LONG_VALUE VARCHAR(3000), ATTR_SHORT_VALUE VARCHAR(100), ATTR_MEDIUM_VALUE VARCHAR(100), PARENT_AVP_KEY INTEGER, SORT_ORDER INTEGER);
INSERT INTO "KEYFOOD_IT_S_22_43321_C" (ATTR_NAME, COL_NAME, AVP_KEY, ATTR_VALUE, ATTR_LONG_VALUE, ATTR_SHORT_VALUE, ATTR_MEDIUM_VALUE, PARENT_AVP_KEY, SORT_ORDER) SELECT ATTR_NAME, COL_NAME, AVP_KEY, ATTR_VALUE, ATTR_LONG_VALUE, ATTR_SHORT_VALUE, ATTR_MEDIUM_VALUE, PARENT_AVP_KEY, SORT_ORDER FROM "IT_DSJ_KEYFOOD_43321" WHERE COL_NAME='S_22_KEY' AND SORT_TYPE='CATG';
We were able to see the skipped statements in sys.querylog_catalog and sys.querylog_calls.
Thank You,
Gautham
Hi All
I was looking to use MonetDB with the Django framework but couldn't find the MonetSQLdb package, and all references point to 10-year-old git code. Is it possible to use current versions of MonetDB with Django, and where can I find the required packages with configuration instructions?
Thanks
Alex
Hi,
We have been having a problem for a while that is difficult to isolate.
Our ETL includes the following steps:
1) Open JDBC connection, add data to a table, commit, close connection
2) Open JDBC connection, some ALTER TABLE statements, commit, close
connection
Occasionally, we get the following during step 2) :
java.sql.SQLException: ALTER TABLE: set READ or INSERT ONLY not possible
with outstanding updates (wait until updates are flushed)
at
nl.cwi.monetdb.jdbc.MonetConnection$ResponseList.executeQuery(MonetConnection.java:2732)
What puzzles me is that step 1) is completed when this happens: the commit
is done and the connection is closed (we triple-checked that it actually is
closed).
Notice that inserting a sleep between the two steps makes it work correctly.
As I said, this is hard to isolate and reproduce. Still, can anyone guess
what is exactly happening?
Can it be that a background process is still flushing updates from step 1) ?
Even if the JDBC connection is closed ?
Can we force a blocking flush, so that it doesn't return until it's safe?
If not, how can we know when updates are flushed?
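In the meantime, the workaround we are considering is to retry the ALTER
TABLE with a short backoff until the flush has completed. A generic sketch
(the callable, error matching, and timings are placeholders of ours, not a
MonetDB API):

```python
import time

def retry_until_flushed(run_alter, attempts=10, delay=0.5):
    """Retry an operation that fails while updates are still being
    flushed; re-raise any other error, or the last flush error."""
    for i in range(attempts):
        try:
            return run_alter()
        except Exception as e:
            if "outstanding updates" not in str(e) or i == attempts - 1:
                raise
            time.sleep(delay)

# Stand-in that fails twice with the error from the stack trace,
# then succeeds, to show the retry behaviour:
calls = {"n": 0}
def fake_alter():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ALTER TABLE: set READ or INSERT ONLY not "
                           "possible with outstanding updates")
    return "ok"

print(retry_until_flushed(fake_alter, delay=0.01))  # ok
```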
Thanks, Roberto