Hi all,
Not really a bug report, because I did not manage to figure out the cause.
However, after upgrading from FC31 to FC32 I could not log in any more, due
to SELinux problems. Auto-relabeling did not help, nothing really...
... until I did dnf remove MonetDB-selinux.
I came to this point because trying to give systemd services the correct
labels with restorecon failed with an error referencing a MonetDB-specific
file.
I do not have the details, unfortunately, but if you run into problems,
beware that the MonetDB SELinux package and systemd may interfere in some
way beyond my knowledge of these services.
Best regards,
Arjen
PS: Some output from logs:
sudo ausearch -c monetdb -m AVC,SELINUX_ERR
[..]
----
time->Sat May 2 20:57:01 2020
type=AVC msg=audit(1588445821.693:203): avc: denied { open } for
pid=1232 comm="monetdbd" path="/etc/resolv.conf" dev="dm-0" ino=3409775
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:default_t:s0 tclass=file permissive=1
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.043:1194): avc: denied { execute } for
pid=2861 comm="(monetdbd)" name="monetdbd" dev="dm-0" ino=2147256
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
trawcon="unconfined_u:object_r:monetdbd_exec_t:s0"
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.043:1195): avc: denied { execute_no_trans }
for pid=2861 comm="(monetdbd)" path="/usr/bin/monetdbd" dev="dm-0"
ino=2147256 scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
trawcon="unconfined_u:object_r:monetdbd_exec_t:s0"
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.044:1196): avc: denied { map } for
pid=2861 comm="monetdbd" path="/usr/bin/monetdbd" dev="dm-0" ino=2147256
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
trawcon="unconfined_u:object_r:monetdbd_exec_t:s0"
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.714:1197): avc: denied { remove_name } for
pid=1232 comm="monetdbd" name="merovingian.pid" dev="tmpfs" ino=34369
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=dir permissive=1
trawcon="system_u:object_r:monetdbd_var_run_t:s0"
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.714:1198): avc: denied { unlink } for
pid=1232 comm="monetdbd" name="merovingian.pid" dev="tmpfs" ino=34369
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
----
time->Sat May 2 21:12:56 2020
type=AVC msg=audit(1588446776.714:1199): avc: denied { write } for
pid=1232 comm="monetdbd" name=".merovingian_lock" dev="dm-0" ino=5899443
scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:unlabeled_t:s0 tclass=file permissive=1
trawcon="system_u:object_r:monetdbd_lock_t:s0"
----
time->Sat May 2 21:13:15 2020
type=AVC msg=audit(1588446795.214:1209): avc: denied { read } for
pid=2925 comm="(monetdbd)" name="passwd" dev="dm-0" ino=524514
scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:var_t:s0
tclass=file permissive=1
----
time->Sat May 2 21:13:15 2020
type=AVC msg=audit(1588446795.214:1210): avc: denied { open } for
pid=2925 comm="(monetdbd)" path="/var/lib/sss/mc/passwd" dev="dm-0"
ino=524514 scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:var_t:s0 tclass=file permissive=1
----
time->Sat May 2 21:13:15 2020
type=AVC msg=audit(1588446795.214:1211): avc: denied { map } for
pid=2925 comm="(monetdbd)" path="/var/lib/sss/mc/passwd" dev="dm-0"
ino=524514 scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:var_t:s0 tclass=file permissive=1
----
time->Sat May 2 21:14:24 2020
type=AVC msg=audit(1588446864.487:1281): avc: denied { read } for
pid=3072 comm="(monetdbd)" name="passwd" dev="dm-0" ino=524514
scontext=system_u:system_r:init_t:s0 tcontext=system_u:object_r:var_t:s0
tclass=file permissive=1
----
time->Sat May 2 21:14:24 2020
type=AVC msg=audit(1588446864.487:1282): avc: denied { open } for
pid=3072 comm="(monetdbd)" path="/var/lib/sss/mc/passwd" dev="dm-0"
ino=524514 scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:var_t:s0 tclass=file permissive=1
----
time->Sat May 2 21:14:24 2020
type=AVC msg=audit(1588446864.487:1283): avc: denied { map } for
pid=3072 comm="(monetdbd)" path="/var/lib/sss/mc/passwd" dev="dm-0"
ino=524514 scontext=system_u:system_r:init_t:s0
tcontext=system_u:object_r:var_t:s0 tclass=file permissive=1
--
====================================================================
ICIS, office M1.00.05 Radboud University
Mercator 1 Faculty of Science
Toernooiveld 212 arjen(a)cs.ru.nl
NL-6525 EC Nijmegen, The Netherlands +31-(0)24-365 2354
===================== http://www.informagus.nl/ ====================
Hi!
I have deployed several MonetDB databases for different customers. The
database is excellent and I haven't had many problems, but I'm starting to
feel that it is becoming "fragile".
Yesterday I had two crashes. First, a database of 2.4 TB of data crashed,
and the only information I received was:
ERR cli_mx[28693]: #main thread:!ERROR: BBPcheckbats: file
/opt/monetdb/cli_mx/cli_mx/bat/52/5236.tail too small (expected 137803304,
actual 137166848)
2020-05-19 11:10:00 MSG cli_mx[28693]: !ERROR: BBPcheckbats: file
/opt/monetdb/cli_mx/cli_mx/bat/52/5236.tail too small (expected 137803304,
actual 137166848)
2020-05-19 11:08:11 ERR control[28629]: (local): failed to fork mserver:
database 'cli_mx' has crashed after starting, manual intervention needed,
check monetdbd's logfile (merovingian.log) for details
Obviously I got that info from merovingian.log... so which manual
intervention is needed? I don't know!
We run a backup every night (we rsync the database farm to a backup
server). I don't know the reason, but even though the database was working
fine at the moment of the backup, and the backup completed, when I received
the "crashed" notice and tried to recover from the backup, it seems the
copy was already failing. So I had to start from zero, uploading all the
information again.
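One thing worth checking, hedged because I can only guess from the log: an rsync of the dbfarm taken while mserver5 is still writing can capture a .tail file mid-write, which is exactly the shape of the BBPcheckbats error above (a file shorter than the size recorded elsewhere). Stopping the database before the rsync avoids that. A toy illustration of the effect with plain files, no MonetDB involved (file names are made up to mirror the error message):

```shell
# Simulate copying a file while a writer is still appending to it.
tmp=$(mktemp -d)
head -c 1000 /dev/zero > "$tmp/5236.tail"     # state of the file mid-write
cp "$tmp/5236.tail" "$tmp/5236.tail.bak"      # "backup" taken at this moment
head -c 500 /dev/zero >> "$tmp/5236.tail"     # the writer finishes later
wc -c < "$tmp/5236.tail"      # 1500 bytes: what the restarted server expects
wc -c < "$tmp/5236.tail.bak"  # 1000 bytes: "too small", like expected vs actual
rm -r "$tmp"
```

Restoring the short copy then reproduces a "file too small (expected X, actual Y)" situation on the next start.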
It was a very tough job. Once I completed it (at 1 AM), I stopped the db
to do a backup... and again the database crashed and never started again!
In this case the error was the following:
2020-05-19 23:57:52 MSG merovingian[2379]: sending process 2389 (database
'cli_mx') the TERM signal
2020-05-19 23:57:53 MSG merovingian[2379]: database 'cli_mx' has shut down
2020-05-19 23:57:53 MSG control[2379]: (local): stopped database 'cli_mx'
2020-05-19 23:57:54 MSG merovingian[2379]: database 'cli_mx' (2389) has
exited with exit status 0
So, for the third time, I needed to upload the information again; I have
just finished! Now, obviously, I am scared to stop and start the db, and
up to this moment I don't have a backup of the farm directory.
Can someone please help me find out why the database crashed?
The only thing that I found in /var/log/messages was the following (maybe
this helps to find the issue):
May 19 23:03:49 mx-pve kernel: [4716718.867312] mserver5[19453]: segfault
at 138 ip 00007fad7b66b1e9 sp 00007ffe648e5600 error 6 in
lib_sql.so[7fad7b532000+1bc000]
May 19 23:10:03 mx-monet kernel: [4717092.595561] mserver5[22348]: segfault
at 138 ip 00007f07572d41e9 sp 00007ffd9f5aa950 error 6 in
lib_sql.so[7f075719b000+1bc000]
May 19 23:15:03 mx-monet kernel: [4717392.674941] mserver5[24044]:
segfault at 138 ip 00007f4e7c0041e9 sp 00007ffeab6e8e90 error 6 in
lib_sql.so[7f4e7becb000+1bc000]
May 19 23:16:57 mx-monet kernel: [4717506.863046] mserver5[24675]:
segfault at 138 ip 00007f8710d891e9 sp 00007ffc71a0e4a0 error 6 in
lib_sql.so[7f8710c50000+1bc000]
Do you have an idea of what I can do?
The version I'm using is v11.35.19 (Nov2019-SP3) on CentOS 7 Linux.
Ariel
Hi!
Maybe someone can help me. We are using the ILIKE operator to filter
data. When we use more than a couple of "%" wildcards, at a certain point
the db stops filtering correctly.
We started with ILIKE '%guate%' and it found 7 records; then we changed to
ILIKE '%guate%com%' and it found 1 record (which is OK); finally we changed
to ILIKE '%guate%mo%com%' and now it didn't find anything :'(
Please see the examples below...
A) sql>SELECT DISTINCT nts_area_name
FROM sb_rmsv.nts_nt_sum_report
WHERE nts_area_name ILIKE '%guate%';
+---------------------------+
| nts_area_name |
+===========================+
| Guatemala |
| Guatemala-Mobile |
| Guatemala-Mobile Comcel |
| Guatemala-Mobile Movistar |
| Guatemala-Mobile PCS |
| Guatemala-Telgua |
| Guatemala-Telefonica |
+---------------------------+
B) sql>SELECT DISTINCT nts_area_name
FROM sb_rmsv.nts_nt_sum_report
WHERE nts_area_name ILIKE '%guate%com%';
+-------------------------+
| nts_area_name |
+=========================+
| Guatemala-Mobile Comcel |
+-------------------------+
C) sql>SELECT DISTINCT nts_area_name
FROM sb_rmsv.nts_nt_sum_report
WHERE nts_area_name ILIKE '%guate%mo%com%';
+---------------+
| nts_area_name |
+===============+
+---------------+
0 tuples
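For what it's worth, query C looks like it should match at least one row: each '%' matches any (possibly empty) substring, so '%guate%mo%com%' is equivalent to the case-insensitive regex guate.*mo.*com, and 'Guatemala-Mobile Comcel' contains 'guate', then 'Mo', then 'Com' in order. A quick check outside MonetDB with grep (the value list is copied from result A):

```shell
# The seven names from result A, filtered with the regex equivalent of
# ILIKE '%guate%mo%com%' (each % becomes .*, -i makes it case-insensitive).
printf '%s\n' \
  'Guatemala' \
  'Guatemala-Mobile' \
  'Guatemala-Mobile Comcel' \
  'Guatemala-Mobile Movistar' \
  'Guatemala-Mobile PCS' \
  'Guatemala-Telgua' \
  'Guatemala-Telefonica' \
| grep -iE 'guate.*mo.*com'
# prints: Guatemala-Mobile Comcel
```

So the empty result in C looks like a pattern-matching bug rather than a mistake in the query.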
Has anyone configured a scalable MonetDB Docker cluster on Amazon Web
Services or another cloud environment? I'm looking into setting this up
for one of our projects around COVID-19 and essentially need to:
- have my number of MonetDB instances to go up and down with user demand
- use a load balancer to spread user queries across multiple instances
I had in mind that all the instances would just point to a single dbfarm
directory (all the user queries are read-only), but is this an option, or
will it cause problems (locks, caching, etc.)?
Based on what I understand from Cluster Management
(https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/ClusterManagement)
or Lazy Replication
(https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/WorkloadCaptureR…),
I need to replicate the database for each instance, which is not really
practical.
This document seems to hint that using the "Replication Services" model
would be an option, but then I end up with many copies of the same
database, and this will require a significant amount of extra disk space
(the transaction logs plus one copy per instance).
http://www.exanest.eu/pub/2016_RTPBD_Monetdb.pdf
Any guidance, suggestions, or examples would be greatly appreciated.
best
*P
I am very interested in using MonetDB as an OLAP solution, but I have a
few questions.
1. What is the largest scale of a MonetDB cluster?
2. Are there any customer case studies?