Hi,

I forgot to mention that the data-providing process is more or less idle – CPU usage is about 0.5%.

I just switched the database to PostgreSQL to double-check that my provider is not the bottleneck:

21:17:43 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
21:17:44 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 0.811 seconds - totally inserted 49400 messages
21:17:44 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
21:17:45 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 0.799 seconds - totally inserted 49600 messages
21:17:45 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
21:17:46 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 0.812 seconds - totally inserted 49800 messages

Looks good so far…

I’ve already read this post: https://www.monetdb.org/pipermail/users-list/2015-December/008723.html
Yes – each batch insert of 200 messages is performed in one transaction…
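
For reference, the insert loop follows roughly this pattern (only a sketch – JDBC_URL, USER, PASS, the Message/Part classes and the message_part table are placeholders; the real code prepares one statement per target table):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch only: placeholder names, and this would live in a method declared
// with "throws SQLException".
try (Connection con = DriverManager.getConnection(JDBC_URL, USER, PASS)) {
    con.setAutoCommit(false);                       // one transaction per 200-message batch
    try (PreparedStatement ps = con.prepareStatement(
            "INSERT INTO message_part (msg_id, part_no, payload) VALUES (?, ?, ?)")) {
        for (Message m : batchOf200) {              // the 200 messages of one batch
            for (Part p : m.getParts()) {           // 10-50 rows per message
                ps.setLong(1, m.getId());
                ps.setInt(2, p.getNo());
                ps.setString(3, p.getPayload());
                ps.addBatch();
            }
        }
        ps.executeBatch();                          // sent to the server as one JDBC batch
    }
    con.commit();                                   // all 200 messages commit together
}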

Best regards,

Alex

From: users-list <users-list-bounces+alexandergordt=adtelligence.de@monetdb.org> on behalf of "Alexander Gordt (ADT)" <alexandergordt@adtelligence.de>
Reply to: Communication channel for MonetDB users <users-list@monetdb.org>
Date: Thursday, 7 January 2016 at 19:27
To: Communication channel for MonetDB users <users-list@monetdb.org>
Subject: Monitoring MonetDB performance

Hello everyone,

I’ve got a performance issue in my project: the DB performance degrades for no apparent reason.

The Activity Monitor on OS X shows me the following numbers for the mserver5 process:
The CPU usage is very low (15-25% of one core!) and the I/O traffic on my SSD is moderate (about 10-40 MB/s), yet the insert speed keeps dropping.
I’ve had a look at merovingian.log, but it only shows the connection.

For the first 5000 messages, the CPU usage of mserver5 was about 70-110% – why does it drop after that?

How can I analyze why the database performance is going down?
Is it possible that an issue in the JDBC driver causes the performance to decrease?
I attach a log file where you can see that the performance is quite good for some time, but then drops for no apparent reason.

18:40:46 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 15.711 seconds - totally inserted 5200 messages
18:40:46 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:41:03 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 17.189 seconds - totally inserted 5400 messages
18:41:03 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:41:27 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 23.834 seconds - totally inserted 5600 messages
18:41:27 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:42:19 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 52.13 seconds - totally inserted 5800 messages
18:42:19 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:43:09 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 49.892 seconds - totally inserted 6000 messages
18:43:09 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:43:59 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 49.786 seconds - totally inserted 6200 messages
18:43:59 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:45:31 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 92.389 seconds - totally inserted 6400 messages
18:45:31 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:47:26 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 114.412 seconds - totally inserted 6600 messages
18:47:26 [main] INFO  ModelPersistenceTest - Starting Batch-Insert for of 200 message
18:49:24 [main] INFO  ModelPersistenceTest - Finished BatchInsert of 200 message in 118.324 seconds - totally inserted 6800 messages

One message results in 10-50 insert statements into different tables, and the 200 messages are processed in one transaction.
The absolute performance isn’t great, but that is another topic – I’ve read about "copy into" and similar bulk-loading options, but that will be part of a later optimization.
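
For that later optimization, this is roughly what I have in mind (a sketch only – the file path, delimiters and table name are assumptions, and the CSV file has to be readable by the mserver5 process since COPY INTO reads it on the server side):

import java.sql.Statement;

// Sketch only: "con" is an open JDBC connection; '/tmp/message_part.csv'
// and the table name are placeholders.
try (Statement st = con.createStatement()) {
    int rows = st.executeUpdate(
            "COPY INTO message_part FROM '/tmp/message_part.csv' " +
            "USING DELIMITERS ',', '\\n', '\"'");
    System.out.println("bulk-loaded " + rows + " rows");
}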

I tried to find out whether there is a long-running query, but "select * from sys.queue;" didn’t show me anything of interest – the result contains only that select itself.
Is there a way to gather some statistics about executed queries and their runtime?
Or about blocking resources…
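
If it is of any use, this is what I would try next for per-query runtimes (a sketch only – it assumes the sys.querylog_* query-history functions are available in the installed MonetDB version):

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

// Sketch only: "con" is an open JDBC connection to the database.
try (Statement st = con.createStatement()) {
    st.execute("CALL sys.querylog_enable()");       // start recording executed queries

    // ... run the batch inserts here ...

    // sys.querylog_calls holds one row per executed query (runtimes etc.);
    // the query text itself is in sys.querylog_catalog, joinable on id.
    try (ResultSet rs = st.executeQuery("SELECT * FROM sys.querylog_calls")) {
        ResultSetMetaData md = rs.getMetaData();
        while (rs.next()) {
            StringBuilder row = new StringBuilder();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                row.append(md.getColumnName(i)).append('=').append(rs.getObject(i)).append("  ");
            }
            System.out.println(row);
        }
    }
    st.execute("CALL sys.querylog_disable()");      // stop recording
}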

Thanks in advance for shedding some light into this black box :)

Best regards,

Alex