Hi Stefan,

Thank you for your message.

The data transferred is approximately 25 GB, spread across around 10 tables. The server running MonetDB has 6 GB of memory and 2 CPU cores. I don't yet know what the exact bottleneck is, except that loading takes too long (> 30 minutes) and makes the database unresponsive to actual user queries. I'm just exploring some initial options, so I was wondering which part of the hardware (memory, CPU cores, I/O speed) would have the biggest influence.

Best regards,
Dennis



On Wed, Oct 15, 2014 at 11:54 AM, Stefan Manegold <Stefan.Manegold@cwi.nl> wrote:
Dear Dennis,

To answer your question(s) and advise on hardware upgrades, we'd need to know more, in particular your current hardware configuration, data volumes, and most importantly what the current bottleneck is (I/O speed, memory limits, CPU speed?) ...

Best,
Stefan


On October 15, 2014 9:08:14 AM CEST, Dennis Pallett <dennis@pallett.nl> wrote:
Hi all,

We're experiencing some problems during bulk loading of new data into our MonetDB database. We use COPY INTO to load several CSV files every day, but this loading takes (too) long. More importantly, the database is completely unresponsive to any incoming queries while loading, which effectively disables our production environment.
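For context, our daily load is done with statements along these lines (table name, file path, and record count are illustrative; the record-count hint is optional but lets MonetDB preallocate space):

```sql
-- Example only: 'measurements' and the CSV path are placeholders.
-- The leading "n RECORDS" hint is optional; when the row count is
-- known in advance it can speed up the load.
COPY 1000000 RECORDS INTO measurements
FROM '/data/daily/measurements-2014-10-15.csv'
USING DELIMITERS ',', '\n', '"';
```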

I'm currently looking into improving this process, and the hardware of our MonetDB server is obviously one of the starting points. Does anyone know which specs should be upgraded? Would adding more cores/CPUs have the most positive effect? Or perhaps more memory?

Any advice or insight would be greatly appreciated!

Best regards,
Dennis Pallett



users-list mailing list
users-list@monetdb.org
https://www.monetdb.org/mailman/listinfo/users-list

--
| Stefan.Manegold@CWI.nl | DB Architectures (DA) |
| www.CWI.nl/~manegold/ | Science Park 123 (L321) |
| +31 (0)20 592-4212 | 1098 XG Amsterdam (NL) |