Hi,

I'm trying to copy a large amount of data into MonetDB, but the loading processes get stuck partway through.

Here is what I did. I have 6 tables, all merge tables partitioned by range on a time column (a rough sketch of the table setup is shown after the command below). There are hundreds of data files, each with almost 10 million rows and about 3.5 GB in size. For each table, I use the following command to copy the data:

    cat file.txt | xargs -n1 -P1 -I {} sh -c "mclient -p 50000 -s \"copy 10000000 records into tbx from stdin best effort\" - < '{}'"

where file.txt contains the paths of the files to be loaded.
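For reference, each merge table is set up roughly like this; the column names, types, and partition bounds here are simplified placeholders, not my real schema:

    # illustrative only: real column names, types, and partition bounds differ
    mclient -p 50000 - <<'SQL'
    CREATE MERGE TABLE tbx (ts TIMESTAMP, val DOUBLE) PARTITION BY RANGE ON (ts);
    -- one partition table per time range, attached to the merge table
    CREATE TABLE tbx_2020h1 (ts TIMESTAMP, val DOUBLE);
    ALTER TABLE tbx ADD TABLE tbx_2020h1 AS PARTITION FROM '2020-01-01' TO '2020-07-01';
    SQL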

So 6 mclient connections are created to copy data in parallel, one per table. But the mclient processes get stuck after loading more than 200 million rows into each table. Then I have to kill the processes and restart the database.
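Concretely, the six loads are launched roughly like this (the table names and per-table file lists are placeholders; within each table the files are loaded sequentially, so only one mclient connection per table is open at a time):

    # illustrative launcher: one file list and one load loop per table, all six tables in parallel
    for t in tb1 tb2 tb3 tb4 tb5 tb6; do
        cat "${t}_files.txt" | xargs -n1 -P1 -I {} sh -c \
            "mclient -p 50000 -s \"copy 10000000 records into ${t} from stdin best effort\" - < '{}'" &
    done
    wait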

The environment is CentOS 7.3 with 512 GB of RAM, and MonetDB is compiled from the latest master branch. Does anyone know why this happens? Does MonetDB have a limit on the amount of data one instance can hold, or is there something wrong with my approach?

Thanks in advance!
Yinjie Lin