Table size does not decrease despite records deletion
Dear All, Is there a way to shrink a table's storage size when deleting some of its records? In my application, I delete and insert ~1 million records daily, which should effectively result in only a few hundred additional records. However, I see the size of the table grow by about a million rows each day (visible both in storage() and in the on-disk size), while "select count(*) from table" reflects the correct number. Is there a way I can stop the table size from growing in this scheme?
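A sketch of how the logical vs. physical footprint can be compared (the table name `t` and schema `sys` are placeholders; the exact columns of `sys.storage` vary between MonetDB versions):

```sql
-- logical row count, as seen by queries
SELECT COUNT(*) FROM t;

-- physical footprint per column; "count" here also covers
-- deleted-but-not-yet-reclaimed rows
SELECT "column", "count", columnsize, heapsize
FROM sys.storage
WHERE "schema" = 'sys' AND "table" = 't';
```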
Regards, Manish
The problem is that I have ~70M rows in a table, but its BAT storage size shows 170M rows. Although count(*) still returns 70M, memory use is increasing heavily each day. As described in my previous mail, each day ~1M rows are deleted from the table and ~1.1M are added, but this actually increases the stored size by 1.1M rows. Please let me know if something is wrong with this way of updating MonetDB tables.
Regards, Manish
On Sat, Mar 25, 2017 at 2:05 PM, Manish gupta gahoimnshg@gmail.com wrote:
OK, I see there is no vacuum-cleaning algorithm implemented yet: https://www.monetdb.org/Documentation/Manuals/SQLreference/Transactions
I will drop the records after an ASCII dump and recreate the table.
Regards, Manish
On Sat, Mar 25, 2017 at 5:25 PM, Manish gupta gahoimnshg@gmail.com wrote:
On 26 Mar 2017, at 08:04, Manish gupta gahoimnshg@gmail.com wrote:
You can also temporarily store the data of the <table to be cleaned> in a <backup table>. Then do a “delete from <table to be cleaned>”, which simply wipes out all data, including garbage, from <table to be cleaned>. Finally, put all data from <backup table> back into <table to be cleaned>.
This is most probably (much) faster than an ASCII dump and restore.
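The suggested procedure, sketched as SQL (the table name `t` is a placeholder; `CREATE TABLE ... AS SELECT` is assumed to be available, as in recent MonetDB releases):

```sql
-- 1) copy the live rows aside
CREATE TABLE t_backup AS SELECT * FROM t WITH DATA;
-- 2) wipe the original, including the garbage left by earlier deletes
DELETE FROM t;
-- 3) restore the live rows and remove the backup
INSERT INTO t SELECT * FROM t_backup;
DROP TABLE t_backup;
```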
users-list mailing list users-list@monetdb.org https://www.monetdb.org/mailman/listinfo/users-list
Hi Ying, Thanks for the response. Would this method not need two copies of the same data in the DB at the same time (until the original table is finally deleted)? Would it demand more memory, since my dataset is quite large?
Regards, Manish
On Sun, Mar 26, 2017 at 12:02 PM, Ying Zhang Y.Zhang@cwi.nl wrote:
Being a database system, MonetDB keeps persistent data on disk. Being a memory-optimized database system, MonetDB tries to use as much memory as available to achieve high query-processing performance. However, it should not "demand" more than is available.
A binary copy of your table in MonetDB will be smaller (and faster) than serializing to text (CSV).
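The two alternatives, sketched (file path and table names are placeholders; the `COPY` forms are MonetDB's bulk I/O syntax and should be checked against your server version):

```sql
-- ASCII round trip: serialize to CSV, recreate, reload (slower)
COPY SELECT * FROM t INTO '/tmp/t.csv' USING DELIMITERS ',', '\n';
-- ... DROP TABLE t; recreate it; then:
COPY INTO t FROM '/tmp/t.csv' USING DELIMITERS ',', '\n';

-- in-database binary copy (faster)
CREATE TABLE t_copy AS SELECT * FROM t WITH DATA;
```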
Stefan
----- On Mar 26, 2017, at 9:05 AM, Manish gupta gahoimnshg@gmail.com wrote:
Thanks, let me try it.
I hope the suggestion is this:
sql>insert into t2 ( id, name ) select id, name from t1;
2 affected rows (121.998ms)
sql>drop table t1;
Regards, Manish
On Sun, Mar 26, 2017 at 3:11 PM, Stefan Manegold Stefan.Manegold@cwi.nl wrote:
--
| Stefan.Manegold@CWI.nl | DB Architectures (DA)   |
| www.CWI.nl/~manegold/  | Science Park 123 (L321) |
| +31 (0)20 592-4212     | 1098 XG Amsterdam (NL)  |
One bad thing is that I lose the indexing and other info that MonetDB gathered during my previous queries. Anyway, the deletes and updates were probably already marking those dirty and making them candidates for recalculation?
Regards, Manish
On Sun, Mar 26, 2017 at 5:11 PM, Manish gupta gahoimnshg@gmail.com wrote:
(Automatic) indexes would indeed be invalidated/dropped by updates and then recreated on the fly when required.
In case your table is like a ring buffer (oldest data gets deleted first when new data arrives), you might want to consider merge tables after all; then you can drop the data chunk-wise (i.e., per base-table slice), not requiring the hand-made "vacuum" by copying your data.
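A sketch of such a ring-buffer layout with merge tables (names and columns are illustrative):

```sql
-- all partitions share one schema; the merge table is their union
CREATE MERGE TABLE events (id INT, ts TIMESTAMP, val DOUBLE);
CREATE TABLE events_week01 (id INT, ts TIMESTAMP, val DOUBLE);
ALTER TABLE events ADD TABLE events_week01;
-- new data is loaded into the newest partition only

-- when a partition falls out of the retention window,
-- detach and drop it wholesale instead of deleting rows:
ALTER TABLE events DROP TABLE events_week01;
DROP TABLE events_week01;
```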
We'd need to understand the origin of the performance problems, but for that, we'd need your traces ...
Stefan
----- On Mar 26, 2017, at 1:53 PM, Manish gupta gahoimnshg@gmail.com wrote:
Yes, that is what I had thought earlier too, and I created a base plus moving-window architecture. Partial data modifications (deleting old data with the same key in the window and inserting new data) happen only for the last 3 months of data, not prior to that, while my base table holds the older data (the last 3 years). Three months of data remain in the window table, and every day the data that becomes old enough moves to the base table. Finally, a merge table was the union of these two tables. The window table was easy to delete and recreate, as it is only about 1/50 of the base table's size. Unfortunately, the performance cost of merge tables was so severe that I moved to the base table only. I will share some traces so that the root cause can be found.
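The base-plus-window layout described above, sketched (names, columns, and the 3-month cutoff are illustrative):

```sql
CREATE MERGE TABLE data_all (id INT, d DATE, val DOUBLE);
CREATE TABLE data_base   (id INT, d DATE, val DOUBLE);  -- ~3 years, append-mostly
CREATE TABLE data_window (id INT, d DATE, val DOUBLE);  -- last ~3 months, churns daily
ALTER TABLE data_all ADD TABLE data_base;
ALTER TABLE data_all ADD TABLE data_window;

-- daily: age rows that left the 3-month window into the base table
INSERT INTO data_base
  SELECT * FROM data_window WHERE d < CURRENT_DATE - INTERVAL '3' MONTH;
DELETE FROM data_window WHERE d < CURRENT_DATE - INTERVAL '3' MONTH;
```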
Thanks for all the help.
Regards, Manish
On Sun, Mar 26, 2017 at 5:57 PM, Stefan Manegold Stefan.Manegold@cwi.nl wrote:
participants (3)
- Manish gupta
- Stefan Manegold
- Ying Zhang