Dear experts,

Does anyone know how to take output from DELETE and feed it to INSERT INTO another table?

My pipeline has now reached a point where 37 seconds of its total 48-second run time are
spent on the following queries:

INSERT INTO old_to_old
  SELECT uniqueid, targetid, distance_arcsec
  FROM associatedsource
  WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many)
  ORDER BY uniqueid, targetid;
42650 affected rows (20.0s)

DELETE FROM associatedsource
  WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many);
42650 affected rows (16.6s)
Here associatedsource is a large table with 1219234829 rows (its size by the end of the second day), while the one_to_many table is quite small, only 30 rows.
associatedsource accumulates processed data day by day; by the end of the first day it had 603553616 rows.

As the associatedsource table grows, these two queries have severely degraded my pipeline:
on the first day they took only 9.1s and 7.8s.
We can see that the MonetDB server engine accesses the same data twice; for such a large table, and with the limited time allocated to each pipeline run, this is very expensive.
If the two queries could be combined into one, the logic for deciding which rows to move would be
neater and the performance should roughly double.
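
What I would like to write is something along these lines, using a writable CTE with DELETE ... RETURNING in the way that, for example, PostgreSQL allows; I do not know whether MonetDB supports this form, which is essentially my question:

-- Hypothetical combined statement (PostgreSQL-style writable CTE);
-- shown only to illustrate the intent, not as known-working MonetDB SQL.
WITH moved AS (
  DELETE FROM associatedsource
  WHERE uniqueid IN (SELECT old_uniqueid FROM one_to_many)
  RETURNING uniqueid, targetid, distance_arcsec
)
INSERT INTO old_to_old
  SELECT uniqueid, targetid, distance_arcsec
  FROM moved
  ORDER BY uniqueid, targetid;

That way the rows to move would be identified only once, and the insert and the delete would share a single pass over associatedsource.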

Thank you and best wishes,
Meng