Hi,
with the Oct2014 release of MonetDB (I did not (yet) check any other version), I just came across the following:
It appears that while timestamps are rendered with 6 decimal digits, i.e., suggesting microsecond accuracy/resolution, only milliseconds are actually kept:
sql>select cast('1312-11-10 12:11:10.123456' as timestamp) , cast('1312-11-10 12:11:10' as timestamp) + 0.123456 , cast('1312-11-10 12:11:10' as timestamp) + interval '0.123456' second;
+----------------------------+----------------------------+----------------------------+
| L1                         | L2                         | sql_add_single_value       |
+============================+============================+============================+
| 1312-11-10 12:11:10.123000 | 1312-11-10 12:11:10.123000 | 1312-11-10 12:11:10.123000 |
+----------------------------+----------------------------+----------------------------+
1 tuple (5.459ms)
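If only milliseconds are kept, then two timestamp literals that differ only in their sub-millisecond digits should compare equal. A query like the following could confirm that (a sketch; I have not run this, so I won't guess at the output):

```sql
-- Both literals round/truncate to ...12:11:10.123 if sub-millisecond
-- digits are discarded on input; the comparison would then yield true.
select cast('1312-11-10 12:11:10.123456' as timestamp)
     = cast('1312-11-10 12:11:10.123999' as timestamp);
```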
Is this a bug or a "feature"?
Thanks! Stefan