Timestamp accuracy: milliseconds or microseconds?

Stefan Manegold Stefan.Manegold at cwi.nl
Fri Nov 7 16:13:25 CET 2014


Hi,

with the Oct2014 release of MonetDB
(I did not (yet) check any other version),
I just came across the following:

It appears that while timestamps are rendered with 6 decimal digits,
i.e., suggesting microsecond accuracy/resolution,
only millisecond precision is actually considered:

sql>select cast('1312-11-10 12:11:10.123456' as timestamp),
         cast('1312-11-10 12:11:10' as timestamp) + 0.123456,
         cast('1312-11-10 12:11:10' as timestamp) + interval '0.123456' second;
+----------------------------+----------------------------+----------------------------+
| L1                         | L2                         | sql_add_single_value       |
+============================+============================+============================+
| 1312-11-10 12:11:10.123000 | 1312-11-10 12:11:10.123000 | 1312-11-10 12:11:10.123000 |
+----------------------------+----------------------------+----------------------------+
1 tuple (5.459ms)
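
FWIW, a quick way to check whether the sub-millisecond digits are lost
already at parse time (rather than merely hidden by the renderer) is to
compare two literals that differ only below the millisecond; if only
milliseconds survive the cast, this should yield true (the column alias
equal_at_ms is mine):

-- the two literals differ only in the 4th-6th fractional digits
sql>select cast('1312-11-10 12:11:10.123456' as timestamp)
         = cast('1312-11-10 12:11:10.123000' as timestamp) as equal_at_ms;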

Is this a bug or a "feature"?


Thanks!
Stefan

-- 
| Stefan.Manegold at CWI.nl | DB Architectures   (DA) |
| www.CWI.nl/~manegold/  | Science Park 123 (L321) |
| +31 (0)20 592-4212     | 1098 XG Amsterdam  (NL) |



