Bug 6707

Summary: Transaction mode forces roll back if error
Product: SQL
Reporter: Peter Prib <peter.prib>
Component: all
Assignee: SQL devs <bugs-sql>
Status: REOPENED
Severity: normal
CC: kutsurak, niels, pedrotadim
Priority: Normal
Version: 11.33.3 (Apr2019)
Hardware: Other
OS: Windows

Description Peter Prib 2019-05-27 08:23:34 CEST

Not sure why, when in transaction mode, one is forced to roll back the unit of work after sustaining a duplicate-key error on insert. Actually, not sure why one cannot decide, rather than being forced. An old coding technique was to try the insert and, if it failed, issue an update, or vice versa. Not sure why one is not allowed to decide on an error whether to proceed or not.
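The try-insert-then-update technique described above can be sketched as follows. This is an illustration only, using Python's built-in sqlite3 module rather than MonetDB, because sqlite3 lets the transaction continue after a constraint error, which is exactly the behavior being requested here. The table and key names are borrowed from the error message in this report; the price column is invented.

```python
import sqlite3

# Illustrative sketch of the "try insert, on duplicate key fall back to
# update" pattern. Uses sqlite3 (not MonetDB); schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE share_daily_history (pk1 INTEGER PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO share_daily_history VALUES (1, 10.0)")

def insert_or_update(conn, key, price):
    try:
        conn.execute("INSERT INTO share_daily_history VALUES (?, ?)",
                     (key, price))
    except sqlite3.IntegrityError:
        # Duplicate key: issue an update instead of rolling back.
        conn.execute("UPDATE share_daily_history SET price = ? WHERE pk1 = ?",
                     (price, key))

insert_or_update(conn, 1, 12.5)   # key exists -> update path
insert_or_update(conn, 2, 9.75)   # new key    -> insert path
rows = conn.execute(
    "SELECT pk1, price FROM share_daily_history ORDER BY pk1").fetchall()
print(rows)
# [(1, 12.5), (2, 9.75)]
```

In MonetDB the second `insert_or_update(conn, 1, ...)` call would instead leave the transaction in the aborted state shown in the error log below, which is the behavior this report questions.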

Reproducible: Always

Steps to Reproduce:
Actual Results:  
#client13:!ERROR:SQLException:assert:M0M29!INSERT INTO: PRIMARY KEY constraint 'share_daily_history.pk1' violated
#client13:!ERROR:SQLException:sql.execute:25005!Current transaction is aborted (please ROLLBACK)

Expected Results:  
First insert fails due to duplicate key.
As the insert failed, an update is issued.

This has been normal SQL coding practice since last century. It avoids the need to do a select first and has major performance advantages. Obviously an implementation of upsert would be better, or merge if it didn't have a bug. Although the expected behavior should be that the coder determines whether to rollback or commit. Not clear if this behavior extends to other types of errors.
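The upsert alternative mentioned above collapses the insert-or-update logic into a single statement in engines that support it. A sketch using SQLite's `INSERT ... ON CONFLICT` syntax (SQLite 3.24+), again via Python's sqlite3 for illustration; MonetDB's own MERGE statement uses different syntax, and the price column here is invented.

```python
import sqlite3

# Single-statement upsert: covers both the fresh-insert case and the
# duplicate-key-update case without any error handling in the client.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE share_daily_history (pk1 INTEGER PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO share_daily_history VALUES (1, 10.0)")

conn.execute("""
    INSERT INTO share_daily_history (pk1, price) VALUES (?, ?)
    ON CONFLICT (pk1) DO UPDATE SET price = excluded.price
""", (1, 12.5))

rows = conn.execute("SELECT pk1, price FROM share_daily_history").fetchall()
print(rows)
# [(1, 12.5)]
```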
Comment 1 Pedro Ferreira 2019-06-04 21:19:42 CEST
Hello Peter, for this situation you could try to use the Savepoints feature of SQL. They allow you to roll back to a previously defined intermediate point within the transaction.
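The savepoint approach suggested above can be sketched as follows, once more using Python's sqlite3 for illustration: a savepoint marks an intermediate point so that a failed statement can be undone without abandoning the whole transaction. `SAVEPOINT` / `ROLLBACK TO SAVEPOINT` are standard SQL; the table name and savepoint name here are invented.

```python
import sqlite3

# Sketch of the savepoint technique: undo only the failed statement,
# then continue working inside the same transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
conn.execute("CREATE TABLE t (pk1 INTEGER PRIMARY KEY)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")

conn.execute("SAVEPOINT before_insert")
try:
    conn.execute("INSERT INTO t VALUES (1)")          # duplicate key
except sqlite3.IntegrityError:
    # Roll back only to the savepoint, not the whole unit of work.
    conn.execute("ROLLBACK TO SAVEPOINT before_insert")
    conn.execute("INSERT INTO t VALUES (2)")          # carry on with other work

conn.execute("COMMIT")
rows = conn.execute("SELECT pk1 FROM t ORDER BY pk1").fetchall()
print(rows)
# [(1,), (2,)]
```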

Comment 2 Peter Prib 2019-06-05 00:16:31 CEST
Not sure how this would fix the behavior unless you are saying the product isn't consistent in how it handles SQL failures within transaction mode. Makes no sense. If correct, then there should be an easy fix for the bug.
Comment 3 Niels Nes 2019-06-11 20:01:45 CEST
Errors indeed force a rollback, and that is likely to stay.
Comment 4 Peter Prib 2019-06-12 00:57:21 CEST
Are you saying that you aren't going to align with the standard practice of all main RDBMSs? Plus not allow the practice that was normal before the advent of the merge command.

I suggest you are taking away the programmer's freedom to decide whether there is a need to rollback or commit, which is part of the point of rollback/commit. As the issuing of the rollback command is forced, it seems a redundant/pointless step.
Comment 5 Panagiotis Koutsourakis 2019-06-12 13:10:32 CEST
Hi Peter, 

MonetDB uses Optimistic Concurrency Control
(https://en.wikipedia.org/wiki/Optimistic_concurrency_control). The reason is
that MonetDB is optimized for large analytical workloads (think read only
queries that involve tables with billions of rows). This unfortunately means that concurrent writes (inserts/updates/deletes) to the same table are rejected by the transaction manager.

While freedom of choice for the user is an important concern for RDBMS developers, two even more important concerns are data integrity (you don't want to lose your data) and performance (you want your queries to finish before the heat death of the universe). Optimistic Concurrency Control is a compromise that allows MonetDB to be really good at a particular workload profile, at the expense of inconvenience in other types of workloads. This is why this behavior is unlikely to change.

If you do not want to issue the rollback commands manually, you could use autocommit mode, which will roll back failed transactions automatically. Usually programmers want some more control, in order to perform some kind of cleanup before the transaction is rolled back.

I hope this helps clear things up.
Comment 6 Peter Prib 2019-06-13 01:31:07 CEST
Must admit your reasoning doesn't align with my knowledge of other DBMSs. All mainstream DBMSs offer optimistic locking on the reasoning that it enables a higher-performance option. These DBMSs can handle massive workloads. One is given the choice of locking levels to suit the workload profile and data profile. For example, if the data quality is high and the events are immutable, then no locking is required.

Don't see why it can't be readily enabled, as it should be an extension of savepoints. One could readily be given the choice, and choose, if the performance differential becomes significant. I suggest there is more work in rolling back the entire unit of work and reapplying the corrected unit of work. One of the annoyances about the use of optimistic locking is that it leads to data quality issues if not programmed correctly.