I am optimizing a script I wrote last year that reads documents from a source CouchDB database, modifies each doc and writes the new doc into a destination CouchDB database.
The previous version of the script did the following:

1. read a document from the source db
2. modify the document
3. write the doc into the destination db
What I'm trying to do now is accumulate the docs to write in a list and then write them in bulk (say, 100 at a time) to the destination db to improve performance.
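A minimal sketch of what the batched version looks like, using the plain HTTP API through Python's `requests` library. The URLs, db names, batch size and the body of `modify()` are placeholders, and dropping `_rev` assumes the docs don't already exist in the destination:

    import requests

    SOURCE = "http://localhost:5984/source_db"   # placeholder
    DEST = "http://localhost:5984/dest_db"       # placeholder
    BATCH_SIZE = 100


    def modify(doc):
        # ... whatever transformation the script applies ...
        return doc


    def flush(batch):
        # one POST to _bulk_docs writes the whole batch in a single request
        resp = requests.post(DEST + "/_bulk_docs", json={"docs": batch})
        resp.raise_for_status()


    def copy_in_batches():
        # _all_docs with include_docs=true returns every document in the source db
        rows = requests.get(SOURCE + "/_all_docs",
                            params={"include_docs": "true"}).json()["rows"]
        batch = []
        for row in rows:
            doc = modify(row["doc"])
            doc.pop("_rev", None)  # simplification: assume the doc is new in DEST
            batch.append(doc)
            if len(batch) >= BATCH_SIZE:
                flush(batch)
                batch = []
        if batch:
            flush(batch)


    if __name__ == "__main__":
        copy_in_batches()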
What I found out is that when the bulk upload writes a list of docs into the destination db, any doc in the list whose "_id" does not already exist in the destination db will not be written.
The return value still reports "success: true", even though after the copy there is no such doc in the destination db.
I tried disabling "delayed_commits" and using the "all_or_nothing" flag, but nothing changed. I can't find any info on Stack Overflow or in the documentation, so I'm quite lost.
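For reference, this is roughly what I tried (a sketch against the 1.x HTTP API; the URLs and the example batch are placeholders):

    import requests

    COUCH = "http://localhost:5984"      # placeholder
    DEST = COUCH + "/dest_db"            # placeholder
    batch = [{"_id": "doc-1", "value": 42}]  # placeholder docs

    # delayed_commits lives in the [couchdb] section of the server config
    requests.put(COUCH + "/_config/couchdb/delayed_commits", json="false")

    # all_or_nothing goes alongside the docs in the _bulk_docs request body
    requests.post(DEST + "/_bulk_docs",
                  json={"docs": batch, "all_or_nothing": True})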
Thanks
To future generations: what I was experiencing is a known bug, and it should be fixed in the next release.
https://issues.apache.org/jira/browse/COUCHDB-1415
The current workaround is to write a document that is slightly different each time. I added an otherwise useless field called "timestamp" whose value is the timestamp of when I run my script.
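A sketch of the workaround in the bulk-write path; the field name "timestamp" is the one I mention above, everything else (URL, helper name) is just illustrative:

    import time
    import requests

    DEST = "http://localhost:5984/dest_db"  # placeholder
    run_timestamp = int(time.time())        # one value per script run


    def flush(batch):
        for doc in batch:
            # makes the doc body slightly different on every run
            doc["timestamp"] = run_timestamp
        resp = requests.post(DEST + "/_bulk_docs", json={"docs": batch})
        resp.raise_for_status()


    flush([{"_id": "doc-1", "value": 42}])  # example batch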