I have a table ('markdown') in Supabase with a named multi-column unique constraint, i.e. the combination of the values of three of its columns must be unique for a given row; let's call the constraint 'unique-user-path-base_url'. I would like to insert a batch of rows into the table and update the existing rows wherever there is a conflict on my named constraint, but I can't find any documentation on how to do this.
I'm using the Supabase Python client to upsert rows into this table as follows:
supabase.table(supabase_table_name).upsert([{
    'user': '...',
    'path': '...',
    'base_url': '...'
}, ...]).execute()
However, when one of these row insertions fails due to the constraint, the whole request fails. I read that I should be defining the conflict-handling behavior using Postgres ON CONFLICT, which I believe is done by adding the named argument on_conflict to the upsert call. I did some research on this, but the only examples I can find are for cases where the conflict target is a single unique column or several unique columns, not a named (multi-column) unique constraint, and I can't see any documentation for on_conflict at all in the supabase-py docs. I do see some information about it in the Supabase JS client docs, but again, nothing related to my use case. I've tried passing the name of my named constraint as the named argument like this:
supabase.table(supabase_table_name).upsert([{
    'user': '...',
    'path': '...',
    'base_url': '...'
}, ...], on_conflict='unique-user-path-base_url').execute()
but it doesn't seem to work; I get an error saying there's no column with the name I passed.
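For reference, the examples I did find pass the unique column names to on_conflict rather than a constraint name. My guess at how that would map onto my table is something like the following, though I haven't verified that this is equivalent to targeting the named constraint:
supabase.table(supabase_table_name).upsert([{
    'user': '...',
    'path': '...',
    'base_url': '...'
}, ...], on_conflict='user,path,base_url').execute()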
Anybody know if this is possible or how it's supposed to be done?