I am using nzload in Netezza to load data. I have a file whose data is encoded as latin-1. This data loads fine into varchar fields and retains the special characters. However, I get the error below when loading the same data into nvarchar fields:
bad #: input row #(byte offset to last char examined) [field #, declaration] diagnostic, "text consumed"[last char examined]
1: 1(314) [54, NVARCHAR(255)] invalid UTF-8 sequence - bad continuation byte, ""[0x53 0xC3 0x4F]
In this case, it's choking on the 'ã' in São Paulo. Is there an environment setting the customer needs to specify in order to insert latin-1 data into an nvarchar field?
kapdb.admin(admin)=> show server_encoding;
NOTICE:  Current server encoding is LATIN9
SHOW VARIABLE
I wouldn't recommend changing the encoding at the server level, since that will affect all other communication with the server.
Instead, you can first load into a VARCHAR column in a staging table, and then insert or merge the rows into your NVARCHAR target table.
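A minimal sketch of that approach is below. The table, column, and file names are hypothetical, and the nzload options shown (-db, -t, -df, -delim) should be adjusted to match your environment and file format. The idea is that the staging column is plain VARCHAR in the server's LATIN9 encoding, so nzload accepts the latin-1 bytes, and the subsequent insert into NVARCHAR lets the database perform the character-set conversion.

-- Staging table mirrors the target, but uses VARCHAR for the latin-1 column
CREATE TABLE stg_customer (
    id   INTEGER,
    city VARCHAR(255)
);

-- Load the latin-1 file into the staging table (shell command, options are examples)
-- nzload -db kapdb -t stg_customer -df /tmp/customer.csv -delim ','

-- Move the rows into the NVARCHAR target; the VARCHAR-to-NVARCHAR cast
-- should convert the LATIN9 data to UTF-8 on insert
INSERT INTO customer (id, city)
SELECT id, city
FROM stg_customer;

If the target already contains rows, you could use a MERGE (or an UPDATE plus INSERT) against the staging table instead of a plain INSERT.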