Data beyond 32762 characters is getting truncated when I load it into my Big SQL table. My table definition is as follows:

CREATE HADOOP TABLE schema_name.table_name (
    column1  VARCHAR(50),
    column2  INTEGER,
    column3  STRING,
    loaddate TIMESTAMP
) STORED AS PARQUET;

The data for column3 is getting truncated. Is there a way to store the full value?

1 Answer

Community

Maybe CLOB is the answer you're looking for. In Big SQL the STRING type is treated as a bounded VARCHAR by default, which is why values past that limit are cut off; a CLOB column is not subject to that VARCHAR length cap.
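As a sketch of what that change might look like, here is the same table with column3 redefined as CLOB. This assumes your Big SQL version accepts CLOB in a Hadoop table definition; the 2 GB length shown is just an illustrative upper bound, so check the limits supported in your environment before using it.

CREATE HADOOP TABLE schema_name.table_name (
    column1  VARCHAR(50),
    column2  INTEGER,
    column3  CLOB(2G),   -- replaces STRING to avoid the ~32K VARCHAR truncation
    loaddate TIMESTAMP
) STORED AS PARQUET;

If CLOB is not available for Hadoop tables in your release, another option worth checking in the documentation is whether the default STRING-to-VARCHAR mapping size can be configured instead.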