Redshift External Catalog Error when copying Parquet from S3


I am trying to copy Google Analytics data into Redshift in Parquet format. When I limit the columns to a few select fields, I am able to copy the data. But when I include a few specific columns, I get an error:

ERROR: External Catalog Error. Detail: ----------------------------------------------- error: External Catalog Error. code: 16000 context: Unsupported column type found for column: 6. Remove the column from the projection to continue. query: 18669834 location: s3_request_builder.cpp:2070 process: padbmaster [pid=23607] -----------------------------------------------

I know the issue is most probably with the data, but I am not sure how I can debug it, as this error is not helpful in any way. I have tried changing the data types of the columns to SUPER, but without any success. I am not using Redshift Spectrum here.
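For reference, the COPY I am running looks roughly like this (the table name, bucket path, and IAM role here are placeholders, not my real ones):

COPY ga_sessions
FROM 's3://my-bucket/ga/parquet/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS PARQUET;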


There are 2 answers

Sandeep Singh (BEST ANSWER)

I found the solution. The error message says Unsupported column type found for column: 6. Redshift column ordinality starts from 0, but I was counting columns from 1 instead of 0 (my mistake). So the issue was with column 6 (which I was reading as column 7), which was a string or varchar column in my case. I created a table with just this column and tried uploading data into just this column. Then I got:

redshift_connector.error.ProgrammingError: {'S': 'ERROR', 'C': 'XX000', 'M': 'Spectrum Scan Error', 'D': '\n  -----------------------------------------------\n  error:  Spectrum Scan Error\n  code:      15001\n  context:   The length of the data column display_name is longer than the length defined in the table. Table: 256, Data: 1020

Recreating those columns as varchar(max) solved the issue.
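As a rough sketch of the isolation test (the names here are illustrative and only display_name comes from the actual error; the Parquet data for this test was trimmed to just that column):

-- a narrow table with the original length reproduces the Spectrum Scan Error above
CREATE TABLE test_display_name (display_name VARCHAR(256));
COPY test_display_name
FROM 's3://my-bucket/ga/parquet-display-name-only/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT AS PARQUET;

-- recreating the column as varchar(max) lets the same COPY succeed
DROP TABLE test_display_name;
CREATE TABLE test_display_name (display_name VARCHAR(MAX));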

guenhter

I assume you have semistructured data in your Parquet file (like an array).

In this case, have a look at the very bottom of this page: https://docs.aws.amazon.com/redshift/latest/dg/ingest-super.html

It says:

If your semistructured or nested data is already available in either Apache Parquet or Apache ORC format, you can use the COPY command to ingest data into Amazon Redshift.

The Amazon Redshift table structure should match the number of columns and the column data types of the Parquet or ORC files. By specifying SERIALIZETOJSON in the COPY command, you can load any column type in the file that aligns with a SUPER column in the table as SUPER. This includes structure and array types.

COPY foo FROM 's3://bucket/somewhere' ... FORMAT PARQUET SERIALIZETOJSON;

For me, the last line

...
FORMAT PARQUET SERIALIZETOJSON;

did the trick.
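Put together, a minimal sketch of that approach looks like the following (the table definition and IAM role are assumptions; only the bucket path and COPY options come from the snippet above):

-- any nested or array Parquet column must map to a SUPER column in the table
CREATE TABLE foo (
    id BIGINT,
    payload SUPER
);

COPY foo
FROM 's3://bucket/somewhere'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
FORMAT PARQUET SERIALIZETOJSON;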