I am trying to do a simple import of an ASCII file from S3 into a Snowflake table. The number of columns in the ASCII file matches the number of columns in the table. I got this to work yesterday using an internal stage in Snowflake, but doing it from S3 doesn't seem to work. Here is what I did:
create or replace stage irdb_demo url='s3://irdbgenericdemodata'
credentials=(aws_key_id='???' aws_secret_key='???');
list @irdb_demo;
It returns one row: 1 s3://irdbgenericdemodata/PAM_PORTFOLIOS.CSV
I can type: select t.$1, t.$2, t.$3, t.$4 from @irdb_demo t;
and it returns the 26 rows that I expect, such as:
PAM 100 "GIC" "Mod Scn Act/Act" etc.
I give it the commands:
create or replace file format mycsvformat
type = 'CSV'
field_delimiter = ','
skip_header = 1
FIELD_OPTIONALLY_ENCLOSED_BY = '"';
copy into Portfolios
from @irdb_demo pattern='PAM_Portfolios.csv'
file_format=mycsvformat;
the result is:
Copy executed with 0 files processed.
When I type:
select * from irdb_generic.prod.portfolios;
I get no rows in the result.
If I type: select * from irdb_generic.information_schema.load_history;
I get no results from today.
What am I doing wrong with the command:
copy into Portfolios
from @irdb_demo pattern='PAM_Portfolios.csv'
file_format=mycsvformat;
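One way to debug a COPY that processes 0 files is the LIST command, which accepts the same PATTERN option (a sketch, using the stage and pattern from the COPY above):
list @irdb_demo pattern='PAM_Portfolios.csv';
-- If this returns no rows, the pattern is not matching any staged files,
-- which would explain the "0 files processed" result from COPY.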
10/12/2020 update:
Snowflake support said the syntax should be:
copy into Portfolios from @irdb_demo pattern='.*PAM_Portfolios.csv.*' file_format=mycsvformat;
(PATTERN is a regular expression matched against the entire stage path, so the bare file name with no wildcards matched nothing.) This should resolve the issue.
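To confirm what was loaded after the fix, one option (a sketch, assuming the PORTFOLIOS table and a load within the last day) is the COPY_HISTORY table function:
select *
from table(information_schema.copy_history(
    table_name => 'PORTFOLIOS',
    start_time => dateadd(hours, -24, current_timestamp())));
-- Unlike the LOAD_HISTORY view queried above, COPY_HISTORY requires a table
-- name and a start time, and it also includes loads done through Snowpipe.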
To load just one specific file, you would use the FILES option rather than a pattern.
Example (a minimal sketch, assuming the same stage, table, and file format as above):
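copy into Portfolios
from @irdb_demo
files = ('PAM_PORTFOLIOS.CSV')
file_format = (format_name = 'mycsvformat');
-- FILES takes a list of exact file names relative to the stage location,
-- so no regular expression is needed.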
Reference: copy_into