How to create a table in an SQL Database from a CSV file in Blob Storage that contains all the column names with their data types, through a Data Flow or ADF pipeline?

1.1k views

I have a CSV file in my Azure Blob Storage which contains all the column names, with the data type of each column, for the respective tables.

I want to create a table in an SQL Database from this blob file, with the same column names and their corresponding data types, without doing the mapping manually.

I have created a table through a Data Flow, but I had to set the data type of each column manually, which I want to avoid. When I create the table, it should use the data types given in the CSV file for both the source and the sink.

[This is the CSV file, which contains the column names with their data types]

[In the Data Flow source, all columns are read as strings]

When I import the schema, it takes the full string "ID (int)" as the column name and String as the data type. Instead, I want the imported column name to be ID and the data type to be INT, and the same for all columns across multiple tables.
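For reference, the behavior being asked for could also be scripted outside Data Factory: read the schema CSV and emit a CREATE TABLE statement. This is only a minimal sketch; the file contents, the table name `dbo.MyTable`, and the two-column layout (name, type) are assumptions for illustration:

```python
import csv
import io

# Hypothetical schema file, standing in for the blob the question describes:
# one row per column, holding the column name and its SQL data type.
schema_csv = io.StringIO("ID,int\nName,varchar(100)\n")

# Build "name TYPE" fragments from each row of the schema file.
columns = [f"{name} {dtype.upper()}" for name, dtype in csv.reader(schema_csv)]

# Assemble the DDL; dbo.MyTable is an assumed table name.
ddl = f"CREATE TABLE dbo.MyTable ({', '.join(columns)});"
print(ddl)  # CREATE TABLE dbo.MyTable (ID INT, Name VARCHAR(100));
```

The generated statement could then be executed against Azure SQL, for example through a pyodbc connection or a pipeline script step.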

Please let me know if you have a solution to this problem.

1 answer

Answered by Leon Yue

In Data Factory, when we copy data from the CSV file, we set the first row as the header, which means the first row's data will be used as the column names: id(int) and Name(varchar). As you know, the default column data type is String (in Data Factory) / varchar(128) (in the SQL database), and we cannot change it.


We cannot create the table with the schema embedded in the column names; there is no built-in solution to this problem.

However, Data Factory will automatically create a suitable column data type mapping for us.

For example, if your csv file is like this:

[screenshot of the sample CSV file]

Auto schema mapping will convert "1" (a string) to 1 (an int) in the Azure SQL database.
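The implicit conversion described above can be illustrated with a small sketch: values arrive from the CSV as strings, and the sink's declared column types determine how each one is cast. The row values and the type table here are invented for illustration, not taken from Data Factory's API:

```python
# A row as read from the CSV: every value is a string.
row = {"id": "1", "name": "Alice"}

# The types declared on the SQL sink table (illustrative stand-ins).
sink_types = {"id": int, "name": str}

# Cast each value to its sink column's type, as auto mapping does.
converted = {col: sink_types[col](val) for col, val in row.items()}
print(converted)  # {'id': 1, 'name': 'Alice'}
```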

Hope this helps.