Blob Storage trigger timing out before reaching the functionTimeout limit


I'm getting quite a few timeouts while my blob storage trigger is running. It seems to time out whenever I'm inserting values into an Azure SQL DB. Before running the storage trigger I raised the functionTimeout parameter in host.json to "functionTimeout": "00:40:00", yet the timeouts still happen within a couple of minutes. Why would this be the case? My function app is on the Elastic Premium pricing tier.
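For reference, this is roughly what my host.json looks like; the functionTimeout value is the actual change I made, the rest is just the usual surrounding structure:

    {
        "version": "2.0",
        "functionTimeout": "00:40:00"
    }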

EDIT:

System.TimeoutException message:

Exception while executing function: Functions.BlobTrigger2 The operation has timed out.

My connection to the db (I close it at the end of the script):

    import urllib.parse
    from sqlalchemy import create_engine

    # urllib.parse.quote_plus for Python 3: quote the raw ODBC string for SQLAlchemy.
    params = urllib.parse.quote_plus(fr'Driver={DRIVER};Server=tcp:{SERVER_NAME},1433;Database=newTestdb;Uid={USER_NAME};Pwd={PASSWORD};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
    conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
    # Create the engine and open the connection used by the insert below.
    engine_azure = create_engine(conn_str, echo=True)
    conn = engine_azure.connect()
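And this is the cleanup at the end of the script (the dispose() call is just the usual pattern for releasing the engine's pool, nothing specific to my problem):

    # Release the connection and the engine's connection pool when the script finishes.
    conn.close()
    engine_azure.dispose()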

This is the line of code that is running when the timeout happens (the insert into the db):

    processed_df.to_sql(blob_name_file.lower(), conn, if_exists='append', index=False)
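For context, the overall layout of the function (__init__.py) is roughly the following. The blob-name handling, the CSV read and the credential values are simplified stand-ins for what my real code does; only the connection setup and the to_sql call are exactly as above:

    import io
    import logging
    import urllib.parse

    import azure.functions as func
    import pandas as pd
    from sqlalchemy import create_engine

    # Stand-in values; my real code uses its own driver/server/credentials.
    DRIVER = '{ODBC Driver 17 for SQL Server}'
    SERVER_NAME = 'myserver.database.windows.net'
    USER_NAME = 'username'
    PASSWORD = 'password'


    def main(myblob: func.InputStream):
        logging.info('Blob trigger fired for %s', myblob.name)

        # Simplified: the target table name comes from the blob file name.
        blob_name_file = myblob.name.split('/')[-1].rsplit('.', 1)[0]

        # Simplified: my real code does more processing before the insert.
        processed_df = pd.read_csv(io.BytesIO(myblob.read()))

        # Same connection setup as shown above.
        params = urllib.parse.quote_plus(fr'Driver={DRIVER};Server=tcp:{SERVER_NAME},1433;Database=newTestdb;Uid={USER_NAME};Pwd={PASSWORD};Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;')
        engine_azure = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(params), echo=True)
        conn = engine_azure.connect()

        # The insert that is running when the timeout hits.
        processed_df.to_sql(blob_name_file.lower(), conn, if_exists='append', index=False)

        conn.close()
        engine_azure.dispose()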

There are 0 answers