RDS Data API BatchExecute taking significantly longer than standard connection


I have an AWS Lambda function that needs to insert several thousand rows into a PostgreSQL database in an Aurora Serverless cluster. Previously I used a normal database connection via psycopg2, but I switched to the RDS Data API hoping to improve performance. However, with the Data API, BatchExecuteStatement exceeds my 5-minute Lambda timeout without ever committing the transaction. Meanwhile, the psycopg2 solution, which goes over the normal PostgreSQL wire protocol, inserts all the data in under 30 seconds.
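For context, here's roughly what the two approaches look like. The Data API version, simplified (the ARNs, database, table, and column names below are placeholders, not my real setup):

```python
import boto3

# Placeholder identifiers -- my real ARNs/table/columns differ
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:my-cluster"
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret"

client = boto3.client("rds-data")

def batch_insert(rows):
    """rows is a list of (id, name) tuples, several thousand entries long."""
    parameter_sets = [
        [
            {"name": "id", "value": {"longValue": row_id}},
            {"name": "name", "value": {"stringValue": name}},
        ]
        for row_id, name in rows
    ]
    # One BatchExecuteStatement call, one parameter set per row
    client.batch_execute_statement(
        resourceArn=CLUSTER_ARN,
        secretArn=SECRET_ARN,
        database="mydb",
        sql="INSERT INTO items (id, name) VALUES (:id, :name)",
        parameterSets=parameter_sets,
    )
```

And the psycopg2 version that finishes in under 30 seconds (connection details again placeholders):

```python
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder
    dbname="mydb", user="myuser", password="mypassword",
)
with conn, conn.cursor() as cur:
    # execute_values folds many rows into a small number of round trips
    execute_values(cur, "INSERT INTO items (id, name) VALUES %s", rows)
```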

How is this possible? Shouldn't the Data API perform better, since it doesn't need to establish a connection? Are there any settings I can change to make the RDS Data API perform acceptably?

I don't believe I'm hitting any of the Data API's size limits, because the Lambda times out rather than throwing an explicit error. I also know the connection itself is working, since other small queries execute successfully.
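For example, a small query like this returns promptly through the same client (same placeholder identifiers as above):

```python
# Sanity check: a tiny query over the same Data API client succeeds
resp = client.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="mydb",
    sql="SELECT count(*) FROM items",
)
print(resp["records"])  # comes back quickly, so connectivity is fine
```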
