I've mounted a public S3 bucket to an AWS EC2 instance using Goofys (similar to s3fs), which lets me access files in the S3 bucket on my EC2 instance as if they were local paths. I want to use these files in my AWS Lambda function, passing these local paths in via the event parameter in Python. Given that AWS Lambda's /tmp storage is limited to 512 MB by default, is there a way I can give Lambda access to the files on my EC2 instance?
AWS Lambda works really well for my purpose (I'm calculating a statistical correlation between two files, which takes 1-1.5 seconds), so it'd be great if anyone knows a way to make this work.
Appreciate the help.
EDIT:
In my AWS Lambda function, I am using the Python library pyranges, which expects local file paths.
You have a few options. One is to download the files from S3 into Lambda's /tmp folder, using boto3, before invoking pyranges.