How can I make files in an S3 bucket mounted to an AWS EC2 instance using Goofys available to an AWS Lambda function?


I've mounted a public S3 bucket to an AWS EC2 instance using Goofys (similar to s3fs), which lets me access files in the S3 bucket on my EC2 instance as if they were local paths. I want to use these files in my AWS Lambda function by passing those local paths in through the Lambda event parameter (in Python). Given that AWS Lambda has a storage limit of 512 MB, is there a way to give the Lambda function access to the files on my EC2 instance?

AWS Lambda works really well for my purpose (I'm calculating a statistical correlation between two files, which takes 1-1.5 seconds), so it'd be great if anyone knows a way to make this work.

Appreciate the help.

EDIT:

In my AWS Lambda function, I am using the Python library pyranges, which expects local paths to files.

1 Answer

Mark B

In my AWS Lambda function, I am using the Python library pyranges, which expects local paths to files.

You have a few options:

  • Have your Lambda function first download the files locally to the /tmp folder, using boto3, before invoking pyranges (a sketch follows this list).
  • Possibly use S3Fs to emulate file handles for S3 objects (see the second sketch below).
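A minimal sketch of the first option, assuming the bucket name and object keys are passed in the Lambda event and the files are BED files (pr.read_bed is just one of pyranges' readers; use whichever matches your format):

```python
import os

import boto3
import pyranges as pr

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Placeholder event fields -- pass whatever identifies your two files.
    bucket = event["bucket"]
    key_a = event["key_a"]
    key_b = event["key_b"]

    # /tmp is the only writable directory in a Lambda environment.
    local_a = os.path.join("/tmp", os.path.basename(key_a))
    local_b = os.path.join("/tmp", os.path.basename(key_b))

    s3.download_file(bucket, key_a, local_a)
    s3.download_file(bucket, key_b, local_b)

    # pyranges now sees ordinary local paths.
    gr_a = pr.read_bed(local_a)
    gr_b = pr.read_bed(local_b)

    # ... compute the correlation between gr_a and gr_b here ...
    return {"statusCode": 200}
```

The downloads count against the 512 MB /tmp limit you mentioned, so delete the files after the calculation if the two inputs together approach that size.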
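For the S3Fs option, a sketch along these lines could work, with placeholder bucket/key names and anonymous access since the bucket is public. It only helps if pyranges will accept an open file object instead of a path string, which is not guaranteed:

```python
import pyranges as pr
import s3fs

def lambda_handler(event, context):
    # anon=True because the bucket in the question is public.
    fs = s3fs.S3FileSystem(anon=True)

    # "my-public-bucket/a.bed" and "my-public-bucket/b.bed" are placeholders.
    with fs.open("my-public-bucket/a.bed", "rb") as fa, \
         fs.open("my-public-bucket/b.bed", "rb") as fb:
        gr_a = pr.read_bed(fa)  # only works if read_bed accepts file objects
        gr_b = pr.read_bed(fb)

    # ... compute the correlation between gr_a and gr_b here ...
    return {"statusCode": 200}
```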