LocalStack S3 + AWS Java SDK keeps erroring with nonexistent Key ID

While this question has been asked multiple times, none of the answers I came across seem to make any difference in my case... (Like this one or this one)

I'm using the latest AWS SDK for Java v2 (2.21.11 as of writing this), but no matter what I do, I keep running into the "The AWS Access Key Id you provided does not exist in our records" error.

So, I'm running LocalStack to provide S3 "emulation" for a Java Spring service. Both run in Docker, via Docker Compose:

version: '3.9'
services:
  localstack:
    container_name: "localstack"
    image: localstack/localstack
    ports:
      - "4566:4566"            # LocalStack Gateway
      - "4510-4559:4510-4559"  # external services port range
    environment:
      # - LOCALSTACK_HOST=host.docker.internal:4566
      - AWS_ACCESS_KEY_ID=localstack
      - AWS_SECRET_ACCESS_KEY=localstack
      - DEBUG=1
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "./init/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh" # ready hook
      - "./volume:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

  docrenderer:
    image: myapp/docrenderer
    container_name: docrenderer
    ports:
      - "8010:8080" # ssh -R 80:localhost:8009 localhost.run
      - "5010:5005"
    environment:
      - AWS_S3_ACCESS_KEY_ID=localstack
      - AWS_S3_ACCESS_KEY_SECRET=localstack
      - JAVA_TOOL_OPTIONS=-Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=*:5005,suspend=n
      - SPRING_PROFILES_ACTIVE=default,local,aws
      - IMAGE_STORAGE_BASE_PATH=image/thumbnail
      - IMAGE_STORAGE_BUCKET=example-bucket
      # Replace with base64 encoded google storage service account key (json). See GCP IAM.
      - STORAGE_GOOGLE_CLOUD_CREDENTIALS=ewog(...)Cn0K # dummy-key (base64 encoded)
      - EMULATOR_PUBSUB_URL=pubsub:8681
      - EMULATOR_PUBSUB_PROJECT_ID=project-dev
    depends_on:
      pubsub:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 1g

I explicitly set the credentials to localstack (I tried plain test before) on the localstack container, and do the same for the docrenderer Spring service, which reads the values from environment variables and binds them through its application.yml.
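Roughly, that binding looks like the sketch below (simplified; the package, property prefix and application.yml placeholders are illustrative rather than copied verbatim from the project):

package com.example.docrenderer.config;

import org.springframework.boot.context.properties.ConfigurationProperties;

// Bound from application.yml entries such as (illustrative):
//   aws:
//     s3:
//       access-key-id: ${AWS_S3_ACCESS_KEY_ID}
//       secret-access-key: ${AWS_S3_ACCESS_KEY_SECRET}
@ConfigurationProperties(prefix = "aws.s3")
public class AwsS3Properties {

  private String accessKeyId;
  private String secretAccessKey;

  public String getAccessKeyId() {
    return accessKeyId;
  }

  public void setAccessKeyId(String accessKeyId) {
    this.accessKeyId = accessKeyId;
  }

  public String getSecretAccessKey() {
    return secretAccessKey;
  }

  public void setSecretAccessKey(String secretAccessKey) {
    this.secretAccessKey = secretAccessKey;
  }
}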

When starting localstack I create the bucket I intend to use:

#!/bin/bash
awslocal s3 mb s3://example-bucket

Then, in my code, I use the CRT-based S3AsyncClient via the crtBuilder() method, with configuration that looks like this:

import java.net.URI;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;

@Configuration
public class AwsS3Configuration {

  public static final Region DEFAULT_AWS_REGION = Region.EU_CENTRAL_1;
  // public static final String AWS_S3_BASE_URL = "s3." + DEFAULT_AWS_REGION.id() + ".amazonaws.com/";
  public static final String AWS_S3_BASE_URL = "s3." + DEFAULT_AWS_REGION.id() + ".localstack:4566";
  // public static final String AWS_ENDPOINT_URI = "http://host.docker.internal:4566";
  @Bean
  public AwsCredentialsProvider awsCredentialsProvider(AwsS3Properties awsS3Properties) {
    return StaticCredentialsProvider.create(
        AwsBasicCredentials.create(awsS3Properties.getAccessKeyId(), awsS3Properties.getSecretAccessKey()));
  }

  @Bean
  public S3AsyncClient s3AsyncClient(AwsCredentialsProvider awsCredentialsProvider) {
    return S3AsyncClient.crtBuilder()
        .region(DEFAULT_AWS_REGION)
        .credentialsProvider(awsCredentialsProvider)
        //.endpointOverride(URI.create(AWS_ENDPOINT_URI))
        .build();
  }

  (...)
}

As you can see, I also tried using the endpoint override, but no matter what I do, I keep hitting the "The AWS Access Key Id you provided does not exist in our records" error.
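For reference, the endpoint-override variant (uncommented) would look roughly like the sketch below. The hostname localstack is the Compose service name, which should resolve between containers (host.docker.internal:4566 is the variant I had commented out), and forcePathStyle(true) is something I have seen suggested for LocalStack so the bucket name goes into the path rather than the hostname; I have not confirmed it is the missing piece:

@Bean
public S3AsyncClient s3AsyncClient(AwsCredentialsProvider awsCredentialsProvider) {
  // Sketch: point the CRT-based client at the LocalStack container directly
  // and use path-style addressing (bucket name in the path, not the hostname).
  return S3AsyncClient.crtBuilder()
      .region(DEFAULT_AWS_REGION)
      .credentialsProvider(awsCredentialsProvider)
      .endpointOverride(URI.create("http://localstack:4566"))
      .forcePathStyle(true)
      .build();
}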

What am I doing wrong? I don't see anything that jumps out as misconfigured, going by the LocalStack Configuration page.

I also tried using Commandeer to browse/inspect the LocalStack instance, and it doesn't show me any buckets, though it DOES connect to LocalStack with those credentials.

If I ask localstack itself to list the buckets, then it will reply that the bucket exists (and is empty):

root@231b6fd8e215:/opt/code/localstack# awslocal s3api list-buckets
{
    "Buckets": [
        {
            "Name": "example-bucket",
            "CreationDate": "2023-10-31T14:13:16Z"
        }
    ],
    "Owner": {
        "DisplayName": "webfile",
        "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
    }
}
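(For comparison, the equivalent check from the Java side would be a call like the sketch below, using the same injected client; this is a hypothetical snippet, not code from the actual service.)

// Hypothetical check: list buckets through the same S3AsyncClient
// to compare against the awslocal output above.
void listBucketsForComparison(S3AsyncClient s3AsyncClient) {
  s3AsyncClient.listBuckets()
      .thenAccept(response -> response.buckets()
          .forEach(bucket -> System.out.println(bucket.name())))
      .join();
}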

So, what am I doing wrong? Thanks!
