There are parts of your stack that can be run locally and others that can't. If you're using AWS for your cloud infrastructure, you most likely use S3 as object storage. A tool named MinIO provides an S3-compliant API, which gives you the ability to have one more piece of the puzzle running locally during development.
You can have your app running locally and still use a bunch of services in the cloud, so why would you need it? The answer is simple: you can run a dev environment completely independent of your internet connection (e.g. on a plane), you have no additional costs, you can ship the dev configuration in the source code, and you can write tests that create and destroy anything they need.
But not just that: MinIO is production-ready software, which means you can run it on top of your own infrastructure and still use the many ready-made libraries available for S3.
Here's a very simple Docker Compose configuration to bootstrap you:
```yaml
version: '3.5'

volumes:
  s3:

services:
  minio:
    image: minio/minio:latest
    volumes:
      - s3:/data
    ports:
      - 9000:9000
    environment:
      - "MINIO_ACCESS_KEY=test-s3-access-key"
      - "MINIO_SECRET_KEY=test-s3-secret-key"
    command: "server /data"

  # One-shot container that creates the bucket, sets its policy, and exits.
  createbuckets:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      sh -c '
      sleep 3 &&
      mc config host add s3 http://minio:9000 "test-s3-access-key" "test-s3-secret-key" &&
      mc mb -p s3/blog &&
      mc policy set download s3/blog &&
      exit 0
      '
```
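Save it as `docker-compose.yml` and bring everything up:

```sh
# Starts the MinIO server and runs the one-shot bucket-creation container.
docker-compose up -d
```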
It will start a MinIO server using the official Docker image and make it store files in the `s3` volume (you can also use a local directory if you want to inspect the files manually). It will be available on port 9000, and you can use the access and secret keys to access it programmatically or via the MinIO web UI.
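As a quick sanity check that the API really is S3-compliant, here's a sketch using the AWS CLI pointed at the local endpoint (the AWS CLI itself is an assumption, nothing in this setup requires it):

```sh
# Hypothetical smoke test: list the "blog" bucket created by the compose file,
# using the same credentials the server was started with.
export AWS_ACCESS_KEY_ID=test-s3-access-key
export AWS_SECRET_ACCESS_KEY=test-s3-secret-key
export AWS_DEFAULT_REGION=us-east-1   # MinIO's default region

aws --endpoint-url http://localhost:9000 s3 ls s3://blog
```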
MinIO provides only the storage infrastructure and a basic web UI. To use its full potential you need a tool, shipped as a separate executable, to configure it.
That tool is `mc`, and you can use it to configure local or remote MinIO servers. In this example, the service named `createbuckets` is responsible for creating the buckets we need and then dying. In the entrypoint, the `sleep` gives MinIO some time to start before we begin configuring it. Then we add a new host (in real-life applications you can use `mc` to access multiple MinIO servers). You can see that we're giving it a URL (using the Docker service name) along with access and secret keys that have the rights to change configuration.
Then we're making a bucket named `blog` on the `s3` host.
Then we're setting its policy to `download`, which makes it possible to read files without additional auth, basically public read. And finally we exit the shell script.
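To see the policy in action, you can upload a file and then fetch it anonymously. A sketch (the file name and the `docker-compose run` invocation are my assumptions):

```sh
# Copy a test file into the bucket using the mc image on the compose network;
# the host alias has to be re-added because the one-shot container is gone.
docker-compose run --rm --entrypoint sh createbuckets -c '
  mc config host add s3 http://minio:9000 test-s3-access-key test-s3-secret-key &&
  echo "hello" > /tmp/hello.txt &&
  mc cp /tmp/hello.txt s3/blog/hello.txt'

# The download policy allows anonymous reads, so plain HTTP works:
curl http://localhost:9000/blog/hello.txt
```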
Using `mc` you can do a bunch of other things, e.g. create additional users or policies and assign different policies to different users.
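For example, here's a sketch of adding a user with MinIO's built-in `readwrite` policy (the user name and password are made up, and `mc`'s admin syntax has changed over time, so check `mc --help` for your version):

```sh
# Create a new user on the "s3" alias and attach the canned "readwrite"
# policy to it (legacy mc syntax, matching the commands used above).
mc admin user add s3 app-user app-user-secret
mc admin policy set s3 readwrite user=app-user
```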