---
title: "Setting up a publicly accessible Garage bucket"
slug: /setting-up-public-garage-bucket/
date: 2025-08-17
tags: ["s3", "aws", "self-hosting"]
---
## What is Garage?

Garage is software that enables you to create an S3-like object storage solution
on servers you maintain yourself.

Although your server exists outside of the AWS infrastructure, Garage
implements methods and operations from the S3 API (e.g. `GetObject`,
`ListBuckets`, etc.) and is compatible with the `awscli` tool and the AWS SDKs.
## My goals

I set Garage up on my VPS as a general resource that would allow me to leverage
object storage as necessary. My specific motivation was to be able to create a
publicly accessible bucket of images that I could source from a URL within my
home-made knowledge-management software ("Eolas").

Configuring unauthenticated public access to a bucket is not as straightforward
as it is on S3, but it is possible, as I will demonstrate.

I created a Garage instance accessible at `s3.systemsobscure.net` that I would
use for authenticated access to my buckets via the S3 API or `awscli`. I also
created a publicly-accessible bucket as a Garage "website" at
`eolas.s3.systemsobscure.net`. Resources in this bucket are freely available
without authentication.
## Nomenclature

An instance of Garage, running on a single server, is called a _node_. Data can
be replicated on different nodes across multiple servers.

A _layout_ is a designation of the Garage storage topology, similar to a
partition table on a disk. A layout can span multiple nodes in scenarios where
data is being replicated. This is known as a _layout cluster_.

Once a valid layout has been created on a node, you can then create buckets that
may be replicated across nodes.

I will be creating a single-node layout containing a single bucket whose
contents are publicly accessible.
## Installation

I installed Garage and added the binary to the `$PATH` on my VPS running Debian:

```sh
wget https://garagehq.deuxfleurs.fr/_releases/v2.0.0/x86_64-unknown-linux-musl/garage
chmod +x garage
sudo mv garage /usr/local/bin
```
## Configuration

Garage is configured via a config file at `/etc/garage.toml`:

```toml
metadata_dir = "/data/sqlite/garage-metadata"
data_dir = "/mnt/storagebox_alpha/garage"
db_engine = "sqlite"

replication_factor = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "redacted"

[s3_api]
s3_region = "garage"
api_bind_addr = "0.0.0.0:3900"
root_domain = ".s3.garage.localhost"

[s3_web]
bind_addr = "0.0.0.0:3902"
root_domain = ".s3.systemsobscure.net"
index = "index.html"
```
The key points to note:

- I set the `data_dir` to a network-attached storage device rather than the
  hard drive of the VPS.
- I set the `replication_factor` to 1 since I will be running a single-node
  instance of Garage.
- `s3_api` is the address I will use for authenticated operations. `s3_web` is
  designed to serve bucket contents as static websites; I will be using this
  address for my public buckets, which will each be exposed under their own
  `bucket.s3` subdomain on my server.
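To illustrate how the `root_domain` setting drives this bucket-to-subdomain
mapping, here is a rough Python sketch of the virtual-hosted resolution — not
Garage's actual code, just my mental model of what the web endpoint does with
the `Host` header:

```python
def bucket_from_host(host, root_domain):
    """Map a virtual-hosted-style Host header to a bucket name.

    `root_domain` is the value from the `[s3_web]` section, e.g.
    ".s3.systemsobscure.net"; a Host of "eolas.s3.systemsobscure.net"
    then resolves to the bucket "eolas".
    """
    suffix = root_domain if root_domain.startswith(".") else "." + root_domain
    if not host.endswith(suffix):
        return None  # not under the configured root domain
    bucket = host[: -len(suffix)]
    return bucket or None  # the bare root domain names no bucket

print(bucket_from_host("eolas.s3.systemsobscure.net", ".s3.systemsobscure.net"))
# eolas
```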
In order to be able to access the addresses over the internet, I needed to
create configuration files for both the `3900` and `3902` ports in nginx and map
the local processes to my DNS and SSL certificates.

For the web address, the key instructions are as follows:
```nginx
server {
    listen 443 ssl;
    server_name *.s3.systemsobscure.net;

    location / {
        proxy_pass http://172.18.0.1:3902;
    }
}
```
I have also configured my SSL certificate to include subdomains with the pattern
`*.s3.systemsobscure.net`.
> You'll notice I'm using a very specific IP address (`172.18.0.1`) for the
> local address rather than `localhost`. This is because my nginx instance runs
> as a Docker container and `172.18.0.1` is the default gateway address of the
> Docker bridge network, allowing the containerised instance of nginx to reach
> ports bound on the actual or "bare metal" host.
The config for the API address simply maps `s3.systemsobscure.net` to the local
`3900` port:
```nginx
server {
    listen 443 ssl;
    server_name s3.systemsobscure.net;

    location / {
        proxy_pass http://172.18.0.1:3900/;
    }
}
```
With the configuration created and the routing set up, I can start the server
with `garage server` and then check the status:
```sh
$ garage status

==== HEALTHY NODES ====
ID    Hostname          Address         Tags  Zone  Capacity  DataAvail  Version
1234  self-host-server  127.0.0.1:3901  NO ROLE ASSIGNED                 v2.0.0
```
> To avoid having to start Garage manually every time the VPS restarts, I
> created a systemd service to manage this automatically.
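Such a unit might look like the following sketch — the binary path and unit name
are assumptions based on the installation steps above, so adjust to taste:

```ini
# /etc/systemd/system/garage.service (hypothetical)
[Unit]
Description=Garage object storage daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/garage server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

It can then be enabled to start on boot with `sudo systemctl enable --now garage`.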
## Creating a layout and bucket

In order to start creating buckets I needed first to create a layout for the
node:
```sh
garage layout assign -z dc1 -c 500G 1234
```
This assigns my single node (`1234`, the ID shown by `garage status`) a capacity
of 500GB within the zone `dc1`.
To apply:

```sh
garage layout apply --version 1
```
To create my "eolas" bucket:

```sh
garage bucket create eolas
```
And then, to confirm:

```sh
$ garage bucket list

ID      Created     Global aliases  Local aliases
<hash>  2025-08-10  eolas

$ garage bucket info eolas

==== BUCKET INFORMATION ====
Bucket:   <hash>
Created:  2025-08-10 14:17:22.025 +00:00

Size:     38.4 MiB (40.3 MB)
Objects:  291
```
The bucket exists, but in order to access it and any future buckets I need to
generate an API key that I can use to authenticate with Garage remotely:

```sh
garage key create self-host-key
```
This gives me an access key and secret key that I can add as a profile to the
`awscli` config on my client machine at `~/.aws/credentials`:

```ini
[default]
aws_access_key_id = <redacted>
aws_secret_access_key = <redacted>

[garage]
aws_access_key_id = <redacted>
aws_secret_access_key = <redacted>
```
> Note that the `default` creds are those that I use for interacting with actual
> AWS services, distinguished from the `garage` profile, which uses the same
> tooling but authenticates against my own server.
I then need to give the key access to the "eolas" bucket:

```sh
garage bucket allow \
  --read \
  --write \
  --owner \
  --key self-host-key \
  eolas
```
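A step that is easy to miss: the `s3_web` endpoint only serves buckets that have
website access switched on. If I recall the CLI correctly, this is enabled per
bucket with a command along these lines:

```sh
garage bucket website --allow eolas
```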
With this in place I can start interacting with the bucket on my server:

```sh
$ aws --profile garage --endpoint-url https://s3.systemsobscure.net s3 cp test.txt s3://eolas/
$ aws --profile garage --endpoint-url https://s3.systemsobscure.net s3 ls s3://eolas/
2025-08-17 15:28:46 test.txt
```
The file I just created can be accessed on the public internet at
[https://eolas.s3.systemsobscure.net/test.txt](https://eolas.s3.systemsobscure.net/test.txt).
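Since public URLs like this are built by convention from the bucket and object
key, a small helper can generate them. This is a sketch assuming the
`s3.systemsobscure.net` web root used throughout this post; the one wrinkle it
handles is that keys containing spaces or non-ASCII characters need
percent-encoding:

```python
from urllib.parse import quote

def public_url(bucket, key, web_root="s3.systemsobscure.net"):
    """Build the public web-endpoint URL for an object in a website bucket.

    Each path segment of the key is percent-encoded, while the "/"
    separators are left intact.
    """
    return f"https://{bucket}.{web_root}/" + quote(key, safe="/")

print(public_url("eolas", "test.txt"))
# https://eolas.s3.systemsobscure.net/test.txt
```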