readme updates
hyperknot committed Jun 11, 2024
1 parent 989b8fa commit 1ce5c85
Showing 3 changed files with 33 additions and 31 deletions.
36 changes: 19 additions & 17 deletions README.md
@@ -46,6 +46,22 @@ The only way this project can possibly work is to be super focused about what it

3. OFM does not promise worry-free automatic updates for self-hosters. Only use the autoupdate version of http-host if you keep a close eye on this repo.

## What is the tech stack?

There is no tile server running; only Btrfs partition images with 300 million hard-linked files. This was my idea; I haven't read about anyone else doing this in production, but it works really well.

There is no cloud, just dedicated servers. The HTTPS server is nginx on Ubuntu.

## Btrfs images

Production-quality hosting of 300 million tiny files is hard. The average file size is just 450 bytes. Dozens of tile servers have been written to tackle this problem, but they all have their limitations.

The original idea of this project is to avoid using tile servers altogether. Instead, the tiles are directly served from Btrfs partition images + hard links using an optimised nginx config. I wrote [extract_mbtiles](scripts/tile_gen/extract_mbtiles) and [shrink_btrfs](scripts/tile_gen/shrink_btrfs) scripts for this very purpose.
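As a rough sketch of the idea (not the actual script — the function name, output layout and the use of SHA-1 for deduplication are assumptions for illustration): read the tiles out of the MBTiles SQLite file, write each one as a plain `z/x/y` file, and turn repeated tiles (empty ocean tiles, for example) into hard links so each unique tile is stored only once.

```python
import hashlib
import os
import sqlite3

def extract_tiles(mbtiles_path: str, out_dir: str) -> None:
    """Dump an .mbtiles file into z/x/y files, hard-linking duplicate tiles."""
    con = sqlite3.connect(mbtiles_path)
    first_copy: dict[str, str] = {}  # tile-content hash -> path of the first copy

    query = 'SELECT zoom_level, tile_column, tile_row, tile_data FROM tiles'
    for z, x, y, data in con.execute(query):
        y = (2 ** z - 1) - y  # MBTiles stores rows in TMS order; flip to XYZ
        path = os.path.join(out_dir, str(z), str(x), str(y))
        os.makedirs(os.path.dirname(path), exist_ok=True)

        digest = hashlib.sha1(data).hexdigest()
        if digest in first_copy:
            os.link(first_copy[digest], path)  # hard link instead of a second copy
        else:
            with open(path, 'wb') as f:
                f.write(data)
            first_copy[digest] = path

    con.close()
```

Because duplicate tiles share an inode, hundreds of millions of paths fit into a compact Btrfs image, and nginx can serve every path as an ordinary static file.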

This replaces a running service with a pure, file-system-level implementation. Since the Linux kernel's file caching is among the highest-performing and most thoroughly tested code ever written, it delivers serious performance.

I ran some [benchmarks](docs/quick_notes/http_benchmark.md) on a Hetzner server; the aim was to saturate a gigabit connection. In the end, it was able to serve 30 Gbit/s on the loopback interface, with a cold nginx cache.

## Code structure

The project has the following parts:
@@ -85,26 +101,14 @@ A very important part, probably needs the most work in the long term future.

#### load balancer script - scripts/loadbalancer

A Round Robin DNS-based load balancer: a script for health checking and updating DNS records. It pushes status messages to a Telegram bot.

Currently it runs in read-only mode; DNS updates need manual confirmation.
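A minimal sketch of the health-check-and-notify part (the server list, the probe tile path and the Telegram credentials below are placeholders, not values from this repo):

```python
import urllib.parse
import urllib.request

# Placeholder values for illustration only
SERVERS = ['https://host1.example.org', 'https://host2.example.org']
PROBE_PATH = '/14/8529/5975.pbf'  # any tile known to exist on every host
BOT_TOKEN = '123456:ABC...'       # Telegram bot token
CHAT_ID = '-1000000000000'        # chat that receives the warnings

def healthy(base_url: str) -> bool:
    """Return True if the host serves the probe tile with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url + PROBE_PATH, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def notify(text: str) -> None:
    """Send a status message through the Telegram Bot API."""
    url = f'https://api.telegram.org/bot{BOT_TOKEN}/sendMessage'
    data = urllib.parse.urlencode({'chat_id': CHAT_ID, 'text': text}).encode()
    urllib.request.urlopen(url, data=data, timeout=10)

for server in SERVERS:
    if not healthy(server):
        notify(f'{server} failed the health check')
```

In read-only mode a failing host is only reported; pulling it out of the Round Robin DNS records stays a manual step.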

## Self hosting

See [self hosting docs](docs/self_hosting.md).

## FAQ

### Full planet downloads
@@ -126,10 +130,8 @@ There are three public buckets:

### Domains and Cloudflare

Tiles are currently available on:

- `tiles.openfreemap.org` - Cloudflare proxied
- `direct.openfreemap.org` - direct connection, Round-Robin DNS

The project has been designed in such a way that we can migrate away from Cloudflare if needed. This is why there are both a .com and a .org domain: the .com will always stay on Cloudflare to host the R2 buckets, while the .org domain is independent.

26 changes: 13 additions & 13 deletions website/assets/style.css
@@ -61,19 +61,19 @@ body {
}

.static,
h1,
h2,
h3,
h4,
h5,
h6,
.col-lbl,
p,
.button-container,
#support-plans-slider {
  margin-left: 40px;
  margin-right: 40px;
}

.static {
  max-width: 600px;
2 changes: 1 addition & 1 deletion website/blocks/main.md
@@ -62,7 +62,7 @@ GitHub: [openfreemap](https://github.com/hyperknot/openfreemap) and [openfreemap

## What is the tech stack?

There is no tile server running; only Btrfs partition images with 300 million hard-linked files. This was my idea; I haven't read about anyone else doing this in production, but it works really well. (You can read more about it on [GitHub](https://github.com/hyperknot/openfreemap).)

There is no cloud, just dedicated servers. The HTTPS server is nginx on Ubuntu.

