Wed 21 May 2025
I've run a few small git forges that've gone through the various fork paths over the years, from Gogs to Gitea to (now) Forgejo. It all runs remarkably well and has been mostly set-and-forget for as long as I can remember, and easily trumps the few times I've had to deal with self hosting something like Gitlab. One weird issue would come and go over the years, though: on first load of a page for Gitea and Forgejo, I'd randomly get an alert about being unable to load the core JS file.
If you just reloaded the page, things would work fine - and frankly that was simpler than bothering to debug it when it appeared. Presumably, the file was just getting cached somehow and subsequent loads would not have issues. I had a few minutes earlier this week to debug it and figured I'd note it down for anyone who's getting confused by it. In my case, this error was actually specific to nginx as a reverse proxy sitting in front of everything.
The Fix
When nginx is proxying a response that's deemed too large, it attempts to buffer it in a temporary file under its cache directory. For whatever reason, on my stock Debian 12 image nginx did not have the right permissions to write there. A simple chown to the user that nginx runs under fixes this:
sudo chown -R www-data:www-data /var/cache/nginx
The error as presented in the browser can be confusing: if you have http2 enabled on your host, Chrome & co will report it as an http2-specific issue. If you disable that, it's correctly reported as invalid chunking - i.e., the response is being cut short because nginx can't write its buffer file while proxying.
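For what it's worth, if you'd rather sidestep the temp-file writes entirely (or your distro keeps them somewhere other than /var/cache/nginx), this is also tunable per-location in your nginx config. A minimal sketch, assuming Forgejo listens locally on port 3000 - adjust to your actual upstream:

```nginx
location / {
    # Hypothetical upstream - point this at wherever Forgejo/Gitea listens.
    proxy_pass http://127.0.0.1:3000;

    # Stream the upstream response through instead of spooling large
    # bodies to a temp file on disk...
    proxy_buffering off;

    # ...or keep buffering in memory but disable the on-disk temp file.
    # proxy_max_temp_file_size 0;
}
```

Disabling buffering ties up a worker connection to the upstream for the duration of each response, so the chown is still the better fix for most setups.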
Tue 29 April 2025
I recently found myself writing a quick script to toggle some jobs on a remote server. The server itself runs a bog-standard PostgreSQL instance to store and manage the jobs, and the setup itself isn't so important that I wanted to spend the effort cobbling together an HTTP server and endpoint - just a quick call over SSH should be more than enough. Now, you could do this in any language, framework, or environment - but the overall project is written in Rust, and I like to keep things uniform where it makes sense to do so. After perusing some docs for a minute or two, it seemed relatively straightforward to do - but I realized I hadn't seen any good examples of this floating around, and so I figured I may as well throw one up here.
This won't be as in-depth or complete as other posts on this site, but the general approach is correct and I'm confident that interested parties can glean what they need from here. It's assumed you already know what you're here for and why.
Crates We Need
There are really only three crates we need to be dealing with here: tokio for the async runtime, tokio-postgres for the database client, and async-ssh2-tokio for the SSH connection.
The latter two crates are the key pairing: how do we make tokio-postgres talk to Postgres over SSH?
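For completeness, the dependency block looks something like this - the versions here are illustrative, so pin whatever is current when you read this:

```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
tokio-postgres = "0.7"
async-ssh2-tokio = "0.8"
```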
Step 1: SSH
Let's get our SSH tunnel sorted. A few key variables we need up-front:
// Remember, this is an example - you typically don't want to hard-code
// some (or all) of these values. Load them from your environment, or
// however you usually handle sensitive configuration.
const SERVER: (&str, u16) = ("your.server.ip.address", 22);
const USER: &str = "your_ssh_username_here";
const KEY_PATH: &str = "path_to_your_ssh_key_here";
const DB_SERVER: (&str, u16) = ("127.0.0.1", 5432);
const DB_USER: &str = "your_postgres_username_here";
const DB_PASSWORD: &str = "your_postgres_password_here";
const DB_NAME: &str = "your_database_name_here";
Next, we can go ahead and start the SSH tunnel. The flow is relatively straightforward: open the connection, connect to Postgres, and then pass the stream to tokio-postgres to use. At time of writing, open_direct_tcpip_channel has no documentation - but it's thankfully a self-explanatory method name, and the idea is the same as what you'd find in other languages and ecosystems.
use async_ssh2_tokio::{AuthMethod, Client, ServerCheckMethod};

let key = AuthMethod::with_key_file(KEY_PATH, None);

// Note: NoCheck skips host key verification, which is fine for a quick
// script - verify the host key properly for anything long-lived.
match Client::connect(SERVER, USER, key, ServerCheckMethod::NoCheck).await {
    Ok(client) => match client.open_direct_tcpip_channel(DB_SERVER, None).await {
        Ok(channel) => {
            if let Err(error) = do_pg_stuff(channel.into_stream()).await {
                eprintln!("Failed to do PG stuff: {:?}", error);
            }
        },
        Err(error) => {
            eprintln!("Unable to open TCP/IP channel: {:?}", error);
        }
    },
    Err(error) => {
        eprintln!("Unable to connect via SSH: {:?}", error);
    }
}
Step 2: Postgres
With a successful tunnel going, we can complete our Postgres client and execute a simple select to make sure things work. The key part is calling connect_raw, which accepts a stream interface. Rather than deal with type hell, we're going to just take the easy route and let the compiler infer things for us through the type parameter S.
use std::marker::Unpin;

use tokio::io::{AsyncRead, AsyncWrite};
use tokio_postgres::{Config, Error, NoTls};

async fn do_pg_stuff<S>(stream: S) -> Result<(), Error>
where
    S: AsyncRead + AsyncWrite + Unpin + Send + 'static,
{
    let mut config = Config::new();
    config.user(DB_USER);
    config.password(DB_PASSWORD);
    config.dbname(DB_NAME);

    let (client, connection) = config.connect_raw(stream, NoTls).await?;

    // The connection object drives the actual communication with the
    // database, so spawn it off to run in the background.
    tokio::spawn(async move {
        if let Err(error) = connection.await {
            eprintln!("Connection error: {:?}", error);
        }
    });

    let rows = client
        .query("SELECT $1::TEXT", &[&"hello world"])
        .await?;

    let value: &str = rows[0].get(0);
    println!("{}", value);

    Ok(())
}
Put it all together, and you have a Postgres client in Rust over an SSH tunnel. This is definitely simpler with tokio-postgres; if you find yourself wanting to do this with diesel or sqlx, you'll likely need to do some extra legwork with their respective Connection traits to ferry things back and forth.