Running Local LLMs (“AI”) on Old AMD GPUs and Laptop iGPUs (Arch Linux Guide)

A straightforward guide to compiling llama.cpp with Vulkan support on Arch Linux (and Arch-based distros like CachyOS, EndeavourOS, etc.). This lets you run models on old, officially unsupported AMD cards and Intel iGPUs.

The same steps work on Debian/Ubuntu, but the package names are different.
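
A minimal sketch of the build on Arch, assuming current package names and a recent llama.cpp checkout (newer builds use the GGML_VULKAN CMake flag; older releases used LLAMA_VULKAN):

```bash
# Build tools plus the Vulkan loader, headers, and shader compilers.
sudo pacman -S --needed base-devel git cmake \
    vulkan-icd-loader vulkan-headers vulkan-tools shaderc glslang

# Driver-side Vulkan ICDs: RADV for AMD cards, ANV for Intel iGPUs.
sudo pacman -S --needed vulkan-radeon   # and/or vulkan-intel

# Clone and compile llama.cpp with the Vulkan backend.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON         # older releases: -DLLAMA_VULKAN=ON
cmake --build build --config Release -j"$(nproc)"
```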

Here’s how I’m running models on 3 × AMD Radeon RX 580 8 GB (24 GB VRAM total) without ROCm in 2025.
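
Once it builds, something along these lines runs a model across all the cards. The model path is a placeholder, -ngl 99 simply means "offload every layer", and --split-mode layer spreads layers over all Vulkan devices llama.cpp can see:

```bash
# Sanity check: all three RX 580s should show up as Vulkan devices.
vulkaninfo --summary

# Offload all layers and split them across the available GPUs.
# (Model path is a placeholder; older builds name the binary "main".)
./build/bin/llama-cli -m ./models/some-model.Q4_K_M.gguf \
    -ngl 99 --split-mode layer -p "Hello"
```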

My CV in Markdown (on GitHub) 📄✨

Hey there!

If you’re curious about who I am, what I do, and what I’ve been up to in life 💻🔧 — my CV is now publicly available on GitHub!

👉 Check it out here: https://github.com/albinhenriksson/cv

It’s written in pure Markdown – clean, readable, version-controlled, and 100% fluff-free. Perfect if you live in the terminal, use git, and like things tidy.


💬 Got feedback?
📬 Want to hire me?
🧩 Have a cool project I should be part of?

Source Code Now Available

I’ve made the source code to this website public!

If you’re curious about how it’s built, want to fork it, steal some ideas, or just poke around, you can find everything here:

👉 https://github.com/albinhenriksson/ahenriksson.com

It’s a simple Hugo site with a custom theme, a few tweaks, and no unnecessary fluff. Built with love, caffeine, and way too much time spent tweaking config.toml.
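
If you want to poke around locally, a quick sketch, assuming you have Hugo installed (the --recurse-submodules flag is only a guess, in case the theme lives in a git submodule):

```bash
# Clone the site and preview it locally, drafts included.
git clone --recurse-submodules https://github.com/albinhenriksson/ahenriksson.com
cd ahenriksson.com
hugo server -D   # serves a live preview at http://localhost:1313
```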

Enjoy, and feel free to open an issue or pull request if you find something broken (or want to add something weird).
