The complete guide to installing Nextcloud on Debian 10

Today we’re installing the free and open source cloud platform Nextcloud on a machine running Debian 10. Nextcloud is, in a nutshell, a secure self-hosted replacement for Dropbox, Google Docs and Google Calendar. You should own your data – not the big companies. This guide somewhat resembles the WordPress how-to: as we did then, we set up a database, a web server and a website written in PHP, meant to sit behind a reverse proxy that gives it a secure connection to the internet.

Just a note: there are simpler ways of doing this, either by using Docker (link here) or by using Snaps (link here) – but you won’t get the same ability to tweak, configure or add third-party apps. And most importantly, you won’t learn how it works.

I. The Database

We start off with installing a relational database management system.

apt -y install mariadb-server mariadb-client

Then we set it up – use a long, secure password for the root user.

mysql_secure_installation

Now it’s time to create the database and database user Nextcloud will be using.

mysql -u root -p
CREATE USER 'nextcloud_user'@'localhost' IDENTIFIED BY 'super-secure-password';
CREATE DATABASE nextcloud_db;
GRANT ALL PRIVILEGES ON nextcloud_db.* TO 'nextcloud_user'@'localhost';
FLUSH PRIVILEGES;
QUIT

II. The Web Server

Since Nextcloud is written in PHP, we have to install it (and some extensions).

apt -y install php php-{cli,xml,zip,curl,gd,cgi,mysql,mbstring,imagick,intl}

The last thing we have to install is our web server that will be hosting our Nextcloud instance.

apt -y install apache2 libapache2-mod-php

With that done, we have to make a few adjustments to the PHP settings – search for and replace the following values in /etc/php/7.3/apache2/php.ini.

nano /etc/php/7.3/apache2/php.ini
date.timezone = Europe/Stockholm
memory_limit = 512M
upload_max_filesize = 500M
post_max_size = 500M
max_execution_time = 300
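A note on how these limits relate: post_max_size has to cover the whole request body, so it must be at least upload_max_filesize, and memory_limit in turn should not be smaller than post_max_size, or large uploads will fail. A small sketch of that rule – the to_bytes helper below is purely illustrative:

```shell
# Hypothetical helper: convert a php.ini shorthand size (e.g. 500M) to bytes.
to_bytes() {
  n=${1%[KMG]}
  case $1 in
    *K) echo $((n * 1024)) ;;
    *M) echo $((n * 1024 * 1024)) ;;
    *G) echo $((n * 1024 * 1024 * 1024)) ;;
    *)  echo "$1" ;;
  esac
}

upload=$(to_bytes 500M)   # upload_max_filesize
post=$(to_bytes 500M)     # post_max_size
memory=$(to_bytes 512M)   # memory_limit

# post_max_size >= upload_max_filesize, and memory_limit >= post_max_size.
[ "$post" -ge "$upload" ] && [ "$memory" -ge "$post" ] && echo "limits look consistent"
```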

It’s time to download the latest version of Nextcloud (at the time of writing it is 18).

wget
unzip

Then we place it in our website folder (by default /var/www/html) and change the permissions, but first we delete the default website Apache installed.

rm /var/www/html/index.html
cd nextcloud/
mv * /var/www/html/
mv .htaccess /var/www/html/
mv .user.ini /var/www/html/
chown -R www-data:www-data /var/www/html
chmod -R 755 /var/www/html
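If the octal modes are unfamiliar: 755 grants read, write and execute to the owner (www-data) and read plus execute to everyone else. A quick self-contained illustration, assuming GNU coreutils:

```shell
# 7 = rwx for the owner, 5 = r-x for group and others; directories need the
# execute bit to be traversable, which is why 755 (not 644) is used here.
tmp=$(mktemp -d)
touch "$tmp/file"
chmod 755 "$tmp/file"
stat -c '%a %A' "$tmp/file"   # prints: 755 -rwxr-xr-x
rm -r "$tmp"
```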

Since it’s more secure not to store your data in a subfolder of /var/www, we create a dedicated data folder for Nextcloud outside of it.

mkdir /nextcloud-data
chown -R www-data:www-data /nextcloud-data

Now we edit our Apache configuration so that it looks something like this – use your own email and website address.

nano /etc/apache2/sites-available/nextcloud.conf
<VirtualHost *:80>
        # The ServerName directive sets the request scheme, hostname and port that
        # the server uses to identify itself. This is used when creating
        # redirection URLs. In the context of virtual hosts, the ServerName
        # specifies what hostname must appear in the request's Host: header to
        # match this virtual host. For the default virtual host (this file) this
        # value is not decisive as it is used as a last resort host regardless.
        # However, you must set it for any further virtual host explicitly.
        #ServerName

        ServerAdmin
        DocumentRoot /var/www/html
        ServerName

        <Directory /var/www/html/>
                Options +FollowSymlinks
                AllowOverride All
                Require all granted
                <IfModule mod_dav.c>
                        Dav off
                </IfModule>
                SetEnv HOME /var/www/html
                SetEnv HTTP_HOME /var/www/html
        </Directory>

        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        # For most configuration files from conf-available/, which are
        # enabled or disabled at a global level, it is possible to
        # include a line for only one particular virtual host. For example the
        # following line enables the CGI configuration for this host only
        # after it has been globally disabled with "a2disconf".
        #Include conf-available/serve-cgi-bin.conf
</VirtualHost>

Then we activate our configuration and enable some needed Apache modules with the following commands.

unlink /etc/apache2/sites-enabled/000-default.conf
ln -s /etc/apache2/sites-available/nextcloud.conf /etc/apache2/sites-enabled/
a2enmod rewrite
a2enmod headers
a2enmod env
a2enmod dir
a2enmod mime
systemctl restart apache2

You have to add your server’s IP address and your website address as trusted domains in the Nextcloud config in order to be able to access the website.

nano /var/www/html/config/config.php
'trusted_domains' =>
array (
  0 => '',
  1 => '',
  2 => '',
),
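What this setting does, roughly: Nextcloud compares the Host header of every incoming request against this list and refuses to serve pages for unknown hosts. A toy simulation of that check – the domains below are made-up examples:

```shell
trusted="cloud.example.com 192.168.1.10 localhost"

# Return success only if the requested host appears in the trusted list.
is_trusted() {
  for d in $trusted; do
    [ "$1" = "$d" ] && return 0
  done
  return 1
}

is_trusted "cloud.example.com" && echo "request served"
is_trusted "evil.example.org"  || echo "request refused"
```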

The website is now accessible! Head over to it by entering the server’s IP address in your browser. In the setup wizard, use the database account we created (nextcloud_user), the database (nextcloud_db) and our data folder (/nextcloud-data).

We could now say that we’re finished, but we are not! There are still some tweaks to be made…

III. Additional Fixes

If you head to http://your-ip-address/settings/admin/overview you can see if there are any errors. One that I got was:

The database is missing some indexes. Due to the fact that adding indexes on big tables could take some time they were not added automatically. By running “occ db:add-missing-indices” those missing indexes could be added manually while the instance keeps running. Once the indexes are added queries to those tables are usually much faster.

This was fixed with the following commands.

cd /var/www/html/
apt install sudo
sudo -u www-data php occ db:add-missing-indices

Another error I got was:

Some columns in the database are missing a conversion to big int. Due to the fact that changing column types on big tables could take some time they were not changed automatically. By running ‘occ db:convert-filecache-bigint’ those pending changes could be applied manually. This operation needs to be made while the instance is offline. For further details read the documentation page about this.

This was fixed with:

sudo -u www-data php occ db:convert-filecache-bigint

Then, in order to get “pretty URLs”, we edit the Nextcloud config.

nano /var/www/html/config/config.php
'overwrite.cli.url' => '',
'overwritehost' => '',
'htaccess.RewriteBase' => '/',

Then we update Nextcloud with our new settings.

cd /var/www/html/
sudo -u www-data php occ maintenance:update:htaccess

We’re almost done, but first we have to change how Nextcloud handles background tasks from AJAX to cron. We do this by adding a cron job and then changing the setting in the website settings (http://your-ip-address/settings/admin).

crontab -u www-data -e
*/5 * * * * php -f /var/www/html/cron.php
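To unpack that crontab line: the five fields are minute, hour, day of month, month and day of week, and `*/5` in the minute field means “every minute divisible by 5”. The minutes it fires at can be listed like this:

```shell
# cron.php will run at these minutes past every hour, i.e. every five minutes.
seq 0 5 55 | tr '\n' ' '
echo
```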

The final step in our journey is to set up caching – it will greatly improve the performance of your cloud. The caching solution we will be using is APCu, so let’s install it!

apt install php-apcu
systemctl restart apache2

After installing APCu we have to enable it, by writing the following in /var/www/html/config/config.php.

nano /var/www/html/config/config.php
'memcache.local' => '\OC\Memcache\APCu',
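For clarity, the line goes inside the $CONFIG array in config.php, next to the keys that are already there – roughly like this (the other entries shown are just placeholders taken from earlier in the guide):

```php
<?php
$CONFIG = array (
  // ... your existing settings, for example ...
  'dbname' => 'nextcloud_db',
  'datadirectory' => '/nextcloud-data',

  // Add this entry to enable the local APCu cache:
  'memcache.local' => '\OC\Memcache\APCu',
);
```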

Then we enable it in the PHP settings by adding a line to /etc/php/7.3/apache2/php.ini.

nano /etc/php/7.3/apache2/php.ini

And that is it! You are now the owner of your very own cloud. 🙂

P.S. if you find something wrong with the guide please tell me so I can fix it!

Fixing apt update/upgrade on Proxmox (without subscription)

This is an extremely trivial guide – but when I installed Proxmox for the first time, I could have used a guide like it.

By default, Proxmox is set to update against the paid enterprise repositories – but without a subscription you have no access to them. What you have to do is remove the enterprise repository and add the free equivalent. Start by deleting the following file.

rm /etc/apt/sources.list.d/pve-enterprise.list

Then we add “deb buster pve-no-subscription” to the file /etc/apt/sources.list.

nano /etc/apt/sources.list
deb buster main contrib
deb buster-updates main contrib

# Add the line below!
deb buster pve-no-subscription

# security updates
deb buster/updates main contrib

You should now be able to update your server and begin your virtualization adventure.

Set up a log server with Rsyslog on Debian 10

It is very handy to store all your logs in one place, especially in the event of a crash on one of your machines – you can then do the detective work on why it crashed from another computer (one that works). It also makes it easier to search for errors across your machines. This is why you need a log server – and today we’re installing one in a Debian 10 LXC container (though it could just as well be installed on a virtual or physical machine).

I. Server side

We start off with checking that Rsyslog is running, as it should be installed with the distro.

systemctl status rsyslog

If it isn’t running, install, start and enable it.

apt install rsyslog
systemctl start rsyslog
systemctl enable rsyslog

Now we’re going to edit its configuration file – let’s first make a copy.

cp /etc/rsyslog.conf /etc/rsyslog.conf.old
nano /etc/rsyslog.conf

Since we’re enabling both the faster (but unreliable) UDP protocol and the slower (but safer) TCP protocol on the server, we uncomment these lines.

# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")

# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")

Now we describe how we want the logs to be stored by defining a template – add the following.

# Everything should be logged in "/var/log/host/progname.log".
$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
# It should be formatted as: '[facility-level].[severity-level] ?RemoteLogs'.
*.* ?RemoteLogs
# Stop.
& ~
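To make the template concrete: rsyslog substitutes the %HOSTNAME% and %PROGRAMNAME% properties of each received message into the file path. A quick simulation with made-up values:

```shell
# The template from rsyslog.conf, and sample properties from a message.
template='/var/log/%HOSTNAME%/%PROGRAMNAME%.log'
hostname='webserver01'
program='sshd'

# rsyslog does this substitution internally for every message it files away.
path=$(echo "$template" | sed "s/%HOSTNAME%/$hostname/; s/%PROGRAMNAME%/$program/")
echo "$path"   # /var/log/webserver01/sshd.log
```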

Then we restart the service, install a firewall and define the needed rules.

systemctl restart rsyslog
apt install ufw
ufw enable
ufw allow 514/tcp
ufw allow 514/udp

And we’re done on the server, now it’s time to configure our computers to send their logs to our server.

II. Client side

It’s time to configure our clients – repeat these steps on all your computers and servers. First, we edit the configuration file.

nano /etc/rsyslog.conf

Add the following, appending your own server’s IP address after the @@.

# Log everything on our server.
*.* @@
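For reference, a single `@` before the address would forward over UDP, while the double `@@` used here selects TCP; a port can be appended after the address. A sketch with a placeholder address (192.0.2.10 is from the reserved documentation range, substitute your own):

```
# UDP (fast, but messages can be lost):
*.* @192.0.2.10:514

# TCP (reliable, what this guide uses):
*.* @@192.0.2.10:514
```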

Finally restart Rsyslog and you’re done!

systemctl restart rsyslog

How to install TensorFlow with GPU support for AMD on Debian Buster

The ordinary version of TensorFlow only supports Nvidia CUDA-enabled graphics cards, which sucks for us AMD users – but there is a solution! We will be using the open source ROCm project.

First we get the needed key and add the source.

wget -qO - | sudo apt-key add -
echo 'deb [arch=amd64] xenial main' | sudo tee /etc/apt/sources.list.d/rocm.list
sudo apt update

Check if your user is part of the video group – if not, add it.

groups myuser

Now we install the needed packages and kernel module.

sudo apt install rocm-dkms rocm-libs hipcub miopen-hip
sudo reboot

Let’s check if it works.


It’s time to install the tensorflow-rocm package using pip.

pip3 install --user tensorflow-rocm --upgrade

Done! Now we can test a hello world example (borrowed from Aymeric Damien’s TensorFlow-Examples project).

''' HelloWorld example using TensorFlow library.
Author: Aymeric Damien
Project:
'''

from __future__ import print_function
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Simple hello world using TensorFlow

# Create a Constant op
# The op is added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
hello = tf.constant('Hello, TensorFlow!')

# Start tf session
sess = tf.Session()

# Run the op
print(sess.run(hello))

It’s working! I got the following output.

>WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/compat/ disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
>Instructions for updating:
>non-resource variables are not supported in the long term
>2020-01-28 11:59:20.257996: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.304648: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
>name: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
>AMDGPU ISA: gfx803
>memoryClockRate (GHz) 1.366
>pciBusID 0000:08:00.0
>2020-01-28 11:59:20.470758: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.487690: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.498248: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.505408: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.505622: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0
>2020-01-28 11:59:20.506136: I tensorflow/core/platform/] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
>2020-01-28 11:59:20.513473: I tensorflow/core/platform/profile_utils/] CPU Frequency: 3792680000 Hz
>2020-01-28 11:59:20.514304: I tensorflow/compiler/xla/service/] XLA service 0x519fb60 executing computations on platform Host. Devices:
>2020-01-28 11:59:20.514340: I tensorflow/compiler/xla/service/] StreamExecutor device (0): Host, Default Version
>2020-01-28 11:59:20.514497: I tensorflow/core/common_runtime/gpu/] Found device 0 with properties:
>name: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
>AMDGPU ISA: gfx803
>memoryClockRate (GHz) 1.366
>pciBusID 0000:08:00.0
>2020-01-28 11:59:20.514536: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.514548: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.514559: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.514569: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
>2020-01-28 11:59:20.514629: I tensorflow/core/common_runtime/gpu/] Adding visible gpu devices: 0
>2020-01-28 11:59:20.514690: I tensorflow/core/common_runtime/gpu/] Device interconnect StreamExecutor with strength 1 edge matrix:
>2020-01-28 11:59:20.514702: I tensorflow/core/common_runtime/gpu/] 0
>2020-01-28 11:59:20.514709: I tensorflow/core/common_runtime/gpu/] 0: N
>2020-01-28 11:59:20.514837: I tensorflow/core/common_runtime/gpu/] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7539 MB memory) -> physical GPU (device: 0, name: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590], pci bus id: 0000:08:00.0)
>b'Hello, TensorFlow!'

My Soundproofed Homelab Closet (where this site is hosted)

Recently I soundproofed my closet, installed a fan in the door, got rid of my pfSense router and FreeNAS box, and virtualized everything.

The door with the fan added.
My homelab and refrigerator.
The homelab explained.

The server is running Proxmox with virtualized machines for an OPNsense router, a web server (this site), a NAS, a Nextcloud server and an NGINX reverse proxy. The patch panel goes out to my apartment, connecting to my workstation, a Trådfri gateway and an Ubiquiti UniFi AP AC LR.


Switch: Zyxel GS1008-HP – 8 Port / Gigabit Ethernet / PoE+ / Unmanaged / 60 Watt.
Server: Sun Fire X4170 M2 / 2 x Intel Xeon E5620 @ 2.40GHz / 96 GB RAM / 6 x 500 GB HDD in RAIDZ2.

How to enable uploads of files larger than 2MB to your WordPress-site (using NGINX)

I. Configure PHP-FPM

We start by editing the PHP-FPM configuration file php.ini found here on Debian Buster (replace “7.3” with whatever version you’re running).

nano /etc/php/7.3/fpm/php.ini

Add the following lines at the very end of the file.

upload_max_filesize = 100M
post_max_size = 100M

Reload the PHP-FPM service.

systemctl reload php7.3-fpm

II. Configure NGINX

Now we configure the NGINX website configuration file on the host (note: not on the reverse proxy).

nano /etc/nginx/sites-available/

So that it looks something like this.

server {
    listen 80;

    root /var/www/html/;
    server_name;

    location / {
        index index.php index.html;
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        client_max_body_size 100M; # Add this line!
    }
}

Now we check if we did anything wrong and if not, reload the NGINX service.

nginx -t
systemctl reload nginx

And that should do it. Now I can finally upload those huge images SEOs love.

How to set up WordPress behind a secure reverse proxy using NGINX

After you have obtained your SSL certificate and enabled HTTPS redirection in NGINX, WordPress will not work due to mixed content (HTTP and HTTPS) – you won’t be able to log in.

In order to fix this you first have to add this at the very start of your wp-config.php.

define('FORCE_SSL_ADMIN', true);
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
    $_SERVER['HTTPS']='on';
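What that snippet does, expressed as a shell sketch: the reverse proxy terminates TLS and announces it to the backend via the X-Forwarded-Proto header, and WordPress should then treat the connection as HTTPS even though the proxy-to-backend hop is plain HTTP:

```shell
# Simulated request header as set by the reverse proxy (example value).
HTTP_X_FORWARDED_PROTO=https

# Mirror of the wp-config.php condition: trust the proxy's claim.
if [ "$HTTP_X_FORWARDED_PROTO" = "https" ]; then
  HTTPS=on
fi
echo "HTTPS=$HTTPS"   # HTTPS=on
```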

Then at the end of the file add the following, replacing “” with your own URL.

define('WP_HOME','');
define('WP_SITEURL','');

Finally you have to add the following to your NGINX-config in the site’s location block on your reverse proxy.

proxy_set_header X-Forwarded-Proto https;

Now it should be working! You should probably also add the plugin Really Simple SSL for its mixed content fixer.

Make Debian/Ubuntu LXC containers more comfortable to use

I’ve been playing around with Proxmox and LXC containers lately, and this is something I do with every container I create to make it more user friendly.

First enable colors in the terminal.

echo "PS1='\[\033[1;36m\]\u\[\033[1;31m\]@\[\033[1;32m\]\h:\[\033[1;35m\]\w\[\033[1;31m\]\$\[\033[0m\] '" >> ~/.bashrc
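For reference, the escapes in that PS1 are standard bash prompt sequences: \u is the username, \h the hostname, \w the working directory and \$ prints # for root ($ otherwise); the \033[…m sequences are ANSI color codes, and the \[ \] pairs mark them as zero-width so bash can compute the prompt length correctly. A plain, uncolored prompt with the same layout would be:

```shell
# Same structure as the colored prompt, minus the ANSI color codes.
PS1='\u@\h:\w\$ '
printf '%s\n' "$PS1"
```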

Then we enable completions using tab.

echo "source /etc/profile.d/" >> ~/.bashrc

I usually also get locale problems which are fixed with the following.

echo "LC_ALL=en_US.UTF-8" >> /etc/environment
echo "en_US.UTF-8 UTF-8" >> /etc/locale.gen
echo "LANG=en_US.UTF-8" > /etc/locale.conf
locale-gen en_US.UTF-8

Finally, install cron-apt so that the container downloads updates at 04:00 every day. Also set it to email the results of the nightly download no matter what.

apt install cron-apt
echo 'MAILON="always"' >> /etc/cron-apt/config

How to install WordPress on Debian 10 with NGINX (LEMP)

This is mostly a guide for myself so I can remember how to install a WordPress-site – but if somebody else finds it helpful that’s great.

We begin by installing the web server, the MariaDB relational database management system and PHP.

apt install nginx mariadb-server php

While we’re at it let’s install a firewall that is easy to manage.

apt install ufw

Now we enable the firewall and allow HTTP-traffic (port 80) and HTTPS-traffic (port 443).

ufw enable
ufw allow 80/tcp
ufw allow 443/tcp

It’s time to configure the MySQL database. We start by doing a secure installation – choose a good password!

mysql_secure_installation
Let’s add a database and a user – whereupon we give that user permissions to use that database.

mysql -u root -p
CREATE DATABASE website_db;
CREATE USER website_user@localhost IDENTIFIED BY 'super-secure-password';
GRANT ALL PRIVILEGES ON website_db.* TO website_user@localhost;
FLUSH PRIVILEGES;
QUIT;

Now we create the NGINX-configuration. We will be using HTTP and not HTTPS since it is presumed we have a reverse proxy pointing at the server.

nano /etc/nginx/sites-available/

Add the following configuration. Remember to change to whatever site you are using.

server {
    listen 80;

    root /var/www/html/;
    server_name;

    location / {
        index index.php index.html;
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

And now we enable it by making a soft link of the configuration file to /etc/nginx/sites-enabled/.

unlink /etc/nginx/sites-enabled/default
ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
nginx -t
systemctl reload nginx

In order to hide some server info, like the NGINX version number and OS variant, from prying eyes, uncomment the following line in /etc/nginx/nginx.conf.

server_tokens off;

Time to install some needed PHP packages.

apt install php-curl php-gd php-intl php-mbstring php-soap php-xml php-xmlrpc php-zip php7.3-opcache php7.3-mysql php7.3-cli php7.3-fpm

It’s finally time to download the latest WordPress. We then extract it into our chosen folder and create a config file. Finally, we change the permissions.

wget -P /tmp
tar xzf /tmp/latest.tar.gz --strip-components=1 -C /var/www/html/
cp /var/www/html/ /var/www/html/
chown -R www-data:www-data /var/www/html/
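The --strip-components=1 flag is what keeps you from ending up with /var/www/html/wordpress/: it drops the tarball’s top-level directory during extraction. A self-contained demonstration in a temp directory:

```shell
# Build a tiny tarball with a top-level "wordpress/" directory...
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/wordpress"
echo '<?php' > "$src/wordpress/index.php"
tar czf "$src/demo.tar.gz" -C "$src" wordpress

# ...and extract it with the top-level directory stripped off:
tar xzf "$src/demo.tar.gz" --strip-components=1 -C "$dst"
ls "$dst"   # index.php  (not wordpress/index.php)
rm -r "$src" "$dst"
```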

Use the following command to get your unique authentication keys and salts.

curl -s

You will get something like this.

define('AUTH_KEY', ',?Hmr:|+LZ;)Zsd?w]]F!3||@Ov|b)qeCzvi9oPLs%}XCwb dd#[P^[`KtPFwV3U');
define('SECURE_AUTH_KEY', 'O@{,Mud,nvBEse3u hs]=tmR|WqHs-tMt`_`Se4-&Ol8z,`2^Wz4F04r.ENsWw;|');
define('LOGGED_IN_KEY', '.fQshMa_vEXVm(NhV4V./*H(F#&-Kj~u.4`S6{g,8++{o6%-/tB+/?6]FW_`XpE3');
define('NONCE_KEY', 'YO6;Y-M*h?x31WTpj$6Ff@A&Ewf/FObV IrL(NHmfa EX2|l^V]#;qcI43knKTbQ');
define('AUTH_SALT', 'q)x^?N+%g=.mWD5Aqlgy;A2VaV-&DdJgg.(pKz46}K1G;Q[e^%evH`<.Y^Ikisel');
define('SECURE_AUTH_SALT', 'I_R%nZ(?5s>Y2q*vg-^(Fc;sIM5euDh-H 0DVu?q/P.Mi JmK|A|}XZS@8f60Dvk');
define('LOGGED_IN_SALT', 'vyZ?hUhL<$ZZd;q#`Hj&3U/q1K,y iD2-bRa.gnU6rh?+Vp2O.|Eb,|`^bFX`QQm');
define('NONCE_SALT', ' 7--}-Q/;Ig9<0i/J5! +vTp*WHdGr `nNOAFg05Y(yyO+i+xN7+D06T(oNueMo`');
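If the machine has no outbound internet access, you can generate equally good secrets locally; WordPress just needs long, random, unique strings. A sketch using openssl (installed by default on Debian):

```shell
# 48 random bytes, base64-encoded: a 64-character high-entropy secret.
key=$(openssl rand -base64 48)
printf '%s' "$key" | wc -c   # 64
```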

Copy it and open the WordPress config file.

nano /var/www/html/

Change these lines.

. . .
/** The name of the database for WordPress */
define( 'DB_NAME', 'website_db' );

/** MySQL database username */
define( 'DB_USER', 'website_user' );

/** MySQL database password */
define( 'DB_PASSWORD', 'super-secure-password' );
. . .
/**#@+
 * Authentication Unique Keys and Salts.
 *
 * Change these to different unique phrases!
 * You can generate these using the {@link secret-key serv$
 * You can change these at any point in time to invalidate all existing cookies. This will force all users to have to l$
 *
 * @since 2.6.0
 */
define('AUTH_KEY', ',?Hmr:|+LZ;)Zsd?w]]F!3||@Ov|b)qeCzvi9oPLs%}XCwb dd#[P^[`KtPFwV3U');
define('SECURE_AUTH_KEY', 'O@{,Mud,nvBEse3u hs]=tmR|WqHs-tMt`_`Se4-&Ol8z,`2^Wz4F04r.ENsWw;|');
define('LOGGED_IN_KEY', '.fQshMa_vEXVm(NhV4V./*H(F#&-Kj~u.4`S6{g,8++{o6%-/tB+/?6]FW_`XpE3');
define('NONCE_KEY', 'YO6;Y-M*h?x31WTpj$6Ff@A&Ewf/FObV IrL(NHmfa EX2|l^V]#;qcI43knKTbQ');
define('AUTH_SALT', 'q)x^?N+%g=.mWD5Aqlgy;A2VaV-&DdJgg.(pKz46}K1G;Q[e^%evH`<.Y^Ikisel');
define('SECURE_AUTH_SALT', 'I_R%nZ(?5s>Y2q*vg-^(Fc;sIM5euDh-H 0DVu?q/P.Mi JmK|A|}XZS@8f60Dvk');
define('LOGGED_IN_SALT', 'vyZ?hUhL<$ZZd;q#`Hj&3U/q1K,y iD2-bRa.gnU6rh?+Vp2O.|Eb,|`^bFX`QQm');
define('NONCE_SALT', ' 7--}-Q/;Ig9<0i/J5! +vTp*WHdGr `nNOAFg05Y(yyO+i+xN7+D06T(oNueMo`');

You’re done! Now log in and do the final steps by visiting your server’s IP address in the browser.

If you found something wrong with the tutorial, don’t hesitate to write a comment!