Setting up Subversion over SSL and nginx on Debian

Subversion supports WebDAV access only through the Apache server. To get it working behind nginx, Apache has to be installed on the same system.

To start, install Apache and its Subversion module

apt-get install apache2 libapache2-svn

For the Apache and nginx web servers to coexist on the same machine and run at the same time, they have to listen on different ports. The standard SSL port is 443, so let's move Apache's SSL port to 8443. To avoid exposing that port to the Internet, make Apache listen on 8443 only locally.

Configure the Apache ports in /etc/apache2/ports.conf:
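A minimal sketch of the ports.conf change, assuming Apache should serve SSL on the loopback interface only:

```apache
# /etc/apache2/ports.conf
# Bind Apache's SSL port to loopback only, so 8443 is not
# reachable from the Internet
Listen 127.0.0.1:8443
```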


Activate SSL and the DAV modules on Apache

$ a2enmod ssl
$ a2enmod dav
$ a2enmod dav_svn

Restart Apache

service apache2 restart

Add the DAV configuration

nano -w /etc/apache2/mods-available/dav_svn.conf

LoadModule dav_svn_module modules/mod_dav_svn.so

LoadModule authz_svn_module modules/mod_authz_svn.so

# Example configuration:

<Location /svn>
       DAV svn
       SVNPath /var/svn/my_repos
       SVNListParentPath on

       AuthType Basic
       AuthName "Subversion repository"
       AuthUserFile /var/svn/conf/svnusers.conf
       Require valid-user
</Location>


Copy the default SSL site configuration

cd /etc/apache2/sites-enabled
cp ../sites-available/default-ssl.conf svn-ssl.conf
nano -w svn-ssl.conf

And also change the virtual host port in svn-ssl.conf to 8443.
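Assuming the stock default-ssl.conf layout, the edit amounts to changing the virtual host's port:

```apache
# svn-ssl.conf: serve the SSL virtual host on the local-only port 8443
<VirtualHost 127.0.0.1:8443>
        ...
</VirtualHost>
```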

Create the password file and add users (the -c flag creates the file)

htpasswd -cm /var/svn/conf/svnusers.conf user1
htpasswd -m /var/svn/conf/svnusers.conf user2

Check permissions. On Debian, Apache runs as the www-data user and group. You can verify this in the /etc/apache2/apache2.conf and /etc/apache2/envvars files, or simply with ps aux | grep apache.

Make sure the same user/group are owners of the repository.

chown -R www-data:www-data /var/svn/

Restart Apache and check that it works, for example with a text-mode browser such as links


Create an nginx conf file, or add a proxy pass to an existing config

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen       443 ssl;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    access_log /var/log/nginx/svn.access.log;
    error_log /var/log/nginx/svn.error.log;

    location / {
        proxy_pass https://127.0.0.1:8443;
    }
}
Restart nginx and check in your web browser.

Migrating SVN repository to Git preserving history

After spending a number of hours moving a simple single-branch SVN repository to a Git repository that can be accessed over https, here are the steps taken.
Assume my_svn_repository is available at a known SVN URL.

First, a list of all svn committers is needed. Save the list in a file ./authors.txt

user1 = user1 <>
user2 = user2 <>
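The authors list can be generated with a hypothetical helper that turns `svn log -q` output into authors.txt stub lines (the sample input below stands in for real log output; real entries should get proper e-mail addresses):

```shell
# Extract unique committer names from `svn log -q`-style output and
# print one "name = name <>" stub per committer.
printf 'r10 | user1 | 2015-01-01\n------\nr11 | user2 | 2015-01-02\n' \
  | awk -F'|' '/^r[0-9]/ { gsub(/ /, "", $2); if (!seen[$2]++) printf "%s = %s <>\n", $2, $2 }'
```

In real use, pipe `svn log -q` of the repository through the awk filter and redirect the result to ./authors.txt, then fill in the e-mail addresses.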

Secondly, clone the SVN repository

cd ~/tmp/git/
git svn clone --authors-file=./authors.txt <svn_url> git_temp_repository

Thirdly, convert the svn:ignore properties into a .gitignore file

cd git_temp_repository
git svn show-ignore > .gitignore
git add .gitignore
git commit -m 'Cleaning svn:ignore properties'

In step four, clone the temporary repository into a bare Git repository:

git clone --bare ~/tmp/git/git_temp_repository my_git_repository.git

To set up a password-protected nginx server that serves the repository only over https, first install fcgiwrap

apt-get install fcgiwrap

and add config file:

server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;

    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    access_log /var/log/nginx/git.access.log;
    error_log /var/log/nginx/git.error.log;

    auth_basic "Restricted";
    auth_basic_user_file /etc/awstats/.htpasswd;

    location ~ /git(/.*) {
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
        fastcgi_param GIT_HTTP_EXPORT_ALL "";
        fastcgi_param GIT_PROJECT_ROOT /home/git/tmp/git;
        fastcgi_param PATH_INFO $1;
        fastcgi_param REMOTE_USER $remote_user;
    }
}

On the client

git -c http.sslVerify=false clone https://<server>/git/my_git_repository.git

http.sslVerify=false is used to allow self-signed certificates during development.

LaTeX capacity exceeded

This LaTeX code triggered a "TeX capacity exceeded" error:


\pgfplotsset{my style/.append style={axis x line=middle, axis y line=middle, xlabel={$x$}, ylabel={$y$}, axis equal }}

\begin{axis}[my style, xtick={-3,-2,...,3}, ytick={-3,-2,...,3}, xmin=-3, xmax=3, ymin=-3, ymax=3]

The fix was to edit the TeX configuration file

nano -w /usr/share/texmf-dist/web2c/texmf.cnf

increase the main_memory and pool_size parameters, and then rebuild the formats so the new limits take effect:

fmtutil-sys --all
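As a sketch, the edited parameters in texmf.cnf could look like this (the values below are examples, not the originals):

```
% texmf.cnf: enlarge TeX's working memory and string pool (example values)
main_memory = 12000000
pool_size   = 6250000
```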

Btrfs raid6

See archlinux for more…
Create a btrfs filesystem using raid6 for data and raid10 for metadata

mkfs.btrfs -d raid6 -m raid10 -L alexandria_btrfs /dev/vd[bcdefghijk]

Mount the newly created filesystem with

mount /dev/sdb /mnt/md0
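For a persistent mount, a matching fstab entry by label can be used (a sketch, assuming the label and mount point above):

```
# /etc/fstab: mount the btrfs array by its label
LABEL=alexandria_btrfs  /mnt/md0  btrfs  defaults  0  0
```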

Check file system usage with:

btrfs filesystem df /mnt/md0

Removing devices

Remove a drive from the filesystem; this rebalances its data across the other devices. btrfs device delete is used to remove devices online. It redistributes any extents in use on the device being removed to the other devices in the filesystem.

# mount fs first
btrfs device delete /dev/vdX /mnt/md0

Adding devices

btrfs device add is used to add new devices to a mounted filesystem.
btrfs filesystem balance can balance (restripe) the allocated extents across all of the existing devices. For example, with an existing filesystem mounted at /mnt/md0, you can add the device /dev/vdX to it with:

btrfs device add /dev/vdX /mnt/md0

Adding the device does not move any existing data by itself.

At this point the filesystem includes the new device, but all of the metadata and data are still stored on the original device(s). The filesystem must be balanced to spread the files across all of the devices.

btrfs filesystem balance /mnt/md0

List of all the btrfs filesystems

btrfs filesystem show gives you a list of all the btrfs filesystems on the system and which devices they include.

btrfs filesystem show

Label: 'alexandria_btrfs'  uuid: abdef71f-d813-4128-bc39-de3df7e6c673
        Total devices 10 FS bytes used 217.73GiB
        devid    1 size 1.82TiB used 29.42GiB path /dev/sdb
        devid    2 size 1.82TiB used 29.40GiB path /dev/sdc
        devid    3 size 1.82TiB used 29.40GiB path /dev/sdd
        devid    4 size 1.82TiB used 29.40GiB path /dev/sde
        devid    5 size 1.82TiB used 29.40GiB path /dev/sdf
        devid    6 size 1.82TiB used 29.40GiB path /dev/sdg
        devid    7 size 1.82TiB used 29.40GiB path /dev/sdh
        devid    8 size 1.82TiB used 29.40GiB path /dev/sdi
        devid    9 size 1.82TiB used 29.40GiB path /dev/sdj
        devid   10 size 1.82TiB used 29.40GiB path /dev/sdk

btrfs-progs v4.0.1

Replacing failed devices

The example above can be used to remove a failed device if the superblock can still be read. But if a device is missing or the superblock has been corrupted, the filesystem will need to be mounted in degraded mode:

btrfs filesystem show

Label: 'alexandria_btrfs'  uuid: abdef71f-d813-4128-bc39-de3df7e6c673
        Total devices 10 FS bytes used 14.50TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb
        devid    2 size 1.82TiB used 1.82TiB path /dev/sdc
        devid    3 size 1.82TiB used 1.82TiB path /dev/sdd
        devid    4 size 1.82TiB used 1.82TiB path /dev/sde
        devid    5 size 1.82TiB used 1.82TiB path /dev/sdf
        devid    6 size 1.82TiB used 1.82TiB path /dev/sdg
        devid    7 size 1.82TiB used 1.82TiB path /dev/sdh
        devid    8 size 1.82TiB used 1.82TiB path /dev/sdi
        devid   10 size 1.82TiB used 1.82TiB path /dev/sdk
        *** Some devices missing

# devid 9 is destroyed or removed, use -o degraded to force the mount
# to ignore missing devices
mount -o degraded /dev/sdX /mnt/md0
btrfs device delete missing /mnt/md0
btrfs replace start 9 /dev/sdX_new_disk /mnt/md0
btrfs replace status /mnt/md0
# cancel a running replacement with
# btrfs replace cancel /mnt/md0


A non-raid filesystem is converted to raid by adding a device and running a balance filter that will change the chunk allocation profile.
For example, to convert an existing single device system (/dev/sdX1) into a 2 device raid1 (to protect against a single disk failure):

mount /dev/sdX1 /mnt/md0
btrfs device add /dev/sdY1 /mnt/md0
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/md0

If the metadata is not converted from the single-device default, it remains as DUP, which does not guarantee that copies of a block are on separate devices. If the data is not converted, it does not have any redundant copies at all.


Scrub is an online filesystem checking tool. It reads all the data and metadata on the filesystem and uses checksums and the duplicate copies from RAID storage to identify and repair any corrupt data.

btrfs scrub start /
btrfs scrub status /

(from archlinux)
Warning: The running scrub process will prevent the system from suspending, see this thread for details.
If running the scrub as a systemd service, use Type=forking. Alternatively, you can pass the -B flag to btrfs scrub start to run it in the foreground and use the default Type value.


Balance passes all data in the filesystem through the allocator again. It is primarily intended to rebalance the data in the filesystem across the devices when a device is added or removed. A balance will regenerate missing copies for the redundant RAID levels, if a device has failed. As of linux kernel 3.3, a balance operation can be made selective about which parts of the filesystem are rewritten.

btrfs balance start /
btrfs balance status /


Btrfs supports transparent compression, meaning every file on the partition is automatically compressed. This not only reduces the size of files, but also improves performance, in particular with the lzo algorithm, in some specific use cases (e.g. a single thread with heavy file IO), while obviously harming performance in other cases (e.g. multithreaded and/or CPU-intensive tasks with large file IO).

Compression is enabled using the compress=zlib or compress=lzo mount options. Only files created or modified after the mount option is added will be compressed. However, it can be applied quite easily to existing files (e.g. after a conversion from ext3/4) using the btrfs filesystem defragment -calg command, where alg is either zlib or lzo. In order to re-compress the whole file system with lzo, run the following command:

btrfs filesystem defragment -r -v -clzo /

Tip: Compression can also be enabled per-file without using the compress mount option; simply apply chattr +c to the file. When applied to directories, new files will be compressed automatically as they are created.
When installing Arch to an empty Btrfs partition, set the compress option after preparing the storage drive. Simply switch to another terminal (Ctrl+Alt+number) and run the following command:

mount -o remount,compress=lzo /mnt/target

After the installation is finished, add compress=lzo to the mount options of the root file system in fstab.
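Such an entry could look like this (a sketch; the device is an example):

```
# /etc/fstab: root filesystem with lzo compression enabled
/dev/sdX1  /  btrfs  defaults,compress=lzo  0  0
```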

Btrfs with encryption

Make sure btrfs support is installed in your kernel

File systems  --->
    <*> Btrfs filesystem

Install needed packages

emerge -av sys-fs/btrfs-progs sys-fs/cryptsetup

Make sure the following USE flags are enabled for sys-fs/cryptsetup: gcrypt nls python udev pwquality. dev-libs/libpwquality is used for password quality checking.

Prepare disk:

parted -a optimal /dev/sdX

(parted) help
(parted) print
(parted) mklabel gpt
(parted) mkpart primary btrfs 0% 100%
(parted) name 1 storage_XX
(parted) print
(parted) quit
Encrypt the new partition (note that luksFormat destroys any existing data on it)

cryptsetup luksFormat -v /dev/sdX1

To open the encrypted device use

cryptsetup luksOpen /dev/sdX1 DRIVE_name

Create the btrfs filesystem on the opened mapping

mkfs.btrfs /dev/mapper/DRIVE_name

To get automated unlocking at boot, first find the UUID and then modify the crypttab config file.
The blkid command can be used to get the new UUID of the drive.
Modify /etc/crypttab with your desired drive name and the partition's UUID:

DRIVE_name    UUID=your_uuid   none    luks
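To mount the unlocked device automatically as well, a matching fstab entry can be added (a sketch; the mount point is an example):

```
# /etc/fstab: mount the mapping that crypttab unlocks at boot
/dev/mapper/DRIVE_name  /mnt/storage  btrfs  defaults  0  0
```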

Crypt details can be seen with

cryptsetup luksDump /dev/sdX

For more detail see archlinux.

Ext4 HD recovery

To check superblock

fsck.ext4 -v /dev/sdX1

If there is superblock corruption, the above command will output something like this:

The superblock could not be read or does not describe a correct ext4
filesystem. If the device is valid and it really contains an ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock

If there is corruption, find where superblock backups are kept with

mke2fs -n /dev/sdX1

Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 160563

Use backup to recreate superblock:

e2fsck -b 32768 /dev/sdX1

In case of journal corruption, mount the disk read-only with the journal disabled

mount -o ro,noload /dev/sdX1 /mnt/storage_tmp/

Create backup and replace the failing drive ASAP!

exFAT on Linux

Make sure the exFAT FUSE driver and utilities are installed on the system

emerge -av sys-fs/exfat-utils sys-fs/fuse-exfat sys-fs/dosfstools

exFAT does not handle well the rsync switches -p, -g and -o, which relate to permissions and ownership. Using the usual -avgh switches triggers errors like

rsync: mkstemp "/run/media/3461-3338/ki.txt.ws6eA5" failed: Function not implemented (38)

Instead, when copying, use the following flags (recursive, symlinks, times, devices/specials, verbose, i.e. everything -a implies except permissions, owner and group):

rsync -rltDv [SRC] [DESTINATION]

Create partitions with parted to look something like

GNU Parted 3.2
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Multiple Card Reader (scsi)
Disk /dev/sdf: 64.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  64.0GB  64.0GB  primary  fat32        lba

Create vfat file system with:

mkfs.vfat -F 32 /dev/sdX1

Dynamic DNS

There are a few free dynamic DNS options to choose from.


Install ddclient.

emerge -av net-dns/ddclient

Let's focus on one of them. Open an account and pick a domain name.

Prepare config file

nano -w /etc/ddclient/ddclient.conf

Enter the information for the subdomain you just created, e.g. which interface to take the IP address from:

use=if, if=eth0
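A fuller ddclient.conf sketch, with placeholder values for a generic dyndns2-style provider (protocol, login, password and hostname all depend on the service you picked):

```
# /etc/ddclient/ddclient.conf (placeholder values)
daemon=300                 # check the address every 5 minutes
use=if, if=eth0            # take the address from interface eth0
protocol=dyndns2           # update protocol, provider-specific
login=your_login
password=your_password
your_subdomain.example.org
```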

Start and add ddclient to boot

/etc/init.d/ddclient start
rc-update add ddclient default

Alternatively, you could use a cron job with the update URL provided by the service:

0,5,10,15,20,25,30,35,40,45,50,55 * * * * sleep 34 ; wget -O - <update_url> >> /tmp/freedns_XXXXX_mooo_com.log 2>&1 &

smartmon configuration

emerge -av sys-apps/smartmontools

Add smartd to the default runlevel

rc-update add smartd default

nano -w /etc/smartd.conf

# DEVICESCAN For all disks with SMART capabilities.
# -o off Turn off automatic running of offline tests. An offline test
# is a test which may degrade performance.
# -n standby Do not spin up the disk for the periodic 30 minute (default)
# SMART status polling, instead wait until the disk is active
# again and poll it then.
# -W 2 Report temperature changes of at least 2 degrees Celsius since
# the last reading. Also report if a new min/max temperature is
# detected.
# -S on Auto save attributes such as how long the disk has been powered
# on, min and max disk temperature.
# -s (L/../.[02468]/1/04|S/../.[13579]/1/04)
# '-------a--------' '--------b-------'
# a: Long test on even monday mornings at 04:00
# b: Short test on uneven monday mornings at 04:00

DEVICESCAN -o off -n standby -W 2 -S on -s (L/../.[02468]/1/04|S/../.[13579]/1/04)