ITjes en datjes

Things from daily IT work

Archive for the ‘installation’ Category

Flexible fileserver on Debian Stretch

Finally getting around to reinstalling my fileserver, I wanted to make it flexible while keeping data integrity.
ZFS wasn’t flexible enough for my taste, and LVM has no integrity checking, so I had to put something together with btrfs.

Early tests using btrfs on LVM for root and /boot left the virtual machine unbootable, while using btrfs directly on partitions was no problem.
This meant I still had to use separate partitions for /boot, root, and swap (on mdraid10), and then use the rest for LVM.

It seems the Stretch installer has trouble when raid or LVM is added afterwards, so I partitioned the installation this way:

vda1, 512M, bootable, btrfs, /boot
vda2, 4096M, btrfs, /
vda3, 512M, device for raid (to be made swap after creating raid10 after installation)
vda4, LVM
LVM:
vg0: vda4, vdb4, vdc4
lv’s:
tmp_0, 512M, btrfs, /tmp
home_0, 1024M, btrfs, /home
var_0, 1024M, btrfs, /var

After installation I created the same partitions for vdb and vdc, then created the mdraid10 for swap:
mdadm --create --run /dev/md99 --level=10 --layout=n2 --raid-devices=3 /dev/vda3 /dev/vdb3 /dev/vdc3
mkswap /dev/md99
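
To make the array and swap survive a reboot, something like this should do (a sketch; you could also use the UUID from blkid instead of /dev/md99):
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
echo '/dev/md99 none swap sw 0 0' >> /etc/fstab
swapon -a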

With that done I exchanged all /dev/mapper devices for the UUIDs of the btrfs filesystems in /etc/fstab and did a reboot as a test.
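
For reference, the resulting fstab lines look something like this, with the UUID coming from blkid (the UUID below is a placeholder):
blkid /dev/mapper/vg0-home_0
UUID=<uuid-from-blkid> /home btrfs defaults 0 0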

With everything running I created logical volumes tmp_1, tmp_2, home_1, home_2, etc. by specifying the physical volume each one should be placed on:
lvcreate -L 512M -n tmp_1 vg0 /dev/vdb4
lvcreate -L 512M -n tmp_2 vg0 /dev/vdc4

Adding the LVs to the btrfs filesystems:
btrfs device add -f /dev/mapper/vg0-tmp_1 /dev/mapper/vg0-tmp_2 /tmp
btrfs device add -f /dev/mapper/vg0-home_1 /dev/mapper/vg0-home_2 /home

btrfs balance start -dconvert=raid1 -mconvert=raid1 /tmp
btrfs balance start -dconvert=raid1 -mconvert=raid1 /home
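
To verify that data and metadata really ended up on the raid1 profile, a quick check:
btrfs filesystem df /tmp
btrfs filesystem show /home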

For unimportant data I made a logical volume striped across the three disks:
lvcreate -L xxG -n noraid -i 3 vg0
mkfs.ext4 /dev/mapper/vg0-noraid (could also be btrfs, but data integrity isn’t important here)
..and added its UUID to fstab with mountpoint /data/noraid
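
That fstab entry looks roughly like this (the UUID is a placeholder for the blkid output):
UUID=<uuid-of-vg0-noraid> /data/noraid ext4 defaults 0 2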

For the data protected with checksum integrity:
create three LVs raid1_0, raid1_1 and raid1_2 on the three PVs, like /tmp, /home, etc. above, and make another btrfs raid1 of these; a sketch follows below.
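
A sketch of that last step, following the same pattern as above (the size is left as xxG like earlier):
lvcreate -L xxG -n raid1_0 vg0 /dev/vda4
lvcreate -L xxG -n raid1_1 vg0 /dev/vdb4
lvcreate -L xxG -n raid1_2 vg0 /dev/vdc4
mkfs.btrfs -d raid1 -m raid1 /dev/mapper/vg0-raid1_0 /dev/mapper/vg0-raid1_1 /dev/mapper/vg0-raid1_2
mkdir -p /data/raid1
mount /dev/mapper/vg0-raid1_0 /data/raid1
..and add the filesystem UUID to fstab with mountpoint /data/raid1.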

Last, don’t forget to grub-install /dev/vdb and /dev/vdc.

The /data/raid1 mount point caused an error after the first reboot, but not after later ones. Maybe a glitch in the matrix..

With this setup I can expand /tmp, /home, /var, /data/noraid and /data/raid1 as needed, since I don’t know in advance whether new data will need raid or not.
With two fileservers I can keep a backup of the raid1 data of one server on the noraid volume of the other. The btrfs volumes only protect from bit rot after all.
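
For that cross-backup a simple nightly rsync should do (hostname and paths are examples):
rsync -aH --delete /data/raid1/ fileserver2:/data/noraid/backup-raid1/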

And if it really fills up I can start exchanging the disks for bigger ones.

 

Written by mnystrom

2016/04/04 at 22:20

Revive old home server

After replacing the failing home server with a decent Synology NAS I got my hands on a HP Storageworks Data Vault X510.

[Photo: HP StorageWorks Data Vault X510]
Being curious I wanted to see if I could get it to run with Debian so I could do some tests with mdadm.

Turns out it’s not too difficult.

-place the hard disk it will be installed on in a standard PC and use that to install the i386 version of Debian (Jessie in this case)
-add a fixed IP address for eth0 in /etc/network/interfaces, as in the example below this list (the Realtek r8168 (seen as r8169) couldn’t get a DHCP address for some reason)
-place the disk in the X510 and boot
-login with SSH (you did install it, didn’t you?)
-have fun with it
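
The static stanza in /etc/network/interfaces looks roughly like this (addresses are examples for your own network):
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1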

 

Written by mnystrom

2015/06/03 at 20:41

Posted in Debian, EN, hardware, installation, linux, server, storage


Proxmox: upgrade PVE 1.9 to 2.0

Even though I hadn’t tried any beta version, I decided to go ahead and upgrade one of my cluster nodes to version 2.0 of Proxmox Virtual Environment.

The tricky part would most likely be the customized PVE installation I had built: two disks partially on software RAID 1, with the system split into several partitions. That might break the upgrade script provided by Proxmox.
Actually, things went pretty well so far. The biggest error was that I had previously left a couple of pending removals open in aptitude, which removed the pve kernel, so the script halted when it couldn’t find it.
Nothing that reverting sources.list to lenny and installing the missing kernel couldn’t fix.

One thing that didn’t match up was not being able to start cman, which complained about the missing node name in cluster.conf. It was present in /etc/pve/cluster.conf, but unfortunately the init script pointed to /etc/cluster/cluster.conf.
Adding the name to the latter file was enough to start cman and let me log into the web interface. That interface kept looping and timing out while the cluster daemon wasn’t running.
One thing to remember is that the web interface was listening on the default port again, so I had to modify the Listen entry in pve.conf under the /etc/apache2 sites directories.
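
For reference, the node entry cman was missing in /etc/cluster/cluster.conf is something along these lines (node name and id are placeholders for your own node):
<clusternodes>
  <clusternode name="proxmox1" votes="1" nodeid="1"/>
</clusternodes>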

Next is copying the backups from the old master node to the new one and importing them to see if they’ll run. When they do, the old master will be upgraded after confirming that the new master works as planned.

 

*update* 20120524

Just started the upgrade of the second node. So far I only had to run apt-get install console-tools after an error mentioning that package during the apt-get upgrade round, and apt-get install vzctl during the second attempt.

 

Written by mnystrom

2012/04/24 at 22:15

Proxmox Virtual Environment 1.9 cluster (en)

Dutch version

Because it was time to refresh the network at home, I started looking at a virtual cluster solution.

Previously I used FreeBSD with jails to separate the servers, and I wanted to keep that functionality. What I wanted on top of that was failover, to be able to move the servers in case of maintenance or problems with the virtual host.

My first experiences with Proxmox VE 1.9 were good, except that I wanted a different partitioning of the host, as the standard bare-metal installation uses up the whole disk. Luckily that’s no problem as Proxmox can be installed on top of an existing Debian Lenny 64-bit server.

During the installation of Debian I set up a partial RAID layout with the Linux software RAID option. Only the virtual environment needs to run on RAID 1; the important part of my data will be secured with a backup. All other data can be downloaded or ripped again (software, my own DVD and CD collection for my media center, etc.), so there’s no real need for RAID there.
This way I still have redundancy but I don’t lose all my disk space to RAID.

I use the following layout:
/dev/md0, 512 MB RAID 1, ext2, /boot
/dev/sda2 + sdb2, 1GB, swap
/dev/md1, 8GB RAID 1, LVM, /
/dev/md2, 100GB RAID 1, LVM, /var/lib/vz
/dev/sda5 + sdb5, *, ext3, /data/local/01 + 02

sda5 and sdb5 contain regular data, backups of the virtual machines from the other cluster node, and backups of important data from that node. These will be made available through NFS shares.
Swap is optional as the servers have plenty of memory installed.
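
Those NFS shares come down to a couple of lines in /etc/exports, roughly like this (the network range is an example):
/data/local/01 192.168.1.0/24(rw,sync,no_subtree_check)
/data/local/02 192.168.1.0/24(rw,sync,no_subtree_check)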

Installation of Debian is done with as few options as possible; after that, sources.list is modified to include the Proxmox repository and the installation is finished according to the instructions on the Proxmox wiki.
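
The repository line added to sources.list is, if I remember the wiki correctly:
deb http://download.proxmox.com/debian lenny pve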

What’s left are tweaks to the vzmigrate script to speed up migration, and extra network bridges to separate host traffic from virtual server traffic.
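
An extra bridge in /etc/network/interfaces looks roughly like this (interface names and addresses are examples):
auto vmbr1
iface vmbr1 inet static
    address 192.168.2.2
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0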

There’s no real high availability yet for VZ containers on shared storage but I’m looking at keeping downtime as short as possible using a combination of Heartbeat, DRBD and rsync.

Written by mnystrom

2011/11/16 at 10:02