Served by CloudFlare

Starting today, my blog is served through the CloudFlare caching CDN. This means faster loading, but more importantly a valid SSL certificate through their free SSL termination service. I will switch all links to https by default, and after a few weeks, if no issues are reported, I will force HTTPS via a redirect.

https://sheharyar.me/blog/free-ssl-for-github-pages-with-custom-domains/

And yeah, my blog now supports HTTP/2 and SPDY and other fancy things.

Please report issues via discussion or IRC or Google+. Take care!

16 December 2015 | linux | fedora | blog

Image-based deployment via dd and nbd

My train to Pilsen had 300+ people on board, and apparently super-modern ultra-fast WiFi technology can't handle that many, at least not when the administrator sets up a class C IPv4 network. People have cell phones, tablets and laptops, so it's no surprise the DHCP server ran out of addresses when I could see 500+ devices around.

Hey, Leo Express, I am looking at you!

Luckily, not being able to connect to the internet, I started hacking on the discovery image. I was doing a little research on image-based deployment. I had already created a small patch exposing all block devices via netcat, but since the OpenStack folks do deployments via iSCSI, I was wondering if I could do the same.

But first of all, let's test how a raw image copy (via dd) compares to an exposed block device (iSCSI/NBD). Instead of iSCSI, I will test Network Block Device, which is easier to configure and is present in all Linux distributions, including Fedora and RHEL (the client is actually in the kernel itself).

Target setup with NBD is very easy:

dst# mkdir /etc/nbd-server/; cat >/etc/nbd-server/config <<EOF
[generic]
[vdb]
exportname = /dev/XYZ
readonly = false
multifile = false
copyonwrite = false
EOF

dst# nbd-server -d

src# sudo modprobe nbd
src# sudo nbd-client dst-server -N vdb /dev/nbd1
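
To verify the device showed up (and to detach it again once the testing is done), something like this should do:

src# sudo blockdev --getsize64 /dev/nbd1
src# sudo nbd-client -d /dev/nbd1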

Initial testing on the train was done against a KVM instance. Direct transfer of a 1GB CirrOS image (created with the virt-builder command shown below) with a block size of 1kB was indeed the fastest method:

src# time sudo dd if=/tmp/cirros.img of=/dev/nbd1 bs=1k
real    0m0.882s

Compressing the data speeds things up, and lzop seems to be the fastest, of course. For comparison, a plain netcat stream comes first:

dst# nc -l 1234 > /dev/vdb
src# time sudo cat /tmp/cirros.img | ncat dst-server 1234
real    0m9.871s

dst# nc -l 1234 | gunzip > /dev/vdb
src# time sudo cat /tmp/cirros.img | gzip -1 | ncat dst-server 1234
real    0m5.917s

dst# nc -l 1234 | lzop -d > /dev/vdb
src# time sudo cat /tmp/cirros.img | lzop | ncat dst-server 1234
real    0m2.358s

Note that KVM/virtio is a special case; usually lzop is faster than sending raw data. Now the remote block device approach; first, the local baseline:

src# time sudo virt-builder cirros-0.3.1 --output /tmp/cirros.img --size 1G
real    0m20.352s

The same size, but over the (virtual) network:

src# time sudo virt-builder cirros-0.3.1 --output /dev/nbd1
real    0m20.867s

I was surprised that the transfer was almost the same speed as to local storage (I was using the virtio driver). I was expecting it to be a little slower. That looks promising!

At home, I was able to do real testing against a remote RHEL7 server over a gigabit LAN connection. For the record, I had to disable IPv6 to get nbd-server running due to some bugs. For comparison, I created a 10GB image. First, the fastest stream method from above:

dst# nc -l 1234 | lzop -d > /dev/XYZ
src# time sudo cat /tmp/cirros.img | lzop | ncat dst-server 1234
real    1m32.336s

Now, that is a difference. Let’s test attached block storage. Locally first:

src# time sudo virt-builder cirros-0.3.1 --output cirros.img --size 10240000000b
real    0m33.030s

The same size, but over the (gigabit) network:

src# time sudo virt-builder cirros-0.3.1 --output /dev/nbd1
real    0m29.926s

I was scratching my head for a few seconds until I realized why the attached physical storage was actually faster: the local test was on my T430s laptop with an HDD in the DVD bay, while the server was a Dell Precision T5400 with a consumer SATA drive. Still faster than the 2.5-inch laptop one.

Anyway, it looks like NBD performs well enough. I think this is a go for implementing it in foreman-discovery-image. Exporting volumes via NBD is an easy task; the only work that needs to be done is on the smart-proxy side, implementing a new virt-builder plugin which will do the deployment.

20 November 2015 | linux | fedora | rhel | foreman

Foreman and PXE-less environments

Foreman (and Satellite 6) has many options when it comes to provisioning in PXE-less (or DHCP-less) environments.

Option 1: Bootdisk plugin - iPXE

The Foreman Bootdisk plugin enables Foreman users to download Host-based or Generic host images. These are small ISO images pre-loaded with SYSLINUX which chainloads iPXE. This kind of firmware is able to load kernels via HTTP, but a hardware driver must exist in iPXE for this to work. Unfortunately, there are many issues with various hardware and even virtualization technologies (VMware, Microsoft).

The Host image embeds network credentials (IP, gateway, netmask, DNS), therefore DHCP is not required, but the image is bound to the host it was generated for. The Generic image, on the other hand, initializes the network via DHCP and can be used with any host. A recent version of Foreman will add a Subnet image, which is a Generic image that proxies the HTTP calls via the Smart Proxy (the Templates plugin must be enabled).
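
Both image types can be downloaded from the web UI or, assuming the hammer bootdisk CLI plugin is installed, generated from the command line (the host name below is made up):

hammer bootdisk host --host client.example.com
hammer bootdisk generic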

Option 2: Bootdisk plugin - SYSLINUX

There is a special kind of image: the Full host image. This one is a host-based (not generic) image that requires DHCP. It contains the SYSLINUX loader, configuration rendered from the PXELinux template kind associated with the host, and an embedded Linux kernel and init RAM disk of the associated OS installer. The image is slightly bigger, but it works on most platforms, as the initial network configuration is done by the OS installer (e.g. Anaconda for RHEL).

Although this image type requires DHCP, there is a trick to get it working with a static configuration. Create the following template of the PXELinux kind, associate it with the OS/Host, then download the image:

<%
  mac = @host.primary_interface.mac
  bootif = '00-' + mac.gsub(':', '-') if mac
  ip = @host.primary_interface.ip
  mask = @host.primary_interface.subnet.mask
  gw = @host.primary_interface.subnet.gateway
  dns = @host.primary_interface.subnet.dns_primary
-%>
DEFAULT linux
LABEL linux
KERNEL <%= @kernel %>
<% if (@host.operatingsystem.name == 'Fedora' and @host.operatingsystem.major.to_i > 16) or
    (@host.operatingsystem.name != 'Fedora' and @host.operatingsystem.major.to_i >= 7) -%>
APPEND initrd=<%= @initrd %> ks=<%= foreman_url('provision') + "&static=yes" %> inst.ks.sendmac <%= "ip=#{ip}::#{gw}:#{mask}:::none nameserver=#{dns} ksdevice=bootif BOOTIF=#{bootif}" %>
<% else -%>
APPEND initrd=<%= @initrd %> ks=<%= foreman_url('provision') + "&static=yes" %> kssendmac <%= "ip=#{ip} netmask=#{mask} gateway=#{gw} dns=#{dns} ksdevice=#{mac} BOOTIF=#{bootif}" %>
<% end -%>

For Foreman 1.6 or older (or Satellite 6.0-6.1) use this one:

<%
  mac = @host.mac
  bootif = '00-' + mac.gsub(':', '-') if mac
  ip = @host.ip
  mask = @host.subnet.mask
  gw = @host.subnet.gateway
  dns = @host.subnet.dns_primary
-%>
DEFAULT linux
LABEL linux
KERNEL <%= @kernel %>
<% if (@host.operatingsystem.name == 'Fedora' and @host.operatingsystem.major.to_i > 16) or
    (@host.operatingsystem.name != 'Fedora' and @host.operatingsystem.major.to_i >= 7) -%>
APPEND initrd=<%= @initrd %> ks=<%= foreman_url('provision') + "&static=yes" %> inst.ks.sendmac <%= "ip=#{ip}::#{gw}:#{mask}:::none nameserver=#{dns} ksdevice=bootif BOOTIF=#{bootif}" %>
<% else -%>
APPEND initrd=<%= @initrd %> ks=<%= foreman_url('provision') + "&static=yes" %> kssendmac <%= "ip=#{ip} netmask=#{mask} gateway=#{gw} dns=#{dns} ksdevice=#{mac} BOOTIF=#{bootif}" %>
<% end -%>

The template passes network credentials from the Foreman host via the SYSLINUX configuration file into Anaconda. This is an example for Red Hat or Fedora systems. Keep in mind that the provisioning token is embedded in the image, so the Host record must be present (the image is not generic).

For more info, visit the Foreman Bootdisk documentation.

Option 3: Discovery ISO

In Foreman 1.10, the Discovery image (version 3.0.1+) together with the Foreman Discovery plugin 4.1.1+ can be used to discover systems via CD/DVD-ROM or USB stick. In this workflow, discovered hosts are provisioned either manually or automatically (via Discovery Rules) and the kernel is replaced with the installer using the kexec technology.

It works in both DHCP and DHCP-less environments, as the ISO image can be "remastered" with a script providing network credentials via the SYSLINUX configuration, similarly to the Full host image.
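
As a very rough sketch (the exact remaster tool and fdi option names depend on the discovery image version, and the host names and addresses below are made up), it could look like this:

discovery-remaster fdi-image.iso \
  "proxy.url=https://foreman.example.com:9090 proxy.type=proxy fdi.pxip=192.168.1.50/24 fdi.pxgw=192.168.1.1 fdi.pxdns=192.168.1.1"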

For more info, visit the Foreman Discovery documentation.

Documentation for this feature is currently being finished. The draft is available here.

07 October 2015 | linux | foreman

Fedora 22 libvirt with bridge

Fedora 22 comes with libvirt and NetworkManager, and it is pre-configured with a "default" NAT network. That's fine until you want to reach the NATed servers from your LAN. By the way, this works in CentOS 7 too.

A good solution is network interface bridging. It was always a pain to configure, but in Fedora 21 most of the bugs were fixed and now it is possible to configure everything via NetworkManager.

In this tutorial, I will show you how to configure things without a GUI. The commands below add a bridge to the system and reconfigure the primary wired connection over that bridge. Make sure you set the MAIN_CONN variable to the correct connection (use nmcli c show to find it).

I recommend executing this in screen or tmux, because the connection will be lost during execution! Uncomment the static IPv4 configuration lines if that's your case.

yum -y install bridge-utils
yum -y groupinstall "Virtualization Tools"
export MAIN_CONN=enp8s0
bash -x <<EOS
systemctl stop libvirtd
# drop the existing wired connection(s) so they can be re-created as bridge slaves
nmcli c delete "$MAIN_CONN"
nmcli c delete "Wired connection 1"
nmcli c add type bridge ifname br0 autoconnect yes con-name br0 stp off
# uncomment for static IPv4 configuration
#nmcli c modify br0 ipv4.addresses 192.168.1.99/24 ipv4.method manual
#nmcli c modify br0 ipv4.gateway 192.168.1.1
#nmcli c modify br0 ipv4.dns 192.168.1.1
# enslave the physical interface to the bridge
nmcli c add type bridge-slave autoconnect yes con-name "$MAIN_CONN" ifname "$MAIN_CONN" master br0
systemctl restart NetworkManager
systemctl start libvirtd
systemctl enable libvirtd
# allow forwarding so bridged guests can reach the LAN
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sysctl -p /etc/sysctl.d/99-ipforward.conf
EOS

Do not, I repeat, do not execute this one by one. You need to do this in one "transaction", because the connection will be lost and screen won't help you in this case. That's why I use a bash -x heredoc there.

Reboot might be needed if you encounter networking issues.
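
Once the connections come back up, the result can be verified with something along these lines:

nmcli c show
brctl show br0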

Now, when creating a VM in libvirt, make sure you select "br0" as the interface to use bridged networking, as in the virt-install sketch below. That's all for today!
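
A rough virt-install example (the VM name, sizes and ISO path are made up):

virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=20 --cdrom /var/lib/libvirt/images/install.iso \
  --network bridge=br0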

24 September 2015 | linux | fedora

How to configure NFS firewall in RHEL7

If you intend to use NFSv4 protocol only, all you need to do is this:

firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload

But if you want to use the NFSv3 protocol, things are more complicated.

firewall-cmd --permanent --zone public --add-service mountd
firewall-cmd --permanent --zone public --add-service rpc-bind
firewall-cmd --permanent --zone public --add-service nfs
firewall-cmd --reload
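
To double-check that the services are really enabled, something like this can be used:

firewall-cmd --zone public --list-services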

11 September 2015 | linux | fedora | rhel7
