Hidden gem of Fedora: git xcleaner

Fedora 22+ now provides a new tool called git-xcleaner which helps with deleting unused topic branches using a TUI (text user interface). It also offers mechanisms for pre-selecting branches that can be safely removed.

Main menu


Possible actions:


The command git branch --merged is used to find the list of branches that are marked for deletion.
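This pre-selection can be reproduced with plain git. A throwaway sketch (assuming bash and a scratch repository built just for the demo):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
git branch merged-topic              # points at a commit already contained in master
git checkout -q -b unmerged-topic
git commit -q --allow-empty -m "work in progress"
git checkout -q master
# Everything listed here is fully contained in master, hence safe to delete:
git branch --merged master | grep -v '^\* '
```

Only merged-topic is printed; unmerged-topic has a commit that master does not contain, so it is left alone.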


For each branch, the tip commit message is compared against the base history; if found, the branch is marked for deletion. The whole commit message is compared and it must fully match.

The user enters the base branch name (defaults to “master”).
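The message-matching mechanism catches branches whose tip landed in the base via cherry-pick or rebase (so `--merged` misses them). A rough sketch of the idea in bash, using commit subjects only for brevity (the tool compares the whole message):

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git checkout -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init
git checkout -q -b topic
git commit -q --allow-empty -m "fix frobnicator"
git checkout -q master
git cherry-pick --allow-empty topic   # topic's tip message now exists in master's history
# Mark branches whose tip commit subject appears verbatim in the base history:
base=master
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads/); do
  [ "$branch" = "$base" ] && continue
  subj=$(git log -1 --format=%s "$branch")
  git log --format=%s "$base" | grep -qxF "$subj" && echo "$branch: deletion candidate"
done
```

In this demo, topic is reported as a deletion candidate even though `git branch --merged` would not list it.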


Deletes all branches in the remote repository which are not present locally.

The user enters the remote name (defaults to the current username).


All branches which no longer exist in origin, or in a specific remote repository, are marked for deletion.

The user enters the specific remote name (defaults to the current username).
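One way to get at the same information with plain git is to prune and then look for branches whose upstream is gone. A self-contained sketch with a disposable bare "origin" (assuming bash):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/clone" 2>/dev/null
cd "$tmp/clone"
git config user.email demo@example.com
git config user.name demo
git checkout -q -b master
git commit -q --allow-empty -m init
git push -q origin master
git checkout -q -b topic
git commit -q --allow-empty -m topic
git push -q -u origin topic
git checkout -q master
git push -q origin --delete topic    # the branch disappears from the remote
git fetch -q --prune
# Local branches whose upstream is gone are deletion candidates:
git branch -vv | awk '/: gone]/ {print $1}'
```

Only topic is printed, since its tracked remote branch no longer exists.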


The user manually marks branches for deletion.

Go ahead and try it:

# dnf -y install git-xcleaner
# git xcleaner

If you mis-deleted a branch and ignored all the warnings in the documentation and on the screen, check out the homepage for instructions on how to get your branch back.

And one more thing. File bugs at https://github.com/lzap/git-xcleaner/issues

06 May 2016 | linux | fedora

Human readable name generator

Out of ideas for incoming bare-metal host names in your cluster? I wrote a little generator which contains frequently occurring given names and surnames from the 1990 US Census (public domain data):

  • 256 (8 bits) unique male given names
  • 256 (8 bits) unique female given names
  • 65,536 (16 bits) unique surnames

Given names were filtered to be 3-5 characters long, surnames 5-8 characters, therefore generated names are never longer than 14 characters (5+1+8).

This gives a total of 33,554,432 (25 bits) male and female name combinations. The generator can either generate a randomized sequence, or generate combinations based on MAC addresses.
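The arithmetic checks out: two genders, times 256 given names per gender, times 65,536 surnames is exactly 2^25 combinations:

```shell
# 1 gender bit + 8 given-name bits + 16 surname bits = 25 bits
echo $(( 2 * 256 * 65536 ))   # 33554432
echo $(( 2 ** 25 ))           # 33554432
```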

Random generator

The random name generator makes use of a Fibonacci linear feedback shift register, which gives a deterministic sequence of pseudo-random numbers. Additionally, the algorithm makes sure that names with the same first name (or gender) and last name are not returned in succession. Since about 1% of combinations are such cases, there are about 33 million unique names in total. Example sequence:

  • velma-pratico.my.lan
  • angie-warmbrod.my.lan
  • grant-goodgine.my.lan
  • alton-sieber.my.lan
  • velma-vanbeek.my.lan
  • don-otero.my.lan
  • sam-hulan.my.lan

The polynomial used in the linear feedback shift register is

x^25 + x^24 + x^23 + x^22 + 1.

The key thing is to store the register (a number) and use it for each generation in order to get a non-repeating sequence of name combinations. See the example below.
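A minimal sketch of such a register in bash. The tap positions follow the polynomial above; the mapping from register values to the actual name tables is Deacon's job and is omitted here:

```shell
# 25-bit Fibonacci LFSR for x^25 + x^24 + x^23 + x^22 + 1:
# XOR the four tap bits, shift left, feed the result back in as bit 0.
lfsr25() {
  local s=$1
  local bit=$(( ((s >> 24) ^ (s >> 23) ^ (s >> 22) ^ (s >> 21)) & 1 ))
  echo $(( ((s << 1) | bit) & 0x1FFFFFF ))
}

# Persist the register between runs to continue the non-repeating sequence:
s=1
for i in 1 2 3; do
  s=$(lfsr25 "$s")
  echo "$s"   # each 25-bit value would index a gender/given-name/surname triple
done
```

A zero register would get stuck at zero, which is the usual LFSR caveat, so the seed must be non-zero.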

MAC generator

Examples of MAC-based names:

  • 24:a4:3c:ec:76:06 -> bobby-louie-sancher-weeler.my.lan
  • 24:a4:3c:e3:d3:92 -> bob-louie-sancher-rimando.my.lan

MAC addresses with the same OUI part (24:a4:3c in this case) generate the same middle name (“Louie Sancher” in the example above); therefore it is possible to guess the server (or NIC) vendor from it, and it should be possible to shorten middle names (e.g. bobby-ls-weeler.my.lan) in homogeneous environments.
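The exact bit layout Deacon uses is not spelled out here, but the sizes line up with the examples above: 48 MAC bits = 8 (first given name) + 24 (OUI → middle given name and surname) + 16 (last surname). A hypothetical split in bash, consistent with the two samples but not taken from Deacon's source:

```shell
mac="24:a4:3c:ec:76:06"
IFS=: read -r b1 b2 b3 b4 b5 b6 <<< "$mac"
given=$(( 16#$b4 ))                       # 8 bits  -> first given name ("bobby")
oui_given=$(( 16#$b1 ))                   # 8 bits  -> middle given name, constant per vendor
oui_surname=$(( 16#$b2 * 256 + 16#$b3 ))  # 16 bits -> middle surname, constant per vendor
surname=$(( 16#$b5 * 256 + 16#$b6 ))      # 16 bits -> last surname
echo "$given / $oui_given $oui_surname / $surname"
```

With this split, changing only the NIC-specific half of the MAC changes the first and last names while the vendor-derived middle name stays fixed, matching the behaviour described above.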

Comparison of types

MAC-based advantages

  • reprovisioning the same server generates the same name
  • middle names are the same for a given hardware vendor

MAC-based disadvantages

  • name is longer

Random-based advantages

  • name is shorter

Random-based disadvantages

  • reprovisioning the same server generates a different name


Visit the project homepage to see some examples. Also try Foreman, which will (hopefully) have an embedded generator based on the Deacon library in the 1.12 release.

Why the name?

Deacon is a ministry in the Christian Church that is generally associated with service of some kind, but which varies among theological and denominational traditions. In many traditions the “diaconate” (or deaconate), the term for a deacon’s office, is a clerical office; in others it is for laity. – Wikipedia

07 March 2016 | linux | ruby | foreman

Upgraded to Jekyll 3.0

GitHub Pages now supports Jekyll 3.0, which has some backward-incompatible features, so I decided to upgrade. I was quite surprised when I realized I was still using Jekyll 1.0 and everything had been working great so far!

What’s great about static pages is that you decide when you upgrade your site. There are no security updates; it’s just a bunch of HTML and CSS. The upgrade was smooth, I only had to make two configuration changes and one layout change. Looks like GitHub and CloudFlare are a great combo, thanks folks!

I would like to thank Adam Přibyl for his missing-dates report. All posts now contain the date and a list of relevant tags in the bottom-right corner. If you miss anything, let me know via e-mail or Google Plus comments (do they even work? :-)

News? We are wrapping up the Foreman 1.11 release, and I bought a little Intel NUC box to do my network management (DHCP/DNS/Foreman) in the house. Well, it will likely also provide Kodi and Steam streaming if I decide to put it under my telly. So far, I am impressed (6 W idle).

22 February 2016 | linux | fedora

Served by CloudFlare

Starting today, my blog is being served through the CloudFlare caching CDN. This means faster loading speeds, but what’s more important is a valid SSL certificate through their free SSL termination service. I will switch all links to https by default, and after a few weeks, if there are no issues reported, I will force HTTPS via a redirect.


And yeah, my blog now supports HTTP/2 and SPDY and other fancy things.

Please report issues via discussion or IRC or Google+. Take care!

16 December 2015 | linux | fedora | blog

Image based deployment via dd and nbd

My train to Pilsen had 300+ people onboard, and apparently the super-modern ultra-fast WiFi technology can’t handle that many people, at least when the administrator sets up a class C IPv4 network. People have cell phones, tablets and laptops; it’s no surprise that the DHCP server ran out of resources, since I can see 500+ devices around.

Hey, Leo Express, I am looking at you!

Luckily, not being able to connect to the internet, I started hacking on the discovery image. I was doing a little research on image-based deployment and had already created a small patch exposing all block devices via netcat. But since the OpenStack guys do deployments via iSCSI, I was wondering if I could do the same.

But first of all, let’s test how a raw image copy (via dd) compares to an exposed block device (iSCSI/NBD). Instead of iSCSI, I will test Network Block Device, which is easier to configure and is present in all Linux distributions, including Fedora and RHEL (the client is actually in the kernel itself).

Target setup with NBD is very easy:

dst# mkdir /etc/nbd-server/; cat >/etc/nbd-server/config <<EOF
[generic]
[vdb]
exportname = /dev/XYZ
readonly = false
multifile = false
copyonwrite = false
EOF

dst# nbd-server -d

src# sudo modprobe nbd
src# sudo nbd-client dst-server -N vdb /dev/nbd1

Initial testing in the train was done on a KVM instance. A direct transfer of a 1 GB CirrOS image (created with the virt-builder command below) with a block size of 1 kB was indeed the fastest method:

src# time sudo dd if=/tmp/cirros.img of=/dev/nbd1 bs=1k
real    0m0.882s

Compressing the data speeds things up; lzop seems to be the fastest, of course:

dst# nc -l 1234 > /dev/vdb
src# time sudo cat /tmp/cirros.img | ncat dst-server 1234
real    0m9.871s

dst# nc -l 1234 | gunzip > /dev/vdb
src# time sudo cat /tmp/cirros.img | gzip -1 | ncat dst-server 1234
real    0m5.917s

dst# nc -l 1234 | lzop -d > /dev/vdb
src# time sudo cat /tmp/cirros.img | lzop | ncat dst-server 1234
real    0m2.358s

Note that KVM/virtio is a special case; usually lzop is faster than raw data. Now the remote block device approach, with a local baseline first:

src# time sudo virt-builder cirros-0.3.1 --output /tmp/cirros.img --size 1G
real    0m20.352s

The same size, but over the (virtual) network:

src# time sudo virt-builder cirros-0.3.1 --output /dev/nbd1
real    0m20.867s

I was surprised that the transfer was almost the same speed as to local storage (I was using the virtio driver). I was expecting it to be a little slower. That looks promising!

At home, I was able to do real testing on a remote RHEL7 server over a gigabit LAN connection. For the record, I had to disable IPv6 to get nbd-server running, due to some bugs. For comparison, I created a 10 GB image. First, the fastest possible stream method:

dst# nc -l 1234 | lzop -d > /dev/XYZ
src# time sudo cat /tmp/cirros.img | lzop | ncat dst-server 1234
real    1m32.336s

Now, that is a difference. Let’s test attached block storage. Locally first:

src# time sudo virt-builder cirros-0.3.1 --output cirros.img --size 10240000000b
real    0m33.030s

The same size, but over the (gigabit) network:

src# time sudo virt-builder cirros-0.3.1 --output /dev/nbd1
real    0m29.926s

I was scratching my head for a few seconds until I realized why the attached physical storage was actually faster: the local test was on my T430s laptop with an HDD in the DVD bay, while the server was a Dell Precision T5400 with a consumer SATA drive. Still faster than the 2.5-inch laptop one.

Anyway, it looks like NBD performs well enough. I think this is a go for implementing this in foreman-discovery-image. Exporting volumes via NBD is an easy task; the only work that needs to be done is on the smart-proxy side, to implement a new virt-builder plugin which will do the deployment.

20 November 2015 | linux | fedora | rhel | foreman
