News archive / NeuigkeitenarchivNico Schotteliushttps://www.nico.schottelius.org//news/Nico Schotteliusikiwiki2021-06-09T17:47:15ZBuilding an IPv6 only kubernetes clusterhttps://www.nico.schottelius.org//blog/k8s-ipv6-only-cluster/2021-06-09T17:47:15Z2021-06-06T16:11:50Z
<h2>Introduction</h2>
<p>For a few weeks I have been working on my pet project: creating a
production-ready kubernetes cluster that runs in an IPv6-only environment.</p>
<p>As the complexity and challenges for this project are rather
interesting, I decided to start documenting them in this blog post.</p>
<p>The
<a href="https://code.ungleich.ch/ungleich-public/ungleich-k8s">ungleich-k8s</a>
repository contains all snippets and the latest code.</p>
<h2>Objective</h2>
<p>The kubernetes cluster should support the following workloads:</p>
<ul>
<li>Matrix Chat instances (Synapse+postgres+nginx+element)</li>
<li>Virtual Machines (via kubevirt)</li>
<li>Provide storage to internal and external consumers using Ceph</li>
</ul>
<h2>Components</h2>
<p>The following is a list of components that I am using so far. This
might change along the way, but I wanted to document what I selected
and why.</p>
<h3>OS: Alpine Linux</h3>
<p>The operating system of choice to run the k8s cluster is
<a href="https://www.alpinelinux.org/">Alpine Linux</a> as it is small, stable
and supports both docker and cri-o.</p>
<h3>Container management: docker</h3>
<p>Originally I started with <a href="https://cri-o.io/">cri-o</a>. However using
cri-o together with kubevirt and calico results in an overlayfs placed
on / of the host, which breaks the full host functionality (see below
for details).</p>
<p>Docker, while being deprecated, allows me to get kubevirt running,
generally speaking.</p>
<h3>Networking: IPv6 only, calico</h3>
<p>I wanted to go with <a href="https://cilium.io/">cilium</a> first, because it
goes down the eBPF route from the get-go. However cilium does not yet
support native and automated BGP peering with the upstream
infrastructure, so managing node / IP network peering becomes a
tedious, manual and error-prone task. Cilium is on the way to improving
this, but is not there yet.</p>
<p><a href="https://www.projectcalico.org/">Calico</a> on the other hand still
relies on ip(6)tables and kube-proxy for forwarding traffic, but has
had proper BGP support for a long time. Calico is also working on eBPF
support, however at the moment it does not cover IPv6 yet (bummer!).</p>
<h3>Storage: rook</h3>
<p><a href="https://rook.io/">Rook</a> seems to be the first choice if you look
at who is providing storage in the k8s world. It looks rather
solid, even though some knobs are not yet clear to me.</p>
<p>Rook, in my opinion, is a direct alternative to running cephadm, which
requires systemd running on your hosts. Which, given Alpine Linux,
will never be the case.</p>
<h3>Virtualisation</h3>
<p><a href="https://kubevirt.io/">Kubevirt</a> seems to provide a good
interface. Mid term, kubevirt is projected to replace
<a href="https://opennebula.io/">OpenNebula</a> at
<a href="https://ungleich.ch">ungleich</a>.</p>
<h2>Challenges</h2>
<h3>cri-o + calico + kubevirt = broken host</h3>
<p>So this is a rather funky one. If you deploy cri-o and calico,
everything works. If you then deploy kubevirt, the <strong>virt-handler</strong>
pod fails to come up with the error message</p>
<pre><code> Error: path "/var/run/kubevirt" is mounted on "/" but it is not a shared mount.
</code></pre>
<p>On the Internet there are two recommendations to fix this:</p>
<ul>
<li>Fix the systemd unit for docker: obviously, as I use neither
systemd nor docker here, this is not applicable...</li>
<li>Issue <strong>mount --make-shared /</strong></li>
</ul>
<p>The second command has a very strange side effect: issuing it mounts the
contents of a calico pod as an overlayfs <strong>on / of the
host</strong>. This covers /proc, so things like <strong>ps</strong>, <strong>mount</strong> and
co. fail and basically the whole system becomes unusable until reboot.</p>
<p>This is fully reproducible. I first suspected the tmpfs on / to be the
issue and used some disks instead of booting over the network to check it:
even a regular ext4 on / causes the exact same problem.</p>
<h3>docker + calico + kubevirt = other shared mounts</h3>
<p>Now, given that cri-o + calico + kubevirt does not lead to the
expected result, what does the same setup with docker look like? The
calico node pods fail to come up if /sys is not
shared-mounted, and the virt-handler pods fail if /run is not
shared-mounted.</p>
<p>Two funky findings:</p>
<p>Issuing the following commands makes both work:</p>
<pre><code>mount --make-shared /sys
mount --make-shared /run
</code></pre>
<p>The paths are totally different between docker and cri-o, even though
the mapped host paths in the pod description are the same. And why is
/sys not being shared a problem for calico under docker, but not under
cri-o?</p>
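<p>To see whether these mounts are actually shared before deploying
anything, one can read the propagation flags from /proc/self/mountinfo. A
minimal sketch (the optional fields before the "-" separator carry a
shared:N tag when a mount is shared):</p>

```shell
#!/bin/sh
# Report the propagation state of the mounts that calico and
# kubevirt care about; a "shared:N" tag in the optional fields
# of /proc/self/mountinfo means peer-group propagation is on.
for mp in / /sys /run; do
    line=$(awk -v mp="$mp" '$5 == mp { print; exit }' /proc/self/mountinfo)
    case "$line" in
        *shared:*) echo "$mp: shared" ;;
        *)         echo "$mp: private" ;;
    esac
done
```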
<h2>Log</h2>
<h3>Status 2021-06-07</h3>
<p>Today I have updated the ceph cluster definition in rook to</p>
<ul>
<li>check hosts every 10 minutes instead of every 60 minutes for new disks</li>
<li>use IPv6 instead of IPv4</li>
</ul>
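<p>For reference, the two knobs in question are, to my understanding, an
operator setting and a cluster spec field. A sketch, assuming current
rook naming (check your rook version for the exact fields):</p>
<pre><code># operator deployment (excerpt): rescan hosts for new disks every 10m
- name: ROOK_DISCOVER_DEVICES_INTERVAL
  value: "10m"

# cluster.yaml (excerpt): let the ceph daemons use IPv6
spec:
  network:
    ipFamily: "IPv6"
</code></pre>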
<p>The successful ceph -s output:</p>
<pre><code>[20:42] server47.place7:~/ungleich-k8s/rook# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph -s
cluster:
id: 049110d9-9368-4750-b3d3-6ca9a80553d7
health: HEALTH_WARN
mons are allowing insecure global_id reclaim
services:
mon: 3 daemons, quorum a,b,d (age 75m)
mgr: a(active, since 74m), standbys: b
osd: 6 osds: 6 up (since 43m), 6 in (since 44m)
data:
pools: 2 pools, 33 pgs
objects: 6 objects, 34 B
usage: 37 MiB used, 45 GiB / 45 GiB avail
pgs: 33 active+clean
</code></pre>
<p>The result is a working ceph cluster with RBD support. I also applied
the cephfs manifest, however RWX volumes (ReadWriteMany) are not yet
spinning up. It seems that test <a href="https://artifacthub.io/">helm charts</a>
often require RWX instead of RWO (ReadWriteOnce) access.</p>
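<p>For illustration, the difference is only the accessModes field of the
claim; a sketch of an RWX PVC against the cephfs storage class (the
storage class name here is an assumption, rook's examples call it
rook-cephfs):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: mountable read-write by many pods
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
</code></pre>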
<p>Also the ceph dashboard does not come up, even though it is
configured:</p>
<pre><code>[20:44] server47.place7:~# kubectl -n rook-ceph get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
csi-cephfsplugin-metrics ClusterIP 2a0a:e5c0:13:e2::760b <none> 8080/TCP,8081/TCP 82m
csi-rbdplugin-metrics ClusterIP 2a0a:e5c0:13:e2::482d <none> 8080/TCP,8081/TCP 82m
rook-ceph-mgr ClusterIP 2a0a:e5c0:13:e2::6ab9 <none> 9283/TCP 77m
rook-ceph-mgr-dashboard ClusterIP 2a0a:e5c0:13:e2::5a14 <none> 7000/TCP 77m
rook-ceph-mon-a ClusterIP 2a0a:e5c0:13:e2::c39e <none> 6789/TCP,3300/TCP 83m
rook-ceph-mon-b ClusterIP 2a0a:e5c0:13:e2::732a <none> 6789/TCP,3300/TCP 81m
rook-ceph-mon-d ClusterIP 2a0a:e5c0:13:e2::c658 <none> 6789/TCP,3300/TCP 76m
[20:44] server47.place7:~# curl http://[2a0a:e5c0:13:e2::5a14]:7000
curl: (7) Failed to connect to 2a0a:e5c0:13:e2::5a14 port 7000: Connection refused
[20:45] server47.place7:~#
</code></pre>
<p>The ceph mgr is perfectly reachable though:</p>
<pre><code>[20:45] server47.place7:~# curl -s http://[2a0a:e5c0:13:e2::6ab9]:9283/metrics | head
# HELP ceph_health_status Cluster health status
# TYPE ceph_health_status untyped
ceph_health_status 1.0
# HELP ceph_mon_quorum_status Monitors in quorum
# TYPE ceph_mon_quorum_status gauge
ceph_mon_quorum_status{ceph_daemon="mon.a"} 1.0
ceph_mon_quorum_status{ceph_daemon="mon.b"} 1.0
ceph_mon_quorum_status{ceph_daemon="mon.d"} 1.0
# HELP ceph_fs_metadata FS Metadata
</code></pre>
<h3>Status 2021-06-06</h3>
<p>Today is the first day of publishing the findings, so this blog
article will still lack quite some information. If you are curious and want
to know more than is published yet, you can find me on Matrix
in the <strong>#hacking:ungleich.ch</strong> room.</p>
<h4>What works so far</h4>
<ul>
<li>Spawning IPv6-only pods works</li>
<li>Spawning IPv6-only services works</li>
<li>BGP Peering and ECMP routes with the upstream infrastructure works</li>
</ul>
<p>Here's an output of the upstream bird process for the routes from k8s:</p>
<pre><code>bird> show route
Table master6:
2a0a:e5c0:13:e2::/108 unicast [place7-server1 23:45:21.589] * (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 on eth0
unicast [place7-server3 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:224:81ff:fee0:db7a on eth0
unicast [place7-server4 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 on eth0
unicast [place7-server2 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc on eth0
2a0a:e5c0:13:e1:176b:eaa6:6d47:1c40/122 unicast [place7-server1 23:45:21.589] * (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 on eth0
unicast [place7-server4 23:45:21.591] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 on eth0
unicast [place7-server3 23:45:21.591] (100) [AS65534i]
via 2a0a:e5c0:13:0:224:81ff:fee0:db7a on eth0
unicast [place7-server2 23:45:21.589] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc on eth0
2a0a:e5c0:13:e1:e0d1:d390:343e:8480/122 unicast [place7-server1 23:45:21.589] * (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 on eth0
unicast [place7-server3 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:224:81ff:fee0:db7a on eth0
unicast [place7-server4 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 on eth0
unicast [place7-server2 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc on eth0
2a0a:e5c0:13::/48 unreachable [v6 2021-05-16] * (200)
2a0a:e5c0:13:e1:9b19:7142:bebb:4d80/122 unicast [place7-server1 23:45:21.589] * (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3554 on eth0
unicast [place7-server3 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:224:81ff:fee0:db7a on eth0
unicast [place7-server4 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:3564 on eth0
unicast [place7-server2 2021-06-05] (100) [AS65534i]
via 2a0a:e5c0:13:0:225:b3ff:fe20:38cc on eth0
bird>
</code></pre>
<h4>What doesn't work</h4>
<ul>
<li>Rook does not format/spinup all disks</li>
<li>Deleting all rook components fails (<strong>kubectl delete -f cluster.yaml</strong>
hangs forever)</li>
<li>Spawning VMs fails with <strong>error: unable to recognize "vmi.yaml": no matches for kind "VirtualMachineInstance" in version "kubevirt.io/v1"</strong></li>
</ul>
Do not make your software rely on systemdhttps://www.nico.schottelius.org//blog/do-not-rely-on-systemd/2021-05-23T09:36:38Z2021-05-23T09:36:38Z
<h2>TL;DR</h2>
<pre><code>Do not make your software rely on systemd.
</code></pre>
<h2>Introduction</h2>
<p>There is some software out there that is leaning towards requiring
systemd. This will render that software unusable on non-systemd Linux
distributions. If you develop software, I urge you to not rely on
systemd features, because there are many situations in which you
cannot use systemd.</p>
<h2>The Open Source community</h2>
<p>While for many of you systemd might be something you use on a daily
basis, there is a big part of the Open Source community that does not
use systemd, for a variety of reasons. Without going into detail,
systemd does not exist in a variety of Linux distributions like
<a href="https://alpinelinux.org/">Alpine Linux</a>,
<a href="https://www.devuan.org/">Devuan</a> or <a href="https://openwrt.org/">OpenWrt</a>
nor on the BSDs.</p>
<p>However, even if it existed, people might choose to opt-out of the
systemd ecosystem because of compatibility, security, stability
or any other kind of reason.</p>
<h2>Why are we building Open Source Software?</h2>
<p>The Open Source / FOSS movement originated many years (decades!) ago
with the goal of creating usable systems. Systems that are not locked
in, systems that allow you to freely modify software and eventually:
support a wider audience, be more inclusive.</p>
<h2>Majority is not the right argument</h2>
<p>If you assume that everyone has a systemd environment, I need to raise
a flag here: you are not the majority. If you are using that line of
argument, I will answer with: the majority of systems is running
Microsoft Windows, so all software should be written only with Windows
in mind. And that is problematic, because you are fully dependent on a
single vendor with an ecosystem that one cannot change.</p>
<p>Now, you can argue that systemd is Open Source and it could be
modified. While in theory this is true, the systemd authors do have
strong opinions that conflict (details omitted here intentionally)
with others. In this regard, systemd is similar to a closed ecosystem,
because it does not make everyone benefit from it.</p>
<h2>Problematic direction</h2>
<p>Recently I see some software that assumes the existence of systemd by
default. Either by using it as a cgroupdriver or by relying on
systemctl. While <em>some</em> software can be patched, the <em>notion in the
documentation</em> leans towards "systemd-only support". And that is the
reason why I am writing this blog post.</p>
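<p>What I would like to see instead is software probing for the init
system rather than assuming one. A minimal sketch of such a probe in
shell (the OpenRC check via rc-service is an assumption covering
Alpine-like systems):</p>

```shell
#!/bin/sh
# Detect the running init system instead of hardcoding systemctl.
# /run/systemd/system only exists when systemd is actually the
# service manager, so checking for the binary alone is not enough.
if command -v systemctl >/dev/null 2>&1 && [ -d /run/systemd/system ]; then
    echo "systemd"
elif command -v rc-service >/dev/null 2>&1; then
    echo "openrc"
else
    echo "other"
fi
```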
<h2>systemd is not for everyone</h2>
<p>You can argue for hours or days whether feature x of systemd is good
or not. However it is a fact that systemd is not for everyone and it
is not suitable for every situation that Open Source software usually
operates in.</p>
<pre><code>Forcing systemd on users does not work (and is not even realistic).
</code></pre>
<p>Even if you had the means to try forcing people into systemd, it
simply does not work, because it is not suited for running on embedded
systems for instance.</p>
<h2>Call for action</h2>
<p>I am aware that generations of hackers have changed, that Open Source
has become much more accessible and that not everyone using Open
Source is a hacker anymore. That is not a problem, but actually a
significant achievement of the Open Source community. But it also
means that we have more diversity and a broader audience.</p>
<p>However we shall not forget our roots and why Open Source Software
actually works: it is because we work together and respect different
approaches and we try to be inclusive. In terms of systems, as well as
humans. That said, I really urge you:</p>
<pre><code>Respect diversity, do not rely on systemd in your software.
</code></pre>
The Nodejs in IPv6 only networks problemhttps://www.nico.schottelius.org//blog/nodejs-and-ipv6-only-networks/2021-01-23T08:48:31Z2021-01-23T08:48:31Z
<p>For some years I have been seeing problems with nodejs-based
applications that do not work in IPv6 only networks.
More recently, <a href="https://twitter.com/NicoSchottelius/status/1352243030368116739">I again found a situation in which a nodejs based
application does not even
install</a>,
if you try to install it in an IPv6 only network.</p>
<p>As the situation is not straightforward, I started to collect
information about it on this website.</p>
<h2>The starting point</h2>
<p>I wanted to install
<a href="https://github.com/ether/etherpad-lite">etherpad-lite</a> and it failed
with the following error:</p>
<pre><code>174 error request to https://registry.npmjs.org/express-session/-/express-session-1.17.1.tgz failed, reason: connect EHOSTUNREACH 104.16.25.35:443
</code></pre>
<p>The message <strong>connect EHOSTUNREACH 104.16.25.35:443</strong> already clearly
points to the problem: npm is trying to connect via IPv4 on an IPv6
only VM. This clearly cannot work.</p>
<h2>A bug in NPM?</h2>
<p>My first suspicion was that it <a href="https://github.com/npm/cli/issues/2519">must be a bug in
npm</a>. But on Twitter
<a href="https://twitter.com/A1bi/status/1352574621594300416">I was told that npm should work in IPv6 only
networks</a>. That's
strange.
However it turns out that <a href="https://github.com/npm/cli/issues/348#issuecomment-751143040">somebody else had this problem
before</a>
and it seems to be specific to using npm on <a href="https://alpinelinux.org/">Alpine
Linux</a>.</p>
<h2>A bug in Alpine Linux?</h2>
<p>Alpine Linux is currently the main distribution that I use. Not
because of the <a href="https://musl.libc.org/">small libc called musl</a>, but
because the whole system is straightforward. Correct. And easy to
use. But what does that have to do with etherpad-lite failing to
install in an IPv6 only network?</p>
<p>It turns out that there is
<a href="https://github.com/libuv/libuv/issues/2225">a difference between musl and glibc in the default behaviour of
getaddrinfo()</a>, which is
used to retrieve DNS results from the operating system.</p>
<h2>A bug in musl libc?</h2>
<p>I got in touch with the developers of musl and the statement is rather
easy: musl <a href="https://pubs.opengroup.org/onlinepubs/9699919799/functions/getaddrinfo.html">is behaving according to the
spec</a>
and the caller, in this
context nodejs, cannot just use the <strong>first</strong> result, but has to
potentially try <strong>all results</strong>.</p>
<h2>A DNS or a design bug?</h2>
<p>And at this stage the problem gets tricky. Let's recap what I
wanted to do and why we are so deep into the rabbit hole.</p>
<p>I wanted to install etherpad-lite, which uses resources from
registry.npmjs.org. So npm wants to connect via HTTPS to
registry.npmjs.org and download a file. To achieve this, npm has to
find out which IP address registry.npmjs.org has. And for this it is
doing a DNS lookup.</p>
<p>So far, so good. Now the trouble begins:</p>
<pre><code>A DNS lookup can contain 0, 1 or many answers.
</code></pre>
<p><strong>And in case of the libc call getaddrinfo, the result is a list of IPv6
and IPv4 addresses, potentially 0 to many of each.</strong></p>
<p>So an application that "just wants to connect somewhere" cannot just
take the first result.</p>
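<p>Expressed in shell, the correct behaviour looks roughly like this
sketch: collect <em>all</em> addresses for the name and try them in order
until one connects (getent ahosts returns both the IPv6 and the IPv4
results; nc is used here merely as a stand-in for the actual connect
call):</p>

```shell
#!/bin/sh
# Try every address a name resolves to, not just the first one.
host=registry.npmjs.org
for addr in $(getent ahosts "$host" | awk '{ print $1 }' | sort -u); do
    # nc -z: connect only, send no data; -w 2: give up after 2 seconds
    if nc -z -w 2 "$addr" 443 2>/dev/null; then
        echo "connected via $addr"
        break
    fi
done
```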
<h2>A bug in nodejs?</h2>
<p>The assumption at this point is that nodejs only takes the first
result from DNS and tries to connect to it. However so far I have not
been able to spot the exact source code location to support that
claim.</p>
<p>Stay tuned...</p>
My notebook firewall for the 36c3https://www.nico.schottelius.org//blog/my-notebook-firewall-36c3/2020-05-14T12:52:54Z2019-12-23T17:23:43Z
<p>It's time for the
<a href="https://events.ccc.de/congress/2019/wiki/index.php/Main_Page">36c3</a>
and to verify that some things are in place where they should be.</p>
<p>As some of you might know, I am using
<a href="https://ipv6onlyhosting.com">IPv6 extensively</a> to provide
services anywhere on anything, so you will see quite some IPv6 related
rules in my configuration.</p>
<p>This post should serve two purposes:</p>
<ul>
<li>Inspire others to verify their network settings prior to the
congress</li>
<li>Get feedback from anyone spotting a huge mistake in my config :-)</li>
</ul>
<h2>The firewall rules</h2>
<p>I am using
<a href="https://ungleich.ch/en-us/cms/blog/2018/09/11/introduction-to-nftables/">nftables</a>
on my notebook and the full ruleset is shown below.</p>
<pre><code>table ip6 filter {
chain input {
type filter hook input priority 0;
policy drop;
iif lo accept
ct state established,related accept
icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
tcp dport { 22, 80, 443 } accept
}
chain forward {
type filter hook forward priority 0;
policy drop;
ct state established,related accept
ip6 daddr 2a0a:e5c1:137:b00::/64 jump container
ip6 daddr 2a0a:e5c1:137:cafe::/64 jump container
}
chain container {
icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
tcp dport { 22, 80, 443 } accept
drop
}
chain output {
type filter hook output priority 0;
policy accept;
}
}
table ip filter {
chain input {
type filter hook input priority 0;
policy drop;
iif lo accept
ct state established,related accept
tcp dport { 22 } accept
tcp dport { 51820 } accept
}
chain forward {
type filter hook forward priority 0;
policy drop;
}
chain output {
type filter hook output priority 0;
policy accept;
}
}
</code></pre>
<h2>The firewall explained: IPv6</h2>
<p>Let's have a look at the IPv6 part first. In nftables we can freely
define chains; what is important is the <strong>hook</strong> that we use in them.</p>
<pre><code> chain input {
type filter hook input priority 0;
...
</code></pre>
<p>The policy has the same meaning as in iptables and basically specifies
what to do with unmatched packets.</p>
<p>IPv6 uses quite a few ICMPv6 messages to control and also to establish
communication in the first place, so the list of accepted types is quite
long.</p>
<pre><code> icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
</code></pre>
<p>As we are dealing with traffic that comes to my notebook ("hook
input"), I want to allow any incoming packets that belong to one of
the connections that I initiated:</p>
<pre><code> ct state established,related accept
</code></pre>
<p>And finally, I allow port 22, to be able to ssh into my notebook,
port 80 to get letsencrypt certificates and port 443 for serving
https. When I am online, my notebook is reachable at
<a href="https://nico.plays.ipv6.games">nico.plays.ipv6.games</a>, so I need the
web ports to be open.</p>
<p>As I run quite a few tests on my notebook with docker and lxc, I created
a /64 IPv6 network for each of them. When matching on those specific
networks, I jump into a chain that allows specific configurations for
containers:</p>
<pre><code> ip6 daddr 2a0a:e5c1:137:b00::/64 jump container
ip6 daddr 2a0a:e5c1:137:cafe::/64 jump container
</code></pre>
<p>The <strong>chain container</strong> consists at the moment of the same rule set as
the input chain; however this changes occasionally when testing
applications in containers.</p>
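<p>As a hypothetical example of such an occasional change, allowing a web
application under test in a container to be reached on port 8080 would
just extend the port set in that chain:</p>
<pre><code>chain container {
icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert } accept
tcp dport { 22, 80, 443, 8080 } accept
drop
}
</code></pre>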
<p>And for the output chain, I trust that the traffic my notebook emits
is what I wanted it to emit (which, however, would also allow malware to
send out data, if I had some installed).</p>
<h2>The firewall explained: IPv4</h2>
<p>In the IPv4 area ("<strong>table ip filter</strong>") things are quite similar, with
some small differences:</p>
<ul>
<li>I don't provide services on IPv4 besides ssh and wireguard (ports 22
and 51820)</li>
<li>There is nothing to be forwarded for IPv4, all containers use IPv6</li>
<li>Same logic for the output as in IPv6</li>
</ul>
<h2>Safe or not safe?</h2>
<p>Whether this ruleset is safe or not depends a bit on your degree of
paranoia. I allow access to port 443, on which an nginx runs, which in
turn proxies to a self-written flask application, which
might-or-might-not be safe.</p>
<p>Some people argue for limiting outgoing traffic and while this is
certainly possible (whitelist ports?), it is often rendered
useless, as any command and control server can be reached on port 80
and you probably don't want to block outgoing port 80 traffic.</p>
<p>If you have any comments about it, I'm interested in hearing your
feedback on <a href="http://chat.with.ungleich.ch">the ungleich chat</a>,
<a href="https://twitter.com/NicoSchottelius">twitter</a> or IRC (telmich).</p>
<h2>Update 2019-12-24</h2>
<p>I forgot to allow loopback traffic in the original version, which
breaks some local networking.</p>
dell-3am-call.pnghttps://www.nico.schottelius.org//blog/support-fiascos/dell-3am-call.png2019-12-05T10:04:54Z2019-12-05T10:04:54ZList of support fiascoshttps://www.nico.schottelius.org//blog/support-fiascos/2020-05-14T12:54:18Z2019-11-26T16:41:26Z
<h2>Introduction</h2>
<p>Dealing with a lot of hardware (in the sense of moving/maintaining it)
involves some support from vendors. Sometimes vendors do a
particularly bad job. This blog page is dedicated to vendor screwups and
documents real stories.</p>
<h2>The support for a Dell XPS 13 2-in-1 (2019-12-19 - 2020-04-06)</h2>
<p>10 days after the repair the space bar exhibits the same behaviour and
hangs / does not produce a character. I cannot open a service request on
the website, as the previous request is still open. Additionally, the
rubber from the screen now falls off. The overall impression of the
device is like a cheap 50$ notebook that you buy on a shady electronics
market, significantly below the standard of regular notebooks.</p>
<p>Again a list of "mysterious" calls can be seen on the Dell website,
but nobody ever responds on Dell's own website. Meanwhile the
support on Twitter tells me to take pictures and a video of the space
bar. After I sent all of it, I am asked to reboot the system
and check whether the space bar still doesn't work in the bootloader.</p>
<p>Updates from this particular Dell fiasco:</p>
<ul>
<li>2019-12-20: The exchange is confirmed; it should be done in 2-3
work days</li>
<li>2019-12-28: The necessary part (unclear which) is not available, the
exchange is postponed to 2020-01-12.</li>
<li>2019-12-31: I get an offer to have the system replaced by a
refurbished system. That does not make any sense, as the device is
almost brand new and a refurbished one has been returned (probably
for a good reason).</li>
<li>2020-01-03: I request a statement to how Dell understands "Next
business day" support, as the problem has been open for
weeks. Again.</li>
<li>2020-01-04: another key ("d") now also gets stuck. It seems to me the
keyboard was never tested in real-world use.</li>
<li>2020-01-06: Dell informs me that the repair is delayed until
2020-01-28.</li>
<li>2020-01-07: I inform Dell that their behaviour is breaking the
contract and that I want to have a refund, a replacement or a repair
by end of week. This is still significantly longer than "next
business day".</li>
<li>2020-01-08: Dell re-informs me that they can exchange with a
refurbished device, which I declined before.</li>
<li>2020-01-10: Dell re-informs me that the repair is delayed until
2020-01-28</li>
<li>2020-01-16: Dell informs me that the repair is delayed until
2020-02-24. This makes it an <strong>at least 2 months repair time</strong>.</li>
<li>2020-01-16: I re-inform Dell that they can refund and pickup the
device and that I expect it to be done by 2020-01-24</li>
<li>2020-01-21: Instead of replacing the notebook with a used notebook,
Dell today suggests replacing it with a new one. I accept the
proposal and now wait for a replacement device.</li>
<li>2020-01-22: I am asked to take a picture of the notebook with the
serial tag and to provide the following information: 1. Service
request number, 2. Registered Owner's name, 3. Current date and time
and 4. Current Location. However the device does not have a sticker
and all information is already present at Dell.</li>
<li>2020-01-28: While the replacement notebook should be on the way
(according to the tracking it isn't - but then again nothing in the
support system of Dell is up-to-date), the current notebook is
slowly dying: The screen has become wobbly-wobbly and makes funny
noise when opening, closing or even moving the notebook while the
display is open. If this notebook was about 7 years old, I'd say
it's a typical worn-off problem. However it is about 3 months old
now. My hope: it's only this particular model, it's not an issue of
the whole XPS 13 series. My fear: it actually looks to be designed
rather fragile, compared to a thinkpad. Also note: more random keys
get stuck half the way and make it impossible to type text
correctly, because a key may-or-may not function on the first hit.</li>
<li>2020-01-31: The date for the replacement is set to be 2020-02-14.</li>
<li>2020-01-31: The notebook begins to further fall apart: the
keyboard/lower part slowly disconnects from the screen on the right
side. This might also explain the wobbly behaviour. Furthermore the
notebook now freezes with some disk I/O. The latter could be a
software bug, the former could be a mis-repair (screw loose?). So
clearly, if you need to rely on a computer, neither the XPS nor Dell
is something to choose.</li>
<li>2020-02-02: The audio jack is now loose and headphones only get
partial connectivity. This is probably related to the right part of
the screen falling off.</li>
<li>2020-02-03: Can't believe it, but now the touchpad also gets
stuck. It is about 0.3mm down on the left side, making it impossible
to issue a left click.</li>
<li>2020-02-11: The replacement notebook arrived.</li>
<li>2020-02-16: The replacement notebook gets hand-burning hot at the
bottom. Problem reported via Twitter.</li>
<li>2020-02-17: Dell says the hot temperatures are normal, even though I
advise Dell that it has potential to burn my skin.</li>
<li>2020-02-19: On the replacement notebook the "c" key gets stuck from
time to time.</li>
<li>2020-02-25: The power supply of the replacement notebook is
broken. It stops charging after some time, the led on the charger
turns off. It works again, if it is disconnected from the power
outlet for some hours.</li>
<li>2020-02-26: Running diagnostics confirms that the charger is
broken. The amount of time spent debugging this notebook series
is beyond ridiculous. Dell so far refuses a full refund even though
they clearly ship unusable hardware.</li>
<li>2020-02-26: The "h" and "u" keys are now also exhibiting partial
stuck behaviour.</li>
<li>2020-02-26: The system gets very slow (mouse pointer lagging
slow). I reboot. The system gets stuck in the Dell logo
state. Turning it off hard. Turning it on again. It stays stuck with
the Dell logo.</li>
<li>2020-04-20: The system has been sent back and refunded.</li>
</ul>
<p>Summary: <strong>Dell is fully incapable of repairing a device</strong> and
<strong>upholding a contract</strong>. I assumed I bought a notebook with next business
day service. What I got is a computer which has
frequent hardware failures and no support within any sensible amount
of time.</p>
<h2>The support for a Dell XPS 13 2-in-1 (2019-11-16 - 2019-12-09)</h2>
<p>I ordered this particular notebook on 2019-09-19 and it arrived around
2019-09-27. So far so good. However shortly after starting to use it,
I managed to get a somewhat-stuck key (the "p key"), which
more-or-less randomly hangs/does not produce a character. As some of
my passwords contain a p, this led to very frustrating login failures.</p>
<p>Having a stuck key like this after less than 2 months of use is really
not showing good quality, so I reported this issue with Dell on
2019-11-16. With the device I bought the so called
"Complete Care Service" and "Premium Support". In theory reachable
24x7.</p>
<p>In practice, after opening the support request on 2019-11-16, I did
not receive a real reply by the following Monday. So I reached out
again and got a reply on Tuesday, already late even if it were only
next-business-day (NBD) support.</p>
<p>After reporting that issue, additionally the rubber on the bottom that keeps
the notebook stable on the table began to detach itself from the
notebook. Only another minor problem, but clearly nothing to expect
from a quality device.</p>
<p>After a long back-and-forth via Twitter DM about the device heat and
whether the p key still occasionally gets stuck (yes!), there was
eventually a replacement scheduled for the 26th of November.</p>
<p>However - you can guess it - nobody showed up. The log at Dell says
that somebody tried to reach me, however there were no missed calls on
any of my numbers. And no email or direct message. So even if
somebody tried to call, they did not bother sending an email.</p>
<p>Until I reached out again, after I got a message that the phone number
had been forwarded. It continues "funny" like that: on the 26th there was no
further communication from Dell. No message, no call, no email.</p>
<p>However when logging in to the Dell portal, Dell rescheduled the
appointment for Thursday, 28th, 0800.</p>
<p>Independently of how the story evolves from here, the amount of time
spent on support, waiting, replanning locations, etc. already
exceeds the worth of the product. So I can clearly advise against
buying this device/support combination, if you want to work with it
professionally.</p>
<p>And it continued on 2019-11-27 at around 2230 in the evening, when the
Dell technician called me by accident. "I just wanted to save your
number". He then asked me on the phone where exactly Glarus is. I
guess Dell doesn't have navigation software...
Eventually he told me that he might or might not come tomorrow (the
28th), but that he would certainly contact me in the morning.</p>
<p>2019-11-28, around 1400. No call, no message, no nothing. Reaching
out via Twitter DM. Again. My phone number is confirmed, is the answer
I get. So yet another day where Dell schedules the support (not me),
does not appear, does not reach out nor gives any suitable answer.</p>
<p>2019-11-29. The technician just wrote an email that he will come
Monday at 1200. That is yet another week after Dell originally announced
the repair and yet another time that Dell unilaterally decides on a new
repair date without even trying to confirm it.</p>
<p>But it gets worse: later in the evening I received a Twitter message
that the case is closed. Without my ever having seen a technician,
without the device having been repaired. A bit later this was confirmed
on Dell's "service request" page.</p>
<p>So in a summary:</p>
<ul>
<li>Waited for nothing to happen for two weeks</li>
<li>Multiple support appointments scheduled without ever showing up</li>
<li>Claims of trying to reach me by phone, without any missed calls
(while other calls reached me the same day)</li>
<li>Wasted many hours in communication</li>
<li>No support executed at all</li>
<li>Support case closed without anything being done</li>
</ul>
<p>The story continues: 2019-12-04. I got a message saying that the
technician is coming tomorrow. Again without confirmation from my side
and with less than 24h to react.</p>
<p>2019-12-05: because so far nobody has ever shown up, I send a message via
Twitter to @DellHilft, asking about the technician. The answer is that I
should wait. For the third day. I also check the support center, which
claims to have called me at 3 am GMT. 3 am. Seriously, which company
does that?</p>
<p><a href="https://www.nico.schottelius.org//blog/support-fiascos/dell-3am-call.png"><img src="https://www.nico.schottelius.org//blog/support-fiascos/300x-dell-3am-call.png" width="300" height="60" class="img" /></a></p>
<p>GMT is actually behind Swiss time, so the actual call happened around 2am.
Besides all of that, I obviously did not receive a call.</p>
<p>BUT things can get worse with Dell. Since the 5th, my messages on the
Dell support website don't show up anymore. It basically looks as follows:</p>
<pre><code>Dell: we are calling you
Me: I don't see a call, this is my number:
Dell: we are calling you
Me: Hello? Did you see my message?
Dell: We will just silently drop your messages now
</code></pre>
<p>Since 2019-12-05 the "." key is also stuck from time to
time. Basically the notebook is falling apart within 2 months of use
and the only thing you get is false claims of a technician showing up.</p>
<p>2019-12-06: the technician is calling around 0830. He starts by asking
where I live and then tells me it is far away and he doesn't have time
for me. He has many other customers. He also sounds very drunk.
He tells me he might come on Monday, but cannot tell a time yet.</p>
<p>Also on the same day: I get a note from Dell telling me the technician
could not reach me. Not sure how many WTFs can be produced within one
day, but Dell is really pushing it to the limits.</p>
<p>2019-12-09: the technician called at 0900, arrived by 1230 and fixed
the notebook around 1500.</p>
<ul>
<li>Roughly 4 weeks waiting time</li>
<li>Roughly 80+ messages exchanged with Dell</li>
<li>4 working days invested to get it fixed</li>
</ul>
Alpine Linux on the HP X360 1040 G5https://www.nico.schottelius.org//blog/alpine-linux-on-the-hp-x360-1040-g5-notebook/2019-11-12T17:28:12Z2019-05-14T17:10:49Z
<h2>Overview</h2>
<p>Due to RAM limitations in most notebooks (16G maximum) I have recently
switched to the HP X360 1040 G5, more or less the 14" HP equivalent of
the Lenovo X1 Carbon. Some tech specs for the geeks among us:</p>
<ul>
<li>Resolution 3840x2160</li>
<li>1 TB SSD / NVMe</li>
<li>32GB RAM</li>
</ul>
<p>This article is work in progress, currently more to be seen as a todo
list for myself.</p>
<h2>Alpine</h2>
<p>My backup notebooks are currently running Arch Linux and Devuan. As I
find Alpine an interesting project (it comes closest to how I think
Linux should be), I thought about giving it a try.</p>
<p>Some things that are a bit special in Alpine Linux:</p>
<ul>
<li>Does not come with shadow by default</li>
<li>Uses musl libc instead of glibc (yeah!)</li>
</ul>
<p>Besides that, some things that are instant benefits of Alpine:</p>
<ul>
<li>easy to use package manager</li>
<li>easy to write package format</li>
<li>VERY fast package installations</li>
<li>The sound is GREAT (especially compared to the X1 Carbon that does
not really have speakers)</li>
</ul>
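<p>To illustrate the point about the package manager, here are a few
everyday apk commands (a sketch from memory; exact flags may vary
slightly between Alpine releases):</p>
<pre><code># Refresh the package index
apk update

# Install / remove a package
apk add emacs
apk del emacs

# Search the index and list the files of an installed package
apk search -v 'emacs*'
apk info -L emacs
</code></pre>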
<h2>What is working on alpine + X360 1040</h2>
<p>Almost everything. C'mon, it's 2019 and as long as xorg + i3 is
running, what more could you want?
A few things worth emphasising:</p>
<ul>
<li>The keyboard is quite nice (actually nicer than on the Gen6 X1 Carbon)</li>
<li>You can run startx via ssh and there is no stupid config that stops you from it!</li>
<li>Suspend works even while playing sound, just using pm-suspend + acpid</li>
<li>beauty!</li>
</ul>
<h2>What is currently not working on alpine + X360 1040</h2>
<p>There are a few
minor hiccups that I still need to solve in the next days:</p>
<ul>
<li>create a package for mu4e 1.2 (currently installed in /usr/local)
<ul>
<li>needs fix for /usr/bin/sh reference</li>
<li>PR created by me at https://github.com/alpinelinux/aports/pull/7881/files</li>
<li>local install: works!</li>
</ul>
</li>
<li><del>create a package for magit</del>
<ul>
<li>M-x package-install magit</li>
</ul>
</li>
<li>create a package for vym</li>
<li>create a package for openconnect</li>
<li>create a package for kismet</li>
<li>checkout why the shotwell package is broken</li>
<li>checkout why the firefox package is broken</li>
<li>hotkeys don't send the right key events => might be a kernel issue</li>
<li>xrandr does not show screen connected via usb-c (have to test other outputs)</li>
<li>automate lid handling in cdist
<ul>
<li>Currently just created /etc/acpi/LID/0000080 with pm-suspend in it => works</li>
</ul>
</li>
<li>The device has a very high frequency sound that varies over time
<ul>
<li>Seems to be unrelated to power plugged in or out</li>
<li>Seems to be related to the fan: fan on => no audible high frequency sound</li>
<li>The sound is louder than music played at "regular" volume</li>
<li>The sound is directly related to screen brightness: 100% => no sound</li>
<li>The lower the brightness, the stronger the sound</li>
</ul>
</li>
</ul>
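<p>The lid handling mentioned above can be sketched as a tiny handler
script for busybox acpid (the cdist automation is still a to-do). A
minimal sketch, using a scratch prefix instead of /etc so it is safe to
try anywhere; on the real system the prefix would be /etc/acpi:</p>
<pre><code># Sketch: install a lid handler like /etc/acpi/LID/0000080.
# PREFIX is a stand-in for /etc/acpi so the sketch is harmless to run.
PREFIX="${PREFIX:-/tmp/acpi-demo}"
mkdir -p "$PREFIX/LID"

# The handler file itself: acpid executes it when the lid event fires
cat > "$PREFIX/LID/0000080" <<'EOF'
#!/bin/sh
pm-suspend
EOF
chmod +x "$PREFIX/LID/0000080"
echo "handler installed at $PREFIX/LID/0000080"
</code></pre>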
<h2>What has been fixed</h2>
<ul>
<li>xbacklight
<ul>
<li>Need to load / install the intel video driver (modesetting does not work atm)</li>
</ul>
</li>
</ul>
Cdist 4.11.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.11.1-released/2019-04-22T19:14:37Z2019-04-22T19:14:37Z
<p>Here's a short overview about the changes found in version 4.11.1:</p>
<pre><code>* Core: Improve explorer error reporting (Darko Poljak)
* Type __directory: explorer stat: add support for Solaris (Ander Punnar)
* Type __file: explorer stat: add support for Solaris (Ander Punnar)
* Type __ssh_authorized_keys: Remove legacy code (Ander Punnar)
* Explorer disks: Bugfix: do not break config in case of unsupported OS
which was introduced in 4.11.0, print message to stderr and empty disk list
to stdout instead (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.11.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.11.0-released/2019-04-20T15:16:55Z2019-04-20T15:16:55Z
<p>Here's a short overview about the changes found in version 4.11.0:</p>
<pre><code>* Type __package: Add __package_apk support (Nico Schottelius)
* Type __directory: Add alpine support (Nico Schottelius)
* Type __file: Add alpine support (Nico Schottelius)
* Type __hostname: Add alpine support (Nico Schottelius)
* Type __locale: Add alpine support (Nico Schottelius)
* Type __start_on_boot: Add alpine support (Nico Schottelius)
* Type __timezone: Add alpine support (Nico Schottelius)
* Type __start_on_boot: gentoo: check all runlevels in explorer (Nico Schottelius)
* New type: __package_apk (Nico Schottelius)
* Type __acl: Add support for ACL mask (Dimitrios Apostolou)
* Core: Fix circular dependency for CDIST_ORDER_DEPENDENCY (Darko Poljak)
* Type __acl: Improve the type (Ander Punnar)
* Explorer interfaces: Simplify code, be more compatible (Ander Punnar)
* Explorer disks: Remove assumable default/fallback, for now explicitly support only Linux and BSDs (Ander Punnar, Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
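<p>As a rough illustration of what the new Alpine support enables, a
minimal cdist initial manifest might look like the following. This is a
sketch only; consult the type man pages for the exact parameters:</p>
<pre><code># Initial manifest sketch (e.g. ~/.cdist/manifest/init)
os=$(cat "$__global/explorer/os")

case "$os" in
    alpine)
        # Uses the apk backend added in 4.11.0
        __package_apk vim --state present
        __start_on_boot sshd
        ;;
esac

__file /etc/motd --mode 0644 --source - <<EOF
Managed by cdist
EOF
</code></pre>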
Cdist 4.10.11 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.11-released/2019-04-13T17:57:39Z2019-04-13T17:57:39Z
<p>Here's a short overview about the changes found in version 4.10.11:</p>
<pre><code>* Core: Fix broken quiet mode (Darko Poljak)
* Build: Add version.py into generated raw source archive (Darko Poljak)
* Explorer disks: Fix detecting disks, fix/add support for BSDs (Ander Punnar)
* Type __file: Fix stat explorer for BSDs (Ander Punnar)
* Type __directory: Fix stat explorer for BSDs (Ander Punnar)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.10 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.10-released/2019-04-11T12:49:55Z2019-04-11T12:49:55Z
<p>Here's a short overview about the changes found in version 4.10.10:</p>
<pre><code>* New types: __ufw and __ufw_rule (Mark Polyakov)
* Type __link: Add messaging (Ander Punnar)
* Debugging: Rename debug-dump.sh to cdist-dump (Darko Poljak)
* Documentation: Add cdist-dump man page (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.9 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.9-released/2019-04-09T20:49:38Z2019-04-09T20:49:38Z
<p>Here's a short overview about the changes found in version 4.10.9:</p>
<pre><code>* Type __ssh_authorized_keys: Properly handle multiple --option params (Steven Armstrong)
* Debugging: Add debug dump helper script (Darko Poljak)
* Type __file: Bugfix: fire onchange for present and exists states if no attribute is changed (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.8 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.8-released/2019-04-06T08:55:05Z2019-04-06T08:55:05Z
<p>Here's a short overview about the changes found in version 4.10.8:</p>
<pre><code>* Type __clean_path: Fix list explorer exit code if path not directory or does not exist (Ander Punnar)
* New type: __check_messages (Ander Punnar)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.7 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.7-released/2019-03-30T18:14:04Z2019-03-30T18:14:04Z
<p>Here's a short overview about the changes found in version 4.10.7:</p>
<pre><code>* Build: Migrate from pep8 to pycodestyle (Darko Poljak)
* Type __start_on_boot: Implement state absent for OpenBSD (Daniel Néri)
* Explorers cpu_cores, disks: Add support for OpenBSD (Daniel Néri)
* Type __staged_file: Use portable -p instead of --tmpdir for mktemp (Silas Silva)
* Type __line: Add onchange parameter (Ander Punnar)
* Type __file: Add onchange parameter (Ander Punnar)
* New type: __clean_path (Ander Punnar)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.6 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.6-released/2019-02-15T19:48:09Z2019-02-15T19:48:09Z
<p>Here's a short overview about the changes found in version 4.10.6:</p>
<pre><code>* Type __prometheus_alertmanager: Add startup flag (Dominique Roux)
* Types __zypper_repo, __zypper_service: Re-add the use of echo in explorers (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.5 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.5-released/2018-12-21T21:26:35Z2018-12-21T21:26:35Z
<p>Here's a short overview about the changes found in version 4.10.5:</p>
<pre><code>* Type __group: Fix/remove '--' from echo command (Dimitrios Apostolou)
* New type: __ping (Olliver Schinagl)
* Type __postgres_role: Fix broken syntax (Nico Schottelius, Darko Poljak)
* Type __consul_agent: Add Debian 9 support (Jin-Guk Kwon)
* Documentation: Fix broken links (Rage <OxR463>)
* Type __docker: Add version parameter (Jonas Weber)
* Type __sysctl: Refactor for better OS support (Takashi Yoshi)
* Types __package_*: Add messaging upon installation/removal (Takashi Yoshi)
* Type __package_pkg_openbsd: Reworked (Takashi Yoshi)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.4 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.4-released/2018-11-03T18:26:23Z2018-11-03T18:26:23Z
<p>Here's a short overview about the changes found in version 4.10.4:</p>
<pre><code>* Core: Transfer all files of a directory at once instead of calling copy once per file (myeisha)
* Core: Add timestamp (optional) to log messages (Darko Poljak)
* Explorers and types: Fix shellcheck found problems and encountered bugs (Jonas Weber, Thomas Eckert, Darko Poljak)
* Build: Add shellcheck makefile target and check when doing release (Darko Poljak)
* Type __consul: Add newest versions (Dominique Roux)
* Type __user: Remove annoying output, handle state param gracefully, add messages for removal (Thomas Eckert)
* Core: Fix checking for conflicting parameters for multiple values parameters (Darko Poljak)
* Documentation: Various fixes (Thomas Eckert)
* Various types: Improve OpenBSD support (sideeffect42)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.3 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.3-released/2018-09-23T10:07:12Z2018-09-23T10:07:12Z
<p>Here's a short overview about the changes found in version 4.10.3:</p>
<pre><code>* New global explorer: os_release (Ľubomír Kučera)
* Type __docker: Update type, install docker CE (Ľubomír Kučera)
* Type __package_apt: Write a message when a package is installed or removed; shellcheck (Jonas Weber)
* Documentation: Add 'Dive into real world cdist' walkthrough chapter (Darko Poljak)
* Core: Remove duplicate remote mkdir calls in explorer transfer (myeisha)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.2-released/2018-09-06T05:13:12Z2018-09-06T05:13:12Z
<p>Here's a short overview about the changes found in version 4.10.2:</p>
<pre><code>* Type __letsencrypt_cert: Add support for devuan ascii (Darko Poljak)
* Type __systemd_unit: Fix minor issues and add masking unit files support (Adam Dej)
* Type __grafana_dashboard: Fix devuan ascii support (Dominique Roux)
* Type __apt_source: Add nonparallel marker (Darko Poljak)
* Type __package_update_index: Fix error when using OS not using apt (Stu Zhao)
* Type __package_update_index: Support --maxage for type pacman (Stu Zhao)
* Type __letsencrypt_cert: Fix explorers: check that certbot exists before using it (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.1-released/2018-06-21T06:39:10Z2018-06-21T06:39:10Z
<p>Here's a short overview about the changes found in version 4.10.1:</p>
<pre><code>* Type __letsencrypt_cert: Fix temp file location and removal (Darko Poljak)
* Type __line: Handle missing file in __line explorer gracefully (Jonas Weber)
* Documentation: Add env vars usage idiom for writing types (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.10.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.10.0-released/2018-06-17T09:03:59Z2018-06-17T09:03:59Z
<p>Here's a short overview about the changes found in version 4.10.0:</p>
<pre><code>* New type: __acl (Ander Punnar)
* Core: Disable config parser interpolation (Darko Poljak)
* Type __sysctl: Use sysctl.d location if exists (Darko Poljak)
* Type __line: Rewrite and support --before and --after (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.9.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.9.1-released/2018-05-30T17:48:45Z2018-05-30T17:48:45Z
<p>Here's a short overview about the changes found in version 4.9.1:</p>
<pre><code>* New type: __install_coreos (Ľubomír Kučera)
* Type __consul_agent: Add LSB init header (Nico Schottelius)
* Type __package_yum: Fix explorer when name contains package name with exact version specified (Aleksandr Dinu)
* Type __letsencrypt_cert: Use object id as domain if domain param is not specified (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.9.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.9.0-released/2018-05-17T14:17:38Z2018-05-17T14:17:38Z
<p>Here's a short overview about the changes found in version 4.9.0:</p>
<pre><code>* Type __docker_stack: Use --with-registry-auth option (Ľubomír Kučera)
* New type: __docker_config (Ľubomír Kučera)
* New type: __docker_secret (Ľubomír Kučera)
* Type __letsencrypt_cert: Rewritten; WARN: breaks backward compatibility (Ľubomír Kučera)
* Core: Fix NameError: name 'cdist_object' is not defined (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.8.4 releasedhttps://www.nico.schottelius.org//blog/cdist-4.8.4-released/2018-04-20T12:35:11Z2018-04-20T12:35:11Z
<p>Here's a short overview about the changes found in version 4.8.4:</p>
<pre><code>* Documentation, type manpages: Fix spelling (Dmitry Bogatov)
* New explorer: is-freebsd-jail (Kamila Součková)
* Types __hostname, __start_on_boot, __sysctl: Support FreeBSD (Kamila Součková)
* Type __install_config: set environment variable to distinguish between
install-config and regular config (Steven Armstrong)
* Core: Improve error reporting (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Linux, UNIX and FOSS users: there is light on the horizonhttps://www.nico.schottelius.org//blog/linux-users-light-on-horizon/2019-11-12T17:29:18Z2018-03-30T14:29:12Z
<p>Good morning reader. It is Good Friday, 2018. Great timing for
writing a blog article in the mountains of Switzerland, next to the
fire.</p>
<h2>TL;DR</h2>
<p>Read the whole thing.</p>
<h2>Introduction</h2>
<p>I hope I got your attention and you have a warm gut feeling already. If you are
like me, using Linux (or "unix alike" for that matter) for about 20
years, you have seen a lot of changes.</p>
<p>Some of the changes I have been observing include:</p>
<ul>
<li>Linux getting attention from a much broader audience (developers, users, corporates)</li>
<li>Operating system choices have become less important ("people care about higher level stuff")</li>
<li>The Free and Open Source movement has lost traction</li>
<li>Linux (stacks) ha(s|ve) become much more complex</li>
</ul>
<p>In this article, I want to point you, old greybeard, to some hope at
the end of the tunnel: Devuan.</p>
<h2>Devuan</h2>
<p>So let's start with a self-centric, barefaced advertisement so that you don't
claim I wrote this article to subconsciously lead you to <a href="https://devuanhosting.com">Devuanhosting.com</a>:
Go to <a href="https://devuanhosting.com">Devuanhosting.com</a> and get yourself a Devuan VM, if you like Devuan.</p>
<p>Now, back to the real topic of the post: You might see Devuan as a
"Debian of the retarded people who don't accept the existence of systemd".
Fair enough.</p>
<p>However, you should look a bit closer, even if that is your opinion.
Devuan creates a choice. It gives you the choice to run Linux without systemd.</p>
<p>Coming back to the earlier argument ("people care about higher level stuff"):
it actually matters that this is about the init system at the core of your
computer. Yes, it is low level. Yes, most people don't care and most people probably
shouldn't care, because they don't even understand why there is an
init system, nor that it can be programmed very simply (i.e. compare
with <a href="https://www.nico.schottelius.org/software/cinit/">cinit</a>).</p>
<p>What is important here is that a group of <strong>volunteers</strong>, spending
their free time and resources, commits itself to fighting for the
right to have a Linux without systemd.</p>
<p>"Why is that important", you might ask. And the answer is very
similar: without these people, we would all still be using a DOS based
operating system with a broken GUI on top of it.</p>
<p>Yes, exactly. If there hadn't been such volunteers (or even "lateral thinkers")
before, there would be no GNU/Linux distributions for you at all.</p>
<h2>The light on the horizon</h2>
<p>So why is there a light? Today I migrated my notebook from Arch Linux
to Devuan, because
<a href="https://bugs.archlinux.org/task/58001">systemd crashes my system on suspend</a>.</p>
<p>First of all, I have the choice to change because of the great
work of the Devuan community. But what really opened my eyes were a
few things that I had to "manually configure":</p>
<p>Compared to Arch Linux, I needed to install <strong>acpid</strong> and <strong>pm-utils</strong>
to handle suspending. Furthermore I needed to configure acpid to
suspend on lid close as follows:</p>
<pre><code>root@line:/home/nico/Downloads# cat /etc/acpi/events/suspend
event=button/lid LID close
action=/usr/sbin/pm-suspend
</code></pre>
<p>Yes, exactly. It takes three steps to add this behaviour to your
system. It is a very clean separation of concerns, and debugging this
setup is as easy as starting acpid in the foreground and watching the
events.</p>
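<p>Those three steps, plus the foreground debugging, can be sketched as
follows (acpi_listen ships with acpid; exact daemon flags may differ
between acpid versions):</p>
<pre><code># 1. install the two packages
apt-get install acpid pm-utils

# 2. create the event handler /etc/acpi/events/suspend as shown above

# 3. restart acpid so it picks up the new event file
/etc/init.d/acpid restart

# Debugging: watch ACPI events on stdout while closing the lid,
# e.g. a line such as: button/lid LID close
acpi_listen
</code></pre>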
<p>While this, as well as the logical naming of devices (eth0, wlan0),
is just a minor thing, I see that something changed:</p>
<p>There are again people, who fight for their right to do things "the
right way".</p>
<h2>Call to action</h2>
<p>If you agree to what I wrote above and you also see the light on the
horizon, I would like to ask you to be active:</p>
<p>It is not necessary to start developing code to support the Free and
Open Source Software movement, to support freedom.</p>
<p>For us, it is necessary to be seen and to move forward as a community,
be it Linux, BSD or FOSS in general.</p>
<p>So instead of staying abstract like this, I ask you to do 2 things:</p>
<ul>
<li><p>Spread the word about this article on IRC, Twitter, or the social
medium of your choice</p></li>
<li><p>Get yourself a good drink of your choice, sit down and say out
aloud: I am supporting freedom of choice and will fight for it.</p></li>
</ul>
<p>Thanks for reading, enjoy your time!</p>
Cdist 4.8.3 releasedhttps://www.nico.schottelius.org//blog/cdist-4.8.3-released/2018-03-16T18:21:41Z2018-03-16T18:21:41Z
<p>Here's a short overview about the changes found in version 4.8.3:</p>
<pre><code>* Type __key_value: Add onchange parameter (Kamila Součková)
* Types __prometheus_server, __prometheus_alertmanager, __grafana_dashboard:
Work with packages instead of go get, remove __daemontools dependency and clean up (Kamila Součková)
* Documentation: Fix manpage generation (Darko Poljak)
* New type: __docker_swarm (Ľubomír Kučera)
* New type: __docker_stack (Ľubomír Kučera)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.8.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.8.2-released/2018-03-10T22:54:12Z2018-03-10T22:54:12Z
<p>Here's a short overview about the changes found in version 4.8.2:</p>
<pre><code>* Core: Fix quiet argument access for bare cdist command (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.8.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.8.1-released/2018-03-09T16:30:38Z2018-03-09T16:30:38Z
<p>Here's a short overview about the changes found in version 4.8.1:</p>
<pre><code>* Type __consul: Add option for directly downloading on target host (Darko Poljak)
* Core: Add -4 and -6 params to force IPv4, IPv6 addresses respectively (Darko Poljak)
* Type __package_update_index: Fix messaging (Thomas Eckert)
* Type __package_dpkg: Add state parameter and messaging (Thomas Eckert)
* Core: Fix a case when HOME is set but empty (Darko Poljak)
* Core: Fix non-existent manifest non graceful handling (Darko Poljak)
* Core: Fix main and inventory parent argparse options (Darko Poljak)
* Core: Fix lost error info with parallel jobs (option -j) (Darko Poljak)
* Core: Fix determining beta value through configuration (Darko Poljak)
* Core: Fix determining save_output_streams value through configuration (Darko Poljak)
* Core: Support in-distribution config file (Darko Poljak)
* New type: __apt_default_release (Matthijs Kooijman)
* Type __file: Add pre-exists state (Matthijs Kooijman)
* Type __grafana_dashboard: Add support for stretch + ascii (Nico Schottelius)
* Core: Fix idna (getaddrinfo) unicode traceback for invalid host name (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.8.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.8.0-released/2018-02-14T19:14:38Z2018-02-14T19:14:38Z
<p>Here's a short overview about the changes found in version 4.8.0:</p>
<pre><code>* Core: Skip empty lines in parameter files (Darko Poljak)
* Explorer memory: Support OpenBSD (Philippe Gregoire)
* Type __install_config: re-export cdist log level during installation (Steven Armstrong)
* Type __sysctl: Add support for CoreOS (Ľubomír Kučera)
* Type __systemd_unit: Various improvements (Ľubomír Kučera)
* Type __line: Support regex beginning with '-' (Philippe Gregoire)
* Type __letsencrypt_cert: Add nonparallel; make admin-email required (Kamila Součková)
* Type __package_pkgng_freebsd: Redirect stdout and stderr to /dev/null instead of closing them (michal-hanu-la)
* Type __daemontools: Make it more robust and clean up the code (Kamila Součková)
* Core: Save output streams (Steven Armstrong, Darko Poljak)
* Documentation: Add local cache overview (Darko Poljak)
* Type __systemd_unit: Fix handling stdin (Jonas Weber)
* Type __package_apt: Add --purge-if-absent parameter (Jonas Weber)
* Type __package_update_index: Add --maxage parameter for apt and add message if index was updated (Thomas Eckert)
* Type __motd: Support reading from stdin (Jonas Weber)
* Type __issue: Support reading from stdin (Jonas Weber)
* Type __package_apt: Add support for --version parameter (Darko Poljak)
* Type __letsencrypt_cert: Add --renew-hook parameter (Darko Poljak)
* Core: Support disabling saving output streams (Darko Poljak)
* Type __apt_source: Remove update index dependency; call index update in gencode-remote (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.7.3 releasedhttps://www.nico.schottelius.org//blog/cdist-4.7.3-released/2017-11-10T20:23:11Z2017-11-10T20:23:11Z
<p>Here's a short overview about the changes found in version 4.7.3:</p>
<pre><code>* Type __ccollect_source: Add create destination parameter (Dominique Roux)
* Type __ssh_authorized_key: Add messaging (Thomas Eckert)
* New type: __letsencrypt_cert (Nico Schottelius, Kamila Součková)
* Core: Warn about invalid type in conf dir and continue instead of error (Darko Poljak)
* New type: __systemd_unit (Ľubomír Kučera)
* Type __letsencrypt_cert: Add support for debian stretch (Daniel Tschada)
* Type __line: Fix a case for absent when line contains single quotes (Darko Poljak)
* Type __config_file: Fix onchange command not being executed (Ľubomír Kučera)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.7.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.7.2-released/2017-10-22T14:21:26Z2017-10-22T14:21:26Z
<p>Here's a short overview about the changes found in version 4.7.2:</p>
<pre><code>* Type __hostname: Add support for CoreOS (Ľubomír Kučera)
* Type __timezone: Add support for CoreOS (Ľubomír Kučera)
* Explorer os: Fix for devuan ascii (Kamila Součková)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.7.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.7.1-released/2017-10-01T09:16:00Z2017-10-01T09:16:00Z
<p>Here's a short overview about the changes found in version 4.7.1:</p>
<pre><code>* Type __line: Add messaging (Thomas Eckert)
* Documentation: Fix documentation for building custom man-pages from non-standard path (Thomas Eckert)
* Core: Fix running scripts with execute bit when name without path is specified (Ander Punnar)
* Type __process: Add messaging (Thomas Eckert)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.7.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.7.0-released/2017-09-22T19:26:31Z2017-09-22T19:26:31Z
<p>Here's a short overview about the changes found in version 4.7.0:</p>
<pre><code>* Core: Add configuration/config file support (Darko Poljak)
* Core: Implement simple integration API (unstable) (Darko Poljak)
* Explorer machine_type: Detect kvm on proxmox (Sven Wick)
* Types __prometheus_server, __prometheus_alertmanager: Bugfixes (Kamila Součková)
* New type: __prometheus_exporter (Kamila Součková)
* Type __daemontools: Improve it on FreeBSD (Kamila Součková)
* Type __package_pkg_openbsd: Fix use of --name (Philippe Gregoire)
* Type __package_pkg_openbsd: Fix pkg_version explorer (Philippe Gregoire)
* Type __prometheus_exporter: Fixes + go version bump (Kamila Součková)
* Core, types: __cdist_loglevel -> __cdist_log_level (Darko Poljak)
* Core, types: Add __cdist_log_level_name env var with value of log level name (Darko Poljak)
* Core: Make cdist honor __cdist_log_level env var (Darko Poljak)
* Core: Add -l/--log-level option (Darko Poljak)
* Type __install_stage: Fix __debug -> __cdist_log_level (Darko Poljak)
* Documentation: Document __cdist_log_level (Darko Poljak)
* Core: Log ERROR to stderr and rest to stdout (Darko Poljak, Steven Armstrong)
* Type __ssh_authorized_key: Bugfix the case where invalid key clears a file and add key validation (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.6.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.6.1-released/2017-08-30T20:44:28Z2017-08-30T20:44:28Z
<p>Here's a short overview about the changes found in version 4.6.1:</p>
<pre><code>* Type __user: Explore with /etc files (passwd, group, shadow) (Philippe Gregoire)
* Explorer init: Use pgrep instead of ps for Linux (Philippe Gregoire)
* Type __apt_key_uri: Redirect stderr of apt-key to /dev/null (Mark Verboom)
* Type __package_pkg_openbsd: Support the empty flavor (Philippe Gregoire)
* Type __package_pkg_openbsd: Support using /etc/installurl (Philippe Gregoire)
* Type __user_groups: Support OpenBSD (Philippe Gregoire)
* Type __hostname: Allow hostnamectl to fail silently (Steven Armstrong)
* Type __install_config: Use default __remote_{copy,exec} in custom __remote_{copy,exec} scripts (Steven Armstrong)
* Type __ssh_authorized_key: Fix removing ssh key that is last one in the file (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.6.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.6.0-released/2017-08-25T09:06:07Z2017-08-25T09:06:07Z
<p>Here's a short overview of the changes found in version 4.6.0:</p>
<pre><code>* Core: Add inventory functionality (Darko Poljak)
* Core: Expose inventory host tags in __target_host_tags env var (Darko Poljak)
* Type __timezone: Check current timezone before doing anything (Ander Punnar)
* Core: Add -p HOST_MAX argument (Darko Poljak)
* Core: Add archiving support for transferring directory - new -R beta option (Darko Poljak)
* Core: Fix ssh connection multiplexing race condition (Darko Poljak)
* Core: Fix emulator race conditions with -j option (Darko Poljak)
* Documentation: Cleanup (Darko Poljak)
* Explorer os: Get ID from /etc/os-release (Philippe Gregoire)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.5.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.5.0-released/2017-07-20T18:51:56Z2017-07-20T18:51:56Z
<p>Here's a short overview of the changes found in version 4.5.0:</p>
<pre><code>* Types: Fix install types (Steven Armstrong)
* Core: Add -r command line option for setting remote base path (Steven Armstrong)
* Core: Allow manifest and gencode scripts to be written in any language (Darko Poljak)
* Documentation: Improvements to the English and fix typos (Mesar Hameed)
* Core: Merge -C custom cache path pattern option from beta branch (Darko Poljak)
* Core: Improve and cleanup logging (Darko Poljak, Steven Armstrong)
* Core: Remove deprecated -d option (Darko Poljak)
* Type __file: If no --source then create only if there is no file (Ander Punnar)
* Core: Ignore directory entries that begin with dot('.') (Darko Poljak)
* Core: Fix parallel object prepare and run steps and add nonparallel type marker (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.4.4 releasedhttps://www.nico.schottelius.org//blog/cdist-4.4.4-released/2017-06-16T10:52:22Z2017-06-16T10:52:22Z
<p>Here's a short overview of the changes found in version 4.4.4:</p>
<pre><code>* Core: Support -j parallelization for object prepare and object run (Darko Poljak)
* Type __install_mkfs: mkfs.vfat does not support -q (Nico Schottelius)
* Types __go_get, __daemontools*, __prometheus*: Fix missing dependencies, fix arguments (Kamila Součková)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.4.3 releasedhttps://www.nico.schottelius.org//blog/cdist-4.4.3-released/2017-06-13T20:18:44Z2017-06-13T20:18:44Z
<p>Here's a short overview of the changes found in version 4.4.3:</p>
<pre><code>* Type __golang_from_vendor: Install golang from https://golang.org/dl/ (Kamila Součková)
* Type __go_get: Install go packages using go get (Kamila Součková)
* Explorer kernel_name: uname -s (Kamila Součková)
* Type __sysctl: Add devuan support (Nico Schottelius)
* Type __start_on_boot: Add devuan support (Nico Schottelius)
* Core: Shorten ssh control path (Darko Poljak)
* Type __consul: Add new version and add http check (Kamila Součková)
* New types: __daemontools and __daemontools_service (Kamila Součková)
* New types: __prometheus_server and __prometheus_alertmanager (Kamila Součková)
* New type: __grafana_dashboard (Kamila Součková)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.4.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.4.2-released/2017-03-08T18:37:50Z2017-03-08T18:37:50Z
<p>Here's a short overview of the changes found in version 4.4.2:</p>
<pre><code>* Core: Fix suppression of manifests' outputs (Darko Poljak)
* Type __user_groups: Support FreeBSD (Andres Erbsen)
* Type __cron: Fix filter for new cron on sles12 sp2 (Daniel Heule)
* Type __docker: Support absent state (Dominique Roux)
* Type __docker_compose: Support absent state (Dominique Roux)
* New type: __hosts (Dmitry Bogatov)
* New type: __dot_file (Dmitry Bogatov)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
ctt 1.1 releasedhttps://www.nico.schottelius.org//blog/ctt-1.1-released/2017-02-16T07:30:20Z2017-02-16T07:30:20Z
<h2>Introduction</h2>
<p><a href="https://www.nico.schottelius.org//software/ctt/">ctt</a> is a time tracking tool for geeks.
It supports project based reporting, custom report formats and
stores its data in a <a href="https://www.nico.schottelius.org//docs/cconfig/">cconfig</a> database.</p>
<h2>Changes</h2>
<pre><code>* Ignore non matching patterns for report command (Darko Poljak)
* Added -s, --summary option (Darko Poljak)
* No args error (Darko Poljak)
* Report project name as file path basename (Darko Poljak)
* Report globbing (Darko Poljak)
</code></pre>
Cdist 4.4.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.4.1-released/2016-12-17T08:47:36Z2016-12-17T08:47:36Z
<p>Here's a short overview of the changes found in version 4.4.1:</p>
<pre><code>* Documentation: Update docs for types that used man.rst as symbolic links (Darko Poljak)
* Type __cron: Remove '# marker' for raw_command due to cron security (Daniel Heule)
* New type: __docker_compose (Dominique Roux)
* Type __apt_mark: Check supported apt version and if package is installed (Ander Punnar)
* New type: __docker (Steven Armstrong)
* New type: __package_dpkg (Tomas Pospisek)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.4.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.4.0-released/2016-12-03T08:54:17Z2016-12-03T08:54:17Z
<p>Here's a short overview of the changes found in version 4.4.0:</p>
<pre><code>* Core: deprecate -d option and make -v option log level counter (Darko Poljak)
* New type: __postgres_extension (Tomas Pospisek)
* Core, types: support IPv6 (Darko Poljak)
* Type __consul: add source and cksum files for Consul 0.7.0 and 0.7.1 (Carlos Ortigoza)
* Type __user: FreeBSD fix (Kamila Souckova)
* New type: __apt_mark (Ander Punnar)
* Type __package_upgrade_all: do not dist-upgrade by default, add apt-clean and apt-dist-upgrade options (Ander Punnar)
* Core: fix target_host vars (Darko Poljak)
* All: Merge install feature from 4.0-pre-not-stable (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
How to install Linux on a Macbook Pro 2016https://www.nico.schottelius.org//blog/linux-on-macbook-pro-2016/2020-05-14T12:52:54Z2016-11-09T13:49:21Z
<p><strong><em>Soon to come</em></strong></p>
<p>Alright - it has been <a href="https://www.nico.schottelius.org//blog/macbook-air-42-archlinux/">a couple of years since I last installed
Linux on an Apple notebook</a>, so it is time again.</p>
<p>I have just ordered the Macbook Pro 2016 (with real keys, because
F1-F10 serve exactly the purpose that Apple is trying to reinvent with a
lot of colours) and plan to install Arch Linux on it in the next weeks.</p>
<p>I expect the
<a href="https://www.nico.schottelius.org//blog/macbook-air-42-correcting-multimedia-key-mapping-and-status/">typical early adopter problems</a> with the Wifi chipset and
maybe trouble with connecting external monitors (not only because it
is Apple, but also because <a href="https://bugs.archlinux.org/task/51508">Intel Skylake support under Linux is pretty
bad at the moment</a>).</p>
<p>So if you are curious about how easy or hard it is - stay tuned and
check out this site again in a couple of days!</p>
<h2>Meanwhile ...</h2>
<p>If you want to discuss about Linux and Apple or other FOSS topics, you
are invited to <a href="https://chat.with.ungleich.ch/">join the chat of ungleich</a>,
the Linux / Unix / FOSS company that I am working for. We have pretty
strong opinions on some topics (not mentioning systemd here) and you
are invited to challenge them :-)</p>
Cdist 4.3.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.3.2-released/2016-10-13T16:53:05Z2016-10-13T16:53:05Z
<p>Here's a short overview of the changes found in version 4.3.2:</p>
<pre><code>* Documentation: Update no longer existing links (Simon Walter)
* Core: Add warning message for faulty dependencies case (Darko Poljak)
* Explorer os_version: Use /etc/os-release instead of /etc/SuSE-release (Daniel Heule)
* Type __package: Call __package_pkg_openbsd on openbsd (Andres Erbsen)
* Type __package_pkg_openbsd: Support --version (Andres Erbsen)
* Type __hostname: Support openbsd (Andres Erbsen)
* New type: __firewalld_start: start/stop firewalld and/or enable/disable start on boot (Darko Poljak)
* Bugfix __consul_agent: config option was misnamed 'syslog' instead of 'enable_syslog' (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.3.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.3.1-released/2016-08-22T16:51:24Z2016-08-22T16:51:24Z
<p>Here's a short overview of the changes found in version 4.3.1:</p>
<pre><code>* Documentation: Spelling fixes (Darko Poljak)
* Type __filesystem: Spelling fixes (Dmitry Bogatov)
* Core: Add target_host file to cache since cache dir name can be hash (Darko Poljak)
* Core: Improve hostfile: support comments, skip empty lines (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.3.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.3.0-released/2016-08-19T19:27:51Z2016-08-19T19:27:51Z
<p>Here's a short overview of the changes found in version 4.3.0:</p>
<pre><code>* Documentation: Add Parallelization chapter (Darko Poljak)
* Core: Add -b, --enable-beta option for enabling beta functionalities (Darko Poljak)
* Core: Add -j, --jobs option for parallel execution and add parallel support for global explorers (currently in beta) (Darko Poljak)
* Core: Add derived env vars for target hostname and fqdn (Darko Poljak)
* New type: __keyboard: Set keyboard layout (Carlos Ortigoza)
* Documentation: Re-license types' man pages to GPLV3+ (Dmitry Bogatov, Darko Poljak)
* New type __filesystem: manage filesystems on devices (Daniel Heule)
* New type: __locale_system (Steven Armstrong, Carlos Ortigoza, Nico Schottelius)
* New type: __sysctl (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.2.2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.2.2-released/2016-07-26T05:52:17Z2016-07-26T05:52:17Z
<p>Here's a short overview of the changes found in version 4.2.2:</p>
<pre><code>* Core: Fix ssh ControlPath socket file error (Darko Poljak)
* Documentation: Update cdist man page and cdist-references (Darko Poljak)
* Documentation: Change cdist and cdist-type__pyvenv man page licenses to GPLv3+ (Darko Poljak)
* Documentation: Add FILES to cdist man page (Darko Poljak)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.2.1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.2.1-released/2016-07-18T18:36:40Z2016-07-18T18:36:40Z
<p>Here's a short overview of the changes found in version 4.2.1:</p>
<pre><code>* Build: Fix signed release (Darko Poljak)
* Build: Fix building docs (Darko Poljak)
* Documentation: Fix man pages (Dmitry Bogatov)
* Documentation: Fix spellings (Dmitry Bogatov)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.2.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.2.0-released/2016-07-16T06:41:17Z2016-07-16T06:41:17Z
<p>Here's a short overview of the changes found in version 4.2.0:</p>
<pre><code>* Build: Make github signed release (Darko Poljak)
* Core: Fix hostdir: use hash instead of target host (Steven Armstrong)
* Core: pep8 (Darko Poljak)
* Documentation: Restructure and fix and improve docs and manpages (Darko Poljak)
* Core: Add files directory for static files (Darko Poljak)
* Custom: Add bash and zsh completions (Darko Poljak)
* Core: Improve error reporting for local and remote run command (Darko Poljak)
* New type: __jail_freebsd9: Handle jail management on FreeBSD <= 9.X (Jake Guffey)
* New type: __jail_freebsd10: Handle jail management on FreeBSD >= 10.0 (Jake Guffey)
* Type __jail: Dynamically select the correct jail subtype based on target host OS (Jake Guffey)
* Explorer __machine_type: add openvz and lxc
* Explorer __os __os_version: add scientific
* Type various: add scientific
* Explorer __machine_type: add virtualbox (Stu Zhao)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Added fgallery to the list of static image gallery generatorshttps://www.nico.schottelius.org//blog/added-fgallery-to-static-gallery-generators/2016-07-04T09:31:40Z2016-07-04T09:13:03Z
<p>For those of you who enjoy static gallery generators,
there is a new item on the
<a href="https://www.nico.schottelius.org//docs/static-image-gallery-generator-comparison/">list of static image gallery generators</a>:
It's called <a href="https://www.thregr.org/~wavexx/software/fgallery/">fgallery</a> and has a fresh
design.</p>
<p>Check it out on <a href="https://www.nico.schottelius.org//docs/static-image-gallery-generator-comparison/">the list</a> or
<a href="https://www.thregr.org/~wavexx/software/fgallery/">give it a try directly</a>.</p>
Cdist 4.1.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.1.0-released/2016-05-27T05:28:12Z2016-05-27T05:28:12Z
<p>Here's a short overview of the changes found in version 4.1.0:</p>
<pre><code>* Documentation: Migrate to reStructuredText format and sphinx (Darko Poljak)
* Core: Add -f option to read additional hosts from file/stdin (Darko Poljak)
* Type __apt_key: Use pool.sks-keyservers.net as keyserver (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.0.0 releasedhttps://www.nico.schottelius.org//blog/cdist-4.0.0-released/2016-05-04T10:29:47Z2016-05-04T10:29:47Z
<p>Here's a short overview of the changes found in version 4.0.0:</p>
<pre><code>* Core: Fix bug with parallel hosts operation when output path is specified (Darko Poljak)
* Type __package_pip: Add support for running as specified user (useful for pip in venv) (Darko Poljak)
* New type: __pyvenv: Manage python virtualenv (Darko Poljak)
* Core: Add CDIST_REMOTE_COPY/EXEC env variables and multiplexing options for default scp/ssh (Darko Poljak)
* Types: Remove bashisms in scripts (Darko Poljak)
* Core: Fix bug in remote command with environment (Darko Poljak)
* Core: Fix bug in local code execution (Darko Poljak)
* Documentation: Fix spelling in manual pages (Dmitry Bogatov)
* New type: __pacman_conf: Manage pacman.conf (Dominique Roux)
* New type: __pacman_conf_integrate: cdist compatible pacman.conf (Dominique Roux)
* Type __consul: Do not install unused package unzip (Steven Armstrong)
* Type __consul: Add source & cksum for 0.5.2 (Steven Armstrong)
* Core: Support object ids '.cdist' (Nico Schottelius)
* Type __apt_norecommends: Also setup autoremove options (Dmitry Bogatov)
* Type __user_groups: Add NetBSD support (Jonathan A. Kollasch)
* Type __timezone: Add NetBSD support (Jonathan A. Kollasch)
* Type __ccollect_source: Add NetBSD support (Jonathan A. Kollasch)
* Type __directory: Add NetBSD support (Jonathan A. Kollasch)
* Type __file: Add NetBSD support (Jonathan A. Kollasch)
* Type __group: Add NetBSD support (Jonathan A. Kollasch)
* Type __consul: Add new consul versions (Nico Schottelius)
* Type __apt_ppa: Do not install legacy package python-software-properties (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
How to change the colour of ls to work with bright terminal backgroundshttps://www.nico.schottelius.org//blog/change-colour-for-ls-to-work-with-bright-terminal-background/2016-02-25T13:34:32Z2015-06-01T18:24:23Z
<h2>Introduction</h2>
<p>I am using <a href="http://software.schmorp.de/pkg/rxvt-unicode.html">rxvt-unicode</a>
as my terminal and prefer to use a bright background due to better
readability in the sun.</p>
<p>I have tried various colour themes
(including
<a href="http://ethanschoonover.com/solarized">solarized</a> and
<a href="https://github.com/yangzetian/xresources-color-solarized-light">solarized-light</a>),
but was never satisfied because of the changes
they make to existing colour configurations for applications
like mutt, alot or irssi.</p>
<p>I am essentially using black as the foreground colour and
LightYellow2 as the background colour at the moment (which I inherit
from some very old xterm + <a href="http://www.fvwm.org/">fvwm</a>2 settings).</p>
<h2>Motivation</h2>
<p>The problem with my current setup is that symbolic links are
shown in <strong>cyan</strong> by <strong><em>ls</em></strong> on my system and thus are pretty much
unreadable, as you can see:</p>
<p><a href="https://www.nico.schottelius.org//blog/change-colour-for-ls-to-work-with-bright-terminal-background/urxvt-before.png"><img src="https://www.nico.schottelius.org//blog/change-colour-for-ls-to-work-with-bright-terminal-background/urxvt-before.png" width="629" height="98" alt="urxvt with unreadable cyan colour" class="img" /></a></p>
<h2>The solution</h2>
<p>As the problem mainly arises from the use of ls, I initially thought
about modifying the <strong>LS_COLORS</strong> variable. However, as I frequently
log in to servers, that variable does not affect the colour output
on the servers (and modifying AcceptEnv on all servers is not
realistic either).</p>
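<p>For reference, the ls-side approach I rejected would have been a one-line override of the symlink entry (<code>ln</code>) in <strong>LS_COLORS</strong>, here set to regular black (<code>00;30</code>, a standard ANSI SGR code) - a minimal sketch of that alternative, which only affects the local shell:</p>

```shell
# Override only the symlink ("ln") entry of LS_COLORS; 00;30 = regular black.
# GNU ls picks this up when colour output is enabled, but the setting
# does not travel across ssh logins, which is why I abandoned this route.
export LS_COLORS="ln=00;30"
ls --color=auto
```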
<p>As I do not want to have cyan output on my LightYellow2 background
at all, I thought about changing the colour cyan to black.</p>
<p>I found a nice colour table on <a href="https://wiki.gentoo.org/wiki/Rxvt%2Dunicode%23Color%5Ftheme">Rxvt-unicode#Color theme in Gentoo Wiki</a>,
which maps colour numbers to names:</p>
<pre><code>!black
*color0: #251f1f
*color8: #5e5e5e
!red
*color1: #eb4509
*color9: #eb4509
!green
*color2: #94e76b
*color10: #95e76b
!yellow
*color3: #ffac18
*color11: #ffac18
!blue
*color4: #46aede
*color12: #46aede
!magenta
*color5: #e32c57
*color13: #e32c57
!cyan
*color6: #d6dbac
*color14: #d6dbac
!white
*color7: #efefef
*color15: #efefef
</code></pre>
<h2>The result</h2>
<p>So in the end, only the following entries in .Xresources are
required to make cyan symbolic links readable by changing
cyan to black:</p>
<pre><code>URxvt.background: LightYellow2
URxvt.foreground: black
URxvt.color6: black
URxvt.color14: black
</code></pre>
<p>And this is how it finally looks:</p>
<p><a href="https://www.nico.schottelius.org//blog/change-colour-for-ls-to-work-with-bright-terminal-background/urxvt-after.png"><img src="https://www.nico.schottelius.org//blog/change-colour-for-ls-to-work-with-bright-terminal-background/urxvt-after.png" width="612" height="95" alt="urxvt with cyan changed to black" class="img" /></a></p>
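<p>Assuming the snippet lives in ~/.Xresources, it can be loaded into the running X server with xrdb; urxvt instances started afterwards then pick up the changed colours:</p>

```shell
# Merge the colour overrides into the X resource database.
# Only terminals started after this will use the new colours.
xrdb -merge ~/.Xresources
```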
<h2>SEE ALSO</h2>
<ul>
<li><a href="http://ciembor.github.io/4bit/">Terminal Colours</a></li>
<li><a href="https://wiki.archlinux.org/index.php/X%5Fresources">X resources in Arch Linux Wiki</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Rxvt%2Dunicode">Rxvt-unicode in Arch Linux Wiki</a></li>
</ul>
Cdist 3.1.13 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.13-released/2016-02-25T13:34:32Z2015-05-16T15:54:59Z
<p>Here's a short overview of the changes found in version 3.1.13:</p>
<pre><code>* Type __block: Fix support for non stdin blocks (Dominique Roux)
* Type __consul: Install package unzip (Nico Schottelius)
* Type __consul: Add source & cksum for 0.5.1 (Nico Schottelius)
* Type __consul_agent: Use systemd for Debian 8 (Nico Schottelius)
* Type __firewalld_rule: Ensure firewalld package is present (David Hürlimann)
* Type __locale: Support CentOS (David Hürlimann)
* Type __staged_file: Fix comparison operator (Nico Schottelius)
* Type __user_groups: Support old Linux versions (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.12 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.12-released/2016-02-25T13:34:32Z2015-03-19T09:23:50Z
<p>Here's a short overview of the changes found in version 3.1.12:</p>
<pre><code>* Core: Support object ids '.cdist' (Nico Schottelius)
* New type: __firewalld_rule (Nico Schottelius)
* Type __consul_agent: add support for acl options (Steven Armstrong)
* Type __consul_agent: add support for Debian (Nico Schottelius)
* Type __package_apt: Use default parameters (Antoine Catton)
* Type __package_luarocks: Use default parameters (Antoine Catton)
* Type __package_opkg: Use default parameters (Antoine Catton)
* Type __package_pacman: Use default parameters (Antoine Catton)
* Type __package_pip: Use default parameters (Antoine Catton)
* Type __package_pkg_freebsd: Use default parameters (Antoine Catton)
* Type __package_pkg_openbsd: Use default parameters (Antoine Catton)
* Type __package_pkgng_openbsd: Use default parameters (Antoine Catton)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Gluster FOSS development is awesomehttps://www.nico.schottelius.org//blog/glusterfs-foss-development-is-awesome/2016-02-25T13:34:32Z2015-03-07T08:00:20Z
<h2>TL; DR</h2>
<p>Last night I suggested a change to <a href="http://www.gluster.org/">Gluster</a> - by the time I woke up,
my patch had already been incorporated - that is awesome!</p>
<h2>The experience</h2>
<p>After beginning my journey with <a href="http://www.gluster.org/">Gluster</a> some months ago
and my recent blog about
<a href="https://www.nico.schottelius.org//blog/how-to-access-gluster-from-multiple-networks/">How to access gluster from multiple networks</a>,
I had a great experience with glusterfs yesterday:</p>
<p>After <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020945.html">mentioning that I have the same problem</a> as <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020939.html">何亦</a>
and <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020948.html">suggesting a patch</a>,
I had this incredible experience:</p>
<ol>
<li>Someone (<a href="https://twitter.com/nixpanic">@nixpanic</a> aka Niels) suggested preparing it for inclusion</li>
<li>GlusterFS has <a href="http://www.gluster.org/community/documentation/index.php/Simplified_dev_workflow">an organic process for development and testing</a> - seeing the patch going through this process gives me the impression someone cares about code quality</li>
<li>After creating the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1199577">bug report</a>, the process smoothly started</li>
<li>The patch was reviewed within hours</li>
<li>And finally merged into the master branch as well as <a href="http://www.gluster.org/pipermail/gluster-users/2015-March/020955.html">included for glusterfs 3.6.3</a></li>
</ol>
<p>The overall experience as a FOSS <em>contributor</em> can be described by a single word:</p>
<pre><code>A-W-E-S-O-M-E
</code></pre>
<p>I have contributed to many FOSS projects, but this experience is exceptionally great -
thanks for the help everyone and keep up the good work!</p>
<h2>Follow up</h2>
<p>If you find this article interesting, you may want to stay updated by following
me and ungleich on Twitter:</p>
<ul>
<li><a href="https://twitter.com/ungleich">@ungleich</a></li>
<li><a href="https://twitter.com/NicoSchottelius">@NicoSchottelius</a></li>
</ul>
Cdist 3.1.11 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.11-released/2016-02-25T13:34:32Z2015-02-27T13:47:11Z
<p>Here's a short overview of the changes found in version 3.1.11:</p>
<pre><code>* New type: __staged_file: Manage staged files (Steven Armstrong)
* New type: __config_file: Manage configuration files and run code on change (Steven Armstrong)
* New type: __consul: install consul (Steven Armstrong)
* New type: __consul_agent: manage the consul agent (Steven Armstrong)
* New type: __consul_check: manages consul checks (Steven Armstrong)
* New type: __consul_reload: reload consul (Steven Armstrong)
* New type: __consul_service: manages consul services (Steven Armstrong)
* New type: __consul_template: manage the consul-template service (Steven Armstrong)
* New type: __consul_template_template: manage consul-template templates (Steven Armstrong)
* New type: __consul_watch_checks: manages consul checks watches (Steven Armstrong)
* New type: __consul_watch_event: manages consul event watches (Steven Armstrong)
* New type: __consul_watch_key: manages consul key watches (Steven Armstrong)
* New type: __consul_watch_keyprefix: manages consul keyprefix watches (Steven Armstrong)
* New type: __consul_watch_nodes: manages consul nodes watches (Steven Armstrong)
* New type: __consul_watch_service: manages consul service watches (Steven Armstrong)
* New type: __consul_watch_services: manages consul services watches (Steven Armstrong)
* New Type: __rsync (Nico Schottelius)
* Type __start_on_boot: Support Ubuntu upstart (Nico Schottelius)
* Type __timezone: Added support for FreeBSD (Christian Kruse)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
How to access gluster from multiple networkshttps://www.nico.schottelius.org//blog/how-to-access-gluster-from-multiple-networks/2016-02-25T13:34:32Z2015-02-13T10:34:42Z
<h1>TL;DR</h1>
<p>Create volumes name-based instead of IP-based:</p>
<pre><code>gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
</code></pre>
<p>instead of</p>
<pre><code>gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster
</code></pre>
<p>And have the names point to different IP addresses.</p>
<h2>The setup</h2>
<p>The basic setup (in our case) looks like this:</p>
<pre><code>---------------------------------
| Clients / Users |
---------------------------------
|
|
--------------------------------- ---------------------------------
| frontend (with opennebula) | ---| vmhost1 with glusterfs |
--------------------------------- / ---------------------------------
| / eth0 eth1
|-------------------------< ||
\ eth0 eth1
\ ---------------------------------
---| vmhost2 with glusterfs |
---------------------------------
</code></pre>
<p>The frontend running <a href="http://www.opennebula.org">Opennebula</a> connects to
<strong>vmhost1</strong> and <strong>vmhost2</strong> using their public interfaces.</p>
<p>The gluster bricks running on the vm hosts are supposed to communicate
via eth1, so that the traffic for <a href="http://www.gluster.org/">Gluster</a> does not influence
the traffic of the virtual machines to the Internet. The gluster filesystem
of the vm hosts is only intended to be used by the virtual machines running
on those two hosts - an isolated cluster. Thus the volume was initially created
like this:</p>
<pre><code>gluster volume create xfs-plain replica 2 transport tcp 192.168.0.1:/home/gluster 192.168.0.2:/home/gluster
</code></pre>
<h2>The problem</h2>
<p>However, the frontend requires access to the gluster volume, because
<a href="http://www.opennebula.org">Opennebula</a> needs to copy and import the VM image into the gluster
datastore. Even though the <em>glusterd</em> process listens on any IP address,
the volume contains the information that it runs on 192.168.0.1
and 192.168.0.2 and is thus not reachable from the frontend.</p>
<h2>Using name based volumes</h2>
<p>The frontend can reach the vm hosts via <strong>vmhost1</strong> and <strong>vmhost2</strong>,
which resolves to their <strong>public IP addresses</strong> via DNS.</p>
<p>On the vm hosts we created entries in <strong>/etc/hosts</strong> using <a href="http://www.nico.schottelius.org/software/cdist/">cdist</a>
that look as follows:</p>
<pre><code>192.168.0.1 vmhost1
192.168.0.2 vmhost2
</code></pre>
<p>Now we re-created the volume using</p>
<pre><code>gluster volume create xfs-plain replica 2 transport tcp vmhost1:/home/gluster vmhost2:/home/gluster
gluster volume start xfs-plain
</code></pre>
<p>And it correctly shows up in the volume info:</p>
<pre><code>% gluster volume info
Volume Name: xfs-plain
Type: Replicate
Volume ID: fe45c626-c79d-4e67-8f19-77938470f2cf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vmhost1-cluster1.place4.ungleich.ch:/home/gluster
Brick2: vmhost2-cluster1.place4.ungleich.ch:/home/gluster
</code></pre>
<p>And now we can mount it successfully on the frontend using</p>
<pre><code>% mount -t glusterfs vmhost2:/xfs-plain /mnt/gluster
</code></pre>
How to show the latest git taghttps://www.nico.schottelius.org//blog/how-to-show-the-latest-git-tag/2016-02-25T13:34:32Z2015-02-11T10:21:16Z
<h2>TL;DR</h2>
<p>If you want to show the name of the latest tag, use:</p>
<pre><code>git for-each-ref --sort=-taggerdate --count=1 --format '%(tag)' refs/tags
</code></pre>
<h2>Some background</h2>
<p>The command <em>git for-each-ref</em> is pretty useful if you want to find something
in a number of commits (or tags in this case). It allows you to use variations of
output and sorting methods. For example:</p>
<h3>Show all tags including the message</h3>
<pre><code>git for-each-ref --format '%(refname) %(contents:subject)' refs/tags
</code></pre>
<p>(this is, by the way, very similar to <strong>git tag -n1</strong>)</p>
<h3>Show all tags sorted by date, oldest first</h3>
<pre><code>git for-each-ref --sort=taggerdate --format '%(refname)' refs/tags
</code></pre>
<h3>Show all tags sorted by date, newest first</h3>
<pre><code>git for-each-ref --sort=-taggerdate --format '%(refname)' refs/tags
</code></pre>
<h3>Show latest two tags</h3>
<pre><code>git for-each-ref --count=2 --sort=-taggerdate --format '%(refname)' refs/tags
</code></pre>
<h3>Show latest tag with its non-ambiguous short name</h3>
<pre><code>git for-each-ref --sort=-taggerdate --count=1 --format '%(refname:short)' refs/tags
</code></pre>
<h3>Show latest tag</h3>
<pre><code>git for-each-ref --count=1 --sort=-taggerdate --format '%(tag)' refs/tags
</code></pre>
<p>(note: this is almost the same as the previous one, but prints the <em>tag</em> field instead of the short ref name)</p>
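<p>A shorter alternative worth knowing is <em>git describe</em>. Note that it picks the most recent annotated tag reachable from HEAD, which is not always the same as the newest tag by tagger date; the throwaway repository below (my own demonstration, under /tmp) shows both commands agreeing in the simple case:</p>

```shell
# Create a throwaway repository with one annotated tag, just to demonstrate
rm -rf /tmp/tagdemo && git init -q /tmp/tagdemo && cd /tmp/tagdemo
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'initial commit'
git tag -a v1.0 -m 'first release'

# Name of the newest tag, via for-each-ref as above ...
git for-each-ref --sort=-taggerdate --count=1 --format '%(tag)' refs/tags

# ... and via git describe (newest annotated tag reachable from HEAD)
git describe --tags --abbrev=0
```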
Cdist 3.1.10 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.10-released/2016-02-25T13:34:32Z2015-02-10T22:01:54Z
<p>Here's a short overview of the changes in version 3.1.10:</p>
<pre><code>* Core: Fix too many open files bug (#343)
* Type __ssh_authorized_keys: Remove unneeded explorer (Steven Armstrong)
* Type __ssh_authorized_keys: Fix empty output bug of entry explorer (Steven Armstrong)
* Type __package_apt: Add support for --target-release
* Type __locale: Add support for Ubuntu
* Type __group: Rewrite (Steven Armstrong)
* Documentation: Fix typo in maintainer file (Stephan Kulla)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Article on learning organizations importedhttps://www.nico.schottelius.org//neuigkeiten/lernende-organisationen-importiert/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>Another piece of the web tidied up: a further part of my old
website <a href="http://nico.schotteli.us">nico.schotteli.us</a> has been
imported into this one:</p>
<p>As of today, the article on
<a href="https://www.nico.schottelius.org//dokumentationen/lernende_organisationen-learning_organizations/">learning organizations</a>
has moved. The old address was</p>
<ul>
<li> http://nico.schotteli.us/papers/examinations/la-ha2004/personal-learning-organization.*</li>
</ul>
<p>the new one is</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//dokumentationen/lernende_organisationen-learning_organizations/">http://www.nico.schottelius.org/dokumentationen/lernende organisationen-learning organizations</a>.</li>
</ul>
Matrices, vectors and complex numbershttps://www.nico.schottelius.org//neuigkeiten/matrizen-vektoren-komplexe-zahlen/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>Dear teachers, professors, mathematicians,</p>
<p>during my studies at the <a href="http://www.hsz-t.ch">HSZ-T</a>, one thing
struck me while learning complex numbers for the first time and
revisiting matrices:</p>
<pre><code>Complex numbers are vectors written "differently", and matrices are just a bunch of vectors.
</code></pre>
<h2>Vectors</h2>
<p>In my view, vectors are the foundation of all three topics and can
be described very simply:</p>
<pre><code>Vectors are arrows in space.
</code></pre>
<p>That is all there is to it. Everything else can be worked out by thinking
it through (perpendicularity, forming planes, constructing spaces, ...).</p>
<h2>Complex numbers</h2>
<p>Complex - and already beads of sweat appear on the student's forehead - numbers:
a topic that, in my opinion, is placed far too late in the mathematics
curriculum. <strong>Complex</strong> is, on top of that, a particularly fine word for
scaring people off, given that according to <a href="http://de.wiktionary.org/wiki/komplex">Wiktionary</a> it means</p>
<pre><code>intertwined, interconnected, comprehensive, multi-layered
</code></pre>
<p>Who does not feel some initial apprehension towards a multi-layered
topic they know nothing about?</p>
<p>Wouldn't it be much simpler, dear readers, to just write:</p>
<pre><code>A complex number is a vector expressed as length (magnitude) and angle.
</code></pre>
<p>That would also clear up the "hocus-pocus" of <strong>why a square can equal -1</strong>:</p>
<pre><code>Since i corresponds to an angle of 90° (the point 0/1), i^2 is 180° and therefore (-1/0).
</code></pre>
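<p>The rotation view of i can be checked directly; a quick sketch using Python's standard <code>cmath</code> module (the choice of tool is mine, not from the original article):</p>

```python
import cmath
import math

# In polar form, i has magnitude 1 and an angle of 90 degrees (pi/2)
magnitude, angle = cmath.polar(1j)
print(magnitude, round(math.degrees(angle)))  # 1.0 90

# Squaring doubles the angle: 90° + 90° = 180°, i.e. the point (-1, 0)
print((1j) ** 2)  # (-1+0j)
```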
<h2>Matrices</h2>
<p>Matrices are, in essence, just tables of n-dimensional vectors.</p>
<pre><code>A matrix is a collection of vectors.
</code></pre>
<p>Depending on the interpretation, or on how you turn them, these are
column or row vectors. Beyond the "extra fun", however, matrices too
are nothing but vectors.</p>
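<p>The "collection of vectors" view can be made concrete with a small sketch in plain Python (my own illustration, not from the original article): a matrix stored literally as a list of row vectors, so that matrix-vector multiplication falls out as one scalar product per row:</p>

```python
def dot(u, v):
    """Scalar product of two vectors of equal length."""
    return sum(a * b for a, b in zip(u, v))

# A matrix represented literally as a collection of (row) vectors
matrix = [
    [1, 2],  # first row vector
    [3, 4],  # second row vector
]

# Multiplying the matrix by a vector is just one dot product per row vector
v = [5, 6]
print([dot(row, v) for row in matrix])  # [17, 39]
```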
<h2>Less magic, more fun</h2>
<p>To return to the original topic: dear teachers, whether at primary
school, at university or anywhere in between:</p>
<pre><code>Why don't you clear up this whole muddle?
</code></pre>
<p>And give us learners more joy in mathematics, letting both matrices
and complex numbers grow out of vectors?</p>
<pre><code>Connecting to something familiar famously (!) makes learning easier.
</code></pre>
<p>If the paths in the brain were shaped by vectors early on, and matrices
and complex numbers were not new elevated highways but branches off
vectors, we would surely walk them with more confidence.</p>
<h2>That's not quite right...</h2>
<p>Should I, as a learner, have mixed something up completely, please
send me a <a href="https://www.nico.schottelius.org//about/">note</a> so that I can correct this page.</p>
New photos from Rheinfeldenhttps://www.nico.schottelius.org//neuigkeiten/neue-fotos-aus-rheinfelden/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>Some time ago
<a href="http://photo.nico.schottelius.org/rheinfelden.20100807/">these photos were taken in Rheinfelden</a>,
where almost every house bears a name and pretty women wear a hat.</p>
Panter is looking for Ruby on Rails and Java coders (ADVERTISEMENT)https://www.nico.schottelius.org//neuigkeiten/panter-sucht-ruby-und-java-coder/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<h2>Job offer</h2>
<p>As of recently, my web notebook also carries advertising, namely
for a position at <a href="http://www.panter.ch">Panter</a> as a
<a href="http://rubyonrails.ch/doku.php/jobs:ruby-coder-bei-panter">Ruby on Rails coder</a>.
They are also looking for a Java coder.</p>
<h2>Background on job offers in general</h2>
<p>Through my fondness for IT and technology I often meet interesting
people in interesting technical jobs or companies. I am also
frequently asked whether I happen to have a job in mind for someone,
or whether I know competent people. Both the companies and the
people asking are usually quite interesting, but my answer tends
to be</p>
<pre><code>Not at the moment, but I can keep it in mind.
</code></pre>
<p>Since my head can unfortunately only hold so much, I am hereby
outsourcing this part and will start marking interesting people or
positions as news items with the <a href="https://www.nico.schottelius.org//tags/stelle/">tag "Stelle" (job)</a>.</p>
<h2>Background on the Panter offer</h2>
<p>Beat Seeliger is not only part of the Panter company but also a
lecturer at the <a href="http://www.hsz-t.ch">HSZ-T</a>, where I am currently
studying, and he gives me the impression that he understands
what he is talking about (Beat: that should be worth at least a 6 ;-),
which is why I think the position is worthwhile.</p>
Report on the ID's scriptable DNS interface and a short introduction to sexyhttps://www.nico.schottelius.org//neuigkeiten/praesentation-id-dns-schnittstelle-sexy/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>For a short while now, we sysadmins from the Sans project have been
testing the new DNS interface of the
<a href="http://www1.ethz.ch/id/about/sections/kom">ID-Kom</a>
<a href="https://www.nico.schottelius.org//neuigkeiten/sans-und-id-zusammenarbeit-dns-schnittstelle/">together with the ID-Kom</a>
and writing scripts for it.</p>
<p>First results are now available, which we would like to present to
<strong>technically interested</strong> people. One thing we can reveal in advance:
yes, the interface is usable from the (Unix) command line!
Naturally there are still one or two smaller problems here and there,
but remarkably few for a beta phase.</p>
<p>Since the information about the interface is important for the
Department of Computer Science's move into the CAB building, we
decided to hold the presentation this year, namely on</p>
<pre><code>Tuesday, 8 December 2009
</code></pre>
<p>The same presentation includes a short introduction to the sexy
project, which uses the newly created interface.</p>
<p>If you are interested, please sign up on the
<a href="http://doodle.ethz.ch/2m2prf3b3bb65c7e">scheduling page</a> and
send an e-mail to <strong>nico.schottelius</strong> (at) <strong>inf.ethz.ch</strong>.</p>
New mailing list on Puppet at ETHhttps://www.nico.schottelius.org//neuigkeiten/puppet-e-post-verteiler/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>Since several groups at ETH use
<a href="http://reductivelabs.com/products/puppet/">puppet</a> for the automatic
configuration of their machines, I have had the
<a href="http://www.isg.inf.ethz.ch/">ISG D-INFK</a> set up a
<a href="https://lists.inf.ethz.ch/mailman/listinfo/puppet">new mailing list on Puppet</a>.</p>
<p>If you work at ETH and are interested in Puppet development and in
collaborating on this topic, do subscribe to the mailing list and
let us know about your experiences and ideas!</p>
<ul>
<li><a href="https://lists.inf.ethz.ch/mailman/listinfo/puppet">Mailing list on Puppet at ETH</a></li>
</ul>
Travelling by plane with a slalom boardhttps://www.nico.schottelius.org//neuigkeiten/reisen-im-flugzeug-mit-dem-slalombrett/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>On today's trip from Zurich to Hanover I had a few amusing
experiences with my slalom board:</p>
<p>Since airlines are famously not very gentle with checked luggage,
I had considered beforehand taking it along as hand luggage.</p>
<p>Several minutes in the airline's telephone queue the evening before
yielded little insight (essentially only that hold queues are quite
annoying).</p>
<p>After "conveniently" printing the boarding passes at a machine,
I asked at the baggage counter whether I could take the board as
hand luggage. The employee there was not entirely sure and asked me
to clarify it later at the hand-luggage check and, in case of
problems, to ask for the "supervisor".</p>
<p>Arriving there, I described the situation to the cantonal police
officer, whereupon he called in the "supervisor", a woman from the
cantonal police.</p>
<p>She explained to me that I could not take the slalom board as hand
luggage, since I could <strong>beat someone to death</strong> with it. Granted,
I could also do that <strong>with my notebook</strong>, but that was not on the
list of dangerous items.</p>
<p>So she took the slalom board with her and said we would see each
other down at passport control, where she would hand it over.</p>
<p>Said and done: a quarter of an hour later she brought the board by
and handed it to the passport officer. She, in turn, printed a
luggage sticker and stuck it onto the back of the board. Then she
handed it to me with the instruction to put the board into the
luggage cart at the aircraft.</p>
<p>Having got off the bus that took me from the terminal to the aircraft,
I asked the driver where I would find the luggage cart.</p>
<p>He laughed and asked "What for?". After I had explained the situation
to him, he said "Queue up last and sort it out with the cabin
crew."</p>
<p>On board, the friendly flight attendant waved me through, so I never
got around to explaining the situation to her.</p>
<p>The end of the story is that I am sitting in the plane, the slalom
board is in the hand-luggage bin above me, and I have written this
little story.</p>
Sans release: a list of interesting serviceshttps://www.nico.schottelius.org//neuigkeiten/sans-dienstliste-veroeffentlicht/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>Something is stirring on the <a href="https://sans.ethz.ch">Sans website</a>:
as of today, the first version of the
<a href="https://sans.ethz.ch/services/">service list</a> is available.</p>
<p>The idea is to build a compilation of the services around ETH that
are interesting (for sysadmins), so that searching for, and
repeatedly rebuilding, the same services comes to an end.</p>
<p>Data from other sources will extend the currently still modest list
in the near future - so checking back is certainly worthwhile ;-)</p>
New: timeserver on Sans release: a list of interesting serviceshttps://www.nico.schottelius.org//neuigkeiten/sans-timeserver-hinzugefuegt/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
Scriptable DNS interface at ETH: collaboration between Sans and the IDhttps://www.nico.schottelius.org//neuigkeiten/sans-und-id-zusammenarbeit-dns-schnittstelle/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>As part of the Department of Computer Science's upcoming move into
the CAB building and of the sexy project (not yet published), a good
collaboration between <a href="https://www.nico.schottelius.org//eth/sans/">Sans</a> and the
<a href="http://www.id.ethz.ch/about/sections/kom">ID-KOM</a> is taking shape.</p>
<h2>The background</h2>
<p>Since the DHCP configuration, the inventory management, and the
installation as well as the configuration of a machine can already
be automated without problems, only one last step remains manual:
the <strong><em>DNS configuration</em></strong>.</p>
<p>The currently used program <strong>NetIP</strong>, a Java applet with an
Oracle Forms backend, is very error-prone, hard to use
and, above all, one thing it is not: <strong><em>automatable</em></strong>.</p>
<h2>The current situation</h2>
<p>The situation is clear: in many places at ETH, time is lost and
unnecessary mistakes are made. Both the sysadmins and the ID-KOM
are aware of this.</p>
<p>That is why the ID-KOM is working on providing a new automatable
interface, a so-called "web service".</p>
<p>This interface will be available as part of the
<a href="https://www.komcenter.ethz.ch">Komcenter</a> (only reachable from
within ETH).</p>
<h2>Joint development</h2>
<p>It is not merely pleasing that the ID-KOM has already started
development; it is very pleasing indeed that they want to work
together with their customers, the sysadmins.</p>
<p>We hope this brings advantages to both sides:</p>
<ul>
<li>The ID's interface is tested by future customers already during
development, so that pitfalls can be found and removed
early on.</li>
<li>To test the interface, scripts are developed by Sans.
The resulting scripts can afterwards be used by other
sysadmins and ISGs.</li>
</ul>
<h2>Outlook</h2>
<p>Parts of the interface and of the scripts are available already
today, but neither is ready for production use yet. Script
development on the Sans side has high priority, and development at
the ID is progressing as well - though, owing to a lack of resources
at the ID, more slowly than desired. From the Sans side, we strongly
recommend increasing the ID-KOM's resources, so that further ID
services can also be equipped with scriptable interfaces.
Practically all ETH units could profit from such an increase.</p>
<h2>Summary</h2>
<p>Development of the interface has begun on both sides; the
collaboration between the ID and the sysadmins is a particularly
positive sign. News on this topic will be announced here and on the
mailing list of <a href="https://www.nico.schottelius.org//eth/sans/">Sans</a>.</p>
A huge collection of plant photos by Thilo Schotteliushttps://www.nico.schottelius.org//neuigkeiten/thilos-grandiose-pflanzenseite/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>It has been quite a while since my father and I
<a href="http://www.rrc-octopus.info/">built a website</a> together.
Geographically separated, each of us went on developing websites in
his own way: my father concentrated on my parents' website at
<a href="http://heike-und-thilo.schottelius.net/">heike-und-thilo.schottelius.net</a>.
Out of the initially modest
<a href="http://heike-und-thilo.schottelius.net/natur/natur.html">plant section</a>,
not just a huge but a magnificent collection of plant photos has
grown, which I hereby recommend to every lover of plant photography.</p>
<p>Warm greetings to the north, Dad!</p>
<ul>
<li><a href="http://heike-und-thilo.schottelius.net/natur/natur.html">Thilo's plant page</a></li>
</ul>
Moving machines from inf.ethz.ch to ethz.chhttps://www.nico.schottelius.org//neuigkeiten/verschiebung-der-rechner-von-inf.ethz.ch-nach-ethz.ch/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>At the <a href="http://www.inf.ethz.ch">Department of Computer Science</a> it is
possible to use the DNS domain <strong><em>inf.ethz.ch</em></strong>, which yields, for
example, the host name <strong><em>sgn-x200-01.inf.ethz.ch</em></strong>.</p>
<p>Unfortunately, the subdomain concept is not applied consistently:</p>
<ul>
<li>The DHCP servers usually return <strong><em>ethz.ch</em></strong> as the domain.</li>
<li>The VPN server likewise hands out <strong><em>ethz.ch</em></strong>.</li>
<li>Machines in the domain <strong><em>inf.ethz.ch</em></strong> usually have an alias in <strong><em>ethz.ch</em></strong> (sadly not always).</li>
<li>Some machines exist only in <strong><em>ethz.ch</em></strong>.</li>
</ul>
<p>This frequently leads to the problem that</p>
<pre><code>ssh hostname
</code></pre>
<p>fails, because one currently has <strong><em>ethz.ch</em></strong> as the domain or search
path, while the machine is only registered below <strong><em>inf.ethz.ch</em></strong>.</p>
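<p>The failure mode comes from the resolver's search path: with a configuration along these lines (an illustrative sketch with a placeholder nameserver, not the actual ETH setup), the libc appends only <em>ethz.ch</em> to bare host names:</p>

```
# /etc/resolv.conf - illustrative example
# A bare "ssh sgn-x200-01" is tried as sgn-x200-01.ethz.ch only,
# so a machine registered solely below inf.ethz.ch is never found.
search ethz.ch
nameserver 192.0.2.1   # placeholder from the RFC 5737 documentation range
```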
<h2>DNS search path and DHCP</h2>
<p>The <a href="http://www.faqs.org/rfcs/rfc2131.html">DHCP RFC</a> and the
<a href="http://www.faqs.org/rfcs/rfc2132.html">DHCP options RFC</a> specify
no way to define a search path. The workaround of handing out two
names as the domain, for example</p>
<pre><code>option domain-name "inf.ethz.ch ethz.ch";
</code></pre>
<p>in <a href="https://www.isc.org/software/dhcp">ISC DHCP</a>, has two drawbacks:</p>
<ul>
<li>It does work with some DHCP clients, which turn it into a
search path, but it is not a standard.</li>
<li>It does not work with Windows (which then assumes the domain is
"<strong>inf.ethz.ch ethz.ch</strong>" (with the space!)).</li>
</ul>
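<p>For completeness: a DNS search list option was standardized later in RFC 3397 (DHCP option 119). Recent ISC dhcpd versions can hand it out as shown below, although client support varies; this is a sketch of the option syntax, not a configuration tested against the setup described here:</p>

```
# dhcpd.conf - RFC 3397 domain search list (option 119)
option domain-search "inf.ethz.ch", "ethz.ch";
```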
<h2>Duplicate naming</h2>
<p>Since manually maintaining DNS names (A/PTR) and their aliases
(CNAME) is quite error-prone, and no automated way exists at ETH to
register a name together with an alias, it makes sense to stop the
duplicate naming.</p>
<h2>Common practice at the ID</h2>
<p>In conversations with the ID I described my problem. There, people
were rather surprised by it and asked me what I was using
<strong><em>inf.ethz.ch</em></strong> for in the first place; it was normal, they said,
to register host names only below <strong><em>ethz.ch</em></strong>.</p>
<h2>Host names only below <strong><em>ethz.ch</em></strong></h2>
<p>I do find the idea of subdomains very good, but given the current
situation at ETH it does not seem sensible to pursue the concept:
it results in avoidable extra work and extra mistakes, both of which
disappear when host names are registered only below <strong><em>ethz.ch</em></strong>.</p>
<h2>The switch</h2>
<p>A few days ago I started moving the existing machines within the
Systems Group into the domain <strong><em>ethz.ch</em></strong>; I will push this further
during the move into the CAB, and from now on I will register new
machines only below <strong><em>ethz.ch</em></strong>.</p>
wasserkuppe 2010.pdfhttps://www.nico.schottelius.org//neuigkeiten/jiu-jitsu-lehrgang-wasserkuppe-2010/wasserkuppe_2010.pdf2016-02-25T13:34:32Z2015-02-03T14:47:44ZWikiversity - the virtual university on the Internethttps://www.nico.schottelius.org//neuigkeiten/wikiversity-die-virtuelle-universitaet-im-internet/2016-02-25T13:34:32Z2015-02-03T14:47:44Z
<p>While searching for a particular Java book today, I came across
<strong>a</strong>
<a href="http://de.wikibooks.org/wiki/Java">Java book</a> on the
<a href="http://de.wikibooks.org/">Wikibooks</a> site.</p>
<p>Until now I was not aware that, besides
<a href="http://de.wikipedia.org/">Wikipedia</a> and
<a href="http://de.wiktionary.org/">Wiktionary</a>,
there is also a site with books.</p>
<p>And not only that: the learning platform
<a href="http://de.wikiversity.org">Wikiversity</a> was also unknown to me
until today.</p>
<p>A particularly interesting approach there are the courses, such as
<a href="http://de.wikiversity.org/wiki/Kurs:Programmierung_in_Java">this Java course</a>.</p>
<p>In my opinion, a course can tackle exactly the problems that
otherwise occur when teaching yourself from a book:</p>
<ul>
<li>Problems and questions can be discussed with others</li>
<li>The tutor can hand out the definitively correct answers</li>
<li>A good learning structure is in place</li>
</ul>
<p>I have not looked at much of Wikiversity yet, but it seems like a
rather interesting approach - not necessarily as a replacement for,
but as a complement to, real universities.</p>
Firmware update at Bellevuehttps://www.nico.schottelius.org//neuigkeiten/bellevue-firmware-download/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Seen this morning at Bellevue in Zurich:</p>
<p><img src="https://www.nico.schottelius.org//news/photo-firmware-bellevue.jpg" title="Firmware download Bellevue" alt="firmware download" /></p>
The !eof planet lives!https://www.nico.schottelius.org//neuigkeiten/der-eof-planet-lebt/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>It had been somewhat quiet on the <a href="http://planet.eof.name">!eof planet</a> for a while,
because the web server was no longer adjusted after a migration.</p>
<p>As of today, the planet is running again and is being filled with content!</p>
A look at languagehttps://www.nico.schottelius.org//neuigkeiten/ein-blick-auf-die-sprache/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>When I do something very often, after a while I come to think that
I understand what I am doing. Otherwise I change it, or I stop
pursuing the activity.</p>
<p>Once I understand something, I notice deviations in other people
and can weigh critically whether, from my point of view, they are
doing something better or worse.</p>
<p>Since German is my mother tongue, and I have been speaking and
understanding this language for over two decades, I have become
quite critical of how it is used.</p>
<p>This means, however, that every day I notice many deviations from
my own way.</p>
<p>And because I believe that mistakes are good to learn from, I have
resolved to publish, in the near future, a few suggestions for
improvement based on practical examples.</p>
<p>Those interested in the German language may look forward to the
articles carrying the <a href="https://www.nico.schottelius.org//tags/sprache/">tag "Sprache" (language)</a>.</p>
Electrical engineering help at Elektronik-Kompendium.dehttps://www.nico.schottelius.org//neuigkeiten/elektrotechnik-hilfe-elektronik-kompendium.de/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Since I will soon be dealing a bit with
<a href="https://ssl.hsz-t.ch/drupal/ebs/node/54?id_occasion=82">electrical engineering</a>
in my studies, and a few open questions remained from the
preparatory course, today I searched the Internet a bit for a good
explanation of
<a href="http://de.wikipedia.org/wiki/Kirchhoffsche_Regeln">Kirchhoff's circuit laws</a>.</p>
<p>I <a href="http://www.elektronik-kompendium.de/sites/grd/0608011.htm">found one</a>
at
<a href="http://www.elektronik-kompendium.de">Elektronik-Kompendium.de</a>.</p>
<p><a href="http://www.elektronik-kompendium.de/service/impressum.htm">Patrick Schnabel</a>
asks for a link if you like the site - a request I am happy to
comply with here.</p>
<p>For me this is another case where I would be glad if there were a
decent
<a href="http://de.wikipedia.org/wiki/Micropayment">micropayment option</a>,
because I would gladly have paid the author 50 rappen for the
article.</p>
First content in the German-language sectionhttps://www.nico.schottelius.org//neuigkeiten/erster-inhalt-im-deutschen-bereich/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>As of today, the first content has appeared in the German-language
section of this website: a page with <a href="https://www.nico.schottelius.org//verweise/">links</a> to
other sites, as was customary in the 90s.</p>
<p>I still find it interesting when people whose software I use, or
whose texts I read, keep such lists. They often lead me to other
interesting sites, where I find even more interesting programs or
documents.</p>
<p>And in that same tradition, I now want to give the readers of my
website the chance to <a href="https://www.nico.schottelius.org//verweise/">"read on"</a> elsewhere.</p>
ETH blogs support RSS feed importhttps://www.nico.schottelius.org//neuigkeiten/eth-blog-importiert-rss-feeds/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Thanks to the excellent work of the
<a href="http://blogs.ethz.ch/uber-uns/">ID-Basisdienste and NET</a> teams,
my ETH-related pages are now made visible on the
<a href="http://blogs.ethz.ch/ns/">ETH blog</a>.</p>
<p>Thank you very much!</p>
The ETH equipment exchange - an experimenthttps://www.nico.schottelius.org//neuigkeiten/eth-geraeteboerse/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Here at ETH I frequently face the problem that we no longer need
certain devices, because they have no more use in research, even
though they still work. Selling devices externally is bound to
certain
<a href="https://www.fc.ethz.ch/docs/vermoegen/inventarwesen.pdf">rules</a>
and causes high administrative overhead.</p>
<p>In general, the selling or passing on of working devices does not
seem very well established at ETH.
All the happier I was when a little bird told me about the
<a href="https://www.geraeteboerse.ethz.ch/">ETH equipment exchange</a>. I had
already discovered the exchange once a while ago, but it looked
rather dead.</p>
<p>After a phone call with Mr Rysler, however, it became clear to me:
appearances are deceptive. The equipment exchange is simply not (yet)
widely known within ETH, which is also why I am writing this note,
asking readers to spread the information:</p>
<pre><code>There is a way to pass on devices within ETH!
</code></pre>
<p>It is the usual principle: the more people know about it, the more
can take part, the better it works. As a test I have just listed a
Pentium 4 machine; I am curious whether it will find a taker!</p>
The ETH equipment exchange workshttps://www.nico.schottelius.org//neuigkeiten/eth-geraeteboerse-funktioniert/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>After <a href="https://www.nico.schottelius.org//neuigkeiten/eth-geraeteboerse/">reporting on the ETH equipment exchange</a>
a while ago, I can already report positive experiences:</p>
<p>Two Pentium 4 machines have been sold, and slowly people have
actually heard of the equipment exchange before.</p>
<p>I will soon list a few older, unused notebooks and further desktop
machines on it.</p>
<p>The assumption that selling devices would cause high administrative
overhead has not been confirmed so far. We have defined a simple
internal workflow that has turned out to be quite practical:</p>
<ul>
<li>The buyer pays for the item in cash against a copy of the receipt,</li>
<li>the sysadmin passes the original receipt with the inventory number and the money
on to the secretary's office, and</li>
<li>the secretary's office books the money onto the correct accounts and archives the
sales receipt.</li>
</ul>
<p>And the best part: it is a triple win:</p>
<ul>
<li>The buyer gets a usable device cheaply,</li>
<li>I do not have to dispose of the device, and</li>
<li>ETH gets back part of the acquisition costs!</li>
</ul>
Sans and Puppet at ETHhttps://www.nico.schottelius.org//neuigkeiten/eth-sans-projekt-und-puppet-veroeffentlicht/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Today the first information about the two ETH projects
<a href="https://www.nico.schottelius.org//eth/sans/">Sans</a> and <a href="https://www.nico.schottelius.org//eth/puppet/">Puppet</a>
was published. More information on each of the projects will follow
shortly.</p>
Puppet module ethz_dinfk_network publishedhttps://www.nico.schottelius.org//neuigkeiten/ethz_dinfk_network-puppet-modul-veroeffentlicht/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>The module
<a href="http://git.sans.ethz.ch/?p=puppet-modules/ethz_dinfk_network;a=summary">ethz_dinfk_network</a>
appends the domain "inf.ethz.ch" to the existing search path in
/etc/resolv.conf via the dhclient configuration.</p>
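<p>On the client side, such an addition typically looks like this in dhclient.conf; this is a sketch of the general mechanism, not the module's actual file:</p>

```
# /etc/dhclient.conf - append inf.ethz.ch to whatever search path
# the DHCP server hands out (note the leading space in the string)
append domain-name " inf.ethz.ch";
```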
<p>Like the
<a href="https://www.nico.schottelius.org//blog/published-efsh-puppet-module/">many</a>
<a href="https://www.nico.schottelius.org//blog/published-openntpd-ethz-puppet-module/">other</a>
<a href="https://www.nico.schottelius.org//blog/published-java-prayer-webmail-collectd-puppet-modules/">published modules</a>,
moving it out into its own Git repository is
part of our Puppet clean-up effort.</p>
<p>In this case even doubly so, since we
<a href="https://www.nico.schottelius.org//neuigkeiten/verschiebung-der-rechner-von-inf.ethz.ch-nach-ethz.ch/">no longer use</a> the
<strong>inf.ethz.ch</strong> search path. We cleaned it up and published it
anyway - maybe someone else can still make use of it some day.</p>
Mathematics term paper from my school days publishedhttps://www.nico.schottelius.org//neuigkeiten/facharbeit-mathematik-veroeffentlicht/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Very much in the spirit of the
<a href="https://www.nico.schottelius.org//neuigkeiten/je-tiefer-man-graebt-praktikumsbericht-magrathea/">previous article</a>,
so is this one: again while tidying up, I found an interesting
document, namely the
<a href="https://www.nico.schottelius.org//dokumentationen/facharbeit-mathematik-die-landung-eines-objektes-im-flaechenland/">term paper from my advanced mathematics course on the topic
"The landing of an object in Flatland"</a>.</p>
<p>I think this paper was one of the most interesting of my entire
school days. Especially the "cross-related" topic of the
<a href="http://de.wikipedia.org/wiki/Damenproblem">eight queens problem</a> still played
a big role in the lives of Marcus and me at that time.</p>
<p>With an almost fatal outcome...</p>
IHK Arbeit, Webzugriff auf eine Datenbank via PHP, importierthttps://www.nico.schottelius.org//neuigkeiten/ihk-dokumentation-importiert/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>While tidying up today I "rediscovered" the project
documentation for the IHK project
"<a href="https://www.nico.schottelius.org//dokumentationen/webzugriff-auf-eine-datenbank-via-php/">Webzugriff auf eine
Datenbank via PHP</a>"
(web access to a database via PHP).</p>
<p>The old URL for it was</p>
<ul>
<li> http://nico.schotteli.us/papers/examinations/ihk2004/</li>
</ul>
<p>the new one is</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//dokumentationen/webzugriff-auf-eine-datenbank-via-php/">http://www.nico.schottelius.org/dokumentationen/webzugriff-auf-eine-datenbank-via-php/</a>.</li>
</ul>
Import of the Linux-Magazin article about Monotone and Archhttps://www.nico.schottelius.org//neuigkeiten/import-des-monotone-arch-artikels/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>Today I imported
<a href="https://www.nico.schottelius.org//dokumentationen/linux-magazin-monotone-gnu-arch-tla/">the 2005 article from Linux-Magazin</a>
about the two version control systems
<a href="http://www.monotone.ca/">Monotone</a>
and <a href="http://www.gnu.org/software/gnu-arch/">GNU arch</a>.</p>
<p>Anyone interested in (almost)
<a href="https://www.nico.schottelius.org//dokumentationen/linux-magazin-monotone-gnu-arch-tla/">historical reports on
version control systems</a>
will find them here.</p>
The deeper you dig - the Magrathea internship reporthttps://www.nico.schottelius.org//neuigkeiten/je-tiefer-man-graebt-praktikumsbericht-magrathea/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<pre><code>The deeper you dig, the greater the treasures you can find.
</code></pre>
<p>This is the motto of the find I just made while tidying up:
the <a href="https://www.nico.schottelius.org//dokumentationen/praktikumsbericht-magrathea/">internship report</a>
from the year 2000 brings back fond memories of a great internship at an
equally <a href="http://magrathea.eu/">great company, Magrathea Informatik GmbH</a>.</p>
<p>More than nine years have passed since then, and Magrathea
has seen many changes as well. Still, I believe the team around
<a href="https://www.nico.schottelius.org//dokumentationen/gerd-dreske/">Gerd Dreske</a>
is still taking care of the organisation of a clinic,
if not better than ever.</p>
<pre><code>Net-2.5 greetings from Switzerland and cheers, Gerd!
</code></pre>
Jiu-Jitsu seminar: Wasserkuppe (20-24 October 2010)https://www.nico.schottelius.org//neuigkeiten/jiu-jitsu-lehrgang-wasserkuppe-2010/2016-02-25T13:34:32Z2015-02-03T14:47:43Z
<p>This year the <strong>five-day</strong> seminar on the
Wasserkuppe, organised by Jürgen Kippel, takes place again.</p>
<p>Experience shows it to be a very instructive seminar in rather rustic
accommodation, with plenty of fun off the mat as well.</p>
<p>More information can be found in this <a href="https://www.nico.schottelius.org//neuigkeiten/jiu-jitsu-lehrgang-wasserkuppe-2010/wasserkuppe_2010.pdf">merged document</a>.
The individual parts can be found on <a href="http://www.budo-news.com/">Budo-News</a>.</p>
photo-firmware-bellevue.jpghttps://www.nico.schottelius.org//neuigkeiten/bellevue-firmware-download/photo-firmware-bellevue.jpg2016-02-25T13:34:32Z2015-02-03T14:47:43ZAbout init dependencieshttps://www.nico.schottelius.org//blog/about-init-dependencies/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As I started to hack on <a href="http://unix.schottelius.org/cinit/">cinit</a> again,
I tried to get it running on Debian in a VM.</p>
<p>I took the old configuration from my last computer and tried to boot
with cinit, which failed, because the udev stuff changed.</p>
<p>So I added a udev service, which uses /etc/init.d/udev for switching on.</p>
<p>After booting the VM, I noticed that the service
<strong><em>mount/proc</em></strong> (which mounts /proc...) fails: mount claims that
/proc is already mounted!</p>
<p>This is caused by the udev script, which contains:</p>
<pre><code>237 [ -d /proc/1 ] || mount -n /proc
</code></pre>
<p>Besides the problem of using a sys-v-init script with an
intelligent init system, it is interesting to see <strong>why</strong>
this happens:</p>
<p>The service mount/root needs the device files in /dev
to be able to run mount/root/fsck (the filesystem check).</p>
<p>Thus mount/root requires the service udev to be started before.</p>
<p>The service mount/proc needs the root filesystem writable
in order to write to /etc/mtab, and thus needs mount/root.</p>
<p>This also demonstrates the problem with historically grown
init scripts: they do far more than one job and lack
real dependencies.</p>
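<p>The dependency chain described in this post (udev before mount/root before mount/proc) is exactly what a topological sort resolves. The following sketch is purely illustrative: the service names come from the text above, not from cinit's actual configuration format, and there is no cycle detection.</p>

```python
# Sketch: resolving a service dependency graph with a depth-first
# topological sort. Illustration only; not cinit's real format.

def start_order(deps):
    """Return a service start order honouring 'service -> prerequisites'."""
    order, seen = [], set()

    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in deps.get(svc, []):
            visit(dep)  # start prerequisites first
        order.append(svc)

    for svc in deps:
        visit(svc)
    return order

# udev must run before mount/root (fsck needs /dev),
# mount/root before mount/proc (/etc/mtab must be writable).
deps = {
    "mount/proc": ["mount/root"],
    "mount/root": ["udev"],
    "udev": [],
}

print(start_order(deps))  # udev comes before mount/root, which comes before mount/proc
```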
What went wrong, if accept(2) returns 0?https://www.nico.schottelius.org//blog/accept-returns-0/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Today I continued working on the
<a href="https://www.nico.schottelius.org//blog/ceofhack-ui-support-1/">user interface support</a>
for <span class="createlink">ceofhack</span> and created an interesting
bug, which led to other interesting "features":</p>
<p>The second time a user interface connected to
<a href="http://git.schottelius.org/?p=EOF/ceofhack;a=commit;h=a1a4b17fae050faf3f049b15ee20985c1684f46d">ceofhack-0.5.4-2-ga1a4b17</a>
it hung, and at the same time ceofhack got the input from cmd_ui on
stdin:</p>
<pre><code>Ignoring text (2100) (later versions send this to all peers in channel)
</code></pre>
<p>Digging a bit into the source, I found out that the accept() call in ui_read
returns 0:</p>
<pre><code>ui_handle.c:35:while((nsock = accept(fds[HP_READ], NULL, NULL)) != -1) {
</code></pre>
<p>I've never seen accept returning 0 before. As 0 is the value of
STDIN_FILENO, this explained the strange behaviour, because
<strong>helper_check_input()</strong> found the old stdin handler to be responsible
for that socket (via <strong>helper_find_by_fd()</strong>).</p>
<p>After digging a bit deeper, I found the reason for all the confusion:
<strong>helper_disable()</strong> closed all <strong>four</strong> helper file descriptors, of
which only <strong>two</strong> (HP_READ and HP_WRITE) were initialised by
<strong>helper_fdonly()</strong>. The other two contained the value 0, because
the code is compiled and linked with the gcc debugging option <strong>-g</strong>.</p>
<p>As only those two are used in ceofhack, <strong>helper_disable()</strong> was fixed to
close only those two.</p>
<p>Interesting, which ways debugging may take, isn't it?</p>
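<p>The underlying rule is that POSIX hands out the lowest-numbered free file descriptor, so once fd 0 has been closed, the next descriptor-producing call, accept() included, may well return 0. A minimal sketch of this, using open() on /dev/null instead of a listening socket for brevity (assumes a POSIX system):</p>

```python
import os

# POSIX allocates the lowest-numbered free file descriptor. Once
# stdin (fd 0) is closed, the very next descriptor-producing call
# -- accept(), socket(), open(), ... -- can legitimately return 0.

try:
    os.close(0)  # mirrors helper_disable() closing a descriptor holding 0
except OSError:
    pass         # fd 0 may already be closed in some environments

fd = os.open("/dev/null", os.O_RDONLY)  # stands in for accept() here
print(fd)  # prints 0 on a POSIX system

# any code that now treats fd 0 as "stdin" is talking to the wrong stream
```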
Adobe Source Code Pro font - Nice to read, but not on small sizeshttps://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>A friend of mine pointed me to the
release of the
<a href="https://blogs.adobe.com/typblography/2012/09/source-code-pro.html">Source Code Pro font</a>
from Adobe, which can be found on
<a href="https://github.com/adobe/Source-Code-Pro">github</a>.</p>
<p>I told urxvt to use the font using the escape sequence</p>
<pre><code>printf '\33]50;%s\007' "xft:Source Code Pro"
</code></pre>
<p>which looks like this:</p>
<p><a href="https://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/scp.png"><img src="https://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/scp.png" width="713" height="434" alt="Source Code Pro in a terminal" class="img" /></a></p>
<p>(This required a new urxvt instance to be started, because urxvtd seems
not to pick up font additions at runtime.)</p>
<p>As the resolution on the MacBook Air is "only" 1440x900, I am normally
using the <strong>fixed</strong> font to have a lot of space for the text. This
is how it looks like:</p>
<p><a href="https://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/fixed.png"><img src="https://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/fixed.png" width="665" height="438" alt="Source Code Pro in a terminal" class="img" /></a></p>
<p>Putting both terminals side by side, I am losing 12 rows due to the
size difference.</p>
<p>As the
<a href="https://github.com/adobe/Source-Code-Pro/issues/28">Source Code Pro font is not designed for this small size</a>
I happily continue to use fixed for now.</p>
<p>Besides being too large for my use case, Source Code Pro could become quite an
interesting font for coders.</p>
<p>If you are interested in more usable fonts for terminals,
have a look at the
<a href="https://www.nico.schottelius.org//docs/xorg-terminal-emulator-fonts/">Xorg terminal emulator font list</a>.</p>
Backup and Restore Android Devices without Googlehttps://www.nico.schottelius.org//blog/android-backup-and-restore-without-google/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>If you want to backup and restore your contacts and
calendar entries without going via Google, this blog entry is for you.</p>
<h2>Requirements</h2>
<p>This article only applies to rooted Android devices which have
an ssh server (like
<a href="https://play.google.com/store/apps/details?id=berserker.android.apps.sshdroid&hl=de">SSHDroid</a>)
running.</p>
<h2>Implementation</h2>
<p>As part of a seminar during my studies, I developed a simple shell script
that connects to the device via ssh and copies the relevant
sqlite database off the device or back onto it.</p>
<p>The source can be found in
<a href="http://git.schottelius.org/?p=hszt/rooted_android_backup;a=tree">the git repository</a>.</p>
Announced first ETH sysadmin meetinghttps://www.nico.schottelius.org//blog/announced-first-sans-meeting/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Today
<a href="https://sans.ethz.ch/meetings/2010-05-28/">the announcement of the first /sans/ sysadmin meeting</a>
was published.</p>
<p>If you are a sysadmin working at the ETH, feel free to join us!</p>
Archlinux: One way to create one account for all systemshttps://www.nico.schottelius.org//blog/archlinux-single-authentication-database/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>If you use Arch Linux and more than one of its web tools, you'll have
multiple accounts, because each tool has its own authentication database.</p>
<h2>Motivation</h2>
<p>Having Single-Sign-On (SSO) or at least one account for all systems
would simplify life with the webtools.</p>
<h2>Quick overview</h2>
<p>As of today, I see at least five systems being used:</p>
<ul>
<li><a href="https://bugs.archlinux.org/">bugs/flyspray</a></li>
<li><a href="https://wiki.archlinux.org/">wiki/mediawiki</a></li>
<li><a href="https://aur.archlinux.org/">aur/aur</a></li>
<li><a href="https://bbs.archlinux.org/">forum/fluxbb</a></li>
<li><a href="http://mailman.archlinux.org/mailman/listinfo/">mailinglist/mailman</a></li>
</ul>
<h2>Quick analysis</h2>
<p>In the bugtracker you can see a
<a href="http://bugs.archlinux.org/task/10703">bug</a> in which using
openid is described. <a href="http://openid.net/">OpenID</a> may be an
interesting option, though I see another one that could
be doable.</p>
<h2>Database support</h2>
<p>All applications, with the exception of mailman, are database-backed.
FluxBB supports at least MySQL and PostgreSQL, MediaWiki
supports MySQL and PostgreSQL, Flyspray supports
MySQL and PostgreSQL, and AUR seems to use MySQL
(as seen in <strong><em>support/schema/aur-schema.sql</em></strong>).</p>
<p>Mailman has no database support, but from my point of view
it makes sense to leave mailman separate, as mailman's
primary key is an email address, which may be different
for each mailing list anyway (I'm using a different e-mail address
for every person / mailing list).</p>
<h2>One database, multiple schemas</h2>
<p>All these tools have their own schemas and are not written to
support each other. But there's a very elegant way supported
by PostgreSQL to access different "views" in a read-write
manner: schemas.</p>
<p>PostgreSQL normally has one default schema, named "public".
With PostgreSQL one could create a new database that contains
all the authentication information and map it into the
schemas of the other databases.</p>
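<p>A sketch of what such a mapping could look like: a shared account table in the public schema, exposed into each application's schema as a view. All table and column names here are hypothetical illustrations, not the actual schemas of these applications; the snippet only generates the DDL.</p>

```python
# Sketch of the schema-mapping idea: one shared "userdb" table, exposed
# into each application's schema as a view. All table and column names
# are hypothetical, chosen for illustration only.

APPS = ["flyspray", "mediawiki", "aur", "fluxbb"]

def mapping_sql(app):
    """Generate DDL mapping the shared account table into one app's schema."""
    return "\n".join([
        f"CREATE SCHEMA IF NOT EXISTS {app};",
        # a single-table view over the shared table; in modern PostgreSQL
        # (9.3+) such simple views are automatically updatable
        f"CREATE VIEW {app}.users AS",
        "    SELECT id, login, password_hash, email",
        "    FROM public.userdb;",
    ])

for app in APPS:
    print(mapping_sql(app))
```

<p>On the PostgreSQL versions of that era, the views would additionally need rules or triggers to be writable; today simple single-table views like these are updatable out of the box.</p>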
<h2>This is a proposal</h2>
<p>I'm not good at politics, nor interested in arguing or fighting
for a solution. Instead, I'm making this proposal, which I'm
willing to help with and/or coordinate with the Arch Linux sysadmins.</p>
<h2>One way to do it</h2>
<p>Coming back to the original idea, here's one way to do it:</p>
<h3>Test the proposal</h3>
<ul>
<li>Create a new userdb</li>
<li>Analyse schemas of applications</li>
<li>Create mappings from application schemas to userdb</li>
<li>Verify that applications work</li>
</ul>
<h3>Prepare the migration</h3>
<ul>
<li>Try to import data from current live sources</li>
<li>Fix any collisions</li>
<li>Define what a fix is: Delete or merge or rename or whatever</li>
<li>Ensure that AUR also supports postgresql</li>
</ul>
<h3>Test the migration</h3>
<ul>
<li>Import data from live systems into the new databases</li>
<li>Setup tools on test system to use new database</li>
<li>Verify everything works</li>
</ul>
<h3>Do the migration</h3>
<ul>
<li>Announce migration date</li>
<li>Freeze database</li>
<li>Import data</li>
<li>Change applications to use the databases</li>
<li>Test each application</li>
<li>Allow access from outside again</li>
<li>Announce migration finish</li>
</ul>
<h2>Comments?</h2>
<p>I'm reachable as telmich in #archlinux or <a href="https://www.nico.schottelius.org//about/">the usual way</a>.</p>
Attitude of the FOSS community - a matter of perspectivehttps://www.nico.schottelius.org//blog/attitude-of-the-foss-community-a-matter-of-perspective/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>It is a sunny day in the mountains,
<a href="https://www.google.ch/maps/place/Glarus/@46.9998482,9.0581153,21037m/data=!3m1!1e3!4m2!3m1!1s0x4785318a3a76c851:0x823354d3ed5144b2?hl=en">Glarus</a>.
I am on the train, reading what is happening on the Internet,
reading my private, self-aggregated "newspaper".
<a href="https://plus.google.com/app/basic/stream/z13rdjryqyn1xlt3522sxpugoz3gujbhh04">Lennart is complaining</a>
about how bad the FOSS community is [to him], which made me feel sad
for him. This blog post is devoted to him and to everyone who
feels treated unjustly by the FOSS community.</p>
<h2>A short word about me</h2>
<p>You may not have heard much about me, as I used to keep out of public
discussions for a long time, especially when they are not
<a href="http://en.wiktionary.org/wiki/fruitful">fruitful (2nd meaning)</a>.</p>
<p>I am the maintainer or author of
<a href="https://www.nico.schottelius.org//software/gpm/">gpm</a>, an <a href="https://www.nico.schottelius.org//software/cinit/">init system</a>,
a <a href="https://www.nico.schottelius.org//software/ccollect/">backup utility</a>,
a <a href="https://www.nico.schottelius.org//software/cdist/">configuration management system</a> and
some <a href="https://www.nico.schottelius.org//software/">more software</a>. I am the CEO of
<a href="http://www.ungleich.ch">ungleich</a> and have been active in the FOSS community since 1998.</p>
<p>People who use my software have shining eyes, and usually
contribute to the software in the form of ideas or code, or even
better: in the form of praise, which motivates any FOSS coder.</p>
<p>In short: I love the FOSS community, because they do good to me.</p>
<h2>The FOSS community</h2>
<p>In my opinion the FOSS community is very friendly, open minded
and helpful. It consists of many geeky people,
introverted and extroverted, developers, sysadmins and
also the users.</p>
<p>There are many great tools being developed, beer being drunk,
and heated discussions held when it comes to
<strong>personal opinions</strong>.</p>
<h3>How I experience the FOSS community</h3>
<p>I receive pull requests for the software I have written on a daily basis,
discuss how to integrate them and in which direction we should develop.
(Virtual) friends pass interesting links to me, we compare software and
find out which architecture is superior in which regards to another.</p>
<p>Questions from newbies are mostly answered by the community around
me, and our newbies turn into gurus and begin to answer newbies'
questions (read <a href="http://www.catb.org/esr/faqs/hacker-howto.html">How To Become A Hacker</a> and <a href="http://www.catb.org/esr/faqs/smart-questions.html">How To Ask Questions The Smart Way</a>
to understand how it works).</p>
<p>The alert reader may have noticed I am not talking about
the Linux community as Lennart does. Why not?
Because in my opinion there is <strong>much more than just Linux</strong>
in the FOSS community.</p>
<h2>Linux is a small part of the existing FOSS</h2>
<p>Linux is one successful FOSS project, but everyone in the
FOSS community knows that there is much more than just Linux.
The BSDs are developing pretty cool architectures and great
software (like <a href="http://www.openssh.com/">OpenSSH</a>) and in my opinion there is a
great potential to work more closely together. I am not going to name
all great software here, however if you are coming from a Linux only world,
it may be helpful to know there are other worlds that form the FOSS community.</p>
<p>Every time I develop software, I have to take care to make it portable
to other Unices. I think it is rather closed-minded to focus on Linux-only
software, especially when a great standard
(<a href="http://pubs.opengroup.org/onlinepubs/9699919799/">The Open Group Base Specifications Issue 7</a>) exists that allows us to easily write portable software.</p>
<p>So in short, if you consider Linux to be the only platform
to develop for and do not respect the work of others (there is
a reason for POSIX), people may not respect you either.</p>
<h2>What goes around, comes around</h2>
<p>You probably have heard of the idiom <strong>what goes around, comes around</strong>;
it is rather old (wikipedians: citation needed) and it means "the way
you behave towards other people will be reflected back by them; it is how
they will behave towards you".</p>
<p>So if you are yelling at someone, trying to force someone to do something
or rant about other's software, you are likely to experience this as well.</p>
<h2>A question of life and attitude</h2>
<p>Dear reader, did you ever have a (car) accident? Or have you watched one?
You may have noticed that the two involved parties usually have a
completely different picture of what just happened.
This is true for every situation in life: how you see a situation depends
on your point of view, on your situation. So a lot of how your world <em>is</em>
depends on your view.
It is also important to note that
<strong>there are two sides to every question</strong> and it is about you, your
attitude and experience to see the right side.</p>
<h2>A pinch of spirituality and scout behaviour</h2>
<p>I will not go into beliefs here, but if you have met spiritual people in
your life, you may have noticed that a lot of them are positive. You may
have asked yourself whether it is because of drugs or the way they live.</p>
<p>If you have been a scout, you may remember the maxim
<strong>do a good deed every day</strong> - imagine everyone does this, there is a high
chance somebody will do something good to you, every day.</p>
<h2>Summary and suggestions</h2>
<p>So why do people love software I have written and don't collect money
for a hitman to go after me? Maybe I am brilliant and they have fallen
so deeply in love with the software I wrote that they don't think about it -
or maybe it is because I treat people with respect.
I think being able to learn from criticism is also very helpful, and
to <strong>expect the best, prepare for the worst</strong> when you are communicating
can change your life to experience it very positively.</p>
<p>So anyone who is unhappy with how you are treated (by the FOSS community),
here are some suggestions for you:</p>
<ul>
<li>Be friendly - and the echo will be the same</li>
<li>Have an opinion, but don't force it on someone else</li>
<li>Think twice about what you write, especially if it can be interpreted as offensive</li>
<li>Don't be ignorant, treat people with respect</li>
</ul>
<h2>Comments</h2>
<p>If you want to add some suggestions or have other improvement ideas,
you can either leave a comment on
<a href="https://news.ycombinator.com/item?id=8417176">hackernews</a> or
<a href="https://twitter.com/NicoSchottelius">@NicoSchottelius</a>.</p>
Automated Unix Installationshttps://www.nico.schottelius.org//blog/automated-unix-installations/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>From time to time I'm playing around with different Unices,
mostly free ones like *BSD and Linux and wonder how easy it is
to have an automatic installation.</p>
<h2>Preamble: There's only one way to do it</h2>
<p>My experience as a sysadmin is that the only way to scale
out installations is via network install: USB sticks, cdroms
or floppies just require too much manual work. As most installations
need network connectivity anyway, there is no need to rely on these
old fashioned, non scalable methods.</p>
<h2>Network install</h2>
<p>If a (Unix) operating system supports network installation, it should
require only a TFTP server: for network installations
using PXE, a TFTP server is required anyway.</p>
<p>After the installer is running, it can certainly use the usual
methods to retrieve components like packages (i.e. HTTP, NFS, etc.),
but this does not require me, as a sysadmin, to set up any additional
service.</p>
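<p>As a concrete illustration of the TFTP-only requirement: a pxelinux configuration that boots a distribution installer, which then fetches its packages and answer file over HTTP. The kernel/initrd paths and the preseed URL below are placeholders, not a tested setup.</p>

```
# /tftpboot/pxelinux.cfg/default -- illustrative only; the kernel and
# initrd paths and the preseed URL are placeholders
DEFAULT auto-install
PROMPT 0

LABEL auto-install
    KERNEL debian-installer/amd64/linux
    APPEND initrd=debian-installer/amd64/initrd.gz auto=true priority=critical url=http://example.com/preseed.cfg
```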
<h2>The challenge</h2>
<p>Now I'm sitting in front of some computers, wondering how
easy it can be to set up these boxes with different Unices,
<em>automated</em>.</p>
<h2>Current status</h2>
<p>I've started the <a href="https://www.nico.schottelius.org//software/cuni/">cuni</a> project some time ago to
learn about the Unix installers, and I'm aware of at least
Kickstart for Red Hat/Fedora, Preseed for Debian/Ubuntu
and FAI for Debian. I guess there are many others out there, and
I'm wondering how easy it is for every Unix to get to a
completely unattended, automated installation.</p>
<h2>Help appreciated, comments and critics welcome</h2>
<p>I'm aware that this is a bigger project, but at the end it would
be very useful for sysadmins maintaining small and large infrastructures
to be able to have <em>one way to rule them all</em>.</p>
<p>So if you are an expert of $Unix and know how to automate the
installation of it via network, <span class="createlink">just drop me a mail</span>.</p>
<p>I plan to extend <a href="https://www.nico.schottelius.org//software/cuni/">cuni</a> to be able to create automated
installation environments, as soon as I've collected the necessary
information on how to do so.</p>
bash-zsh-prompt-screenshot-20111125.pnghttps://www.nico.schottelius.org//blog/my-bash-and-zsh-prompt/bash-zsh-prompt-screenshot-20111125.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZI buy No Time To Explain, because I don't need tohttps://www.nico.schottelius.org//blog/buy-pirated-game-no-time-to-explain/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I've just decided to buy the game
<a href="http://tinybuildgames.com/no-time-to-explain-ost-out-now">No Time To Explain</a>,
because I don't need to. Tinybuildgames just put up
a
<a href="http://thepiratebay.org/torrent/6617784/No_Time_To_Explain_Windows_tinyBuildGAMES">modified, "pirated" version of their game on Piratebay</a>. So I can download and play it
for free.</p>
<p>And because Tinybuildgames gives me the freedom to decide whether or not
to buy the game, I will buy it.</p>
<p>Following up my mail to Tinybuildgames:</p>
<pre><code>From: Nico -telmich- Schottelius <>
To: CONTACT -at- TINYBUILDGAMES -dot- COM
Subject: Pirated version - the reason to buy the game
Good day TINYBUILDGAMES-Team,
I haven't bought a game for years, I am not very much interested in
getting a new game anymore (other interests are higher priority),
but making your own "pirated" release of the game on Piratebay
conviced me of spending some money to you.
Thus I'm heading over to your shop, buy the game, just to sign you,
and hopefully others, that this is exactly the right approach:
You do not even skip DRM, but present the users freedom to choose
where to get the game from and whether to spend money to you or not.
Thanks for being a pioneer in software selling.
Cheers,
Nico
</code></pre>
ccollect 0.8 includes many changes like quiet_if_downhttps://www.nico.schottelius.org//blog/ccollect-0.8-many-changes-quiet-if-down/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I'm currently updating the changes file (<strong>doc/changes/next</strong>) of
<a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a> and realised that version 0.8 will be
one of the greatest releases of ccollect:</p>
<p>Months ago, when I went through my <a href="https://www.nico.schottelius.org//about/projects/">projects list</a>,
I thought that there will not be many changes for ccollect anymore:</p>
<p>It is running stable, has a lot of features and is still very short
(around 600 lines of code).</p>
<p>Now I have a list of more than 10 big changes
for the upcoming 0.8 release of ccollect!</p>
<h2>Be quiet!</h2>
<p>One of the new features is "<strong><em>quiet_if_down</em></strong>": If you enable this
option, ccollect will be much more quiet, if the source is not
reachable. Very useful for backing up mobile devices
(cell phones, notebooks, watches, etc.)!</p>
<p>Thanks to John, who implemented this feature!</p>
RPM for ccollect 0.8 availablehttps://www.nico.schottelius.org//blog/ccollect-0.8-rpm-available/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Last week Nikita Koshikov dropped
<a href="https://www.nico.schottelius.org//news/archiv">an email</a> to the
<a href="http://l.schottelius.org/pipermail/ccollect/2009-September/000035.html">ccollect mailing list</a>,
providing a spec file and an RPM package.</p>
<p>It's very interesting: some years ago ccollect users
provided Debian packages; today the RPM users seem to be more active.</p>
<p>Written enough: Thanks for the <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect RPM package</a>,
Nikita!</p>
ccollect 0.8 will soon be releasedhttps://www.nico.schottelius.org//blog/ccollect-0.8-to-be-released-soon/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I'm just pretty impressed by a
<a href="http://l.schottelius.org/pipermail/ccollect/2009-July/000003.html">great discussion on the ccollect mailing list</a> about automatically
selecting the right interval.</p>
<p>Just some hours before I started to do
<a href="http://git.schottelius.org/?p=cLinux/ccollect.git;a=summary">some cleanups</a> on
<a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a> and now I'm pretty motivated to finish
those changes and to include the proposed one.</p>
<p>So stay tuned and expect ccollect-0.8 to be available soon!</p>
cdist-2.0.0-rc4-graph.pnghttps://www.nico.schottelius.org//blog/cdist-performance-2.0.0-rc4/cdist-2.0.0-rc4-graph.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZSubject: Cdist 2.0.1 released ...https://www.nico.schottelius.org//blog/cdist-2.0.1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>... which mainly includes bugfixes to the bugs seen in the last
days.</p>
<p>But it also adds the beginnings of a feature for object-dependent
code execution:</p>
<p>Imagine you want to restart apache, but only if <strong>__apache_vhost</strong>
generated code. You can do so now by checking all
<strong>__apache_vhost</strong> objects for the "changed" flag, and if and only if
the flag exists, generating the restart command.</p>
<p>You can access all objects through the __global variable.</p>
<p>This feature is in its early stages, and I'd be pretty happy to
get some feedback on whether it is useful for you and whether you have
other enhancements to propose.</p>
<p>As usual, the best proposals are the ones with an included merge
request for a git source :-p</p>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
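<p>A sketch of the idea in Python (cdist's gencode scripts are really shell): each object that generated code leaves a "changed" marker, and the restart command is emitted only if at least one marker exists. The directory layout below is an illustration, not cdist's exact on-disk format.</p>

```python
# Sketch of the "changed flag" check: emit a restart command only if
# at least one object of a given type carries a 'changed' marker.
# Directory layout is illustrative, not cdist's exact on-disk format.
import os
import tempfile

def any_changed(global_dir, type_name):
    """True if any object of the given type carries a 'changed' marker."""
    type_dir = os.path.join(global_dir, "object", type_name)
    if not os.path.isdir(type_dir):
        return False
    return any(
        os.path.exists(os.path.join(type_dir, obj, "changed"))
        for obj in os.listdir(type_dir)
    )

# demo with a throwaway directory standing in for $__global
with tempfile.TemporaryDirectory() as g:
    vhost = os.path.join(g, "object", "__apache_vhost", "www.example.com")
    os.makedirs(vhost)
    open(os.path.join(vhost, "changed"), "w").close()  # mark as changed
    if any_changed(g, "__apache_vhost"):
        print("/etc/init.d/apache2 restart")
```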
Cdist 2.0.10 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.10-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview about the changes found in this release:</p>
<pre><code> * Cleanup __group: No getent gshadow in old Redhat, use groupmod -g
(Matt Coddington)
* Bugfix __package_yum: Missing cat
* Bugfix __start_on_boot: Correctly use sed and quotes (Steven Armstrong)
* Feature __file: Support for --state exists (Steven Armstrong)
* Feature core: Make variable __manifest available to type manifests
* Feature core: Correct parent dependency handling (Steven Armstrong)
* Bugfix several types: Fix sed for FreeBSD (Istvan Beregszaszi)
* New Type: __jail (Jake Guffey)
 * Change Type: __rvm*: --state present/absent not installed/removed (Evax Software)
* Bugfix Type: __cron: Hide error output from crontab
* Various smaller bugfixes (Chris Lamb)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.11 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.11-security-bugfix-release/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This is a security bugfix release:
cdist has so far used whatever umask was set up on the local and remote
system. This may have led to
<strong>/var/lib/cdist</strong> being accessible by others,
including data from explorers.</p>
<p>This release fixes this bug and sets up a <strong>umask of 077</strong> within cdist.
That means if you are using the <strong>__file</strong> type without the <strong>--mode</strong>
parameter, your files may now have "more secure permissions" than you
would like.</p>
<p>It is recommended to update as soon as possible.
For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.12 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.12-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview about the changes found in this release:</p>
<pre><code> * Core: Correctly raise error on Python < 3.2 (Steven Armtrong)
* Core: Add support for --remote-exec and --remote-copy parameters
* Documentation: Debian Squeeze hints (Sébastien Gross)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.13 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.13-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview about the changes found in this release:</p>
<pre><code> * Bugfix __ssh_authorized_key: Ensure it sets proper group (contradict)
* Bugfix __addifnosuchline: Fixed quotes/interpolation bug ("a b" became "a b")
* New Explorer: interfaces (Sébastien Gross)
* Feature core: Support reading from stdin in types (Steven Armstrong)
* Feature core: Support multiple parameters for types (Steven Armstrong)
* Feature __file: Support reading from stdin with - syntax (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.14 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.14-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview about the changes found in this release:</p>
<pre><code>* Bugfix Type: __jail: Use correct variable (Jake Guffey)
* Change Type: __jail: Parameter jailbase now optional (Jake Guffey)
* Bugfix Type: __user: Use passwd database on FreeBSD (Jake Guffey)
* Bugfix Type: __start_on_boot: Do not change parameters
* Feature __user: Added support for BSDs (Sébastien Gross)
* Feature __group: Added support for FreeBSD (Jake Guffey)
* New Type: __package_zypper
* Feature Types: Initial Support for SuSE Linux
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.5 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.5-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview about the changes found in this release:</p>
<pre><code>* Bugfix __key_value: Use correct delimiters
(Steven Armstrong, Daniel Maher)
* Cleanup: Explicitly require Python >= 3.2 (do not fail implicitly)
* Documentation: (Re)write of the tutorial
* Feature: __addifnosuchline supports matching on
regular expressions (Daniel Maher)
* Feature: __directory, __file, __link:
Add --state parameter (Steven Armstrong)
* New Type: __package_luarocks (Christian G. Warden)
* New Type: __cdistmarker (Daniel Maher)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.6 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.6-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code> * Bugfix __apt_ppa:
Also remove the [ppa-name].list file, if empty. (Tim Kersten)
* Bugfix __group:
Referenced wrong variable name (Matt Coddington)
* Feature __package_apt:
Initial support for virtual packages (Evax Software)
* Feature Core: Added new dependency resolver (Steven Armstrong)
* Feature Explorer, __package_yum: Support Amazon Linux (Matt Coddington)
* New Type: __rvm (Evax Software)
* New Type: __rvm_gem (Evax Software)
* New Type: __rvm_gemset (Evax Software)
* New Type: __rvm_ruby (Evax Software)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.7 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code> * Bugfix __file: Use chmod after chown/chgrp (Matt Coddington)
* Bugfix __user: Correct shadow field in explorer (Matt Coddington)
* Bugfix __link: Properly handle existing links (Steven Armstrong)
* Bugfix __key_value: More robust implementation (Steven Armstrong)
* Bugfix __user: Fix for changing a user's group by name (Matt Coddington)
* New Type: __package_pip
* Bugfix/Cleanup: Correctly allow Object ID to start and end with /, but
not contain //.
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.8 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.8-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code> * Bugfix core: Remove another nasty traceback when sending SIGINT (aka Ctrl-C)
* Cleanup: Better hint to source of error
* Cleanup: Do not output failing script, but path to script only
* Cleanup: Remove support for __debug variable in manifests (Type != Core
debugging)
* Cleanup: Change __package_* to support absent/present (default state
name now). The values removed/installed will be removed in cdist 2.1.
* Cleanup: Change __process to support absent/present (default state
name now). The values running/stopped will be removed in cdist 2.1.
* Feature Core: Support boolean parameters (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.0.9 releasedhttps://www.nico.schottelius.org//blog/cdist-2.0.9-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code> * Cleanup documentation: Fix environment variable list to be properly
displayed (Giel van Schijndel)
* Cleanup documentation: Some minor corrections
* New Type: __package_opkg (Giel van Schijndel)
* New Type: __package_pkg_freebsd (Jake Guffey)
* New Type: __mysql_database (Benedikt Koeppel)
* Feature __package: Support for OpenWRT (Giel van Schijndel)
* Feature __start_on_boot: Support for OpenWRT (Giel van Schijndel)
* Feature __start_on_boot: Support for Amazon Linux (Matt Coddington)
* New Example: Use rsync to backup files (Matt Coddington)
* Feature core: Exit non-zero, if configuration failed
* Documentation: Describe how to do templating (Aurélien Bondis)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.1.0 releasedhttps://www.nico.schottelius.org//blog/cdist-2.1.0-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code>* Core: Ensure global explorers are executable
* Core: Ensure type explorers are executable (Steven Armstrong)
* New Type: __git
* New Type: __ssh_authorized_keys (Steven Armstrong)
* New Type: __user_groups (Steven Armstrong)
* Type __rvm_gemset: Change parameter "default" to be boolean
* Type __user: Remove --groups support (now provided by __user_groups)
* Type __apt_ppa: Bugfix: Installed ppa detection (Steven Armstrong)
* Type __jail: Change optional parameter "started" to boolean "stopped" parameter,
change optional parameter "devfs-enable" to boolean "devfs-disable" parameter and
change optional parameter "onboot" to boolean.
* Type __package_pip: Bugfix: Installed the package, not pyro
* Remove Type __ssh_authorized_key: Superseded by __ssh_authorized_keys
* Support for CDIST_PATH (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.1.0pre7 releasedhttps://www.nico.schottelius.org//blog/cdist-2.1.0pre7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code>* Core: All unit tests restored back to working
* Core: Print error message on missing initial manifest
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.1.0pre8 releasedhttps://www.nico.schottelius.org//blog/cdist-2.1.0pre8-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code>* Type cleanup: __apt_ppa, __apt_ppa_update_index, __file,
__ssh_authorized_key, __timezone, all install types (Steven Armstrong)
* Types: Remove all parameter changing code
* Type __rvm_ruby: Change parameter "default" to be boolean
* Documentation: Web documentation clean up
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.1.1 releasedhttps://www.nico.schottelius.org//blog/cdist-2.1.1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in this release:</p>
<pre><code>* Core: Use dynamic dependency resolver to allow indirect self dependencies
* Core: Remove umask call - protect /var/lib/cdist only (Arkaitz Jimenez)
* Explorer os: Added Slackware support (Eivind Uggedal)
* Type __git: Support mode and fix owner/group settings (contradict)
* Type __jail: State absent should imply stopped (Jake Guffey)
* Type __directory: Make stat call compatible with FreeBSD (Jake Guffey)
* Type __cron: Allow crontab without entries (Arkaitz Jimenez)
* Type __user: Add support for creating user home (Arkaitz Jimenez)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.1.2 releasedhttps://www.nico.schottelius.org//blog/cdist-2.1.2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.1.2:</p>
<pre><code>* Build: Change clean-dist target to "distclean"
* Core: Make global explorers available to initial manifest (Arkaitz Jimenez)
* Core: Change execution order to run object as one unit
* New Remote Example: Add support for sudo operations (Chase James)
* Type __apt_ppa: Fix comparison operator (Tyler Akins)
* Type __start_on_boot: Archlinux changed to use systemd - adapt type
* Type __git: Missing quotes added (Chase James)
* Type __postgres_database: Make state parameter optional (Chase James)
* Type __postgres_role: Make state parameter optional, fix password bug (Chase James)
* Type __process: Make state parameter optional
* Type __cron: Simplified and syntax change
* New Type: __update_alternatives
* New Type: __cdist
* Improved documentation (Tomáš Pospíšek)
* Moved a lot of build logic into Makefile for dependency resolution
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.2.0 releasedhttps://www.nico.schottelius.org//blog/cdist-2.2.0-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.2.0:</p>
<pre><code>* Build: Cleanup the Makefile
* Type __package_opkg: Use shortcut version
* Core: Remove old pseudo object id "singleton" (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.0 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.0-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.0:</p>
<pre><code>* Core: Added support for cdist shell
* Documentation: Improved some manpages
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.1 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.1:</p>
<pre><code>* Core: Support relative paths for configuration directories
* Core: Code cleanup (removed context class, added log class)
* Documentation: Add more best practices
* Documentation: Add troubleshooting chapter
* Type __key_value: Fix quoting problem (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.2 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.2:</p>
<pre><code>* Build: Ensure tests don't change attributes of non-test files
* Core: Fix typo in argument parser
* Core: Code cleanup: Remove old install code (Steven Armstrong)
* Core: Improve error message when using non-existing type in requirement
* New Type: __iptables_rule
* New Type: __iptables_apply
* Type __cdist: Also create home directory
* Type __cdist: Add support for --shell parameter
* Type __motd: Regenerate motd on Debian and Ubuntu
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.3 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.3-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.3:</p>
<pre><code>* Core: Add support for default values of optional parameters (Steven Armstrong)
* Type __start_on_boot: Bugfix for systemd (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.4 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.4-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.4:</p>
<pre><code>* Core: Add missing bits to support dry run (Steven Armstrong)
* Core: Make unit test remote copy more compatible with scp (Steven Armstrong)
* New Type: __postfix (Steven Armstrong)
* New Type: __postfix_master (Steven Armstrong)
* New Type: __postfix_postconf (Steven Armstrong)
* New Type: __postfix_postmap (Steven Armstrong)
* New Type: __postfix_reload (Steven Armstrong)
* Type __line: Ensure regex does not contain /
* Type __ssh_authorized_keys: Bugfix: Preserve ownership (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.5 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.5-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.5:</p>
<pre><code>* Core: Unit test fix for remote_copy (Steven Armstrong)
* Documentation: Updated manpages of __package and __file (Alex Greif)
* Documentation: Add more examples to cdist-manifest (Dan Levin)
* Type __package_apt: Do not install recommends by default
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.6 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.6-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.6:</p>
<pre><code>* New Type: __locale
* Type __line: Ensure special characters are not interpreted
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 2.3.7 releasedhttps://www.nico.schottelius.org//blog/cdist-2.3.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 2.3.7:</p>
<pre><code>* Type __file: Secure the file transfer by using mktemp (Steven Armstrong)
* Type __file: Only remove file when state is absent (Steven Armstrong)
* Type __link: Only remove link when state is absent (Steven Armstrong)
* Type __directory: Only remove directory when state is absent (Steven Armstrong)
* Type __package_zypper: Fix explorer and parameter issue (Daniel Heule)
* Core: Fix backtrace when cache cannot be deleted
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.0 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.0-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The major new feature we introduced into this
release is the
<a href="https://www.nico.schottelius.org/software/cdist/man/3.0.0/man7/cdist-messaging.html">messaging support</a> - enjoy it!</p>
<p>Here's a short overview of the changes found in version 3.0.0:</p>
<pre><code>* Core: Added messaging support
* Core: Removed unused "changed" attribute of objects
* Core: Support default values for multiple parameters (Steven Armstrong)
* Core: Ensure Object Parameter file contains \n (Steven Armstrong)
* New Type: __zypper_repo (Daniel Heule)
* New Type: __zypper_service (Daniel Heule)
* New Type: __package_emerge (Daniel Heule)
* New Type: __package_emerge_dependencies (Daniel Heule)
* Type __cron: Add support for raw lines (Daniel Heule)
* Type __cron: Suppress stderr output from crontab (Daniel Heule)
* Type __cron: Fix quoting issue (Daniel Heule)
* Type __file: Do not generate code if mode is 0xxx
* Type __iptables_rule: Use default parameter
* Type __key_value: Fix quoting issue (Steven Armstrong)
* Type __package: Use state --present by default (Steven Armstrong)
* Type __package_zypper: Support non packages as well (Daniel Heule)
* Type __package_zypper: Support package versions (Daniel Heule)
* Type __postfix_*: Depend on __postfix Type (Steven Armstrong)
* Type __postfix_postconf: Enable support for SuSE (Daniel Heule)
* Type __postfix: Enable support for SuSE (Daniel Heule)
* Type __start_on_boot: Use default parameter state
* Type __start_on_boot: Add support for gentoo (Daniel Heule)
* Type __user: Add support for state parameter (Daniel Heule)
* Type __user: Add support for system users (Daniel Heule)
* Type __user: Add messaging support (Steven Armstrong)
* Type __zypper_service: Support older SuSE releases (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.1 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.1:</p>
<pre><code>* Core: Copy only files, not directories (Steven Armstrong)
* Core: Allow hostnames to start with /
* Type __line: Remove unnecessary backslash escape
* Type __directory: Add messaging support (Daniel Heule)
* Type __directory: Do not generate code if mode is 0xxx (Daniel Heule)
* Type __package: Fix typo in optional parameter ptype (Daniel Heule)
* Type __start_on_boot: Fix for SuSE's chkconfig (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.2 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.2:</p>
<pre><code>* Documentation: Document all messages sent by types (Daniel Heule)
* New Type: __block (Steven Armstrong)
* New Type: __mount (Steven Armstrong)
* Type __cron: Replace existing entry when changing it (Daniel Heule)
* Type __ssh_authorized_keys: Use new type __block (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.3 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.3-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.3:</p>
<pre><code>* Core: Enhance error message when requirement is missing object id
* Core: Add environment variable to select shell for executing scripts (Daniel Heule)
* Explorer hostname: Return host name by using uname -n
* New Type: __hostname (Steven Armstrong)
* Type __cdist: Use default parameters (Daniel Heule)
* Type __key_value: Use default parameters (Daniel Heule)
* Type __line: Use printf instead of echo for printing user input
* Type __qemu_img: Use default parameters (Daniel Heule)
* Type __zypper_repo: Use default parameters (Daniel Heule)
* Type __zypper_service: Use default parameters (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.4 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.4-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.4:</p>
<pre><code>* Core: Ignore install types in config mode
* Documentation: Update reference (files path in object space)
* Documentation: Update best practice: Replace templates/ with files/
* Type __apt_ppa: Install required software (Steven Armstrong)
* Type __debconf_set_selections: Support --file - to read from stdin
* Type __jail: Fix jaildir parameter handling (Jake Guffey)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.5 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.5-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.5:</p>
<pre><code>* Core: Introduce override concept (Daniel Heule)
* Type __process: Make --state absent work (Steven Armstrong)
* Documentation: Update documentation for environment variables
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.6 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.6-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.6:</p>
<pre><code>* New Type: __apt_key (Steven Armstrong)
* New Type: __apt_key_uri (Steven Armstrong)
* New Type: __apt_norecommends (Steven Armstrong)
* New Type: __apt_source (Steven Armstrong)
* New Type: __ccollect_source
* Type __git: Use default parameters (Daniel Heule)
* Type __jail: Use default parameters (Daniel Heule)
* Type __package_yum: Use default parameters (Daniel Heule)
* Type __package_zypper: Use default parameters (Daniel Heule)
* Type __user_groups: Use default parameters (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.7 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.7:</p>
<pre><code>* Core: Allow dependencies to be created based on execution order (Daniel Heule)
* Core: Add tests for override (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.8 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.8-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.8:</p>
<pre><code>* Core: Enhance object id verification (Daniel Heule)
* Core: Add unit tests for dependencies based on execution order (Daniel Heule)
* Core: Add unit tests for dry run (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.0.9 releasedhttps://www.nico.schottelius.org//blog/cdist-3.0.9-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.0.9:</p>
<pre><code>* Core: Ignore order dependencies if override is set (Daniel Heule)
* Core: Improve Mac OS X support for unit tests (Steven Armstrong)
* Type __locale: Error out in case of unsupported OS
* Type __jail: Use default parameters for state (Daniel Heule)
* Type __pf_ruleset: Use default parameters for state (Daniel Heule)
* Type __postgres_database: Use default parameters for state (Daniel Heule)
* Type __postgres_role: Use default parameters for state (Daniel Heule)
* Type __rvm: Use default parameters for state (Daniel Heule)
* Type __rvm_gem: Use default parameters for state (Daniel Heule)
* Type __rvm_gemset: Use default parameters for state (Daniel Heule)
* Type __rvm_ruby: Use default parameters for state (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.0 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.0-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.0:</p>
<pre><code>* New Type: __rbenv
* Type __file: Enhance OpenBSD Support (og)
* Type __git: Pass owner/group/mode values to __directory
* Type __iptable_rule: Fix example documentation (Antoine Catton)
* Type __key_value: Add messaging support
* Type __package_pkg_openbsd: Allow to change PKG_PATH (og)
* Type __ssh_authorized_keys: Allow managing existing keys (Steven Armstrong)
* Type __user: Enhance OpenBSD Support (og)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.1 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.1:</p>
<pre><code>* Core: Make __object and __object_id available to code (Daniel Heule)
* New explorer: cpu_cores (Daniel Heule/Thomas Oettli)
* New explorer: cpu_sockets (Daniel Heule/Thomas Oettli)
* New explorer: machine_type (Daniel Heule/Thomas Oettli)
* New explorer: memory (Daniel Heule/Thomas Oettli)
* Type __jail: Fix parameter names in explorer (Jake Guffey)
* Type __line: Ensure permissions are kept (Steven Armstrong)
* Type __link: Do not create link in directory, if link exists (Steven Armstrong)
* Type __package_pkg_openbsd: Improve error handling (og)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.2 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.2:</p>
<pre><code>* Documentation: Add missing environment variables to reference
* Type __qemu_img: size is optional, if state is not present
* Type __key_value: Rewrite using awk (Daniel Heule)
* New Type: __dog_vdi
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.3 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.3-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.3:</p>
<pre><code>* New Type: __yum_repo (Steven Armstrong)
* Type __hostname: Add support for CentOS
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.4 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.4-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.4:</p>
<pre><code>* Core: Ensure all created files end in \n (Steven Armstrong)
* Documentation: Cleanup up, added HTML links (Tomas Pospisek)
* Explorer interfaces: Remove test output (Daniel Heule)
* Type __jail: Add messaging support (Jake Guffey)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.5 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.5-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.5:</p>
<pre><code>* Type __zypper_repo: Automatically import gpg keys (Daniel Heule)
* Type __zypper_service: Automatically import gpg keys (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.6 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.6-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.6:</p>
<pre><code>* New Type: __ssh_dot_ssh
* Type __package_yum: Support retrieving package via URL
* Type __hostname: Support SuSE and have CentOS use sysconfig value
* Type __locale: Support SuSE
* Type __locale: Support Archlinux
* Type __timezone: Support SuSE
* Type __file: Support MacOS X (Manuel Hutter)
* Type __iptables_apply: Add "reset" to init.d script of iptables
* Type __ssh_authorized_key: Use new type __ssh_dot_ssh
* Type __zypper_repo: Bugfix for pattern matching (Daniel Heule)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.7 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.7:</p>
<pre><code>* Type __cdistmarker: Fix typo (Ricardo Catalinas Jiménez)
* Core: Bugfix: Export messaging to manifests (Ricardo Catalinas Jiménez)
* Explorer cpu_cores, cpu_sockets, memory: Add Mac OS X support (Manuel Hutter)
* Type __ssh_authorized_keys: Ensure keys are correctly added (Steven Armstrong)
* New Type: __ssh_authorized_key (Steven Armstrong)
* New Type: __package_pkgng_freebsd (Jake Guffey)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.8 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.8-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.8:</p>
<pre><code>* New Type: __package_update_index (Ricardo Catalinas Jiménez)
* New Type: __package_upgrade_all (Ricardo Catalinas Jiménez)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 3.1.9 releasedhttps://www.nico.schottelius.org//blog/cdist-3.1.9-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 3.1.9:</p>
<pre><code>* Type __package_emerge: Fix handling of slotted packages (Daniel Heule)
* Type __package_apt: Use --force-confdef (Ricardo Catalinas Jiménez)
* Type __package_update_index: Decrease verbosity (Ricardo Catalinas Jiménez)
* Type __package_upgrade_all: Decrease verbosity (Ricardo Catalinas Jiménez)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.0.0pre1 releasedhttps://www.nico.schottelius.org//blog/cdist-4.0.0pre1-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 4.0.0pre1:</p>
<pre><code>* Core: Integrate initial install support
* Core: Integrate initial preos support
</code></pre>
<h2>Note for this release:</h2>
<p>4.0-pre-not-stable is an experimental branch containing features for the
next big thing: cdist 4.</p>
<p>As you can see above, we have worked on cdist to allow not only
configuration of machines, but also installation of machines.</p>
<p>These features are poorly documented, the install
types are not cleaned up, and there are many more ugly things.</p>
<p>Still, we wanted to give eager people access to our codebase,
to play around with it.</p>
<p>A minimal introduction for those keen on trying it out:</p>
<ul>
<li>host boots preos (you generate this by using "cdist preos")</li>
<li>host gets installed using "cdist install"</li>
<li>types that begin with <strong>__install_</strong> are by convention used for installing systems</li>
<li>install types contain the (empty) file "install"</li>
</ul>
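<p>To make the conventions above concrete, here is a minimal sketch of how such an
install type could be marked. The type name <strong>__install_example</strong> and the
conf/type path are illustrative assumptions, not part of this release; the only
point is the empty file "install" inside the type directory:</p>
<pre><code># create a hypothetical install type; the empty "install" file marks it as such
mkdir -p conf/type/__install_example
touch conf/type/__install_example/install
ls conf/type/__install_example   # prints: install
</code></pre>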
<p>More to come soon - planned release for cdist 4 is mid 2014.</p>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.0.0pre2 releasedhttps://www.nico.schottelius.org//blog/cdist-4.0.0pre2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 4.0.0pre2:</p>
<pre><code>* Core: Remove archives from generated preos (Steven Armstrong)
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist 4.0.0pre3 releasedhttps://www.nico.schottelius.org//blog/cdist-4.0.0pre3-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Here's a short overview of the changes found in version 4.0.0pre3:</p>
<pre><code>* Update to include changes from cdist 3.1.5
</code></pre>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Cdist: How to apply a single objecthttps://www.nico.schottelius.org//blog/cdist-hint-apply-single-object/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Sometimes it would be nice to apply just a single
object on a remote host, like this:</p>
<pre><code>__file /need/this/now --state present --source $(pwd -P)/myfile --mode 0755
</code></pre>
<p>Using the initial manifest option (-i) and stdin makes this easy:</p>
<pre><code>echo "__file /need/this/now --state present --source $(pwd -P)/myfile --mode 0755" | cdist config -v -i - targethost
</code></pre>
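<p>The same trick extends to several objects at once: collect a small manifest in a
shell variable and feed it to cdist on stdin. The object arguments below (paths,
modes, the host name "targethost") are placeholders, not taken from a real setup:</p>
<pre><code># a throwaway two-object manifest
manifest='__file /etc/motd --source /tmp/motd --mode 0644
__directory /opt/scratch --mode 0755'
printf '%s\n' "$manifest"
# pipe it to cdist once a reachable target host is at hand:
# printf '%s\n' "$manifest" | cdist config -v -i - targethost
</code></pre>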
<p>For more information about cdist visit the <a href="https://www.nico.schottelius.org//software/cdist/">cdist homepage</a>.</p>
Performance of cdist 2.0.0-rc4https://www.nico.schottelius.org//blog/cdist-performance-2.0.0-rc4/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As you may have noticed, <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> 2.0.0
has been rewritten in Python due to performance issues with
fork(). This article shows the first performance results of
the new implementation.</p>
<h2>Background</h2>
<p>All configurations were done using the production
configuration of the Systems Group at ETH Zurich.
All hosts were configured from a cable modem connection
on my Lenovo X201, with 2 cores, 4 threads
(Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz) and 4 GiB
RAM. The configuration contains 189 objects, of which
142 are of type <strong>__package</strong>.
The duration was taken from cdist output and the measurements
were taken by deploying to one, two, ..., thirty one hosts in
parallel:</p>
<pre><code>(
set --;
for host in ikq02.ethz.ch ikq03.ethz.ch ikq04.ethz.ch ikq05.ethz.ch \
ikq06.ethz.ch ikq07.ethz.ch ikr01.ethz.ch ikr02.ethz.ch ikr03.ethz.ch \
ikr05.ethz.ch ikr07.ethz.ch ikr09.ethz.ch ikr10.ethz.ch ikr11.ethz.ch \
ikr13.ethz.ch ikr14.ethz.ch ikr15.ethz.ch ikr16.ethz.ch ikr17.ethz.ch \
ikr19.ethz.ch ikr20.ethz.ch ikr21.ethz.ch ikr23.ethz.ch ikr24.ethz.ch \
ikr25.ethz.ch ikr26.ethz.ch ikr27.ethz.ch ikr28.ethz.ch ikr29.ethz.ch \
ikr30.ethz.ch ikr31.ethz.ch;
do
set -- "$@" $host;
cdist -c ~/p/cdist-nutzung -p "$@";
done
) 2>&1 | tee ~/benchmark-home
</code></pre>
<h2>The results</h2>
<p>Deploying to one host took 72 seconds, to three hosts 77 seconds,
to 6 hosts 108 seconds and to 31 hosts 282 seconds, as can be seen
in the following graph:</p>
<p><a href="https://www.nico.schottelius.org//blog/cdist-performance-2.0.0-rc4/cdist-2.0.0-rc4-graph.png"><img src="https://www.nico.schottelius.org//blog/cdist-performance-2.0.0-rc4/600x450-cdist-2.0.0-rc4-graph.png" width="600" height="450" alt="Cdist graph" class="img" /></a></p>
<p>In a sequential run, it would have taken 2232 seconds to deploy
to 31 hosts. At higher parallel configurations (>10), it can be seen
that cdist becomes CPU bound.</p>
<p>Although deploying to 31 hosts takes much longer than 1 host, it is
still much faster than the linear case.</p>
<h2>Conclusion</h2>
<ul>
<li>72 seconds is still pretty long for one run, but this can easily be improved
by comparing the configuration against the cache and only running the new
parts (or nothing when rerunning)</li>
<li>Cdist can deploy to more than 30 hosts on 2 cores/4 threads within 5 minutes</li>
<li>As cdist runs massively parallel, it can utilise all existing CPUs</li>
<li>And even more importantly: cdist can scale out; if you add more CPUs (or machines),
it can simply use them to deploy to more machines in parallel</li>
</ul>
cdist: Why we require Python 3.2 on the source hosthttps://www.nico.schottelius.org//blog/cdist-python-3.2-requirement/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>As <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> is getting more popular, more people are using
cdist, and some questions come up from newcomers more often than others.
One of them is why we require Python 3.2 on the source host.
If you are also wondering about the motivation, or you're screaming because
you only have Python 2.x or Python 3.1 available in your distribution, this
article is for you.</p>
<pre><code>Note: Cdist does *not* require Python on the target hosts!
Note: Cdist requires only ssh and a posix shell on the targets.
</code></pre>
<h2>History of cdist</h2>
<p>Because you may be one of the people who are screaming, I'm giving you
an overview of the whole development history of cdist, which should
make things clearer for you.</p>
<p>At the end of 2010 <a href="http://www.usenix.org/event/lisa10/tech/slides/snyder_bof2.pdf">I claimed that most current configuration management
(<strong><em>CM</em></strong>) tools are not only designed in an overly complex
way, but that their implementations are way too
complex as well</a>.
This is definitely a strong statement, which I
also used to provoke people to think about the current situation of CM.</p>
<p>The logical consequence of my statement was to try out whether it's
actually possible to write a CM tool in a very simple manner,
for instance completely in POSIX shell script. This led to the first
commit to the newly born cdist repository:</p>
<pre><code>commit 28a9807fe5e6bfa95015fe72456d63cbb2a5821f
Date: Thu Sep 16 02:20:35 2010 +0200
</code></pre>
<p>After a lot of discussions, design ideas, pictures on the whiteboard,
trying out different implementations and weighing up the advantages of
each one, the first official release of cdist was made public,
cdist 1.0.0:</p>
<pre><code>tag 1.0.0
Date: Mon Mar 7 18:21:18 2011 +0100
</code></pre>
<p>Cdist 1.0.0 was completely implemented in POSIX shell and had almost all
features of the current cdist implementation. With one major drawback:
performance. When running cdist 1.0.0 in parallel mode, the source host
easily became the bottleneck. A typical run of cdist 1.0.0 caused around
3500 - 5000 forks. Running in parallel mode with about 10-15 target hosts,
most of the time of a cdist run was spent in kernel space handling the forks.</p>
<p>The next logical step was to search for the cause of the huge number
of forks, which was easily found: every routine was a shell script of its
own that required launching a new shell. For some operations,
like working on the dependency tree, a lot of sub-routines were called, leading
to the huge number of forks.</p>
<p>We tried to minimise the number of forks by migrating from shell scripts to
shell functions, which became a big pain when we realised that POSIX shell
no longer has <strong>local</strong> variable support. POSIX states that you should
use shell scripts instead of shell functions if you need distinct environments,
which is exactly where we came from...</p>
<p>Thus we decided to switch to a different programming language for cdist's core,
but only for the core. For us it is very important to minimise the dependencies
on the target hosts: We do not want to install Ruby, Python, Java, libfoo or
anything that is not usually available on a freshly installed base system.
Cdist should be able to take over, as soon as the system is setup in a very
basic state.</p>
<p>The choice fell on Python, because it felt very mature and easy to use.
Additionally, Python 3 already provides a lot of functionality in its
base installation: everything we were used to doing in shell could easily be
rewritten in plain Python 3. <strong>Exactly</strong> one year after
the initial commit, <strong><em>cdist 2.0.0</em></strong>, a complete rewrite in Python 3,
was finished and released:</p>
<pre><code>tag 2.0.0
Date: Fri Sep 16 12:11:28 2011 +0200
</code></pre>
<h2>Now, why Python 3.2?</h2>
<p>During the development of cdist 2.0, we had a lot of discussions
about clean design, pythonic ways of doing stuff (versus using the
shell approach in Python) and which functions to use. In the beginning,
we discussed whether to settle for Python 2 or Python 3.
As we did not have any dependencies or old code relying on Python 2,
the choice for the current stable tree, Python 3, was easy to make.</p>
<p>Python 3.2, in contrast to Python 3.1, includes the very usable
<a href="http://docs.python.org/py3k/library/argparse.html">argparse module</a>,
as well as an enhanced variant of the
<a href="http://docs.python.org/py3k/library/os.html#os.makedirs">os.makedirs</a>
method that supports the <strong><em>exist_ok</em></strong> parameter.</p>
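<p>A minimal illustration (the directory names are arbitrary): with Python >= 3.2, repeated os.makedirs() calls with exist_ok=True are harmless, while on Python 3.1 you would have needed a try/except workaround:</p>

```python
import os
import tempfile

# Python >= 3.2: exist_ok=True makes a repeated call a no-op
# instead of raising an OSError.
base = tempfile.mkdtemp()
path = os.path.join(base, "conf", "type")
os.makedirs(path, exist_ok=True)
os.makedirs(path, exist_ok=True)  # no error the second time

# The Python 3.1 workaround would have looked like this:
try:
    os.makedirs(path)
except OSError:
    # only re-raise if the directory really could not be created
    if not os.path.isdir(path):
        raise
```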
<p>The argparse module is also available for Python 3.1, but not the
enhanced <strong>os.makedirs</strong> method. So we had to decide: Should we
integrate a simple workaround to support the previous Python 3 release,
Python 3.1, or shall we upset users with old Python installations?</p>
<p>To answer this question, we had a look at the current situation of
Python in various distributions.</p>
<h2>Python support in Linux/BSD</h2>
<p>A very quick research showed the following results:</p>
<table>
<thead>
<tr>
<th>Distro</th>
<th> Version</th>
<th> Python version</th>
<th> Python 3?</th>
<th> Usable as cdist source host?</th>
</tr>
</thead>
<tbody>
<tr>
<td>Archlinux</td>
<td> rolling release</td>
<td> 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>CentOS</td>
<td> 6</td>
<td> 2.6.6</td>
<td> no</td>
<td> no</td>
</tr>
<tr>
<td>Debian</td>
<td> squeeze</td>
<td> 3.1.3</td>
<td> yes</td>
<td> no</td>
</tr>
<tr>
<td>Debian</td>
<td> wheezy</td>
<td> 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>Fedora</td>
<td> 14</td>
<td> 3.1.2</td>
<td> yes</td>
<td> no</td>
</tr>
<tr>
<td>Fedora</td>
<td> 15-17</td>
<td> 3.2 - 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>FreeBSD</td>
<td> Ports</td>
<td> 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>Gentoo</td>
<td> rolling release</td>
<td> 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>NetBSD</td>
<td> Ports</td>
<td> 3.1.4</td>
<td> yes</td>
<td> no</td>
</tr>
<tr>
<td>OpenBSD</td>
<td> -current</td>
<td> 2.7.1</td>
<td> no</td>
<td> no</td>
</tr>
<tr>
<td>OpenBSD</td>
<td> Ports</td>
<td> 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>OpenSuse</td>
<td> 11.4</td>
<td> 3.1.3</td>
<td> yes</td>
<td> no</td>
</tr>
<tr>
<td>OpenSuse</td>
<td> 12.1</td>
<td> 3.2.1</td>
<td> yes</td>
<td> yes</td>
</tr>
<tr>
<td>Redhat</td>
<td> 6</td>
<td> 2.6.6</td>
<td> no</td>
<td> no</td>
</tr>
<tr>
<td>Slackware</td>
<td> 13.37</td>
<td> 2.6.6</td>
<td> no</td>
<td> no</td>
</tr>
<tr>
<td>Ubuntu</td>
<td> maverick (10.10)</td>
<td> 3.1.3</td>
<td> yes</td>
<td> no</td>
</tr>
<tr>
<td>Ubuntu</td>
<td> natty (11.04) - precise (12.04)</td>
<td> 3.2 - 3.2.2</td>
<td> yes</td>
<td> yes</td>
</tr>
</tbody>
</table>
<p>So we have the following situations:</p>
<ul>
<li>There are a lot of distros with Python 3.2 available (Archlinux,
Debian >= Wheezy, Fedora >= 15, FreeBSD, Gentoo, OpenBSD, OpenSuse,
Ubuntu >= 11.04)</li>
<li>There are distros which do not have Python 3 at all (Centos, Redhat, Slackware)</li>
<li>Python 3 is definitely needed, so requiring 3.1 or 3.2
does not make a difference</li>
<li>There are only two cases, which would make it interesting to support
Python 3.1: Debian Squeeze (currently stable) and NetBSD.</li>
<li>As I've been a long time Debian user, I understand this may be a bit
annoying, because it's unclear when wheezy will become stable.
On the other hand, you can easily install Python 3.2 from source
anywhere, just as you would if you had no Python 3 at all.</li>
<li>Another point speaking against 3.1 support for Debian is the fact that
distributions should support their users; developers should not have to support
distributions that ship old software (there's nothing against supporting
old <strong>and</strong> new versions, though). That's, by the way, one of the reasons
why I moved away from Debian...</li>
<li>I am short on experience regarding NetBSD, but installing via source
shouldn't be an issue either.</li>
</ul>
<p>To summarise: support for Python 3.1 only makes sense for Debian Squeeze
and NetBSD at the moment. This requirement will soon [tm] be superseded
and can easily be worked around by selecting one of the many distributions
with more recent software packages. And that's the reason why we settled
for Python 3.2.</p>
<h2>Btw, who is we?</h2>
<p>You may have noticed that I often refer to <strong>we</strong> in this article.
The second main developer of cdist is
<a href="https://github.com/asteven">Steven Armstrong</a>, a sysadmin at
ETH Zurich and a good friend of mine.
The discussions and development time we spent together were very valuable
for me as well as for the whole cdist project, and thus I want to use this
article to say</p>
<pre><code>Thank you, Steven.
</code></pre>
<p>[Disclaimer: I do not work for ETH Zurich anymore, but for <a href="http://www.local.ch">local.ch</a>]</p>
Cdist: The scripts vs. functions and local variables problemhttps://www.nico.schottelius.org//blog/cdist-shell-scripts-functions-local-variables/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Today I found a very nasty problem during development of
<a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> 1.8.0,
related to shell functions and non-local variables.</p>
<h2>Background</h2>
<p>cdist, up to version 1.7.x, is implemented in shell scripts. As it turned out,
most of the time during a cdist call is spent within the kernel, which seems
to be related to the thousands of forks we do for each run (you can use
oprofile to verify this yourself). To speed up cdist, the idea was to rewrite
it to use functions for internal functionality instead of shell
scripts.</p>
<h2>The implementation</h2>
<p>So the idea was very simple:</p>
<ul>
<li>Take all existing shell scripts</li>
<li>Add a function header</li>
<li>Move shell script from bin/ to core/</li>
<li>Source all functions from core/</li>
</ul>
<p>But the first problem that came up is that the <strong>local</strong> keyword had been removed
from the POSIX standard. Thus I began to eliminate global variables by
either prefixing them with the function name or by removing them completely.</p>
<h2>The problem</h2>
<p>After some time I realised that the __cdist_object_run() function calls itself
recursively. This leads to subsequently overwritten variables, which in turn leads
to unwanted behaviour.</p>
<h2>The question</h2>
<p>I'm aware that the <strong>local</strong> keyword is still supported by some shells, but I am
wondering whether you have a good idea on how to speed up cdist (having fast
fork()s would be great!) without violating the POSIX standard.</p>
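<p>A minimal sketch of the underlying problem (the function and variable names are made up for illustration): without <strong>local</strong>, a recursive POSIX shell function overwrites its caller's variables. Running the function body in a subshell restores distinct environments, but reintroduces exactly the fork we wanted to avoid:</p>

```shell
#!/bin/sh
# Without "local", all variables in a POSIX shell function are global:
# the recursive call clobbers the caller's $name.
visit() {
    name="$1"
    [ "$name" = "outer" ] && visit inner
    echo "after recursion: name=$name"
}

# Both invocations print "name=inner" -- the outer call lost its value.
visit outer

# Workaround: (...) instead of {...} runs the body in a subshell,
# giving distinct environments again -- at the cost of a fork per call.
visit_forked() (
    name="$1"
    [ "$name" = "outer" ] && visit_forked inner
    echo "after recursion: name=$name"
)

# Here the outer invocation keeps its own $name.
visit_forked outer
```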
Cdist: How to copy a folder recursivelyhttps://www.nico.schottelius.org//blog/cdist-transfer-files-recursively/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This article describes one solution to transfer a folder
and all of its contents recursively with <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>
to target hosts. I am motivated to do so, because I want to have one
central place to configure the tftproot that I may use on a variety
of KVM hosts.</p>
<p>Traditionally, it is not an easy job to handle recursive transfers correctly
and efficiently in a configuration management system. Using a sophisticated
tool like <a href="http://rsync.samba.org/">rsync</a> or
<a href="http://www.cis.upenn.edu/~bcpierce/unison/">unison</a> usually makes
life way easier.</p>
<p>If you just have a small number of files, as I have in this case,
doing a recursive copy with cdist may be the easiest way.</p>
<h2>Copying the files recursively</h2>
<p>Cdist knows about the types <strong>__file</strong> and <strong>__directory</strong> for
file transfer and directory management.
The type <strong>__nico_tftp_root</strong>,
which can be found in the
<a href="http://git.schottelius.org/?p=cdist-nico">cdist-nico git repository</a>
(below <strong>cdist/conf/type</strong>), recursively copies all files it contains to
the folder <strong>/home/services/tftp</strong>. A file is only transferred
again when it has changed (the <strong>__file</strong> type takes care of this).</p>
<h2>The manifest</h2>
<p>In cdist, the manifest of a type defines which other types to use.
A manifest file is essentially shell code that can call other
cdist types.</p>
<p>To accomplish the task, first of all the base directory is created
on the remote side:</p>
<pre><code>basedir=/home/services/tftp
__directory "$basedir" --mode 0755
</code></pre>
<p>Afterwards, I change into the source directory
and find all files. Cdist exports the
variable "__type" to access the folder in which the type is stored.</p>
<pre><code>cd "$__type/files"
for file in $(find . | sed 's;./;;' | grep -v '^\.$' ); do
</code></pre>
<p>The grep command is needed to skip the current directory (.), which is
returned by find.</p>
<p>Now, for every file I determine the remote file name. Furthermore,
dependencies on the required directories are set up:
you can <strong>require</strong> another type to be run before a type by setting
the <strong>require</strong> environment variable (this will be changed in cdist
2.1 and replaced in 2.2, but there is still some time until this happens).</p>
<p>The remote name is constructed by this line:</p>
<pre><code> name="$basedir/$file"
</code></pre>
<p>And the requirement is set up using this line:</p>
<pre><code> # Require the previous directory in the path
export require="__directory/${name%/*}"
</code></pre>
<p>The shell (!) knows about string manipulation: ${variablename%/*} removes
the shortest suffix matching the pattern "/*". Thus the previous
statement removes the last part of the path (also known as the dirname).</p>
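<p>A standalone illustration of this parameter expansion (the example path is made up):</p>

```shell
#!/bin/sh
name=/home/services/tftp/pxelinux.cfg/default

# ${name%/*}  removes the shortest suffix matching "/*"  -> like dirname
# ${name##*/} removes the longest prefix matching "*/"   -> like basename
echo "${name%/*}"    # /home/services/tftp/pxelinux.cfg
echo "${name##*/}"   # default
```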
<p>If the entry found by find is a directory, the __directory type is called;
otherwise, the __file type is used:</p>
<pre><code> if [ -d "$file" ]; then
__directory "$name" --mode 0755
else
__file "$basedir/$file" --source "$__type/files/$file" \
--mode 0644
fi
done
</code></pre>
<p>And that's it - a full recursive copy with just a bunch of lines.</p>
<h2>Further Reading</h2>
<ul>
<li><a href="https://www.nico.schottelius.org//software/cdist/">cdist</a></li>
<li><a href="http://git.schottelius.org/?p=cdist-nico">cdist-nico git repository</a></li>
<li><a href="http://git.schottelius.org/?p=cdist-nico;a=blob;f=cdist/conf/type/__nico_tftp_root/manifest;h=b312210d878b30e5871751d62cea14172f63c756;hb=HEAD">manifest of __nico_tftp_root</a></li>
</ul>
QT4 user interface prototype added to ceofhackhttps://www.nico.schottelius.org//blog/ceofhack-qt4-ui/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>In the last weeks there has been a lot of stuff
done in the <strong>user_interface</strong> branch of <span class="createlink">ceofhack</span>.</p>
<p>The communication between a user interface (UI) and ceofhack
is now specified (read as in: finished!). Thus, if you are a (G)UI programmer
and like the idea of ceofhack, you can now begin to implement your
user interface.</p>
<p>Someone has already done exactly that, and thus a prototype of
a QT4 GUI is now available in the <strong>user_interface</strong> branch!</p>
<p>This new UI does indeed look much better than the included command line
user interface (<strong>ui-cmd</strong>), but it's your decision what to use with
<span class="createlink">ceofhack</span>.</p>
Introducing first code for user interfaces in ceofhackhttps://www.nico.schottelius.org//blog/ceofhack-ui-support-1/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><span class="createlink">Ceofhack</span>, the first implementation of a peer-to-peer,
onion-routing crypto chat protocol, has now received some code to support
user interfaces.</p>
<p>The code can be found in the <strong>user_interface</strong> branch. It will be merged
into hacking and master later on.</p>
Cinit 0.3pre15 releasedhttps://www.nico.schottelius.org//blog/cinit-0.3pre15-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>It has been a long time since the last cinit release,
but now it's there! You can find it
<a href="https://www.nico.schottelius.org//software/cinit/">at the bottom of the new website</a>.</p>
<p>Let me know if there are any issues with this update, because the
child handling stuff radically changed (to be reliable).</p>
Cinit 0.3pre16 releasedhttps://www.nico.schottelius.org//blog/cinit-0.3pre16-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The next release of <a href="https://www.nico.schottelius.org//software/cinit/">cinit</a>,
<strong><em>0.3pre16</em></strong> did not take as long as the
<a href="https://www.nico.schottelius.org//blog/cinit-0.3pre15-released/">previous release</a>.</p>
<p>This is more or less a cleanup release, to start into
real development and begin testing on different operating
systems.</p>
<p>If you're an init-system interested person and want to
try cinit on a non-Linux-OS, I would be thankful for
a report.</p>
Cinit 0.3pre17 releasedhttps://www.nico.schottelius.org//blog/cinit-0.3pre17-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>There it is, the next release of <a href="https://www.nico.schottelius.org//software/cinit/">cinit</a>,
<strong><em>0.3pre17</em></strong>. It contains a lot of useful utilities in the
<a href="https://www.nico.schottelius.org/software/cinit/browse_source/cinit-0.3pre17/bin/">bin/ folder</a>
to begin a migration from <a href="http://www.ubuntu.com/">Ubuntus</a>
<a href="http://upstart.ubuntu.com/">upstart</a>.</p>
Cinit 0.3pre18 releasedhttps://www.nico.schottelius.org//blog/cinit-0.3pre18-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This version of <a href="https://www.nico.schottelius.org//software/cinit/">cinit</a>,
<strong><em>0.3pre18</em></strong>, contains the new utility
<strong><em>cinit-conf.config.shell</em></strong>, which creates a minimal
configuration that spawns a shell.</p>
<p>Additionally the Ubuntu migration script
(<strong><em>cinit-conf.migrate.upstart.ubuntu.jaunty</em></strong>)
is almost finished!</p>
Cinit 0.3pre19 releasedhttps://www.nico.schottelius.org//blog/cinit-0.3pre19-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Version <strong><em>0.3pre19</em></strong> of <a href="https://www.nico.schottelius.org//software/cinit/">cinit-0.3</a>
contains a lot of cleanups for the final 0.3 release.</p>
Cinit is alive - and being migratedhttps://www.nico.schottelius.org//blog/cinit-alive-and-being-migrated/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Cinit, the fast, small and simple init with support for profiles,
<a href="http://git.schottelius.org/?p=cLinux/cinit.git;a=shortlog;h=refs/heads/0.3pre15">is alive again</a>!</p>
<p>There is now a <a href="http://l.schottelius.org/mailman/listinfo/cinit">mailing list</a>
up again and the <a href="https://www.nico.schottelius.org//software/cinit/">new website</a> is being prepared.</p>
<p>Additionally, cinit is also mirrored at
<a href="http://github.com/telmich/cinit">github</a> now.</p>
<p>My plan is to release at least <strong>cinit 0.3pre15</strong> this year, but maybe also version
<strong><em>0.3</em></strong>, which should be a complete replacement for 0.2.</p>
Cinit migrated to www.nico.schottelius.orghttps://www.nico.schottelius.org//blog/cinit-migrated/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><a href="https://www.nico.schottelius.org//software/cinit/">Cinit's website</a> moved from</p>
<ul>
<li> http://unix.schottelius.org/cinit</li>
</ul>
<p>to</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//software/cinit/">http://www.nico.schottelius.org/software/cinit/</a></li>
</ul>
<p>Please update your links.</p>
Moving on: Migrating from #cLinux to #cstarhttps://www.nico.schottelius.org//blog/clinux-migrating-to-cstar/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>There used to be the IRC channel #cLinux, which I created for
development on cLinux, a "better Linux distribution", which
I'm not focusing on anymore.</p>
<p>cLinux became more of a "cunix" project or, well, is not even limited
to Unix. Thus the idea of <strong>cstar</strong> was born and made reality.</p>
<p>Thus, if you've just joined #cLinux, please move on to #cstar,
the home of c*.</p>
(Virtual) Network loop powered by Qemu, Bonding, Bridging and Aristahttps://www.nico.schottelius.org//blog/comic-qemu-tap-bridge-bond-lacp-arista-network-loop/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><a href="https://www.nico.schottelius.org//blog/comic-qemu-tap-bridge-bond-lacp-arista-network-loop/comic.png"><img src="https://www.nico.schottelius.org//blog/comic-qemu-tap-bridge-bond-lacp-arista-network-loop/comic.png" width="800" height="631" alt="Comic" title="A real loop created by a virtual switch" class="img" /></a></p>
comic.pnghttps://www.nico.schottelius.org//blog/comic-qemu-tap-bridge-bond-lacp-arista-network-loop/comic.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZBootstrapping configuration and installation servershttps://www.nico.schottelius.org//blog/configuration-installation-server-bootstrap/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>Imagine you can start from the beginning in a new environment: no
automatic frameworks for installing and configuring systems are
in place.</p>
<h2>Installation versus Configuration</h2>
<p>We need a system to install machines and one to configure machines.
Although there are systems out there which try to do both at the
same time, experience shows that having two specialised tools for this job
is the better choice: each tool can focus best on its own task.</p>
<p>Some tools even enforce a reinstallation if something needs to be
reconfigured, which is a good example of why having two tools is the
better choice.</p>
<p>Separating the installation from the configuration also allows you
to have the newly setup system minimally configured and to adjust it
on demand, whereas having both in the same step would require
defining the use at installation time.</p>
<h2>Where to start?</h2>
<p>Now the question is: which tool do we install first, the automatic
installation or the configuration?
Judging from how a system is built, the installation
method should be preferred, because only an installed system can
be configured. But as you will see shortly, this is a bad choice,
as it results in the following steps:</p>
<ul>
<li>Manual installation of a system</li>
<li>Manual installation of installation tool</li>
<li>Manual configuration of installation tool</li>
<li>Automatic installation of a new system for the configuration tool (optional)</li>
<li>Manual installation of configuration tool</li>
<li>Manual configuration of the configuration tool</li>
<li>Automatic configuration of installation and configuration tool possible</li>
</ul>
<p>If we change the order, we'll benefit from an automatic infrastructure
earlier:</p>
<ul>
<li>Manual installation of a system</li>
<li>Manual installation of a configuration tool</li>
<li>Manual configuration of the configuration tool</li>
<li>Automatic installation of an installation tool</li>
<li>Automatic configuration of an installation tool</li>
<li>Automatic installation of a new system for the configuration or installation tool (optional)</li>
</ul>
<h2>Ok, but what to do with this information?</h2>
<p>As you might have followed my recent activities,
I've written a new
<a href="https://www.nico.schottelius.org//software/cdist/">configuration management tool named cdist</a>,
started a project for
<a href="https://www.nico.schottelius.org//software/cuni/">unix installers named cuni</a> and am trying to
bring together the <a href="https://www.nico.schottelius.org//net/u2u/">unix community in the u2u project</a>.</p>
<p>The idea described above is the result of an old discussion, but
not having an installation framework at home is a current problem.</p>
<h2>The plan</h2>
<p>As <strong>cdist</strong> is ready to be used in production mode, I plan to write
some <strong>cdist</strong> types to setup installation servers soon.</p>
<p>Watch this blog for updates, if you want to install an installation
server via configuration management or begin to bootstrap a UNIX
infrastructure.</p>
How to control (shutdown) Virtual machines from Qemu/KVM via commandlinehttps://www.nico.schottelius.org//blog/control-and-shutdown-qemu-kvm-vm-via-unix-socket/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>When you have read
<a href="https://www.nico.schottelius.org//blog/tunneling-qemu-kvm-unix-socket-via-ssh/">the article about tunneling a vnc socket over ssh</a>,
you already know that I am a fan of using simple technologies like Unix
sockets and ssh to access virtual machine information.</p>
<p>I am currently in a project to create a new virtual machine infrastructure
based on Qemu/KVM. The problem we were facing was how to shut down a virtual
machine when the host is being shut down.</p>
<h2>Background</h2>
<p>Qemu/KVM has a so called
<a href="https://en.wikibooks.org/wiki/QEMU/Monitor">monitor</a> to control
the VM. Usually this monitor is reachable from the VNC socket
(or whatever you use to view the console) by pressing
Ctrl-Alt-2.</p>
<p>This is inappropriate for automatic shutdown (I'm not in the mood
to script vnc sessions currently), so there must be a better solution.</p>
<h2>The solution</h2>
<p>Qemu/KVM is able to redirect the monitor to a "character device".
There are ways to create a character device with Qemu/KVM and then
attach the monitor to it, but it is easier to connect
the monitor directly to a UNIX socket:</p>
<pre><code>qemu-kvm ... -monitor unix:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/monitor,server,nowait
</code></pre>
<p>This way we can connect to (and control!) Qemu/KVM using
<a href="http://www.dest-unreach.org/socat/">socat</a>:</p>
<pre><code>socat - UNIX-CONNECT:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/monitor
</code></pre>
<p>And when we can connect to it, we can also shut down a virtual machine:</p>
<pre><code>echo system_powerdown | socat - UNIX-CONNECT:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/monitor
</code></pre>
<p>Or we could reset it:</p>
<pre><code>echo system_reset | socat - UNIX-CONNECT:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/monitor
</code></pre>
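<p>The monitor one-liners above can be wrapped in a tiny helper; the following is only a sketch (the function names are made up, the base directory is the one used in this project):</p>

```shell
#!/bin/sh
# Sketch of a helper around the Qemu/KVM monitor socket.
VM_BASE=/opt/local.ch/sys/kvm/vm

# Build the path of a VM's monitor socket.
monitor_socket() {
    echo "$VM_BASE/$1/monitor"
}

# Send a single monitor command to a VM, e.g.:
#   vm_monitor kvmtest-vm-inx01.intra.local.ch system_powerdown
#   vm_monitor kvmtest-vm-inx01.intra.local.ch system_reset
vm_monitor() {
    vm="$1"; shift
    echo "$*" | socat - "UNIX-CONNECT:$(monitor_socket "$vm")"
}
```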
<h2>The full implementation</h2>
<p>The full command line for running a VM as we do it in this project looks like this:</p>
<pre><code>[root@kvm-hw-inx01 kvmtest-vm-inx01.intra.local.ch]# cat start
#!/bin/sh
/usr/libexec/qemu-kvm -m 65536 \
-hda /opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/system-disk \
-vnc unix:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/vnc \
-boot order=nc \
-net nic,macaddr=00:16:3e:00:00:2d,vlan=200 \
-net tap,script=/opt/local.ch/sys/kvm/bin/ifup-pz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=200 \
-net nic,macaddr=00:16:3e:00:00:2e,vlan=300 \
-net tap,script=/opt/local.ch/sys/kvm/bin/ifup-fz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=300 \
-smp 16 \
-name kvmtest-vm-inx01.intra.local.ch \
-enable-kvm \
-monitor unix:/opt/local.ch/sys/kvm/vm/kvmtest-vm-inx01.intra.local.ch/monitor,server,nowait
</code></pre>
<p>As the VMs are currently not performing as well as they should, we will
investigate performance tuning of Qemu/KVM. So stay tuned if
you are interested in this topic.</p>
Created photo website: photo.nico.schottelius.orghttps://www.nico.schottelius.org//blog/created-photo-website/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I created a new photo website,
<a href="http://photo.nico.schottelius.org">photo.nico.schottelius.org</a>,
which contains latest photos taken. The old galleries may get
imported later.</p>
<p><a href="https://www.nico.schottelius.org//about/">Let me know</a>, if you find an interesting photo on it!</p>
Ctt 0.4 releasedhttps://www.nico.schottelius.org//blog/ctt-0.4-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This is the first public release of <strong>ctt</strong>. Ctt is a time tracker for
geeks.</p>
<p>For more information visit the <a href="https://www.nico.schottelius.org//software/ctt/">ctt homepage</a>.</p>
ctt 0.7 released: new format optionhttps://www.nico.schottelius.org//blog/ctt-0.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p><a href="https://www.nico.schottelius.org//software/ctt/">ctt</a> is a time tracking tool for geeks.
It supports project based reporting, custom report formats and
stores its data in a <a href="https://www.nico.schottelius.org//docs/cconfig/">cconfig</a> database.</p>
<h2>Changes</h2>
<pre><code>* Added -f / --format support for reporting
</code></pre>
ctt 0.8 releasedhttps://www.nico.schottelius.org//blog/ctt-0.8-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p><a href="https://www.nico.schottelius.org//software/ctt/">ctt</a> is a time tracking tool for geeks.
It supports project based reporting, custom report formats and
stores its data in a <a href="https://www.nico.schottelius.org//docs/cconfig/">cconfig</a> database.</p>
<h2>Changes</h2>
<pre><code>* Added -e / --regexp support for filtering entries on report
* Added -i / --ignore-case support
* Renamed -s/-e to --sd / --ed (to allow -e for expression)
</code></pre>
ctt 0.9 releasedhttps://www.nico.schottelius.org//blog/ctt-0.9-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p><a href="https://www.nico.schottelius.org//software/ctt/">ctt</a> is a time tracking tool for geeks.
It supports project based reporting, custom report formats and
stores its data in a <a href="https://www.nico.schottelius.org//docs/cconfig/">cconfig</a> database.</p>
<h2>Changes</h2>
<pre><code>* Renamed -s/-e to --sd / --ed for tracking as well
* Added documentation (manpage)
</code></pre>
Ubuntu and Debian skip fsck on battery - a bughttps://www.nico.schottelius.org//blog/debian-ubuntu-fsck-skip-on-battery-bug/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Every time I go through the init scripts or init configurations
of the "traditional" (as in having a sequential order and/or
using shell scripts) init systems, there are interesting things to find:</p>
<p>Not only could probably half of the boot time be saved if people
moved the "if system has x, do y" logic out of the init system and made
it part of the configuration process.
There are also a lot of other interesting quirks within the init configuration,
like this one:</p>
<pre><code> on_ac_power >/dev/null 2>&1
if [ "$?" -eq 1 ]
then
log_warning_msg "On battery power, so skipping file system check."
rootcheck=no
fi
</code></pre>
<p>That part is shamelessly stolen from <strong><em>S20checkroot.sh</em></strong> of
Ubuntu Jaunty (9.04), but it can also be found in Debian.
So if the system is running on battery, this configuration will skip
the filesystem check.</p>
<p>Now imagine your filesystem <strong>needs</strong> to be checked, because it is the
user space program <strong><em>fsck</em></strong> that replays the journal and marks the
filesystem as clean. Furthermore, assume that without this filesystem
check the filesystem cannot be mounted read-write.</p>
<p>That is exactly what is done by <strong><em>fsck.jfs</em></strong>, the filesystem check
for JFS filesystems.</p>
<p>So if you are running Debian or Ubuntu with JFS on the root filesystem,
your system crashed the last time and you boot up on battery, the boot
process will fail, because the root filesystem cannot be mounted
read-write.</p>
<p>Perhaps the author's idea behind these lines was to skip senseless
filesystem checks for ext{2,3,4}, which occur after a certain
amount of time. But as can be seen, this "fix" introduced
a new bug.</p>
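<p>The problematic logic can be captured in a tiny guard function. The sketch
below is my own illustration, not Debian's actual code: it only skips the
check when the root filesystem does not need fsck to become mountable, so
<strong><em>fsck.jfs</em></strong> still gets to replay the journal:</p>
<pre><code># Hypothetical guard, not Debian's code: skip the check only when on
# battery AND the root filesystem does not need fsck to replay a journal.
skip_fsck() {
    on_battery=$1 fstype=$2
    [ "$on_battery" = yes ] && [ "$fstype" != jfs ]
}

skip_fsck yes ext3 && echo "skip" || echo "check"   # prints: skip
skip_fsck yes jfs  && echo "skip" || echo "check"   # prints: check
</code></pre>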
<p>Dear Debian and Ubuntu developers:</p>
<pre><code>Please do not skip the filesystem check.
All the users with JFS on / of their notebook will thank you!
</code></pre>
<p>I would have fixed that in Debian myself, if there were a
"get a fix integrated in 5 minutes" approach.
But after reading Debian documentation for some hours I did not find one.</p>
<p>If you know of such a way, <a href="https://www.nico.schottelius.org//about/">let me know</a> and I'll write an article
"How to fix a Debian package in 5 minutes" plus provide a patch
to solve this init misconfiguration issue.</p>
Debian with LDAP forgets about its usershttps://www.nico.schottelius.org//blog/debian-with-ldap-forgets-users/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Sometimes, when I try to log in to a node as root or as an LDAP user, I get this error:</p>
<pre><code>user@myhost.example.org: ssh_exchange_identification: Connection closed by remote host
</code></pre>
<p>When I dig into <strong>/var/log/syslog</strong> and <strong>/var/log/auth.log</strong> I see
that the users are not known to the system <strong><em>anymore</em></strong>:</p>
<pre><code>Oct 26 06:17:01 myhost CRON[24310]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 26 06:17:01 myhost CRON[24310]: pam_unix(cron:session): session closed for user root
Oct 26 06:25:01 myhost CRON[24349]: pam_unix(cron:account): could not identify user (from getpwnam(root))
Oct 26 06:25:01 myhost CRON[24350]: pam_unix(cron:account): could not identify user (from getpwnam(root))
[...]
Oct 26 08:55:41 myhost sshd[25062]: fatal: Privilege separation user sshd does not exist
Oct 26 09:24:30 myhost sshd[25196]: fatal: Privilege separation user sshd does not exist
Oct 26 09:25:01 myhost CRON[25203]: pam_unix(cron:account): could not identify user (from getpwnam(root))
Oct 26 09:27:45 myhost login[4935]: pam_unix(login:auth): check pass; user unknown
</code></pre>
<p>Now comes the interesting part: If I log in <strong>locally</strong> as root, I still
cannot log in. But if I try it as an <strong>LDAP user</strong>, I can log in and <strong>after</strong>
that I can also log in locally as root and remotely as everybody again!
Those are the logs I see when logging in locally as user <strong><em>nicosc</em></strong>:</p>
<pre><code>Oct 26 09:27:45 myhost login[4935]: pam_unix(login:auth): check pass; user unknown
Oct 26 09:27:45 myhost login[4935]: pam_unix(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=tty1 ruser= rhost=
Oct 26 09:27:48 myhost login[4935]: FAILED LOGIN (1) on 'tty1' FOR `UNKNOWN', Authentication failure
Oct 26 09:27:50 myhost login[4935]: nss_ldap: could not connect to any LDAP server as cn=inf_proxy,ou=admins,ou=inf,ou=auth,o=ethz,c=ch - Can't contact LDAP server
Oct 26 09:27:50 myhost login[4935]: nss_ldap: failed to bind to LDAP server ldaps://ldaps01.ethz.ch: Can't contact LDAP server
Oct 26 09:27:50 myhost login[4935]: nss_ldap: reconnected to LDAP server ldaps://ldaps02.ethz.ch
Oct 26 09:27:53 myhost login[4935]: pam_env(login:session): Unable to open env file: /etc/default/locale: No such file or directory
Oct 26 09:27:53 myhost login[4935]: pam_unix(login:session): session opened for user nicosc by LOGIN(uid=0)
Oct 26 09:27:53 myhost -bash: nss_ldap: could not connect to any LDAP server as cn=inf_proxy,ou=admins,ou=inf,ou=auth,o=ethz,c=ch - Can't contact LDAP server
Oct 26 09:27:53 myhost -bash: nss_ldap: failed to bind to LDAP server ldaps://ldaps01.ethz.ch: Can't contact LDAP server
Oct 26 09:27:53 myhost -bash: nss_ldap: reconnected to LDAP server ldaps://ldaps02.ethz.ch
Oct 26 09:27:55 myhost login[4935]: pam_unix(login:session): session closed for user nicosc
Oct 26 09:28:02 myhost postfix/pickup[25235]: nss_ldap: could not connect to any LDAP server as cn=inf_proxy,ou=admins,ou=inf,ou=auth,o=ethz,c=ch - Can't contact LDAP server
Oct 26 09:28:02 myhost postfix/pickup[25235]: nss_ldap: failed to bind to LDAP server ldaps://ldaps01.ethz.ch: Can't contact LDAP server
Oct 26 09:28:03 myhost postfix/pickup[25235]: nss_ldap: reconnected to LDAP server ldaps://ldaps02.ethz.ch
Oct 26 09:28:03 myhost sshd[25236]: nss_ldap: could not connect to any LDAP server as cn=inf_proxy,ou=admins,ou=inf,ou=auth,o=ethz,c=ch - Can't contact LDAP server
Oct 26 09:28:03 myhost sshd[25236]: nss_ldap: failed to bind to LDAP server ldaps://ldaps01.ethz.ch: Can't contact LDAP server
Oct 26 09:28:03 myhost sshd[25236]: nss_ldap: reconnected to LDAP server ldaps://ldaps02.ethz.ch
Oct 26 09:28:03 myhost sshd[25236]: Accepted publickey for root from 129.132.130.3 port 52738 ssh2
Oct 26 09:28:03 myhost sshd[25236]: pam_env(sshd:setcred): Unable to open env file: /etc/default/locale: No such file or directory
Oct 26 09:28:03 myhost sshd[25236]: pam_unix(sshd:session): session opened for user root by (uid=0)
</code></pre>
<p>I see this happening on Debian Lenny with</p>
<ul>
<li>libnss-ldap-261-2.1</li>
<li>libnss3-1d-3.12.3.1-0lenny1</li>
<li>libpam0g-1.0.1-5+lenny1</li>
<li>openssh-server-1:5.1p1-5</li>
</ul>
<p>I posted this problem with details</p>
<ul>
<li>as a <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=541188">bug to libpam (#541188)</a>,</li>
<li>to the Debian user mailinglist (<a href="http://www.mail-archive.com/debian-user@lists.debian.org/msg555072.html">Subject: ldap/libnss/ssh: (remote) login stops working after some time</a>)</li>
<li>as a <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=552431">bug to libnss-ldap (#552431)</a></li>
</ul>
<p>but currently without any helpful hint.</p>
<p>If you have any hint on what could be wrong (e.g. configuration / libs / etc.)
or if you are aware of the reason for this behaviour (perfect), please
<a href="https://www.nico.schottelius.org//about/">let me know</a>.</p>
debugloghttps://www.nico.schottelius.org//blog/puppet-empties-new-and-existing-files/debuglog2016-02-25T13:34:32Z2015-02-03T14:47:26ZLinux on the Dell R815https://www.nico.schottelius.org//blog/dell-r815-hands-on-with-linux/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>In my office resides a
<a href="http://www.dell.com/downloads/global/products/pedge/en/poweredge-r815-spec-sheet-en.pdf">Dell R815 testmachine</a>, which would like to get infected with Linux and get tested.</p>
<h2>Dell R815?</h2>
<p>That's what you get with that "little box":</p>
<ul>
<li>2U 19" rack chassis</li>
<li>4x 12 Cores (AMD Opteron(tm) Processor 6174) ("Magny Cours")</li>
<li>64 GiB memory (DDR3, 1333 Mhz)</li>
<li>5x 136 GiB HD</li>
<li>iDrac6</li>
</ul>
<p>For detailed information, have a look at
<a href="https://www.nico.schottelius.org//docs/sys-specs/get-sysinfo.sh.dell-r815.log">the get-sysinfo output</a>.</p>
<h2>Remote access / iDrac6</h2>
<ul>
<li>Works with conkeror + javaws from Sun Java 6 (archlinux)</li>
<li>Works with Mozilla 3.5.9 + Sun Java 6 (windows)</li>
<li>Even a German menu (Java-interpreted locales?)</li>
<li>Keyboard does not work sometimes; selecting something in the menu sometimes fixes it</li>
<li>Video refresh is very slow (only parts of <strong>ps aux</strong> can be seen)</li>
<li>Has shortcuts in the menu for <strong>alt-fx</strong> and <strong>ctrl-alt-fx</strong></li>
<li>Somebody at Dell or at Avocent (as seen in the About iDRAC 6 dialog) seems to
have noticed that Linux is available</li>
<li>In general, the KVM website is slow (about 3 seconds to load a subpage)</li>
<li>Boot device can be forced (without going to the BIOS)</li>
<li>Has syslog support
<ul>
<li>No logs when enabling</li>
<li>Tried to trigger via cold boot of the system (no success)</li>
<li>Seems to work only with the IP address</li>
</ul>
</li>
<li>IP address is submitted as the hostname (instead of sgs-r815-ra01)</li>
<li>Need to select "Use DHCP to obtain DNS server addresses" manually</li>
</ul>
<h2>Debian Lenny</h2>
<p>Debian Lenny sees only 32 cores, as its kernel is compiled with this
limit. And as Lenny's Linux kernel (2.6.26) does not support the RAID
controller found in the system, it is currently not possible to install it.</p>
<h2>Ubuntu 10.04</h2>
<p>Ubuntu 10.04 installs fine on the machine. As another friendly sysadmin
has already prepared an automated network installation of Ubuntu
(thanks Steven!), I could easily get the machine up and running.</p>
<h2>Future</h2>
<p>The machine is now up and running with Ubuntu 10.04 and configured with
puppet. It's now open for the
<a href="http://www.systems.ethz.ch/">Systems Group people</a> for use. Have fun!</p>
dot-confighttps://www.nico.schottelius.org//blog/macbook-air-42-touchpad-keyboard-correct-screen-resolution/dot-config2016-02-25T13:34:32Z2015-02-03T14:47:26ZAdded new short commands to .gitconfig (lo, lco, lpc, lpco, m, pl)https://www.nico.schottelius.org//blog/dot-gitconfig-with-git-lo-lco-lpo-lpco-m-pl/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As some of you are using my
<a href="https://www.nico.schottelius.org//configs/dot-gitconfig">git configuration</a>, there's an updated
version now that includes a short version of the <strong>--pretty=oneline</strong>
option for <strong>git log</strong>, which is very useful if you want to get
a quick impression of what is going on. Thus the new short commands
are (just an o added at the end):</p>
<pre><code>git lo
git lco
git lpo
git lpco
</code></pre>
<p>Furthermore I added the following new short commands:</p>
<pre><code>git m (merge)
git pl (pull)
</code></pre>
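<p>For reference, the aliases could be defined like this in <strong>.gitconfig</strong>;
the exact definitions live in the linked configuration file, so treat the
following as a plausible reconstruction rather than the real content:</p>
<pre><code>[alias]
    lo  = log --pretty=oneline
    lpo = log -p --pretty=oneline
    m   = merge
    pl  = pull
</code></pre>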
<p>Have fun, other lazy people like me!</p>
Find e-mail addresses of people in git log outputhttps://www.nico.schottelius.org//blog/find-emails-in-git-log-for-notification/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Motivation</h2>
<p>Some days ago I replaced <strong>cronwrapper</strong>, a script to monitor the output of
cron scripts, with its replacement <strong>cwrap</strong> in local.ch's puppet configuration.</p>
<p>If the script prints on stdout, <strong>cwrap</strong> does not raise an error by default,
which <strong>cronwrapper</strong> did.</p>
<p>To notify every user of the change, I want to send an email to every
ex-<strong>cronwrapper</strong> user.</p>
<h2>Solution</h2>
<p>The configuration is stored in a subversion repo, which I locally sync using
<strong>git svn</strong>. Thus I can use <strong>git log -p</strong> to see all changes.</p>
<p>A typical line of interest looks like this:</p>
<pre><code>- command => '/usr/local/bin/cronwrapper.sh EMAIL@EXAMPLE.COM "[mob][low][dev03-sth][front] description" /usr/bin/php /some/script',
</code></pre>
<p>Thanks to git, grep, sed, awk, there is a pretty simple solution
(not the most beautiful) to this problem. First of all, get all patches:</p>
<pre><code>git log -p
</code></pre>
<p>Then find all removal entries of cronwrapper:</p>
<pre><code>grep ^- | grep cronwrapper
</code></pre>
<p>But only those containing an e-mail address:</p>
<pre><code>grep '@'
</code></pre>
<p>And filter out the e-mail address:</p>
<pre><code>sed 's/.* \(.*@.*\)/\1/' | awk '{ print $1 }'
</code></pre>
<p>Replace all quotes and backslash quotes:</p>
<pre><code>sed -e 's/\\"//g' -e 's/"//g' -e "s/'//g"
</code></pre>
<p>The problem now is that some e-mail addresses are indeed multiple e-mail addresses
(abc@example.com;def@example.com) and some e-mail addresses are lower, some upper case.</p>
<p>Breaking up the concatenated addresses can be done using awk easily:</p>
<pre><code>awk '{ gsub(";", "\n"); print $0 }'
</code></pre>
<p>Transforming all addresses to lower case can be done using the fine utility <strong>tr</strong>:</p>
<pre><code>tr '[A-Z]' '[a-z]'
</code></pre>
<p>Filter out all duplicates:</p>
<pre><code>sort | uniq
</code></pre>
<p>The result is a list of e-mail addresses. Making them usable for copy & paste
into the Exchange webmail needs another filter to convert <strong>\n</strong> to <strong>;</strong>, but
add one <strong>\n</strong> at the end:</p>
<pre><code>awk 'ORS=";" { print $0 } END { ORS="\n"; print "" }'
</code></pre>
<p>So in the end, the complete chain looks like this:</p>
<pre><code>git log -p | grep ^- | grep cronwrapper | \
grep '@' | sed 's/.* \(.*@.*\)/\1/' | awk '{ print $1 }' | \
sed -e 's/\\"//g' -e 's/"//g' -e "s/'//g" | \
tr '[A-Z]' '[a-z]' | \
awk '{ gsub(";", "\n"); print $0 }' | \
sort | uniq | \
awk 'ORS=";" { print $0 } END { ORS="\n"; print "" }'
</code></pre>
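<p>The normalisation steps at the end of the chain can be tried out on a tiny
made-up sample (the addresses below are invented for illustration):</p>
<pre><code>printf 'Abc@example.com;def@example.com\nDEF@example.com\n' \
  | awk '{ gsub(";", "\n"); print $0 }' \
  | tr '[A-Z]' '[a-z]' \
  | sort | uniq \
  | awk 'ORS=";" { print $0 } END { ORS="\n"; print "" }'
# prints: abc@example.com;def@example.com;
</code></pre>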
<p>For me, this is a nice demonstration of the power of shell, unix tools and filtering via pipes.</p>
How to scroll and paste with the middle mouse button in Firefoxhttps://www.nico.schottelius.org//blog/firefox-middlemouse-scrolling-paste/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you're like me and use firefox with a trackpoint (i.e. no scroll "buttons"),
you're probably also happy with these settings:</p>
<h2>middlemouse.paste=true</h2>
<p>This adds pasting from the x clipboard.</p>
<h2>middlemouse.contentLoadURL=false</h2>
<p>Do not try to load a website from the clipboard content.</p>
<h2>general.autoScroll=true</h2>
<p>Enable scrolling using the middle mouse button.</p>
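<p>The three settings can also be persisted in a <strong>user.js</strong> file in the
Firefox profile directory. The file location and syntax are standard Firefox;
whether these prefs still exist in your version is something to verify yourself:</p>
<pre><code>// user.js in the Firefox profile directory
user_pref("middlemouse.paste", true);           // paste from the X clipboard
user_pref("middlemouse.contentLoadURL", false); // don't load URLs from the clipboard
user_pref("general.autoScroll", true);          // middle-button scrolling
</code></pre>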
<p>Now firefox behaves similar to <a href="http://conkeror.org/">conkeror</a>,
which is my default browser currently.</p>
First release of fuihttps://www.nico.schottelius.org//blog/first-version-of-fui-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As announced some days ago, I started to hack on a
UI for <span class="createlink">ceofhack</span>.</p>
<p>The first version of <a href="https://www.nico.schottelius.org//software/fui/">fui</a> is now available,
which can only print one line of text you entered, but it
can do that with ncurses!</p>
fixed.pnghttps://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/fixed.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZHow to format and partition a SD-Card (USB-Stick) under Linux for the Canon CP800 printerhttps://www.nico.schottelius.org//blog/format-sd-card-usb-stick-under-linux-for-canon-cp800-printer/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>From time to time I encounter devices that still require some kind of
legacy partitioning scheme and filesystem: namely MBR-type partitioning and
the VFAT filesystem.</p>
<p>One of these devices is the Canon Selphy CP800 photo printer, which reads
photos from various kinds of storage media, like USB sticks, SD cards
or CF cards.</p>
<p>Most of my usb sticks are formatted using ext3, jfs,
btrfs, LUKS encrypted, or even contain RAID signatures.</p>
<p>In case I need to transfer data to the printer, I often use pre-formatted
SD cards, because the cards I simply format with <strong><em>mkfs.vfat</em></strong> are not
recognised.</p>
<h2>Motivation</h2>
<p>Not depending on those pre-formatted cards and being able to re-create the
correct format anywhere and any time makes me more independent (and thus happier).</p>
<h2>Analysis</h2>
<p>Having a look at my new <strong>128GB SDXC card</strong> shows the following
partitioning scheme.</p>
<pre><code>[14:31] kr:/# fdisk -l /dev/sdb
Disk /dev/sdb: 132.0 GB, 132035641344 bytes
255 heads, 63 sectors/track, 16052 cylinders, total 257882112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1026047 512000 27 Hidden NTFS WinRE
/dev/sdb2 1026048 2050047 512000 83 Linux
/dev/sdb3 2050048 257882078 127916015+ 83 Linux
</code></pre>
<p>Although /dev/sdb1 is formatted with mkfs.vfat, it is not recognised
by the printer. Comparing this with a working 4GB card reveals the
following partitioning scheme:</p>
<pre><code>[14:31] kr:/# fdisk -l /dev/sdb
Disk /dev/sdb: 3999 MB, 3999268864 bytes
82 heads, 17 sectors/track, 5603 cylinders, total 7811072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 8192 7811071 3901440 b W95 FAT32
</code></pre>
<h2>Changing the first card</h2>
<p>Using the <strong>fdisk</strong> utility to change the partition ID to <strong><em>b</em></strong>
is sufficient to make the printer recognise the card:</p>
<pre><code>[14:31] kr:/# fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): b
Changed system type of partition 1 to b (W95 FAT32)
Command (m for help):
Command (m for help): p
Disk /dev/sdb: 132.0 GB, 132035641344 bytes
255 heads, 63 sectors/track, 16052 cylinders, total 257882112 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1026047 512000 b W95 FAT32
/dev/sdb2 1026048 2050047 512000 83 Linux
/dev/sdb3 2050048 257882078 127916015+ 83 Linux
Command (m for help):
Command (m for help): w
The partition table has been altered!
</code></pre>
<h2>Summary</h2>
<p>Thus, if you encounter a card that is not readable, the following two
commands should give you a working card on VFAT requiring devices:</p>
<ul>
<li>Change the partition ID to <strong><em>b</em></strong>
<ul>
<li><strong><em>fdisk</em></strong>, using the commands <strong><em>t1bwq</em></strong></li>
</ul>
</li>
<li>Ensure the changes are written to disk and the table is reread by the kernel
<ul>
<li>You can either remove/add the card or use <strong><em>hdparm -z</em></strong> to trigger this</li>
</ul>
</li>
<li>Create a VFAT filesystem
<ul>
<li><strong><em>mkfs.vfat</em></strong></li>
</ul>
</li>
</ul>
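<p>Put together as a non-interactive sketch: the fdisk keystroke sequence is
built first so it can be inspected, and the destructive commands are left
commented out (/dev/sdb is just the device from the example above; adapt it
to your card!):</p>
<pre><code># fdisk keystrokes: t = change type, 1 = partition number, b = W95 FAT32, w = write
fdisk_cmds='t
1
b
w'
printf '%s\n' "$fdisk_cmds"
# As root one would then run (changes the partition type, adapt the device!):
#   printf '%s\n' "$fdisk_cmds" | fdisk /dev/sdb
#   hdparm -z /dev/sdb    # make the kernel reread the partition table
#   mkfs.vfat /dev/sdb1   # create the VFAT filesystem
</code></pre>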
<p>Some other hints:</p>
<ul>
<li>Using a GPT partition table may make the card unusable on older devices</li>
<li>It is not sure whether the device will seek through all partitions; sticking to the
first partition may give you a higher chance of a working setup</li>
<li>Using <strong>no partition</strong> at all and putting the filesystem directly on the device
also used to work on another printer</li>
</ul>
Released fui-0.2https://www.nico.schottelius.org//blog/fui-0.2-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This version of <a href="https://www.nico.schottelius.org//software/fui/">fui</a> contains working Ruby
code that connects to a Unix socket (including a test case!).</p>
Released fui-0.3https://www.nico.schottelius.org//blog/fui-0.3-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This version of <a href="https://www.nico.schottelius.org//software/fui/">fui</a> cleanly integrates ncurses,
has a new screenshot and integrates basic unit testing.</p>
Generic automatic installation for different Linux distributionshttps://www.nico.schottelius.org//blog/generic-automatic-linux-installation-for-different-distributions/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction / Motivation</h2>
<p>If you've been active as a sysadmin for a little longer, you'll have faced
different Linux distributions and got used to their pros and cons.</p>
<p>One thing, though, that I cannot (and do not want to) get used to is that
every distribution has its own way of doing an automated installation.
May it be <a href="http://fedoraproject.org/wiki/Anaconda/Kickstart">kickstart</a>,
<a href="http://wiki.debian.org/DebianInstaller/Preseed">preseed</a> or
<a href="http://www.archlinux.org/packages/extra/any/aif/">aif</a> (or any other
I've not mentioned), all contain their own specific settings to do
the same thing again and again.</p>
<p>Having had a lot of discussions regarding this topic, there is one
approach that seems to beat them all; it can be automated easily,
extended for other distributions (or Unices) and probably
also developed easily:</p>
<h2>Unix = collection of files + metadata</h2>
<p>Unices and Linux distributions are just a number of files in some particular
filesystem plus some metadata like partitions (or slices), as
the people at <a href="http://openqrm.com/?q=node/2">OpenQRM</a>
have recognised as well:</p>
<pre><code>We asked ourself "what is linux ?", it is the kernel, an initrd,
some modules and a root-filesystem. Those are all "just" files
..... so we should treat them like files by putting and managing
them on modern storage-servers.
</code></pre>
<p>But instead of doing the whole infrastructure management stuff,
virtualisation (hear the cloud?) and monitoring environment, I'll
just focus on the part of the installation.</p>
<h2>Installation into/from a directory</h2>
<p>Assume the following: Every Linux distribution can somehow be put
into a directory. Either by tar'ing an existing installation or
by using specific tools like
<a href="http://wiki.debian.org/Debootstrap">debootstrap</a>,
<a href="http://people.redhat.com/~rjones/febootstrap/">febootstrap</a> or
<a href="https://wiki.archlinux.org/index.php/Archbootstrap">archbootstrap</a>.</p>
<p>This process is pretty easy
(especially if you compare it to all the options you get
with preseed/kickstart/etc.; this is just <strong>one</strong> command line!)
and can be used as a starting point for a generic installation.</p>
<h2>Add some metadata</h2>
<p>Now take this directory, and put it onto the hard disk. Installation
done. Well, almost:</p>
<ul>
<li>Todo before
<ul>
<li>Create partitions</li>
<li>Create filesystems</li>
</ul>
</li>
<li>Todo after
<ul>
<li>Adjust /etc/fstab</li>
<li>Add correct boot loader</li>
<li>Configure network + ssh</li>
</ul>
</li>
</ul>
<p>After you've created the partitions and the filesystem
(which is distribution independent!), you can copy over the files
to the target directory, in which the selected partitions are mounted.</p>
<p>For post processing, you need to adjust the fstab, so partitions
(and swap, do not forget computers with 4 or 8 MiB main memory!)
will be used at the right location. Adding a bootloader would be of
great help for the boot process (almost distribution independent),
though booting from the network
via PXE may even get around this.
Finally the network configuration (distribution specific!) must be
adjusted and ssh should be installed, so the system can be configured
remotely (for instance with <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>).</p>
<h2>Todo</h2>
<p>So the next steps are pretty straight forward:
Write something that automatically</p>
<ul>
<li>configures partitions</li>
<li>creates filesystems</li>
<li>mounts the filesystems</li>
<li>copies files from a known location</li>
<li>adjusts the operating system</li>
<li>umounts everything and</li>
<li>reboots</li>
</ul>
<p>Doesn't sound like very much and so
I'll probably give it a try in the next weeks, so stay tuned if you're
interested in this topic.</p>
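<p>As a hedged sketch, the steps above could look like this. All device names,
paths and the tarball are hypothetical placeholders, and the <strong>run</strong>
wrapper only echoes the commands, so nothing is touched:</p>
<pre><code>run() { echo "+ $*"; }   # dry-run wrapper; replace the echo with "$@" to execute

install_os() {
    dev=$1 tarball=$2 target=/mnt/target
    run parted -s "$dev" mklabel msdos mkpart primary ext4 1MiB 100%  # partitions
    run mkfs.ext4 "${dev}1"                                           # filesystems
    run mount "${dev}1" "$target"                                     # mount
    run tar -C "$target" -xpf "$tarball"                              # copy files
    run sed -i "s,ROOT,${dev}1," "$target/etc/fstab"                  # adjust fstab
    run chroot "$target" update-grub                                  # boot loader
    run umount "$target"                                              # umount
}

install_os /dev/sda /srv/images/debian.tar.gz
</code></pre>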
GPM 1.20.7 releasedhttps://www.nico.schottelius.org//blog/gpm-1.20.7-released/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After about 3 years, a new version of <a href="https://www.nico.schottelius.org//software/gpm/">gpm</a> is available.
Enjoy it, while it's fresh!</p>
Published gpm2https://www.nico.schottelius.org//blog/gpm2-published/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After a very long time I decided to give a rewrite of <a href="https://www.nico.schottelius.org//software/gpm/">gpm</a>
a try.</p>
<h2>Motivation</h2>
<p>The
<a href="http://lists.linux.it/pipermail/gpm/2010-May/thread.html">recent discussion on the mailing list</a>
reminded me of one thing I have known since I took over gpm maintenance:</p>
<ul>
<li>There are a lot of problems with gpm</li>
</ul>
<p>Don't get me wrong: gpm is a great piece of software, knows how to handle
a lot of mice and has many interesting programming techniques inside.</p>
<p>Extending or debugging gpm, however, is pretty hard, as the code is neither
easy to read nor to understand (though fun if you manage).</p>
<p>Some time ago I asked around in the world of BSDs, in the Linux kernel
and the xorg developers, what their preferred way would be to handle mice.</p>
<p>As there have been almost no responses, it seems everybody does his
own thing. I was also considering whether merging gpm into some
"general input library" makes sense, but at the end of the day:
I only care about mice.</p>
<h2>Current situation</h2>
<p>So yesterday evening I began to hack a new version of gpm, <a href="https://www.nico.schottelius.org//software/gpm2/">gpm2</a>,
which can show you some ps/2 mouse movements as of today.</p>
<p>The concepts of gpm2 are quite different to those of gpm, especially that
gpm2 itself <strong>cannot</strong></p>
<ul>
<li>decode mice protocols</li>
<li>draw a pointer</li>
</ul>
<p>Instead the idea of gpm2 is to have external programs do that and make
gpm2d just an interface to access the various mice.</p>
<p>The "special case" of drawing a pointer can be realised by writing a
gpm2 client that does only that particular job.</p>
<h2>The future</h2>
<p>I am not sure whether to invest a lot of work into gpm2 or not.
On the one hand, I would be pretty happy to have a clean, portable
mouse handling daemon; on the other hand, I am not sure whether there
is really a need for it. That said, it depends a lot on the feedback
I get from others.</p>
<p>In case you have an opinion regarding gpm2, think there's a need for it,
or totally disagree with me (or anything in between),
I would like you to drop a mail to the <a href="https://www.nico.schottelius.org//software/gpm/">gpm mailinglist</a>
and let me know what you think about gpm2.</p>
<p>And if you like them very much, you're invited to port or rewrite
mouse drivers for <a href="https://www.nico.schottelius.org//software/gpm2/">gpm2</a>. ;-)</p>
Great Rails Hosting: A symlink for an apphttps://www.nico.schottelius.org//blog/great-rails-hosting-a-symlink-for-an-app/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>As <a href="http://www.ungleich.ch">ungleich</a> focusses on educated customers,
we meet pretty cool infrastructures from time to time.
In some sense I count <a href="http://www.local.ch">local.ch</a>
as a customer: they supported me with one day off
per week so I was able to found <a href="http://www.ungleich.ch">ungleich</a>
and acquire first customers.
This article is dedicated to <a href="http://www.local.ch">local.ch</a> and describes
a very elegant solution for Ruby on Rails hosting.</p>
<h2>Overview</h2>
<p>The setup consists of the following services, glued
together in an elegant way:</p>
<ul>
<li><a href="http://nginx.org/">nginx</a></li>
<li><a href="http://unicorn.bogomips.org/">unicorn</a></li>
<li><a href="https://www.isc.org/downloads/bind/">ISC Bind</a></li>
<li><a href="https://github.com/capistrano/capistrano/wiki">Capistrano</a></li>
<li>Symlinks</li>
</ul>
<h2>Nginx</h2>
<p>The great trick of the setup is that nginx is used to forward requests
to a unix socket that depends on the <strong>hostname</strong>, which is
exposed as <strong>$host</strong> by nginx. The following
configuration snippet contains the important parts:</p>
<pre><code>server {
listen 80;
location @error_page {
root /var/nginx/$host/current/public;
internal;
[...]
location ~ "^/assets/.*-[a-z0-9]{32}.\w+" {
root /var/nginx/$host/current/public;
[...]
location ~ ^/assets/ {
root /var/nginx/$host/current/public;
[...]
root /var/nginx/$host/current/public;
location @unicorn {
proxy_pass http://unix:/var/nginx/$host/unicorn.sock;
# Forward original host name to be seen in unicorn
proxy_set_header Host $host;
# Server name and address like being available in PHP
proxy_set_header SERVER_NAME $server_name;
proxy_set_header SERVER_ADDR $server_addr;
# The real client IP address - header has been set up by Zeus
proxy_set_header X-Real-IP $http_x_cluster_client_ip;
# Needed second header for rails - See SYS-1587
proxy_set_header X_FORWARDED_FOR $http_x_cluster_client_ip;
</code></pre>
<p>As you can see, all paths depend on the actual hostname
as set up by nginx.</p>
<h2>Application Deployment</h2>
<p>Applications are deployed under their project name below
<strong>/var/nginx</strong>
(like <strong>ws-locomotive.dev-deploy</strong> or <strong>ws-locomotive.master</strong>).
As you can see from the naming, developers can deploy one application
from different branches easily (the dev-deploy and master branches in this
case).
Developers can use <a href="https://github.com/capistrano/capistrano/wiki">Capistrano</a> to deploy their applications
and don't need to interact (reload/restart) with nginx, as it is
already configured to accept any hostname.</p>
<h2>Name Server Configuration</h2>
<p>As you can imagine, it would be quite cumbersome for developers to
reach a host named <strong>ws-locomotive.dev-deploy</strong>.
That is why a wildcard domain is configured to point
to the host running nginx:</p>
<pre><code>*.play.intra.local.ch. CNAME rails-dev-vm-snr01.intra.local.ch.
</code></pre>
<h2>Give the application a name</h2>
<p>A new hostname can be assigned to an application simply by symlinking
it to the application:</p>
<pre><code>% cd /var/nginx
% ln -s ws-locomotive.dev-deploy my-fancy-name.play.intra.local.ch
</code></pre>
<p>This way, developers can use <strong>any name</strong> below
play.intra.local.ch for their application. Some applications
actually behave differently depending on the name they are accessed
with:</p>
<pre><code>info.ws-locomotive.master.play.intra.local.ch -> ws-locomotive.master
hp.ws-locomotive.master.play.intra.local.ch -> ws-locomotive.master
</code></pre>
<h2>Conclusions</h2>
<p>The setup is pretty elegant, because it allows developers to
create new development environments without needing any
sysadmin to configure nginx, bind or anything else.
There is a security drawback though:
An attacker could try to use hostnames like
<strong>../../../../etc/</strong> and request the file <strong>passwd</strong>.
That is the reason why this service is not exposed
to the outside world directly, but all external requests
are filtered (whitelisting) by a load balancer in front
of the rails hosts.</p>
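<p>One way to mitigate the hostname trick at the nginx level is to accept only
host names matching the wildcard domain. The snippet below is my own sketch;
the regex and the <strong>$app</strong> variable name are assumptions, not part of
the described setup:</p>
<pre><code>server {
    listen 80;
    # only dotted lowercase labels are captured, so "../../" never reaches root
    server_name ~^(?'app'[a-z0-9-]+(\.[a-z0-9-]+)*)\.play\.intra\.local\.ch$;
    root /var/nginx/$app/current/public;
    [...]
}
</code></pre>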
A guide for IT bosseshttps://www.nico.schottelius.org//blog/guide-for-it-bosses/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>You are an IT boss. Your job is to manage the
<a href="https://en.wikipedia.org/wiki/The_IT_Crowd">IT crowd</a>.
Depending on your skills and knowledge you may find this
job easier or harder.</p>
<p>This guide is created by those who you try to manage: The IT crowd.</p>
<h2>Background</h2>
<p>I am a System Engineer currently working 80%
for <a href="http://www.local.ch">local.ch (Swiss Phonebook)</a> and 20% for
<a href="http://www.ungleich.ch">ungleich (Unix/Linux infrastructure company)</a>.</p>
<p>On a daily basis I see how employees and bosses act, and I spend time
analysing the behaviour of both parties (for fun - not profit).
As I often see the common mistakes and behaviour patterns that make
a good or a bad boss, the idea was born to create a guide for IT bosses.</p>
<h2>Guidelines</h2>
<h3>Be honest</h3>
<p>Not a requirement specific to IT, but if you want your
employees to respect you, you definitely need to be honest.</p>
<p>Don't even think about playing tricks on them; they will find out, and
everybody will lose their respect for you. Guaranteed.</p>
<h3>Be available</h3>
<p>Your job involves a lot of meetings and coordination.
Your employees understand that and may even be very thankful you took that job.
Still, as you are the boss, communicate clearly when you are available, so
people can bring their questions and problems to you.</p>
<p>If you see there is too little time to be available for your team, it is probably
a good time to split up the team or to move on to another position
and promote somebody else to be the head of the IT crowd.</p>
<h3>Give freedom</h3>
<p>More important than in probably most other areas is the amount of freedom you
give: IT professionals are usually bright people who understand their job very
well. They learn on the job (which includes getting side-tracked from time to time),
they are keen to touch the latest and newest technologies and they are highly motivated.</p>
<p>Adding artificial constraints to the way they work makes them less productive, less motivated and,
in the worst case, makes them leave your workplace.</p>
<p>Pay even more attention to this topic if you have a technical background.
You may know (or think you know!) what the best solution or technical choice is,
but you hired those people to do a good job, not just to execute your thoughts, didn't you?</p>
<h3>Don't assume</h3>
<p>Don't try to enhance the working situation
of your employees with stuff you assume could be good for them.</p>
<p>You will most likely be wrong.</p>
<p>Instead, listen to your employees or ask them about your idea.
Spending an hour or a day discussing it beforehand is probably better than having
to throw away your shiny new invention afterwards.</p>
<h3>Give tools</h3>
<p>Have you ever seen a good craftsman working with broken tools? Probably not. Take the same
approach for your IT professionals: If they request specific tools
(software, notebook, mobile phone, screen, etc.), they probably have a good reason for it.</p>
<p>Don't hesitate to question the request ("Why do you need this / how does it make you more
efficient?"), but also don't hesitate to let them buy the right tools afterwards.</p>
<p>Refusing to provide good tools makes your employees less motivated, less productive and
indicates that you don't value their work.</p>
<p>Regarding value: did you consider that a 3000 USD notebook,
even if it is not better than the employee's current computer,
is worth the motivation you gain from it?</p>
<h3>Plan, assist and communicate objectives</h3>
<p>Your key competence as an IT boss is probably planning and communication.
Use this power to <strong>assist (!)</strong> your IT crowd: Aid them in planning their work,
show them how to plan and communicate what you expect from them.</p>
<p>Don't try to squeeze them into a specific way of working. Better: let your employees know
what the objectives are (expected results, date of delivery).
They will probably
figure out how to reach them better than you would. Always remember:
IT guys are different;
some of them love to work in the night,
some of them cannot concentrate in open-plan offices
and some of them want to work under high pressure (do all the work in one night).</p>
<h3>Consider the difference</h3>
<p>Compared to many other professions, IT people are behaving a <strong>bit</strong> differently
(that's why sysadmins have their own
<a href="http://www.amazon.com/Management-System-Administrators-Thomas-Limoncelli/dp/0596007833/">time management</a>
book, for example). This may require special treatment from your side: for instance,
the usual motivation factors may not work as expected. If you listen carefully, you
may hear "weird" requests like "I'd like to start working at 14:00 and work into the night".
If possible, try to honour these requests: they don't cost a lot of resources, but
they require an open-minded leader.</p>
<h3>Set values in relation</h3>
<p>IT projects are quite often expensive, and there are various reasons for that.
One of them is that a future-oriented market requires using the latest
high-technology equipment. IT guys are used to carrying around computers worth
as much as a car or even a house, so they are well aware of the money being
spent on IT equipment.</p>
<p>As stated before, your IT guys may have special requirements, not only in terms
of working time; their choice of tools may also be non-standard.</p>
<p>Instead of refusing to buy simple tools for your IT guys, include those costs in
the project budget. Also consider reading
<a href="https://en.wikipedia.org/wiki/Parkinson%27s_Law_of_Triviality">Parkinson's law of triviality</a>.</p>
<h2>See also</h2>
<p>The following resources may be of interest to you as well:</p>
<ul>
<li><a href="http://www.seebs.net/faqs/hacker.html">The Hacker FAQ</a></li>
<li><a href="http://www.amazon.com/Managing-Humans-Humorous-Software-Engineering/dp/1430243147/ref=sr_1_1?ie=UTF8&amp;qid=1366379157&amp;sr=8-1">Managing Humans: Biting and Humorous Tales of a Software Engineering Manager</a></li>
<li><a href="https://www.kennethnorton.com/essays/how-to-work-with-software-engineers.html">How to Work with Software Engineers</a></li>
</ul>
<h2>More to come</h2>
<p>This article is work in progress and is being enhanced by input
from other IT professionals (thanks for all the great comments!).</p>
<p>If you want to contribute,
you can add a comment on <a href="https://news.ycombinator.com/item?id=5575419">Hacker News</a>,<br />
<a href="http://www.reddit.com/r/sysadmin/comments/1co3y5/a_guide_for_it_bosses/">reddit</a>
or <a href="https://www.nico.schottelius.org//about/">contact me directly</a>.</p>
Anacron and cronie: How cron.hourly, cron.daily and cron.weekly workhttps://www.nico.schottelius.org//blog/how-cronie-anacron-cron-hourly-daily-weekly-work/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Situation</h2>
<p>We noticed that the <a href="http://nginx.org/">nginx</a> logfile was not being rotated
on some freshly set up CentOS 6.2 servers, even though logrotate should have been
triggered by cron. The cron implementation in use is cronie together
with anacron.</p>
<h2>Background</h2>
<p>I first suspected the setup was broken due to permission issues
with cronie, which requires special permissions, as noted in the
<strong>CAVEATS</strong> section of cron(8):</p>
<pre><code>The crontab files have to be regular files or symlinks to regular files,
they must not be executable or writable by anyone else than the owner.
This requirement can be overridden by using the -p option on the crond command line.
</code></pre>
<p>We had this bug before, but this time it is different:</p>
<ul>
<li>The logrotate cron job is located at <strong>/etc/cron.daily/logrotate</strong></li>
<li>The cron.{daily, weekly, monthly} jobs are defined in <strong>/etc/anacrontab</strong></li>
<li>The <strong>anacron</strong> command interprets jobs in <strong>/etc/anacrontab</strong></li>
<li>Anacron is called from <strong>/etc/cron.hourly/0anacron</strong></li>
<li><strong>/etc/cron.d/0hourly</strong> contains <strong>01 * * * * root run-parts /etc/cron.hourly</strong></li>
</ul>
<h2>Solution</h2>
<p>In our situation <strong>/etc/cron.d/0hourly</strong> was missing, because we removed all
files from <strong>/etc/cron.d/</strong> and put only our own files in there. The simple
fix is to ensure the contents of this directory are no longer removed and to
reinstall the <strong>cronie</strong> package to recreate the <strong>/etc/cron.d/0hourly</strong> file.</p>
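<p>A quick way to check whether this chain is intact on a host is to test for each file. A small sketch, assuming the standard cronie/anacron file locations listed above:</p>

```shell
# Verify the anacron/cronie chain described above: each of these
# files must exist for cron.daily jobs (like logrotate) to run.
for f in /etc/cron.d/0hourly /etc/cron.hourly/0anacron /etc/anacrontab; do
    [ -e "$f" ] && echo "OK      $f" || echo "MISSING $f"
done
```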
How to add private information to a public puppet repositoryhttps://www.nico.schottelius.org//blog/how-to-add-private-puppet-modules-to-a-public-puppet-repository/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Preamble</h2>
<p>If you are like <a href="https://sans.ethz.ch">sans</a>, you are probably
using <a href="http://www.puppetlabs.com/">puppet</a> and
<a href="https://sans.ethz.ch/projects/puppet/">publishing your modules</a>
so others can reuse them, too.</p>
<p>At some point, you need to include private data, like passwords,
into your configuration.</p>
<h2>How to cleanly add private stuff with git</h2>
<p>We are using <a href="http://git-scm.com/">git</a> here to manage
our puppet modules and have exported most of them to
git submodules.</p>
<h2>Create a fresh submodule</h2>
<p>So first of all, I create a new submodule
containing the private data:</p>
<pre><code>% mkdir ethz_systems_private
% cd ethz_systems_private
# add the private stuff
% git init && git add . && git commit -m "init"
</code></pre>
<h2>Publish the private module to a private location</h2>
<p>I will push the module to the same location as usual, but
tell git-daemon and gitweb not to show it (I am doing
this here by removing the file <strong>git-daemon-export-ok</strong>,
which is configured in gitweb and git-daemon):</p>
<pre><code>% git remote add origin sans.ethz.ch:/home/services/sans/git/puppet-modules/ethz_systems_private
% git push origin master
</code></pre>
<h2>Add the submodule in a private branch</h2>
<p>In our main repository, which contains the information to the
git-submodules, I have been working in the <strong>master</strong> branch
up to today. As I don't want others who clone our public repo
to recognise they are missing data, I'll create a new branch
called <strong>private</strong> and add our private submodule there:</p>
<pre><code>% git checkout -b private
% git submodule add sans.ethz.ch:/home/services/sans/git/puppet-modules/ethz_systems_private modules/ethz_systems_private
% git commit -a -m "Add private submodule ethz_systems_private"
% git push origin private
</code></pre>
<p>This submodule is added differently than usual: it is accessed via ssh instead
of via the git protocol we usually use:</p>
<pre><code>git://git.sans.ethz.ch/puppet-modules/ethz_systems
</code></pre>
<h2>Use the new branch on the puppetmaster</h2>
<p>On the puppetmaster we essentially use the <strong>update.sh</strong> script, which contains
only one line:</p>
<pre><code>git pull && git submodule sync && git submodule update --init
</code></pre>
<p>This time, I manually fetch and change to the private branch and make sure
the private branch works smoothly:</p>
<pre><code># git fetch
# git checkout -b private origin/private
# sh meta/update.sh
</code></pre>
<p>The last line fails, as root on sans.ethz.ch cannot log in to sans.ethz.ch,
because no public key has been generated for root yet. This is easily
fixed:</p>
<pre><code># ssh-keygen
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</code></pre>
<p>And finally, the <strong>update.sh</strong> also works!</p>
<h2>How to use the new private branch</h2>
<p>It is important to remember that the <strong>private</strong> branch will never be merged
into the <strong>master</strong> branch, because otherwise people cloning our main repo
will see a broken submodule reference.</p>
<p>As the puppetmaster always wants to include the private modules, we keep the
checkout there running the <strong>private</strong> branch and only pulling from the
remote <strong>private</strong> branch.</p>
<p>As all our public changes will still be made within the <strong>master</strong> branch,
I created the following script <strong>release.sh</strong> to handle automatic
propagation of changes from the <strong>master</strong> branch to the <strong>private</strong> branch:</p>
<pre><code>% git checkout master
% cat meta/release.sh
#!/bin/sh
set -e
git checkout private
git merge master
git push origin master private
git checkout master
</code></pre>
<p>The last command currently throws the error</p>
<pre><code>warning: unable to rmdir modules/ethz_systems_private: Directory not empty
</code></pre>
<p>which seems to be a weirdness of git submodules that I still have to figure
out how to solve.</p>
<h2>Updating the private branch</h2>
<p>Whenever there's a need to change something in the <strong>private</strong> branch
(probably seldom, as this happens only when new private submodules are
added), it can be done like this:</p>
<pre><code>% git checkout private
% git merge master
# *hack* *eat pizza* *hack*
% git add fancy-changes
% git commit -m "more private stuff"
% git push origin private
% git checkout master
</code></pre>
<h2>Further information</h2>
<p>The described repos and scripts can be found via
<a href="https://sans.ethz.ch/projects/puppet/">sans' puppet project</a> - except,
of course, for the private module...</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
How to backup The Piratebay and its contenthttps://www.nico.schottelius.org//blog/how-to-backup-the-piratebay-and-its-content/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>If you haven't seen <a href="http://watch.tpbafk.tv/">TPB - AFK</a>, it's a good time
to do so now.</p>
<h2>Introduction<sup>2</sup></h2>
<p>Imagine you want to support TPB and create a P2P backup of
TPB (and all of its torrents and the referenced content from the torrents).
This post describes some ideas to do that.</p>
<h2>Backup Destination</h2>
<p>For every backup you need a destination: some space to store the
content and some network bandwidth to pull and push the data.</p>
<p>In this post I assume that the backup destination is "a [big]
number of volunteers". I chose this one, because it is harder to
bring down a huge number of hosts than it is to bring down some
datacenters.</p>
<h2>Problem: You may not upload some kind of content</h2>
<p>Some countries have restrictions that forbid people from <strong>uploading</strong>
certain kinds of content, but allow them to download it.</p>
<p>Some people may also not want to upload specific content.</p>
<p>To make life easier for volunteers, we may want to make them unaware
of which content they are backing up and providing for restore.</p>
<h2>Solution: Hide what the volunteer serves</h2>
<p>Assume that there is a torrent serving funny cat pics I have taken over the
last years. One of the volunteers likes dogs and hates cats and would thus
never serve the content of this torrent if she knew it contained cat pictures.</p>
<p>But: If she doesn't know - she doesn't care.</p>
<h2>Technical Solution</h2>
<p>So this volunteer, let's call her Alice, wants to offer 10 Gigabyte
of her hard disk, 1 Mbit/s of her upstream and 5 Mbit/s of her downstream
to backup data.
John wants to backup his cat pictures, which he is seeding.</p>
<p>Let's see how Alice can share the cat pictures, without knowing she does.</p>
<ul>
<li>Alice registers at a <strong>backup tracker</strong></li>
<li>John creates an <strong>encrypted torrent</strong> that contains
<ul>
<li>a private and public key pair</li>
<li>references to the data blocks, which are encrypted using the above key
(like a normal torrent - just all the content is encrypted
with the public key that is included in the torrent)</li>
<li>references to the regular tracker(s)</li>
<li>references to the backup tracker(s)</li>
</ul>
</li>
<li>John also creates a <strong>plain torrent</strong> that does <strong>not</strong> contain the private and public key</li>
<li>John submits the <strong>encrypted torrent</strong> to a regular tracker -
everybody who wants to download the cat pictures (and decrypt them) can do so</li>
<li>John submits the <strong>plain torrent</strong> (without the keys) to the backup tracker</li>
<li>Alice's modified torrent client picks up the latest torrents from the backup tracker until
her space or network bandwidth is exhausted</li>
<li>Alice cannot decrypt the content, as she does not have the private key</li>
</ul>
<h2>The result</h2>
<ul>
<li>Alice is happy, because she aids in supporting a more robust internet</li>
<li>John is happy, because his cat pictures are still available, although his computer may be offline</li>
<li>Bob is happy, because he can download the awesome cat pictures, although John is away</li>
</ul>
<h2>Future Work</h2>
<p>This is just a short and quick hack to back up TPB.
There are probably many more variants available, and further optimisations
could be done (for instance, rewarding those serving backups with a higher
download rate).</p>
How to extract your Amazon ebooks from the Android Kindle Apphttps://www.nico.schottelius.org//blog/how-to-extract-your-amazon-ebooks-from-the-android-kindle-app/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>TL;DR</h2>
<p>The ebooks of the Amazon Kindle app can be found on your
Android phone in PRC format below the folder
<strong><em>/data/media/0/Android/data/com.amazon.kindle/files/</em></strong>.</p>
<h2>Download the books</h2>
<p>To be able to extract your books from your Android phone, you need to
synchronise the books first, so they are available on your device.</p>
<p>To ensure they are local, turn off all network connections
(wifi, mobile data) and try to read them.</p>
<h2>Finding the files</h2>
<p>At first I guessed that the files I was looking for might
end in <strong><em>.azw</em></strong>. Searching for such files, however,
did not reveal anything.</p>
<p>My second guess was to look for files or folders
named amazon:</p>
<pre><code>% find / -name \*amazon\* 2>/dev/null
</code></pre>
<p>I found <strong>/data/data/com.amazon.kindle</strong>, with the following content:</p>
<pre><code>% ls
app_com.amazon.device.syncHIGH app_metricsNORMAL databases
app_com.amazon.device.syncNORMAL app_web_cache files
app_dex app_web_database lib
app_dexopt app_webview shared_prefs
app_metricsHIGH cache
</code></pre>
<p>Looking for books in this directory wasn't successful.
However, the databases directory looked interesting.</p>
<pre><code>% cd databases
% grep -ri mybookname *
Binary file databases/kindle_library.db matches
Binary file databases/kindle_library.db-journal matches
</code></pre>
<p>I used sqlite to have a look at the database:</p>
<pre><code>% sqlite3 kindle_library.db
sqlite> .schema
...
</code></pre>
<p>This revealed one interesting table named <strong><em>LocalContent</em></strong>.
Looking at it closer:</p>
<pre><code>sqlite> select * from LocalContent;
</code></pre>
<p>This revealed the emulated path
<strong><em>/storage/emulated/0/Android/data/com.amazon.kindle/files/</em></strong>
and referenced <strong><em>.prc</em></strong> files.</p>
<p>I looked for them in the filesystem using ...</p>
<pre><code>% find / -name \*.prc 2>/dev/null
</code></pre>
<p>... and found my books!</p>
<p>To extract the ebooks to your computer, you can use any file copy program.
The <strong><em>rsync</em></strong> utility, however, is very well suited for this, as it can
(re-)synchronise the whole folder:</p>
<pre><code>% rsync -av /data/media/0/Android/data/com.amazon.kindle/files/ mycomputer:mybooks/
</code></pre>
How to find and execute stuff on all hosts?https://www.nico.schottelius.org//blog/how-to-find-and-execute-stuff-on-all-hosts/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Motivation</h2>
<p>Assume that you are managing a pretty large infrastructure of hosts;
sometimes there is a need to execute a command on all of them.</p>
<p>The big question is where to find out which hosts exist.</p>
<h2>Solution</h2>
<p>The usual approach is to invent some kind of centralised daemon that collects
or searches for available hosts. A much simpler solution is available in
my situation, which may help you as well:
we have a monitoring infrastructure to which all hosts transmit their
configuration. The configuration is stored under the full hostname
(like <strong>foo.bar.local.ch</strong>) plus the .cfg suffix.</p>
<p>Thus a script that can be used to execute something on all hosts (sequentially though)
can look like this:</p>
<pre><code>for host in $(ssh monitoring01 "cd /opt/icinga/etc/hosts.d; ls"); do
host=${host%.cfg}
ssh "root@$host" "$@"
done
</code></pre>
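<p>The loop relies on POSIX parameter expansion to turn a configuration filename back into a hostname:</p>

```shell
# ${host%.cfg} strips the shortest ".cfg" suffix from the value:
host="foo.bar.local.ch.cfg"
echo "${host%.cfg}"   # prints foo.bar.local.ch
```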
How to generate a /etc/shadow compatible md5 passwordhttps://www.nico.schottelius.org//blog/how-to-generate-crypted-md5-password-shadow/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Documented here to save somebody else's time: you can do this easily with
<a href="http://www.openssl.org/">openssl</a>:</p>
<pre><code>openssl passwd -1
</code></pre>
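<p>By default a random salt is used, so every invocation prints a different hash. With an explicit salt (here a throwaway example value) the output becomes reproducible, which is handy for configuration management:</p>

```shell
# MD5-crypt with a fixed salt; the result always starts with $1$xyz$
openssl passwd -1 -salt xyz secret
```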
<p>See also <strong>openssl-passwd(1)</strong>.</p>
How to inform people about better solutions?https://www.nico.schottelius.org//blog/how-to-inform-people-about-better-solutions/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I'm wondering how to handle the following social-political-technical
situation and would be interested in your opinion!</p>
<h2>Situation</h2>
<p>Imagine you are an expert in some topic; let's use knitting for the
example. Further assume you read posts in a lot of different
knitting-technique forums.
There is one specific forum run by a specific
knitting-needle manufacturer, into which a lot of newbies post,
hoping to do some good knitting with the manufacturer's knitting needles.</p>
<h2>The problem</h2>
<p>Now, as you are the expert, you are aware that the knitting needles
produced by this manufacturer are obsolete and contain bugs that
make knitting way more difficult than it needs to be.
But you know there is a different product out there that does the
job way better and costs the same.</p>
<p>Now you feel responsible to let those newbies know about the better
knitting needles, but <strong>how would you tell them?</strong></p>
<h3>Posting to the forum</h3>
<p>Posting to that specific forum more than once to say that there are better
knitting needles out there may be seen as an abuse of the forum, because it is
still run by the obsolete manufacturer.</p>
<h3>Posting to individuals</h3>
<p>Replying to an individual whose question was asked within a forum
is usually seen as bad behaviour and may be interpreted by the
manufacturer as an indirect attack.</p>
<h3>What's the best solution?</h3>
<p>To me it looks like neither of the two options is a good one.
Thus I'd like to <a href="https://www.nico.schottelius.org//about/">hear your proposal to solve this situation</a>.</p>
I want a decentralised bugtrackerhttps://www.nico.schottelius.org//blog/i-want-a-decentralised-bugtracker/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I have about 50 different website accounts for submitting and reading
bugs. I use about 4 different bugtrackers regularly
(bugzilla, launchpad, ditz, Debian's reportbug).</p>
<p>And I do not want to create another account at a different site
again. No. What I want is different:</p>
<p>I want to have one tool, that</p>
<ul>
<li>allows me to track bugs of different projects with different systems
(similar to <a href="http://www.launchpad.net">launchpad</a>)</li>
<li>to mirror the information locally, including history
(like most <a href="http://en.wikipedia.org/wiki/Revision_control">version control systems (VCS)</a>)</li>
<li>to add or search a bug, while being offline
(like <a href="http://git-scm.com/">git</a>)</li>
<li>has a usable command line interface
(like <a href="http://www.zsh.org">zsh</a>)</li>
</ul>
<p>So in practice, it should look something like this:</p>
<pre><code>% tool
tool> init-bug-db /path/to/somewhere
tool> bug-source add project1 bugzilla://where-it-is
tool> bug-source add project2 ditz://where-it-is
tool> bug-source list
project1
project2
tool> bug-source show project2
* can submit
* url
tool> bug query project1 a-search-string
tool> bug add project1
[either ask for input or add them after the project name]
tool> bug pull project1 # get latest bugs
tool> bug push project1 # submit latest bug (-changes)
</code></pre>
<p>Of course all the commands should also be available on the command line as
options. If you have already created such a tool or are interested in creating
such a tool, do not hesitate to <a href="https://www.nico.schottelius.org//about/">contact</a> me!</p>
Ikiwiki has been slow, but it is fast now!https://www.nico.schottelius.org//blog/ikiwiki-has-been-slow/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As I reported before, <a href="https://www.nico.schottelius.org//blog/ikiwiki-is-slow/">ikiwiki is slow</a>.
That's not true anymore: I received an e-mail from
<a href="http://kitenet.net/~joey/">Joey Hess</a> that contained a
<a href="http://git.schottelius.org/?p=netzseiten/www.nico.schottelius.org;a=commit;h=9ad0b7ba4763f3fe6773427326bcc32dbe332a01">patch to my website</a> and an answer to my problem:
He used a copy of
<a href="http://git.schottelius.org/?p=netzseiten/www.nico.schottelius.org;a=summary">the source of my website</a>
to reproduce the problems I had, which even took <strong><em>68 minutes</em></strong> on his computer!</p>
<p>So I grabbed the latest version of ikiwiki from git today and found out:</p>
<pre><code>ikiwiki exits with exit status 0, but does not produce a website!
</code></pre>
<p>I reported it to Joey on <a href="irc://irc.freenode.net/#ikiwiki">IRC</a>, who fixed
that some hours later in commit
<a href="http://git.ikiwiki.info/?p=ikiwiki;a=commit;h=587e0c3d21dfbde052e0fd71a7ed0e33e09e757f">587e0c3d21dfbde052e0fd71a7ed0e33e09e757f</a>. Now comes the interesting part:
I added some timing information to
<a href="http://git.schottelius.org/?p=nsbin;a=blob;f=ikiwikitest.sh;hb=HEAD">ikiwikitest.sh</a>,
which allows me to run the latest ikiwiki version, without installing it.
And here are the results:</p>
<pre><code>no-refresh, no changes: ~70 seconds
no-refresh, created one new file: ~70 seconds
--refresh, no changes: ~5 seconds
--refresh, changes to one file: ~10 seconds
--refresh, adding a new tag: ~10 seconds
</code></pre>
<p>The tests were done using ikiwiki <strong>3.20091017-22-gba682e0</strong>
(from git describe). To summarise:</p>
<ul>
<li>Updating my website using ikiwiki now takes less than 30 seconds.</li>
<li>Joey did a great job.</li>
<li>I owe him something.</li>
</ul>
<h2>I think I should send Joey some money.</h2>
<p>I want to emphasise this very much, because he's a
<a href="https://www.nico.schottelius.org//docs/the-term-foss/">FOSS developer</a>, like <a href="https://www.nico.schottelius.org//about/foss/">me</a>.
He has spent a lot of time developing and maintaining ikiwiki
and will probably continue to do so. Besides that he does a great
job in supporting his users.</p>
<pre><code>Everything for free.
</code></pre>
<p>I think just writing here about him and telling everybody that
he does a great job neither fills his stomach nor gives him the
ability to enjoy a coffee in the early afternoon.</p>
<p>When you read this article, Joey will already know about it and
also knows, that I would like to have his
<a href="http://en.wikipedia.org/wiki/International_Bank_Account_Number">IBAN</a>,
to submit him some money.</p>
<p>I encourage you to do the same when you realise that you enjoy using
some software (or reading some documentation).</p>
Ikiwiki is slowhttps://www.nico.schottelius.org//blog/ikiwiki-is-slow/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As you probably know, this <a href="https://www.nico.schottelius.org//about/websites/">website</a> is generated
by <a href="http://www.ikiwiki.info/">ikiwiki</a>, which does a pretty nice
job for people who like to write their websites without a fancy GUI,
but still do not want to write HTML or XHTML directly.</p>
<p>As I <a href="https://www.nico.schottelius.org//blog/migration-1-configs/">migrated</a>
<a href="https://www.nico.schottelius.org//blog/migration-2-freebsd-raid-monitoring-foss/">some</a>
<a href="https://www.nico.schottelius.org//blog/migration-3-ccollect/">parts</a>
of <a href="https://www.nico.schottelius.org//blog/cinit-migrated/">my other websites</a>
<a href="https://www.nico.schottelius.org//blog/migration-4-gpm/">into this one</a>,
I realised that ikiwiki becomes slower and slower.</p>
<p>A normal run with</p>
<pre><code>ikiwiki --setup ikiwiki.setup
</code></pre>
<p>takes about <strong><em>15</em></strong> minutes!
If I run it with</p>
<pre><code>ikiwiki --setup ikiwiki.setup --refresh
</code></pre>
<p>it still takes about <strong><em>1-2</em></strong> minutes. I clearly understand that
my site is not the smallest anymore (</p>
<pre><code>[16:38] ikn:nicoweb% find . -type f | wc -l
31015
[16:38] ikn:nicoweb% find . -type d | wc -l
3092
</code></pre>
<p>), but I still think that it should be possible to (re-)generate
it in less than 30 seconds. I know that the author,
<a href="http://kitenet.net/~joey/">Joey Hess</a>, is very open to feedback
and does a great job, but since I always regenerate the website before a public
release, and that costs my time, I was motivated to write this article.</p>
<pre><code>Dear Joey, keep up the good work, but speedup ikiwiki, please!
</code></pre>
Introducing Simple Universal Time (SUT)https://www.nico.schottelius.org//blog/introducing-simple-universal-time/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>For some time I've been discussing this idea with some friends:
one simple time format that simplifies our lives:
<strong>Simple Universal Time (SUT)</strong>.</p>
<p>It simplifies life by reducing it to the max: it skips
time zones entirely. And instead of the 24-hour, 60-minute,
60-second scheme, it uses a metric time: Simple Universal
Time knows 10 hours a day, with 100 minutes per hour
and 100 seconds per minute.</p>
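<p>Converting a conventional time of day into SUT is a simple scaling: a metric day has 10 x 100 x 100 = 100,000 seconds instead of 86,400. A small shell sketch, derived from the scheme described above rather than from any official definition:</p>

```shell
# Convert conventional 12:00:00 into SUT.
h=12 m=0 s=0
elapsed=$(( h * 3600 + m * 60 + s ))   # conventional seconds since midnight
sut=$(( elapsed * 100000 / 86400 ))    # metric seconds since midnight
printf '%d:%02d:%02d\n' $(( sut / 10000 )) $(( sut % 10000 / 100 )) $(( sut % 100 ))
# prints 5:00:00 - noon is exactly half a metric day
```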
<p>Thanks to <strong>Stefanos Kornilios Mitsis Poiitidis</strong>
you can see the
<a href="http://telmich.github.io/sut">current time in SUT</a>.
The <a href="https://www.nico.schottelius.org//docs/sut/">Simple Universal Time page</a> shows in detail
how it works.</p>
<p>Make your life easier by switching to SUT - your brain and
your phone partners will be thankful!</p>
Linux distribution independent iptables setup powered by cdist sponsored by panterhttps://www.nico.schottelius.org//blog/iptables-distribution-independent-powered-by-cdist-sponsored-by-panter/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>As a sysadmin, you may have encountered several different
Linux distributions in your life. You may also have found
out that configuring <a href="http://www.netfilter.org/">iptables</a>
persistently differs from distribution to distribution.</p>
<p>Fortunately you can stop caring about this problem:
In the <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> source tree you find
two new types to handle this problem universally, independent
of the Linux distribution.</p>
<p>These types are a result of work done at <a href="http://www.ungleich.ch">ungleich</a>
for our customer <a href="http://www.panter.ch">panter</a>. Panter not only
allows us to publish the code freely, but also encourages
us to do so - many thanks!</p>
<h2>How to use it</h2>
<p>First of all, ensure you have cdist installed on your source host.
Then create the directory ~/.cdist/manifest and the file
~/.cdist/manifest/init with the following content:</p>
<pre><code>case "$__target_host" in
insert-your-target-host-name-here)
__iptables_rule policy-in --rule "-P INPUT DROP"
__iptables_rule policy-out --rule "-P OUTPUT ACCEPT"
__iptables_rule policy-fwd --rule "-P FORWARD DROP"
__iptables_rule established --rule "-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT"
__iptables_rule http --rule "-A INPUT -p tcp --dport 80 -j ACCEPT"
__iptables_rule ssh --rule "-A INPUT -p tcp --dport 22 -j ACCEPT"
;;
esac
</code></pre>
<p>Running</p>
<pre><code>% cdist config insert-your-target-host-name-here
</code></pre>
<p>applies the configuration. That's it, really! Log on to your
server and do <strong><em>iptables -L -n</em></strong> to see the result!</p>
<h2>What did cdist do?</h2>
<p>The cdist types __iptables_rule and __iptables_apply
take care of the necessary steps. In detail they</p>
<ul>
<li>create the necessary files and directory</li>
<li>create and setup an init-script that loads / unloads the rules</li>
<li>apply the rules</li>
</ul>
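<p>Conceptually, the two types boil down to collecting rule fragments into one
rules file that the generated init script replays at boot. A minimal sketch of
that idea (all paths and the file layout here are illustrative, not the actual
cdist type internals):</p>

```shell
#!/bin/sh
# Sketch: collect individual rule fragments into a single rules file,
# the way __iptables_rule drops snippets that __iptables_apply
# concatenates and loads. All paths here are illustrative.
rulesdir=$(mktemp -d)

printf '%s\n' "-P INPUT DROP" > "$rulesdir/policy-in"
printf '%s\n' "-A INPUT -p tcp --dport 22 -j ACCEPT" > "$rulesdir/ssh"

# Concatenate the fragments into the file the init script replays:
cat "$rulesdir/policy-in" "$rulesdir/ssh" > "$rulesdir/rules"

# The loader would then feed every line to iptables, e.g.:
#   while read -r rule; do iptables $rule; done < "$rulesdir/rules"
cat "$rulesdir/rules"
```

<p>On stop, the init script additionally unloads the rules again, as listed
above; the sketch only shows the assembly step.</p>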
kvm-setup-local.ch-overview.pnghttps://www.nico.schottelius.org//blog/kvm-vms-with-cdist-at-local.ch/kvm-setup-local.ch-overview.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZKVM Virtual Machines managed with cdist and sexy @ local.chhttps://www.nico.schottelius.org//blog/kvm-vms-with-cdist-at-local.ch/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This article describes the KVM setup of <a href="http://www.local.ch">local.ch</a>, which is
managed by <a href="https://www.nico.schottelius.org//software/sexy/">sexy</a> and configured by <span class="createlink">cdist</span>.</p>
<p>If you haven't so far, you may want to have a look at the
<a href="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/">Sexy and cdist @ local.ch</a>
article before continuing to read this one.</p>
<h2>KVM Host configuration</h2>
<p>The KVM hosts are Dell R815 with CentOS 6.x installed. Why Dell? Because they
offered a good price/value combination. Why CentOS? Historical
reasons. The hosts got a minimal set of BIOS tuning to support the VM performance:</p>
<ul>
<li>Enable the usual virtualisation flags (don't forget to enable the IOMMU!)</li>
<li>Change the power profile to <strong>Maximum Performance</strong></li>
</ul>
<p>Furthermore, as the CentOS kernel is pretty old (2.6.32-279) and
conservatively configured, it needs the following
command line option to enable the IOMMU:</p>
<pre><code>amd_iommu=on
</code></pre>
<p>Not enabling this option degrades the performance.
In our case, enabling it reduced the latency of the
application running in the VM by a factor of 10.</p>
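<p>On CentOS 6 this option is typically appended to the kernel line in
<strong><em>/boot/grub/grub.conf</em></strong>; a sketch, with placeholder
kernel version and root device:</p>

```
kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg-root amd_iommu=on
```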
<p>One big design consideration of the KVM setup at local.ch is to make the
KVM hosts as independent as possible and sensibly fault tolerant. That said,
VMs are stored on local storage and hosts are always redundantly connected
to two switches using <a href="https://en.wikipedia.org/wiki/Link_aggregation">LACP</a>.</p>
<h2>KVM Host Network Configuration</h2>
<p><a href="https://www.nico.schottelius.org//blog/kvm-vms-with-cdist-at-local.ch/kvm-setup-local.ch-overview.png"><img src="https://www.nico.schottelius.org//blog/kvm-vms-with-cdist-at-local.ch/kvm-setup-local.ch-overview.png" width="770" height="620" alt="Overview of KVM setup at local.ch" class="img" /></a></p>
<p>As can be seen in the picture above, every KVM host is connected to two
<strong>10G Arista switches (7050T-52-R)</strong> using LACP. Besides being capable
of running 10G, the Arista switches are actually pretty neat for the Unix geek,
because they are Linux based with a
<a href="https://en.wikipedia.org/wiki/Field-programmable_gate_array">FPGA</a>
attached. Furthermore you can easily
gain access to a shell by typing <strong>enable</strong> followed by <strong>bash</strong>.</p>
<p>The Arista switches are connected together with 2x 10G links, over which LACP+MLAG
is configured. This gives us the ability to connect every KVM host with LACP to two
<strong>different</strong> switches: They use MLAG to synchronise their LACP states.</p>
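<p>For illustration only, an MLAG domain on EOS is configured roughly like
this (this is not the actual local.ch configuration; names and addresses are
made up and the syntax is quoted from memory, so treat it as an assumption):</p>

```
! on each Arista switch: the 2x 10G inter-switch links form the peer link
interface Port-Channel10
   switchport mode trunk
   switchport trunk group mlagpeer
!
mlag configuration
   domain-id mlag1
   local-interface Vlan4094
   peer-address 10.255.255.2
   peer-link Port-Channel10
```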
<p>On the KVM host, the network is configured as follows:</p>
<p>The two ports of the dual-port 10G card (Intel Corporation 82599EB) are bonded together into bond0.</p>
<pre><code>[root@kvm-hw-inx01 network-scripts]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 3
Number of ports: 2
Actor Key: 33
Partner Key: 30
Partner Mac Address: 02:1c:73:1b:f5:b2
Slave Interface: eth4
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:0b:5b:6a
Aggregator ID: 3
Slave queue ID: 0
Slave Interface: eth5
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:0b:5b:6b
Aggregator ID: 3
Slave queue ID: 0
</code></pre>
<p>The following configuration is used to create the bond0 device:</p>
<pre><code>[root@kvm-hw-inx01 network-scripts]# cat ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
BONDING_OPTS="mode=802.3ad"
ONBOOT=yes
MTU=9000
[root@kvm-hw-inx01 sysconfig]# cat network-scripts/ifcfg-eth4
DEVICE="eth4"
NM_CONTROLLED="yes"
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
[root@kvm-hw-inx01 sysconfig]# cat network-scripts/ifcfg-eth5
DEVICE="eth5"
NM_CONTROLLED="yes"
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
</code></pre>
<p>The MTU of the 10G cards has been set to 9000, as the Arista switches support
<a href="https://en.wikipedia.org/wiki/Jumbo_frame">Jumbo Frames</a>.</p>
<p>Every VM is attached to two different networks:</p>
<ul>
<li>PZ: presentation zone (for general traffic) (10.18x.0.0/22 network)</li>
<li>FZ: filer zone (for NFS and database traffic) (10.18x.64.0/22 network)</li>
</ul>
<p>Both networks are separated using the VLAN tags 2 (pz) and 3 (fz), which result
in <strong>bond0.2</strong> and <strong>bond0.3</strong>:</p>
<pre><code>[root@kvm-hw-inx01 network-scripts]# ip l | grep bond
6: eth4: &lt;BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP&gt; mtu 9000 qdisc mq master bond0 state UP qlen 1000
7: eth5: &lt;BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP&gt; mtu 9000 qdisc mq master bond0 state UP qlen 1000
8: bond0: &lt;BROADCAST,MULTICAST,MASTER,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue state UP
139: bond0.2@bond0: &lt;BROADCAST,MULTICAST,MASTER,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue state UP
140: bond0.3@bond0: &lt;BROADCAST,MULTICAST,MASTER,UP,LOWER_UP&gt; mtu 9000 qdisc noqueue state UP
</code></pre>
<p>To keep things simple, the two vlan tagged (bonded) interfaces are added to a bridge each,
to which the VMs are attached later on. The configuration looks like this:</p>
<pre><code>[root@kvm-hw-inx01 network-scripts]# cat ifcfg-bond0.2
DEVICE="bond0.2"
ONBOOT=yes
VLAN=yes
BRIDGE=brpz
[root@kvm-hw-inx01 network-scripts]# cat ifcfg-brpz
DEVICE=brpz
TYPE=Bridge
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no
MTU=9000
</code></pre>
<p>This is what a bridge looks like in production (with about 70 lines stripped):</p>
<pre><code>[root@kvm-hw-inx01 network-scripts]# brctl show
bridge name bridge id STP enabled interfaces
brfz 8000.024db29ca91f no bond0.3
tap13
tap73
[...]
brpz 8000.02f6742800b2 no bond0.2
tap0
tap1
[...]
</code></pre>
<p>Summarised, the network configuration of a KVM host looks like this:</p>
<pre><code>arista1 arista2
| |
[eth4 + eth5] -> bond0
|
|
/ \
bond0.2 bond0.3
/ \
brpz brfz
\ /
tap1 tap2
\ /
VM
</code></pre>
<h2>VM configuration</h2>
<p>The VM configuration can be found below <strong>/opt/local.ch/sys/kvm</strong>
on every KVM host. Every VM is stored below
<strong>/opt/local.ch/sys/kvm/vm/</strong> and contains the following
files:</p>
<pre><code>[root@kvm-hw-inx03 jira-vm-inx01.intra.local.ch]# ls
monitor pid start start-on-boot system-disk vnc
</code></pre>
<ul>
<li>monitor: socket to the monitor from KVM</li>
<li>pid: the pid of the VM</li>
<li>start: the script to start the VM (see below for an example)</li>
<li>start-on-boot: if this file exists, the VM will be started on boot</li>
<li>system-disk: the qcow2 image of the system disk</li>
<li>vnc: socket to the screen of the VM</li>
</ul>
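<p>Such a per-VM directory can be mimicked by hand; a sketch of creating the
skeleton (base directory and VM name are hypothetical):</p>

```shell
#!/bin/sh
# Sketch: recreate the per-VM directory layout described above.
# The temporary base directory stands in for /opt/local.ch/sys/kvm/vm.
base=$(mktemp -d)
vm="$base/example-vm.intra.example.ch"
mkdir -p "$vm"

: > "$vm/start"              # start script (generated by cdist in production)
chmod +x "$vm/start"
: > "$vm/start-on-boot"      # marker file: start this VM at boot
: > "$vm/system-disk"        # would be a qcow2 image in production
# monitor, pid and vnc are created at runtime by qemu-kvm itself
ls "$vm"
```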
<p>With the exception of monitor, pid and vnc, all files are generated by cdist.
The start script of a VM looks like this:</p>
<pre><code>[root@kvm-hw-inx03 jira-vm-inx01.intra.local.ch]# cat start
#!/bin/sh
# Generated shell script - do not modify
#
/usr/libexec/qemu-kvm \
-name jira-vm-inx01.intra.local.ch \
-enable-kvm \
-m 8192 \
-drive file=/opt/local.ch/sys/kvm/vm/jira-vm-inx01.intra.local.ch/system-disk,if=virtio \
-vnc unix:/opt/local.ch/sys/kvm/vm/jira-vm-inx01.intra.local.ch/vnc \
-cpu host \
-boot order=nc \
-pidfile "/opt/local.ch/sys/kvm/vm/jira-vm-inx01.intra.local.ch/pid" \
-monitor "unix:/opt/local.ch/sys/kvm/vm/jira-vm-inx01.intra.local.ch/monitor,server,nowait" \
-net nic,macaddr=00:16:3e:02:00:ab,model=virtio,vlan=200 \
-net tap,script=/opt/local.ch/sys/kvm/bin/ifup-pz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=200 \
-net nic,macaddr=00:16:3e:02:00:ac,model=virtio,vlan=300 \
-net tap,script=/opt/local.ch/sys/kvm/bin/ifup-fz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=300 \
-smp 4
</code></pre>
<p>Most parameter values depend on the output of sexy,
which uses the cdist type <strong>__localch_kvm_vm</strong>,
which in turn assembles this start script.
The above script may be useful for one or more of my readers,
as it includes a lot of tuning we have done to KVM.</p>
<h2>Automatic startup of VMs</h2>
<p>The virtual machines are brought up by an init script located at
<strong><em>/etc/init.d/kvm-vms</em></strong>. As every VM contains its own startup script
and is marked whether it should be started at boot, the init script
is pretty simple:</p>
<pre><code>basedir=/opt/local.ch/sys/kvm/vm
broken_lock_file_for_centos=/var/lock/subsys/kvm-vms
case "$1" in
start)
cd "$basedir"
# Specific VM given
if [ "$2" ]; then
vm_list=$2
else
vm_list=$(ls)
fi
for vm in $vm_list; do
vm_base_dir="$basedir/$vm"
start_script="$vm_base_dir/start"
# Skip start of machines which should not start
if [ ! -f "$vm/start-on-boot" ]; then
continue
fi
echo "Starting VM $vm ..."
logger -t kvm-vms "Starting VM $vm ..."
screen -d -m -S "$vm" "$start_script"
done
touch "$broken_lock_file_for_centos"
;;
</code></pre>
<p>As you can see, every VM is started in its own
<a href="http://www.gnu.org/software/screen/">screen</a> - so if screen decides to
hang up, only one VM is affected.
Furthermore, screen supports only a limited number of windows it can serve.
The process listing for a running virtual machine looks like this:</p>
<pre><code>root 64611 0.0 0.0 118840 852 ? Ss Mar11 0:00 SCREEN -d -m -S binarypool-vm-inx02.intra.local.ch /opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/start
root 64613 0.0 0.0 106092 1180 pts/22 Ss+ Mar11 0:00 /bin/sh /opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/start
root 64614 2.9 2.2 9106828 5819748 pts/22 Sl+ Mar11 5221:41 /usr/libexec/qemu-kvm -name binarypool-vm-inx02.intra.local.ch -enable-kvm -m 8192 -drive file=/opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/system-disk,if=virtio -vnc unix:/opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/vnc -cpu host -boot order=nc -pidfile /opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/pid -monitor unix:/opt/local.ch/sys/kvm/vm/binarypool-vm-inx02.intra.local.ch/monitor,server,nowait -net nic,macaddr=00:16:3e:02:00:7f,model=virtio,vlan=200 -net tap,script=/opt/local.ch/sys/kvm/bin/ifup-pz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=200 -net nic,macaddr=00:16:3e:02:00:80,model=virtio,vlan=300 -net tap,script=/opt/local.ch/sys/kvm/bin/ifup-fz,downscript=/opt/local.ch/sys/kvm/bin/ifdown,vlan=300 -smp 4
</code></pre>
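<p>The init script above only shows the start branch; a stop branch could use
the pid file that qemu-kvm writes via <strong>-pidfile</strong>. A hypothetical
sketch (not the actual local.ch script), demonstrated against a dummy
background process instead of a real VM:</p>

```shell
#!/bin/sh
# Hypothetical stop logic: read the pid file and terminate the VM.
# A background sleep stands in for the qemu-kvm process here.
vmdir=$(mktemp -d)
sleep 300 &
echo $! > "$vmdir/pid"

stop_vm() {
    dir="$1"
    if [ -f "$dir/pid" ]; then
        kill "$(cat "$dir/pid")" 2>/dev/null
        rm -f "$dir/pid"
    fi
}

stop_vm "$vmdir"
```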
<h2>Common Tasks</h2>
<p>The following sections show you how to do regular maintenance
tasks on the KVM infrastructure.</p>
<h3>Create a VM</h3>
<p>VMs can easily be created using the script <strong>vm/create-vm</strong> from the sysadmin-logs repository
(local.ch internally), which looks like this:</p>
<pre><code>sexy host add --type vm $fqdn
sexy host vm-host-set --vm-host $vmhost $fqdn
sexy host disk-add --size $disksize $fqdn
sexy host memory-set --memory $memory $fqdn
sexy host cores-set --cores $cores $fqdn
mac_pz=$(sexy mac generate)
mac_fz=$(sexy mac generate)
sexy host nic-add $fqdn -m $mac_pz -n pz
sexy host nic-add $fqdn -m $mac_fz -n fz
sexy net-ipv4 host-add "$net_pz" -m "$mac_pz" -f "$fqdn"
sexy net-ipv4 host-add "$net_fz" -m "$mac_fz" -f "$fz_fqdn"
echo "Updating git / github ..."
cd ~/.sexy
git add db
git commit -m "Added host $fqdn"
git pull
git push
# Apply changes: first network, so dhcp & dns are ok, then create VM
cat << eof
Todo for apply:
sexy net-ipv4 apply --all
sexy host apply --all
Start VM on $vmhost: ssh $vmhost /opt/local.ch/sys/kvm/vm/$fqdn/start
eof
</code></pre>
<h3>Delete a VM</h3>
<p>Run the script <strong>remove-host</strong>, which essentially does the following:</p>
<ul>
<li>Remove various monitoring / backup configurations</li>
<li>Detect if it is a VM, if so</li>
<li>Stop it</li>
<li>Remove it from the host</li>
<li>Add mac address to the list of free mac addresses</li>
<li>Delete host from the networks</li>
<li>Delete host from sexy database</li>
</ul>
<h3>Move VM to another server</h3>
<p>To move one VM to another host, the following steps are necessary:</p>
<ul>
<li>sexy host vm-host-set ... # to new host</li>
<li>stop vm</li>
<li>scp/rsync directory from old host to new host</li>
<li>sexy host apply --all # record db change</li>
<li>start vm on new host</li>
</ul>
Linux on the Lenovo X201https://www.nico.schottelius.org//blog/lenovo-x201-with-linux/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>The Lenovo X201, the successor of the X200, arrived on my desk a few
days ago. Every new personal computer makes me think about which Linux distribution
to choose.</p>
<h3>Requirements</h3>
<p>I started with a list of aspects a good distribution would handle well:</p>
<ul>
<li>Straightforward and automatic installation</li>
<li>Support for crypted home, swap and probably even the root filesystem</li>
<li>Good package management</li>
<li>New packages (Linux > 2.6.18 for instance...)</li>
<li>Easy integration of own packages,
either via an own mirror or into the main distribution</li>
<li>Good documentation</li>
<li>Friendly and helpful community</li>
</ul>
<p>After some minutes of thinking about the requirements, I became aware
that testing all those aspects, and defining them properly, would take more
time than just giving the usual suspects a try.</p>
<h2>Debian</h2>
<h3>Lenny via fai</h3>
<p>As I'm running Debian here in the <a href="https://www.nico.schottelius.org//eth/">ETH</a>, my first approach
was to netinstall Debian via fai, which failed with a kernel panic,
because it could not find the root device.</p>
<p>This is normally an indicator that the network card is unknown,
but debugging this from a netinstall is not easy. Thus I decided
to give the installation from a USB stick a try.</p>
<h3>Lenny via usb stick</h3>
<p>First I wrote the Debian Lenny 5.0.2 installer via</p>
<pre><code>zcat ~nico/boot.img.gz > /dev/sdd
</code></pre>
<p>onto the USB stick. Booting the installer worked, but
it did not find any network interface either.</p>
<p>Well, we all know that Lenny, being the current Debian stable,
is almost outdated by the time it is released, so let's give Debian
testing a chance.</p>
<h3>Squeeze</h3>
<p>I remembered that installing Debian testing means retrieving
the installer from some subpage on <a href="http://www.debian.org">debian.org</a>
that hosts the <strong>debian installer</strong>. As usual, it cannot easily be
found by following the obvious links on debian.org.</p>
<p>A <a href="http://www.google.com/search?q=debian%20installer">search via google for debian installer</a>
resulted in the
<a href="http://www.debian.org/devel/debian-installer/">correct link for the debian installer</a>.</p>
<p>It has been alarming that the
<a href="http://search.debian.org/cgi-bin/omega?P=debian+installer&DEFAULTOP=or&HITSPERPAGE=10&language=en">same search on debian.org</a> does <em>not</em> result in the correct result on the first
result page!</p>
<p>Using the
<a href="http://ftp.nl.debian.org/debian/dists/testing/main/installer-amd64/current/images/hd-media/boot.img.gz">Debian Squeeze Alpha1 release</a> of the debian installer resulted in a
funny, though not very productive installation:</p>
<ul>
<li>When selecting the language English, country Switzerland, it is not possible to
select the locale ch_DE.UTF-8!</li>
<li>The Keymap <a href="http://www.neo-layout.org/">Neo 2.0</a> is not in the list of available layouts.</li>
<li>The installer cannot find the iso (well, there's none, it was started from a USB stick...)</li>
<li>"Detect network hardware" did not find any ethernet card.</li>
</ul>
<p>Ok, no Debian for the X201 currently.</p>
<h2>Ubuntu</h2>
<p>I am currently running Ubuntu on the X200, so I am giving it a try
on the X201, too.</p>
<h3>10.04 Beta2</h3>
<p>Beta2 is out, what's more loved than early betas and alphas?</p>
<p>I tried to write the ISO (!) to a USB stick via <strong>usb-creator-gtk</strong>,
which failed with the following kernel messages:</p>
<pre><code>[1320073.833304] end_request: I/O error, dev sdd, sector 0
[1320073.845771] sd 72:0:0:0: [sdd] Device not ready
[1320073.845777] sd 72:0:0:0: [sdd] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[1320073.845786] sd 72:0:0:0: [sdd] Sense Key : Not Ready [current]
[1320073.845795] sd 72:0:0:0: [sdd] Add. Sense: Medium not present
[1320073.845804] end_request: I/O error, dev sdd, sector 0
[1320073.845829] unable to read partition table
</code></pre>
<p>Writing to an SD card, however, worked (although reproducing that made
usb-creator-gtk fail more often than succeed: it simply did nothing,
said there was not enough space free and did not reformat the device).</p>
<p>Happily, I can select the Neo 2.0 keyboard layout during the installation,
but I cannot encrypt volumes in the desktop installer, because encryption is only
supported by the alternate installer, which I think is a major fault:</p>
<pre><code>Dear Ubuntu developers, include encryption via dm-crypt/cryptsetup
into your desktop installation, please.
The enhanced privacy is worth the added complexity!
</code></pre>
<p>The installer detected the network card and after the reboot wireless
lan was working perfectly and xorg was running with the gnome desktop. As
an extra bonus for me, real transparency of terminals was working, too!</p>
<p>On the other hand, I now have a shiny new Ubuntu installation which is probably
not really what I want: I never use gnome, and gdm is fine, but not really
needed.</p>
<p>But well, the main reason to give another distribution a try is that
there's a new kid on the block:</p>
<h2>Archlinux</h2>
<p>Archlinux was brought to my attention some time ago, as it is
the first distribution that includes <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>
(not in the main repo, but in AUR).</p>
<p>Having a look at archlinux, some points immediately come
to attention:</p>
<ul>
<li>Arch is x86 only (32 and 64 bit, though)</li>
<li>It does not use .rpm, nor .deb packages
(Slackware users, do you feel reminded? :-)</li>
<li>KISS (keep it simple and stupid)</li>
<li>At least one distribution understands why others have so many problems.</li>
<li>Least changes to upstream packages possible</li>
</ul>
<p>The last point is a very interesting one for
<a href="https://www.nico.schottelius.org//about/foss/">me, as a FOSS developer</a>, because it ensures that
problems are reported back to me and not corrected elsewhere.</p>
<pre><code>Dear arch developers, thanks for that decision!
</code></pre>
<p>For more information, have a look at
<a href="http://wiki.archlinux.org/index.php/The_Arch_Way">The Arch Way</a>.</p>
<h3>2009.08</h3>
<p>Once again I wrote the installer to a USB stick, booted it
and - you guessed it - the installer does not detect the network interface.</p>
<p>Ok, there must be something like Debian testing, some kind
of snapshot, daily or whatever release.</p>
<h3>2010-04-05 testing image</h3>
<p>After I had been searching around, I found
<a href="http://bbs.archlinux.org/viewtopic.php?pid=739859">an entry in the forum</a>
and got a hint on <a href="irc://irc.freenode.org/#archlinux">IRC</a> that
there are <a href="http://build.archlinux.org/isos/">testing ISO images</a>
available.</p>
<p>The interesting thing is that the iso image can be copied
directly to a USB stick, because grub is being used!</p>
<p>The installer detected the network card and I gave
the auto prepare disk setup a chance, which creates partitions for</p>
<ul>
<li>/boot (ext2!),</li>
<li>swap,</li>
<li>/ and</li>
<li>/home.</li>
</ul>
<p>Interesting, but not what I would choose, as there has been no need for
a /boot partition on x86 for a long time
(<a href="http://www.google.com/search?q=lilo%20lba32">see the lba32 option for lilo</a>).
It also warned me, after I recreated the partition table, that there
is no /boot partition.</p>
<pre><code>Dear arch developers, why do you depend so much on /boot?
</code></pre>
<p>But in general, the arch installer is straightforward to use and
it says what it does (I really like that). The encryption support is
a bit strange, as it does not prepare the crypttab config, which
could easily be integrated into the installer.</p>
<p>Arch has an easy integration of crypttab into the boot process,
but there are two drawbacks:</p>
<ul>
<li>Arch does not support the Neo 2.0 keyboard layout</li>
<li>And the keyboard layout is loaded <strong><em>after</em></strong> asking for the
passphrase of the crypted devices.</li>
</ul>
<p>In the end, Archlinux installed fine on the X201 and I will keep using
it, to give the distribution a proper try.</p>
libpr0n: an image rendering library for Mozillahttps://www.nico.schottelius.org//blog/libpr0n-the-internet-is-for-porn/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>It is neither a secret that
<a href="http://en.wikipedia.org/wiki/Sex_sells">sex sells</a> nor that
Internet development is heavily influenced by
<a href="http://en.wikipedia.org/wiki/Pornography">pornography</a>.</p>
<p>I know the great image viewer
<a href="http://pornview.sourceforge.net/">pornview</a>
(which I replaced with <a href="http://gthumb.sourceforge.net/">gthumb</a>
some time ago), but today I found another interesting piece of code made
for viewing pornography: <a href="http://www.libpr0n.com/">libpr0n</a>. It is
described as</p>
<pre><code>"the smallest, fastest and most standards compliant Mozilla image library ever made"
</code></pre>
<p>According to the <a href="http://www.libpr0n.com/faq.html">f.a.q</a>, the aim is</p>
<pre><code>"to render pornographic images in an efficient way"
</code></pre>
<p>which reminded me of the sentence
<a href="http://en.wikipedia.org/wiki/Avenue_Q">"The Internet is for porn."</a>.</p>
<p>By the way, I found the library by digging into
<a href="http://ikiwiki.info/bugs/">ikiwiki's bug list</a>, which
references the
<a href="http://ikiwiki.info/bugs/html5_support/">lack of html5 support in ikiwiki</a>,
which made me interested in <a href="http://html5.org/">HTML5</a>, which seems to
be <a href="http://wiki.whatwg.org/wiki/FAQ">developed quite open</a> by
the editor <a href="http://ian.hixie.ch/">Ian Hickson</a>, who writes
<a href="http://ln.hixie.ch/">a blog</a> and has
<a href="http://index.hixie.ch/">an index of websites he is related to</a>, which
referenced to <a href="http://www.libpr0n.com/">libpr0n</a>.</p>
Mixing redirects and rewrites with lighttpd and Plonehttps://www.nico.schottelius.org//blog/lighttpd-plone-rewrite-redirect/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>The situation</h2>
<p>As you may already know,
<a href="https://www.nico.schottelius.org//blog/restart-to-write-news/">I am</a>
<a href="https://www.nico.schottelius.org//blog/migration-1-configs/">migrating</a>
<a href="https://www.nico.schottelius.org//blog/migration-2-freebsd-raid-monitoring-foss/">many of</a>
<a href="https://www.nico.schottelius.org//blog/migration-3-ccollect/">my websites</a> into this one.</p>
<p>Today I also began to redirect stuff from my
previous personal website, http://nico.schottelius.org.
I am (still) running <a href="http://www.plone.org">Plone</a> on that
site, behind <a href="http://www.lighttpd.net/">lighttpd</a>. The
configuration of lighttpd looks like this:</p>
<pre><code>$HTTP["host"] =~ "^(nico|nico2)\.schottelius\.org$" {
url.rewrite-once = ( "^/(.*)" => "/VirtualHostBase/http/nico.schottelius.org/cms/VirtualHostRoot/$1" )
var.logdir = "/home/server/www/nico/nico.schottelius.org/logs/"
accesslog.filename = logdir + "access.log"
proxy.server = ( "" => (
( "host" => "192.168.6.2", "port" => 8082 ),
( "host" => "192.168.6.2", "port" => 8083 )
))
}
</code></pre>
<p>(<a href="https://www.nico.schottelius.org//configs/lighttpd-zope-http-and-https">a more detailed version can be found here</a>)</p>
<h2>The idea</h2>
<p>Now I created a new <a href="https://www.nico.schottelius.org//about/">about page here</a> and want to redirect
the old URLs <strong>"^/ueber/nico-schottelius$"</strong> and <strong>"^/about/nico-schottelius$"</strong>
from the Plone site to it.</p>
<p>First I tried the normal redirect like this:</p>
<pre><code> url.redirect = ( "^/ueber/nico-schottelius$" => "http://www.nico.schottelius.org/about/",
"^/about/nico-schottelius$" => "http://www.nico.schottelius.org/about/" )
</code></pre>
<h2>The solution</h2>
<p>Unfortunately, this did not work. You may already have spotted the bug...
The correct way to redirect pages from lighttpd in front of
<a href="http://www.zope.org">Zope</a>, which does <strong><em>rewriting</em></strong>, is to match on the
<strong>rewritten</strong> path! Thus, the following code does the
<a href="http://nico.schottelius.org/about/nico-schottelius">correct redirect</a>:</p>
<pre><code>url.redirect = (
"^/VirtualHostBase/http/nico.schottelius.org/cms/VirtualHostRoot/ueber/nico-schottelius$"
=> "http://www.nico.schottelius.org/about/",
"^/VirtualHostBase/http/nico.schottelius.org/cms/VirtualHostRoot/about/nico-schottelius$"
=> "http://www.nico.schottelius.org/about/"
)
</code></pre>
<p>You can use <a href="http://curl.haxx.se">curl</a> to verify the redirect:</p>
<pre><code>[22:54] ikn% curl -i http://nico.schottelius.org/about/nico-schottelius
HTTP/1.1 301 Moved Permanently
Location: http://www.nico.schottelius.org/about/
Content-Length: 0
Date: Mon, 22 Jun 2009 21:01:39 GMT
Server: lighttpd/1.4.19
</code></pre>
Linux cannot ping itself, but others can ping the boxhttps://www.nico.schottelius.org//blog/linux-cannot-ping-self/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This is just a small reminder for me and everybody else who gets
fooled like me:</p>
<pre><code>If the loopback device (lo) is not up, you cannot ping localhost!
</code></pre>
<p>This is true for all local addresses, even though they are not
on the loopback interface! It's kind of confusing, because everybody
else can ping the box; only Linux itself cannot ping (or otherwise reach) localhost.
So the quick solution is</p>
<pre><code>ip link set lo up
</code></pre>
<p>This problem has been solved and described on other sites as well,
but if this one helps you not to spend three hours debugging your
nfs setup, it's worth populating the net with yet another article
regarding this problem.</p>
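<p>Whether lo is actually up can also be checked without ping, by reading the
interface flags from sysfs (Linux-specific; IFF_UP is bit 0x1):</p>

```shell
#!/bin/sh
# Check the IFF_UP bit (0x1) of the loopback interface flags.
# A healthy lo reports e.g. 0x9 (IFF_UP | IFF_LOOPBACK).
flags=$(cat /sys/class/net/lo/flags)
if [ $(( flags & 1 )) -eq 1 ]; then
    echo "lo is up"
else
    echo "lo is down - fix with: ip link set lo up"
fi
```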
Linux virtual machine software is a real painhttps://www.nico.schottelius.org//blog/linux-virtual-machines-a-real-pain/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This report is about my experience with virtual machines today.</p>
<h2>UML (user-mode-linux)</h2>
<p>It began this morning, when I tried to setup a new virtual
machine with <a href="http://user-mode-linux.sourceforge.net/">User mode Linux</a>.
I could easily reuse an existing installation using
copy-on-write with the following command:</p>
<pre><code>linux umid=vm4 uml_dir=/home/nico/vm/uml con1=pts ubda=/home/nico/vm/cow/vm4,/home/nico/vm/images/debian eth0=tuntap,,,192.168.4.1 mem=4096M
</code></pre>
<p>After I issued</p>
<pre><code>apt-get update && apt-get dist-upgrade
</code></pre>
<p>in the virtual machine, it hung and did not react to new ssh connections.
I have seen this behaviour quite often with <strong>user mode Linux</strong> when there is
"a lot" of disk input/output. Ok, I wanted to use some kind of
framework for my virtual machines anyway, so for the time being,
let's forget about uml and try libvirt+kvm.</p>
<h2>Libvirt</h2>
<p>The <a href="http://libvirt.org/">libvirt</a> project looks quite promising
from its documentation, especially in combination with
<a href="http://virt-manager.org/">virt-manager</a>. Trying to create a
new virtual machine with virt-manager is kind of strange, because
it insists on having an installation medium. Though, locating the
Debian live CD is not so difficult. But then came the big problem:
When I tried to create a new disk image, virt-manager just hung
for several minutes, without the host system doing anything.
Some time before I had massive problems using virt-manager and selecting
a different pool for the images, which caused several problems when
trying to start the VM.</p>
<p>But well, let's give <a href="http://www.libvirt.org/apps.html">virsh</a> a try,
the command line utility to manage libvirt. Creating a new disk image
with virsh is pretty easy:</p>
<pre><code>vol-create-as default jr.img 8G
</code></pre>
<p>A bit confusing is the fact that the <strong>vol-create</strong> command without the
<strong>-as</strong> suffix expects an XML file as input. Having a look at the
other create commands confirms this guess:</p>
<pre><code>ikn:~% LANG=C LC_ALL=C virsh help | grep create
create create a domain from an XML file
net-create create a network from an XML file
nodedev-create create a device defined by an XML file on the node
pool-create create a pool from an XML file
pool-create-as create a pool from a set of args
vol-create create a vol from an XML file
vol-create-from create a vol, using another volume as input
vol-create-as create a volume from a set of args
ikn:~% virsh --version
0.7.4
</code></pre>
<p>Some commands do not support creation from the command line, but
only from an XML file, which makes virsh useless for interactive
and scripting use.</p>
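<p>For completeness, the XML that <strong>vol-create</strong> expects is small;
a sketch of a minimal volume definition written from a shell heredoc (the
qcow2 format and the byte-based capacity are assumptions matching the 8G
example above):</p>

```shell
#!/bin/sh
# Generate a minimal storage volume definition for the example above;
# one would then run:  virsh vol-create default jr.xml
tmp=$(mktemp -d)
cat > "$tmp/jr.xml" <<'EOF'
<volume>
  <name>jr.img</name>
  <capacity>8589934592</capacity>  <!-- 8 GiB in bytes -->
  <target>
    <format type='qcow2'/>
  </target>
</volume>
EOF
```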
<p>This brings me to the new kid on the block: ganeti</p>
<h2>Ganeti</h2>
<p>When I first experienced problems with libvirt, some people pointed
me to <a href="http://code.google.com/p/ganeti/">ganeti</a>
(truth be told, it was one of the ganeti developers).
Until today I delayed this idea, but after the problems with libvirt
I decided to give ganeti-2.0.5-1 (Debian package) a try. First of all
I tried to follow the
<a href="http://ganeti-doc.googlecode.com/svn/ganeti-1.2/install.html">installation tutorial</a>
referenced on the homepage, which is heavily oriented towards using
<a href="http://www.xen.org/">Xen</a> and <a href="http://tldp.org/HOWTO/LVM-HOWTO/">LVM</a>, both
of which I do not plan to use. Trying to get ganeti running, I met
some interesting problems:</p>
<pre><code>[11:26] tee:root# gnt-cluster init ganeti.schottelius.org
Failure: prerequisites not met for this operation:
This host's IP resolves to the private range (127.0.1.1). Please fix DNS or /etc/hosts.
</code></pre>
<p>This is described in the ganeti manual and easily fixed by commenting out the
relevant entry in <strong><em>/etc/hosts</em></strong>:</p>
<pre><code>[11:27] tee:root# grep tee.schottelius.org /etc/hosts
#127.0.1.1 tee.schottelius.org tee
</code></pre>
<p>After that I was a bit confused by ganeti not finding its cluster name:</p>
<pre><code>[11:27] tee:root# gnt-cluster init ganeti.schottelius.org
Failure: can't resolve hostname 'ganeti.schottelius.org'
[11:28] tee:root# ping ganeti.schottelius.org
PING ganeti.schottelius.org (77.109.138.195) 56(84) bytes of data.
64 bytes from ganeti.schottelius.org (77.109.138.195): icmp_seq=1 ttl=64 time=0.026 ms
^C
--- ganeti.schottelius.org ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
</code></pre>
<p>Retrying two times "solved" the problem, which is a bit confusing for me
as ganeti and ping both use the same resolver library. After that I met the
"no-lvm-problem":</p>
<pre><code>[11:38] tee:root# gnt-cluster init -b br0 ganeti.schottelius.org
Failure: prerequisites not met for this operation:
Error: volume group 'xenvg' missing
specify --no-lvm-storage if you are not using lvm
</code></pre>
<p>Specifying the required parameter led me into a new problem:</p>
<pre><code>[11:38] tee:root# gnt-cluster init -b br0 --no-lvm-storage ganeti.schottelius.org
Failure: prerequisites not met for this operation:
Invalid master netdev given (xen-br0): 'Device "xen-br0" does not exist.'
</code></pre>
<p>Which is interesting: <strong>ganeti seems to ignore the bridge parameter -b</strong>.
So, to use ganeti, I <strong>renamed the bridge from br0 to xen-br0</strong> in
<strong><em>/etc/network/interfaces</em></strong>:</p>
<pre><code>auto xen-br0
iface xen-br0 inet manual
bridge_ports eth1
</code></pre>
<p>And finally I was able to initialise the ganeti cluster:</p>
<pre><code>[15:06] tee:root# gnt-cluster init -b br0 --no-lvm-storage ganeti.schottelius.org
</code></pre>
<p>Then I tried to join the host into the cluster, which failed; retrieving
status information failed as well:</p>
<pre><code>[15:06] tee:root# gnt-node add tee.schottelius.org
Node tee.schottelius.org already in the cluster (as tee.schottelius.org) - please retry with '--readd'
[15:07] tee:root# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
tee.schottelius.org ? ? ? ? ? 0 0
</code></pre>
<p>Trying to re-add it results in an error without an error message
and does not fix the problem:</p>
<pre><code>[15:07] tee:root# gnt-node add --readd tee.schottelius.org
The authenticity of host 'tee.schottelius.org (77.109.138.222)' can't be established.
RSA key fingerprint is c7:d0:a8:32:ad:f0:9b:fa:1e:77:d5:1f:64:d8:9b:db.
Are you sure you want to continue connecting (yes/no)? yes
Thu Dec 31 15:08:23 2009 - INFO: Readding a node, the offline/drained flags were reset
Thu Dec 31 15:08:23 2009 - INFO: Node will be a master candidate
Failure: command execution error:
[15:08] tee:root#
[15:32] tee:root# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
tee.schottelius.org ? ? ? ? ? 0 0
[15:34] tee:root#
</code></pre>
<p>At that point I was pointed to the
<a href="http://ganeti-doc.googlecode.com/svn/ganeti-2.0/install.html">more recent documentation</a>
of ganeti and started from scratch:</p>
<pre><code>[16:22] tee:vm# gnt-cluster destroy --yes-do-it
[16:23] tee:vm# gnt-cluster init --no-lvm-storage ganeti.schottelius.org
[16:26] tee:vm# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
tee.schottelius.org ? ? ? ? ? 0 0
</code></pre>
<p>After double checking that the needed daemons are running
(/etc/init.d/ganeti restart), I got a good hint: one has to specify
the hypervisor to use during initialisation:</p>
<pre><code>[16:34] tee:vm# gnt-cluster destroy --yes-do-it
[16:35] tee:vm# gnt-cluster init --no-lvm-storage -t kvm ganeti.schottelius.org
[16:36] tee:vm# gnt-node list
Node DTotal DFree MTotal MNode MFree Pinst Sinst
tee.schottelius.org ? ? 19.6G 2.7G 17.6G 0 0
</code></pre>
<p>Now I tried to add a new virtual machine instance, which resulted in another
error:</p>
<pre><code>[16:51] tee:vm# gnt-instance add -t file -s 4G -o debootstrap -n tee.schottelius.org jr.nachtbrand.ch
Failure: prerequisites not met for this operation:
Hypervisor parameter validation failed on node tee.schottelius.org: Instance kernel '/boot/vmlinuz-2.6-kvmU' not found or not a file
</code></pre>
<p>This seems to be some kind of ganeti logic to keep the kernel outside the
block device, which is similar to the user mode Linux approach. After linking one
of the host kernels and its initrd, adding an instance succeeded:</p>
<pre><code>[16:59] tee:/boot# ln -s vmlinuz-2.6.30-2-amd64 vmlinuz-2.6-kvmU
[16:59] tee:/boot# ln -s initrd.img-2.6.30-2-amd64 initrd-2.6-kvmU
[17:00] tee:vm# gnt-instance add -t file -s 4G -o debootstrap -n tee.schottelius.org jr.nachtbrand.ch
[17:01] tee:/boot# gnt-instance list
Instance Hypervisor OS Primary_node Status Memory
jr.nachtbrand.ch kvm debootstrap tee.schottelius.org running 128M
</code></pre>
<p>The instance is also correctly connected to the bridge, the OS is seen as
valid by <strong>gnt-os</strong>, and <strong>gnt-cluster verify</strong> looks good:</p>
<pre><code>[17:14] tee:/boot# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no
xen-br0 8000.0015176a26f7 no eth1
tap4
[17:16] tee:/boot# gnt-os diagnose
OS: debootstrap [global status: valid]
Node: tee.schottelius.org, status: VALID (path: /usr/share/ganeti/os/debootstrap)
[17:17] tee:/boot# gnt-cluster verify
Thu Dec 31 17:17:24 2009 * Verifying global settings
Thu Dec 31 17:17:24 2009 * Gathering data (1 nodes)
Thu Dec 31 17:17:24 2009 * Verifying node tee.schottelius.org (master)
Thu Dec 31 17:17:24 2009 * Verifying instance jr.nachtbrand.ch
Thu Dec 31 17:17:24 2009 * Verifying orphan volumes
Thu Dec 31 17:17:24 2009 * Verifying remaining instances
Thu Dec 31 17:17:24 2009 * Verifying N+1 Memory redundancy
Thu Dec 31 17:17:24 2009 * Other Notes
Thu Dec 31 17:17:24 2009 - NOTICE: 1 non-redundant instance(s) found.
Thu Dec 31 17:17:24 2009 * Hooks Results
</code></pre>
<p>As specified in the documentation, I tried to connect to the console:</p>
<pre><code>[17:28] tee:/boot# gnt-instance console jr.nachtbrand.ch
[17:30] tee:/boot# gnt-instance console --show-cmd jr.nachtbrand.ch
ssh -q -oEscapeChar=none -oHashKnownHosts=no -oGlobalKnownHostsFile=/var/lib/ganeti/known_hosts -oUserKnownHostsFile=/dev/null -oHostKeyAlias=ganeti.schottelius.org -oBatchMode=yes -oStrictHostKeyChecking=yes -t root@tee.schottelius.org '/usr/bin/socat STDIO,echo=0,icanon=0 UNIX-CONNECT:/var/run/ganeti/kvm-hypervisor/ctrl/jr.nachtbrand.ch.serial'
</code></pre>
<p>The problem is that the newly debootstrapped system
<em>does not have a serial console set up</em>.</p>
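<p>For reference, getting a login prompt on the ganeti serial console could be done roughly like this. This is only a sketch under assumptions (a sysvinit guest with /etc/inittab, and ttyS0 as the emulated serial port); it is demonstrated against a temporary directory standing in for the mounted guest filesystem, so it is safe to run:</p>

```shell
# Stand-in for the mounted guest root -- replace with the real mount
# point when applying this to an actual instance.
guest=$(mktemp -d)
mkdir -p "$guest/etc"
# Spawn a getty on the serial port the console command attaches to:
echo 'T0:23:respawn:/sbin/getty -L ttyS0 38400 vt100' >> "$guest/etc/inittab"
# The guest kernel command line additionally needs something like:
#   console=ttyS0,38400 console=tty0
grep ttyS0 "$guest/etc/inittab"
```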
<p>As you can see, by the evening of this day I had gained a lot of new
experience, but had <em>no reliably running virtualisation framework</em>.
That brings me to the end of this report:</p>
<ul>
<li>User mode Linux does not work reliably under some I/O load.</li>
<li>Virt-manager is absolutely not able to change the simplest parameters.</li>
<li>Virsh is unusable if you don't want to edit XML files.</li>
<li>Ganeti has a lot of unhandled problems and still relies very much on Xen + LVM.</li>
</ul>
<p>As my vacation ends next Monday, I will have a look at the commercial
virtualisation frameworks. For the folks behind the FOSS tools named above:
guys, you have to improve a lot before one can call your software
"good and clean software".</p>
LXC still insecure (since 2011)https://www.nico.schottelius.org//blog/lxc-insecure-since-2011/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>For a customer of mine I was researching whether
we could use <a href="http://linuxcontainers.org/">LXC</a> for
virtualisation.
The customer is migrating to Debian 7,
<a href="https://wiki.debian.org/OpenVz">which does not contain OpenVZ anymore</a>.</p>
<p>Although the
<a href="http://permalink.gmane.org/gmane.linux.kernel.containers.lxc.general/5102">Debian template bug</a> is still not fixed, I first thought it would still
be usable by writing our own templates. But it turns
out that LXC still allows you to
<a href="http://blog.bofh.it/debian/id_413">execute code as root on the host since 2011</a>.</p>
<p>More background information for those of you who are currently considering
LXC:</p>
<ul>
<li> <a href="https://wiki.ubuntu.com/UserNamespace">Ubuntu / User Namespaces in Linux</a></li>
<li> <a href="https://wiki.gentoo.org/wiki/LXC#MAJOR_Temporary_Problems_with_LXC_-_READ_THIS">Gentoo Wiki</a></li>
<li> <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=680469">Debian Bug Report</a></li>
</ul>
<p>So for the moment my recommendation is <a href="http://wiki.qemu.org/Main_Page">QEMU</a> (KVM has been merged back into QEMU!).</p>
Installing Linux on a Macbook Air (4,2)https://www.nico.schottelius.org//blog/macbook-air-42-archlinux/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>These are my first impressions of the Archlinux installation
on the MacBook Air (4,2).</p>
<h2>General impression</h2>
<p>The installation feels pretty rough, very much like the installation
on a PPC based iBook some years ago: a lot of stuff is still different
from a usual PC, there is no intuitive debugging on bootup, the keyboard
and trackpad work differently, and so on.</p>
<p>But in general: <strong><em>Linux works on the MacBook Air</em></strong>, there are just
some workarounds needed today, which may vanish tomorrow already.</p>
<h2>Installation</h2>
<p>I chose to use Archlinux on the MacBook Air, but this is just a minor
detail; the hints can be used on any other distribution as well.</p>
<h2>Shrink MacOS X partition</h2>
<p>In MacOS X I shrank the HFS partition to 50GB using the disk utility
to free up some space for Linux. Although removing MacOS X completely
crossed my mind, keeping it may be useful in the end to get Linux working.</p>
<h2>Install refit</h2>
<p><a href="http://refit.sourceforge.net/">Refit</a> is the de-facto standard
for multiboot on Macs in the FOSS world and allows me to easily
select a different OS.</p>
<h2>Prepare install medium</h2>
<p>I've used <strong>archlinux-2011.08.19-core-dual.iso</strong> and put it onto a
USB stick:</p>
<pre><code>dd if=archlinux-2011.08.19-core-dual.iso of=/dev/sdc
</code></pre>
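<p>A slightly more careful variant of the dd invocation: the device name /dev/sdc is specific to my machine (double-check with lsblk before writing), so the sketch below copies between plain files to stay safe to run as-is:</p>

```shell
# bs=4M speeds up the copy considerably, conv=fsync flushes the data
# to the target before dd returns.
iso=$(mktemp); stick=$(mktemp)
dd if=/dev/zero of="$iso" bs=1M count=2 2>/dev/null   # stand-in for the ISO
dd if="$iso" of="$stick" bs=4M conv=fsync 2>/dev/null # real use: of=/dev/sdc
cmp "$iso" "$stick" && echo identical                 # -> identical
```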
<h2>Bootup</h2>
<p>Without refit, you could hold down the option key on the mac
and select the USB drive. With refit, you'll see the usb stick
directly and can boot from it.</p>
<h2>Partitioning</h2>
<p>I've not touched the three partitions from MacOS X
(efi, customer, recovery hd), but added two new partitions:</p>
<ul>
<li>/dev/sda4: Used for /boot</li>
<li>/dev/sda5: Used for /, encrypted using Luks</li>
</ul>
<p>Partitioning was done using parted or sgdisk (the latter preferred).</p>
<h2>Bootloader</h2>
<p>My first idea was to use Syslinux on vfat on /dev/sda4, but
that does not work: refit sees the partition and allows you to
boot from it, but trying to boot leads to a black screen
headed by the following line:</p>
<pre><code>No bootable device -- insert boot disk and press any key
</code></pre>
<p>After I reformatted /dev/sda4 with ext4 and used extlinux instead with</p>
<pre><code>extlinux --install /boot
</code></pre>
<p>the boot into syslinux works.</p>
<h2>Mouse</h2>
<p>I've read some forum entries about loading appletouch before usbhid, but
on this device it actually needs to be <strong><em>bcm5974</em></strong>. A temporary workaround
to make it behave like a synaptics touchpad is the following command:</p>
<pre><code>rmmod usbhid; rmmod bcm5974; modprobe bcm5974; modprobe usbhid
</code></pre>
<p>I'm somewhat wondering whether module loading order should be significant
(I know it has been in some other situations, but it does not feel right);
for now it definitely is.</p>
<h2>WLAN</h2>
<p>The <strong>bcma</strong> module is loaded by default, which needs to be blacklisted
(<a href="https://wiki.archlinux.org/index.php/Broadcom_wireless#Wi-Fi_card_does_not_work.2Fshow_up_since_kernel_upgrade_.28brcmsmac.29">arch1</a>,
<a href="https://wiki.archlinux.org/index.php/Kernel_modules#Blacklisting">arch2</a>
)</p>
<p>Temporary &amp; permanent fix:</p>
<pre><code># echo 'blacklist bcma' >> /etc/modprobe.d/modprobe.conf
# rmmod bcma
# rmmod brcmsmac
# modprobe brcmsmac
# mkinitcpio -p linux
</code></pre>
<h2>Keyboard</h2>
<p>The <strong>fn-key</strong> currently does not work on my installation,
so I cannot access the emulated Page-Down/Page-Up/Home/End
keys you usually get when using fn+cursor keys.</p>
<p>This also seems to be related to hid_apple, but issuing the following
commands stops input from working at all:</p>
<pre><code>rmmod usbhid; rmmod hid_apple; modprobe hid_apple; modprobe usbhid
</code></pre>
<p>I've opened a
<a href="https://bugs.archlinux.org/task/26425">bugreport on the archlinux site</a>.</p>
<h2>Keyboard Backlight</h2>
<p>The backlight driver (<strong><em>applesmc</em></strong>) works pretty well and can be used
by simply writing the brightness into a sysfs file.</p>
<p>I've started the <a href="http://git.schottelius.org/?p=kbsd">kbsd</a> project
to automatically adjust the brightness depending on the ambient
light, as detected by the light sensor.</p>
<h2>Display</h2>
<p>The machine boots up into a 1280x800 screen, although the panel
natively supports 1440x900.</p>
<p>With the intel driver, xorg does not start at all;
using fbdev instead is a workaround. You can use fbdev by putting
the following content into <strong><em>/etc/X11/xorg.conf.d/20-fbdev.conf</em></strong>:</p>
<pre><code>Section "Device"
Driver "fbdev"
Identifier "card0"
EndSection
</code></pre>
<p>There is also a <a href="https://bugs.archlinux.org/task/26426">bugreport for the display issue</a>.</p>
<p>Keith Packard is also working on this issue,
as can be read <a href="http://keithp.com/blogs/MacBook-Air/">here</a> and
<a href="http://keithp.com/blogs/MacBook-Air-2/">here</a>.</p>
Correcting the multimedia keys mapping on the MacBook Air 4,2https://www.nico.schottelius.org//blog/macbook-air-42-correcting-multimedia-key-mapping-and-status/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>The mapping of the multimedia keys of the MacBook Air 4,2 (probably earlier
ones as well) was
<a href="https://lkml.org/lkml/2011/10/17/326">slightly off by one and had the Eject key mapped</a>,
although the MBA does not have an optical drive.</p>
<h2>Two trees available with fixes</h2>
<p>The patch against <a href="http://git.kernel.org/?p=linux/kernel/git/jikos/hid.git;a=summary">Jiri's for-next branch of the hid tree</a> can be found
in <a href="http://git.schottelius.org/?p=foreign/linux-jiri-hid;a=summary">my <strong>mba42-fixes</strong> branch</a>.</p>
<p>The second tree is the one I use to run with the
<a href="https://www.nico.schottelius.org//blog/macbook-air-42-touchpad-keyboard-correct-screen-resolution/">correct keyboard mapping and screen resolution patches</a>, which contains the change in the
<a href="http://git.schottelius.org/?p=foreign/linux-keith-jiri-mba;a=summary"><strong>keyboardmappingfix</strong> branch</a>.</p>
<h2>General status / todo list</h2>
<p>The notebook is pretty usable with the current patches applied. There are
some gotchas, though, which I'll try to fix soon:</p>
<ul>
<li><a href="https://lkml.org/lkml/2011/10/18/145">network process hang issue in the <strong>brcmsmac</strong> driver</a></li>
<li>The mouse pointer does not move when the mouse button is pressed (probably a configuration problem of the synaptics touchpad)</li>
<li>Brightness is not adjusted in xorg when pressing FN-F1 (but can be done via <strong>echo VALUE > /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/brightness</strong>)</li>
<li>Display is not switched off when lid is closed (acpi reports the lid event, though) (probably related to the previous problem)</li>
<li>Incorrect representation when using a Mini-Display-Port to HDMI adapter on external screen</li>
<li>No output on external monitor when using a Mini-Display-Port to DVI adapter</li>
<li>I'm not sure how to adjust the keyboard backlight correctly:
<ul>
<li>Correctly map the ambient light values (0-255) to the keyboard light brightness (0-255)</li>
<li>A straight brightness = 255 - ambient_light does not look good</li>
<li>It should probably be off when the lid is closed</li>
<li>See <a href="http://git.schottelius.org/?p=kbsd;a=summary">kbsd</a> for an early idea of what can be done</li>
</ul>
</li>
</ul>
Status of Linux on the MacBook Air 4,2https://www.nico.schottelius.org//blog/macbook-air-42-linux-status-report/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>Following up my
<a href="https://www.nico.schottelius.org//blog/macbook-air-42-correcting-multimedia-key-mapping-and-status/">recent</a>
<a href="https://www.nico.schottelius.org//blog/macbook-air-42-touchpad-keyboard-correct-screen-resolution/">posts</a>
<a href="https://www.nico.schottelius.org//blog/macbook-air-42-archlinux/">about the MacBook Air 4,2</a> with Linux,
here's a status report of what works and what is still missing.</p>
<p>This report is based on <strong><em>Linux 3.2.0-rc5</em></strong>.</p>
<h2>Screen Resolution and external screens</h2>
<p>Great news: Linus' tree merged the patches necessary for full screen resolution,
and external screens work as well:</p>
<pre><code>Screen 0: minimum 320 x 200, current 2560 x 2340, maximum 8192 x 8192
eDP1 connected 1440x900+0+1440 (normal left inverted right x axis y axis) 30mm x 179mm
1440x900 60.0*+
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected 2560x1440+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
HDMI2 disconnected (normal left inverted right x axis y axis)
HDMI3 disconnected (normal left inverted right x axis y axis)
DP2 disconnected (normal left inverted right x axis y axis)
DP3 disconnected (normal left inverted right x axis y axis)
2560x1440 (0xb2) 241.5MHz
h: width 2560 start 2608 end 2640 total 2720 skew 0 clock 88.8KHz
v: height 1440 start 1443 end 1448 total 1481 clock 60.0Hz
</code></pre>
<p>An external monitor was tested via Mini-DP-to-HDMI adapter and Mini-DP-to-DVI
adapter with resolutions from 1920x1080 (24" Samsung) up to 2560x1440 (27" Dell).</p>
<h2>Keyboard/Multimedia Keys (FN+F1..F12)</h2>
<p>With the current kernel all multimedia keys match correctly.</p>
<h2>Keyboard/Backlight</h2>
<p>Keyboard backlight can be controlled using
<a href="http://git.schottelius.org/?p=kbsd;a=summary">kbsd</a>. This works fine,
but it could be improved to react to the light sensor.
The question simply is how to map the light sensor values to keyboard
backlight settings. If you've got a good table or function to apply,
drop me a mail, so I can include it.</p>
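<p>For what it's worth, one candidate is an inverse-gamma curve instead of a straight linear mapping; the <strong>kbd_brightness</strong> helper below and the gamma value are purely hypothetical starting points to experiment with, not a tested table:</p>

```shell
# Hypothetical mapping from an ambient light reading (0-255) to a
# keyboard backlight value (0-255): dark rooms get full brightness,
# the backlight fades out quickly as the room gets brighter.
kbd_brightness() {
    awk -v a="$1" 'BEGIN {
        g = 2.2                              # gamma, tune to taste
        v = 255 * (1 - (a / 255) ^ (1 / g))  # inverse-gamma curve
        printf "%d\n", (v < 0) ? 0 : v
    }'
}
kbd_brightness 0     # -> 255
kbd_brightness 255   # -> 0
```

The resulting value could then be written to the applesmc brightness file that kbsd already drives.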
<h2>Touchpad</h2>
<p>The touchpad works almost completely; the only
problem left is that
the mouse pointer does not move while the mouse button is pressed.
This may be a configuration problem of the synaptics touchpad, but
I haven't found a fix for it.</p>
<h2>Screen backlight</h2>
<p>The nice utility <strong>xbacklight</strong> still does not work,</p>
<pre><code>[11:14] brief:~% xbacklight
No outputs have backlight property
</code></pre>
<p>but dimming works manually via echo:</p>
<pre><code>[21:15] brief:~# echo 2200 > /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-eDP-1/intel_backlight/brightness
</code></pre>
<p>And the display is not switched off when the lid is closed,
although ACPI reports the lid event. Probably a related problem.</p>
<h2>WLAN / brcmsmac / Broadcom BCM43224</h2>
<p>This used to be fixed in my own patched kernel, but since I've got
a new MacBook Air 4,2, it suffers the
<a href="https://lkml.org/lkml/2011/10/18/145">network process hang issue in the <strong>brcmsmac</strong> driver</a>
again. Iterating over several different kernels did not fix this problem yet.</p>
<p>Interestingly, though, after 2 suspend and resume cycles it works, until the 4th or 5th
suspend cycle, at which point all network processes hang again if the connection to the
AP is lost.</p>
<h2>Current problem summary</h2>
<ul>
<li>Clicking and moving the mouse pointer does not work</li>
<li>xbacklight does not recognise backlight controls</li>
<li>Display is not switched off on lid close</li>
<li>Network processes hang when the connection to the AP is lost/cannot be established</li>
</ul>
MacBook Air 4,2: Xorg/Synaptics touchpad click and move fixedhttps://www.nico.schottelius.org//blog/macbook-air-42-touchpad-click-and-move-fixed/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>For some time, when you pressed button 1 on the MacBook Air 4,2 under Linux/Xorg
and moved the finger (the classic <strong>select</strong> something on the screen behaviour),
a button 3 (right click) event was emitted.</p>
<h2>Current status</h2>
<p>As reported on <a href="https://bugs.freedesktop.org/show_bug.cgi?id=45396">bugzilla</a>,
it seems Peter Hutterer
<a href="http://cgit.freedesktop.org/xorg/driver/xf86-input-synaptics/commit/?id=6c457c0c61a0834361f45a073148db7b4c9be40b">merged the relevant fixes</a>
into the master tree, which were submitted by
Chase Douglas in the patches
<a href="http://patchwork.freedesktop.org/patch/9210/">9210</a> through
<a href="http://patchwork.freedesktop.org/patch/9219/">9219</a>
(<a href="http://patchwork.freedesktop.org/patch/9214/">9214</a> and
<a href="http://patchwork.freedesktop.org/patch/9215/">9215</a> seem
to be the relevant ones).</p>
<p>I haven't tested it so far, but the commits around the merge look good!</p>
Getting the keyboard, touchpad and correct screen resolution working on the MacBook Air 4,2https://www.nico.schottelius.org//blog/macbook-air-42-touchpad-keyboard-correct-screen-resolution/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>If you are running Linux on the MacBook Air 4,2 and have any of the following problems,
this article is for you:</p>
<ul>
<li>FN key not working / modifying keys</li>
<li>Touchpad not working in Multitouch mode</li>
<li>Resolution is 1280x800 instead of 1440x900</li>
</ul>
<h2>Touchpad and Keyboard</h2>
<p>In Jiri's <a href="http://git.kernel.org/?p=linux/kernel/git/jikos/hid.git;a=summary">for-next branch</a>
of the hid tree, the keyboard and touchpad issues have been solved.</p>
<h2>Screen resolution</h2>
<p>Keith fixed the screen resolution issues in his tree, which you can retrieve:</p>
<pre><code>git clone -b fix-edp-vdd-power git://people.freedesktop.org/~keithp/linux
</code></pre>
<h2>Getting both fixes</h2>
<p>Until the changes are merged into Linus' tree, I've set up a tree that merges the
two previous ones:</p>
<ul>
<li><a href="http://git.schottelius.org/?p=foreign/linux-keith-jiri-mba;a=summary">gitweb</a></li>
<li>git://git.schottelius.org/foreign/linux-keith-jiri-mba</li>
</ul>
<p>I've used the config.gz from Archlinux as base for .config and added the
new config options, which resulted in a
<a href="https://www.nico.schottelius.org//news/dot-config">working .config for the MacBook Air</a>.</p>
Managing custom software with environment modules in the Systems Grouphttps://www.nico.schottelius.org//blog/managing-custom-software-with-environment-modules-in-the-systems-group/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>The problem</h2>
<p>Maintaining custom software as a sysadmin is not easily possible for
a research group, because the resources needed to do so easily exceed
the available working time as soon as the number of software
installations gets too big or the software too complex.</p>
<p>Researchers, on the other hand, rely on up-to-date or unpackaged
software to do their work.</p>
<h2>General solution</h2>
<p>One way to solve this issue is to provide a way for researchers to install
and maintain their own software without interfering with the system
software.</p>
<h2>Software solution</h2>
<p>One possible software solution is provided by the
<a href="http://modules.sourceforge.net/">Environment Modules Project</a>.</p>
<h2>Implementation</h2>
<p>The path <strong>/pub/env-modules</strong> should contain the user maintained
software and is mounted via <strong>nfs</strong> and <strong>autofs</strong>.
The Environment Modules Package is installed below
<strong>/pub/env-modules/Modules</strong>, the files to configure modules
(<strong>modulefiles</strong>) reside below <strong>/pub/env-modules/modulefiles/</strong>.</p>
<h3>Installation of modules</h3>
<p>The usual three steps work fine if you have tcl installed:</p>
<pre><code>modules-3.2.8% ./configure --prefix=/pub/env-modules --with-module-path=/pub/env-modules/modulefiles
modules-3.2.8% make
modules-3.2.8% make install
</code></pre>
<h2>Usage</h2>
<h3>Creating a new module (sysadmin part)</h3>
<p>Create a new directory below <strong>/pub/env-modules</strong> and a link
below <strong>/pub/env-modules/modulefiles/</strong> to the newly created directory.
Now give ownership to the researcher who is maintaining the new software,
who can install the software and create a specific modulefile for the
software. For instance:</p>
<pre><code>% mkdir /pub/env-modules/cdist
% chown nicosc /pub/env-modules/cdist
# Delegate support for cdist maintenance into the user owned folder
% ln -s /pub/env-modules/cdist/modulefiles /pub/env-modules/modulefiles/cdist
</code></pre>
<h3>Creating a new module (user part)</h3>
<p>Install the software into your directory and create modulefiles below the
modulefiles directory:</p>
<pre><code>% git clone git://git.schottelius.org/cdist /pub/env-modules/cdist/
% mkdir /pub/env-modules/cdist/modulefiles
% cat << eof > /pub/env-modules/cdist/modulefiles/git
#%Module1.0#####################################################################
##
## cdist modulefile
##
##
##
proc ModulesHelp { } {
puts stderr "\tLets you use cdist"
}
module-whatis "Configuration Management"
append-path PATH /pub/env-modules/cdist/bin
eof
</code></pre>
<h3>Using env modules</h3>
<p>To actually make use of the new modules, you need to add env modules into your
shell. The following commands illustrate the way for the <strong>bash</strong>:</p>
<pre><code>% . /pub/env-modules/Modules/3.2.8/init/bash
% module avail
----------------------------------- /pub/env-modules/Modules/versions -----------------------------------
3.2.8
------------------------------------- /pub/env-modules/modulefiles --------------------------------------
cdist/git
% module load cdist/git
% module list
Currently Loaded Modulefiles:
1) cdist/git
</code></pre>
Maybe systemd is not the best ideahttps://www.nico.schottelius.org//blog/maybe-systemd-is-not-the-best-idea/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<pre><code>[189135.535385] systemd[515]: segfault at 7f75cc68a160 ip 00007f75cf24782d sp 00007fff0ad54040 error 4 in systemd[7f75cf20f000+123000]
</code></pre>
<p>Maybe, an init system should be small and robust and focus on
<strong><em>initialising</em></strong> the operating system.</p>
<p>I have plans for a change - just follow this blog for updates.</p>
Migrate Ubuntu to cinithttps://www.nico.schottelius.org//blog/migrate-ubuntu-to-cinit/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I was thinking about which OS to support with
<a href="https://www.nico.schottelius.org//software/cinit/">cinit</a> next. I think that
<a href="http://www.FreeBSD.Org">FreeBSD</a> and
<a href="http://www.ubuntu.com/">Ubuntu</a> are very interesting
to support with cinit now.</p>
<p>So I decided to write migration scripts for Ubuntus
<a href="http://upstart.ubuntu.com/">upstart</a> for the next version
of cinit.</p>
<p>Because it's a lot of work to migrate an existing init
system, I'm searching for friendly hackers to help me
get the migration script up and running:</p>
<p>The idea is to</p>
<ul>
<li>analyse current services</li>
<li>create new cinit services</li>
<li>glue everything together and boot up much faster</li>
</ul>
<p>The script
<a href="http://git.schottelius.org/?p=cLinux/cinit.git;a=blob;f=scripts/future-bin/cinit-conf.migrate.upstart.ubuntu.jaunty;hb=HEAD">cinit-conf.migrate.upstart.ubuntu.jaunty</a>
already contains a minimalistic analysis of Ubuntus upstart. What needs to be done is</p>
<ul>
<li>to read through the listed scripts inside <strong><em>cinit-conf.migrate.upstart.ubuntu.jaunty</em></strong></li>
<li>and to create scripts to create new services in <a href="http://git.schottelius.org/?p=cLinux/cinit.git;a=tree;f=bin">cinit/bin</a></li>
</ul>
<p>So if you want to help to support Ubuntu with cinit, you could</p>
<ul>
<li>analyse one init-script</li>
<li>and create the service creators (like <a href="http://git.schottelius.org/?p=cLinux/cinit.git;a=blob;f=bin/cinit-conf.svc.mount.tmpfs.linux.ubuntu">cinit-conf.svc.mount.tmpfs.linux.ubuntu</a>)</li>
</ul>
<p>If you are interested, please join the
<a href="http://l.schottelius.org/mailman/listinfo/cinit">cinit mailinglist</a> and let us
know which script you are hacking on.</p>
Migrated nico.schotteli.ushttps://www.nico.schottelius.org//blog/migrated-nico.schotteli.us/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The content of <strong>nico.schotteli.us</strong> has been migrated into this site.</p>
<p>If you are missing any software or documentation that used
to be on <strong>nico.schotteli.us</strong>, do not hesitate to contact me.</p>
Migrated tech.schottelius.orghttps://www.nico.schottelius.org//blog/migrated-tech.schottelius.org/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The small and mostly static site
<strong>tech.schottelius.org</strong> has been merged into
this site at <a href="https://www.nico.schottelius.org//about/computers/">computers</a>.</p>
Migrated unix.schottelius.orghttps://www.nico.schottelius.org//blog/migrated-unix.schottelius.org/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>All bigger projects of <strong>unix.schottelius.org</strong> have been
moved to this site already. With <a href="https://www.nico.schottelius.org//software/cconf/">cconf</a>, the
last "interesting" bits have been moved.
I did not migrate decr-f, because it needs a major cleanup
anyway.</p>
<p>If you are missing any software or documentation that used
to be on <strong>unix.schottelius.org</strong> or
<strong>linux.schottelius.org</strong>, do not hesitate to contact me.</p>
Migrating away from puppet to cdisthttps://www.nico.schottelius.org//blog/migrating-away-from-puppet-to-cdist/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>For those not being subscribed to the puppet-users mailing list,
here's my
<a href="http://groups.google.com/group/puppet-users/browse_thread/thread/83801a03c0fea665?pli=1">"goodbye and thanks for the fish"-message</a>:</p>
<pre><code>Date: Mon, 4 Apr 2011 19:26:06 +0200
From: Nico -telmich- Schottelius <nico-puppet-users --at-- schottelius.org>
To: puppet-users --at-- @googlegroups.com
Cc: steven-puppet-users --at-- @armstrong.cc
Subject: [Puppet Users] Migrating away from puppet to cdist
Good morning puppet users,
we, some sysadmins [0] at the computer science departement [1]
at ETH Zurich [2], developed a new configuration management
system called cdist [3], to which we migrate from our puppet
configuration.
I'm writing to this list for two reasons:
1) Say thanks and goodbye to puppet-*
Puppet in contrast to other systems emphasised on "define what I want"
versus "define what todo", which is a great approach and we've
shameless cloned this approach.
Also we discussed a lot of ideas used in puppet (as well as other
systems), from which we learned.
Puppet was the first CM I seriously adopted and it initially saved
me a lot of time. Thanks to the puppet team!
2) Show other puppet users how to get around (common) puppet problems
We're pretty confident that cdist solves some issues we've seen
in puppet and in the sense of FOSS, we'd like to inform others
how we've solved those issues in cdist:
Bootstrap problem
With puppet we needed to have ruby + some gems on the target
hosts. In cdist we only use a posix shell on the target plus
common UNIX tools (like find, rm, grep), as defined by POSIX.
Complex CA / SSL setup / issues
We've had some trouble using ssl certificates, especially with
multi master and frequent reinstallations. In cdist we only
rely on SSH.
Defining configuration in multiple locations
Defining a type multiple times in different locations in puppet
requires use of virtual ressources. In cdist you don't need to
care about this, as long as the parameters stay exactly the
same.
Error messages
If you encountered errors like "400 Bad Request",
"undefined method `closed?'", "can't convert nil into String",
or "undefined method `closed?' for nil:NilClass", you'll be
happy to hear that cdist's error messages contain usable
information.
Very easy extension
Whereas puppet has modules, types and providers, cdist only knows
about types. A type in cdist contains some functionality,
independent of whether you or upstream decided to implement it.
Pull versus Push approach
Puppet requires one (or more for redundancy reasons) central server,
because clients usually contact the master and ask for changes.
Cdist operates in push mode and can be run from a small machine
like the sysadmin notebook.
Integrated version control
Cdist is usually cloned via git from upstream and changes are
kept in a different git branch. This encourages you to use the
existing version control for your own configuration.
Integrated clean documentation
All cdist documentation is included into the release and can be
compiled into HTML or manpages. Cdist also includes a reference
document that contains all available paths, types and environment
variables.
Unobtrusive upgrade path
Upgrading cdist just requires one "git pull" on your master machine,
no update needed on any client.
Clean release cycle
Whereas in puppet things stopped working within a minor version,
the cdist release cycle clearly defines that any incompatibility
forces a change of at least the minor (1.x -> 1.y) version.
If you stay on a specific version, like 1.5, things will not break. Promised.
No automatic (magic) behaviour
In puppet you can use title or name without setting it explicitly.
This may be useful in some parts, but maybe surprising as well.
In cdist only the globally available environment variables are
documented and have the same meaning everywhere.
Codebase / Bugs
Puppet contains around 100k lines of code; with cdist you only need
to debug ~1k (core) to ~2k (including types) lines of code
(according to sloccount [5]).
Age
Warning: Although most points above may make cdist look
superior to puppet, cdist is still pretty young
(~4 months old) and may lack some functionality puppet already has.
cdist is usable in production environments already.
It may just not work in very fancy or ancient environments.
If you've any questions, do not hesitate to subscribe to the cdist
mailing list [4] and ask them there.
Cheers,
Nico
[0]: http://sans.ethz.ch
[1]: http://www.inf.ethz.ch
[2]: http://www.ethz.ch
[3]: http://www.nico.schottelius.org/software/cdist/
[4]: http://l.schottelius.org/mailman/listinfo/cdist
[5]: http://www.dwheeler.com/sloccount/
</code></pre>
Configuration files migrated and updatedhttps://www.nico.schottelius.org//blog/migration-1-configs/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Due to the migration from
<a href="http://nico.schottelius.org">nico.schottelius.org</a>,
the old <a href="http://www.plone.org">Plone</a> site, the
old <a href="http://nico.schottelius.org/documentations/configurations">configurations</a>
were moved to the local <a href="https://www.nico.schottelius.org//configs/">configurations folder</a>.</p>
<p>Additionally, a very useful configuration file was added:
<a href="https://www.nico.schottelius.org//configs/dot-gitconfig">.gitconfig</a></p>
Migrated FreeBSD raid monitoring and FOSS articlehttps://www.nico.schottelius.org//blog/migration-2-freebsd-raid-monitoring-foss/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>FreeBSD Raid monitoring</h2>
<p>The well-known article about
<a href="https://www.nico.schottelius.org//docs/freebsd-raid-monitoring/">FreeBSD raid monitoring</a> is now also
migrated to this site.</p>
<p>Please update your references from</p>
<ul>
<li> http://nico.schottelius.org/documentations/freebsd/freebsd-raid-monitoring/</li>
</ul>
<p>to</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//docs/freebsd-raid-monitoring/">http://www.nico.schottelius.org/docs/freebsd-raid-monitoring/</a></li>
</ul>
<p>There are also some updates pending for the article, which will soon be
implemented. I was just waiting for the new CMS to be available before doing so.</p>
<h2>The term FOSS</h2>
<p>Additionally, the <span class="createlink">article about the term FOSS</span> has been migrated from</p>
<ul>
<li> http://nico.schottelius.org/documentations/foss/the-term-foss/</li>
</ul>
<p>to</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//docs/the-term-foss/">http://www.nico.schottelius.org/docs/the-term-foss</a></li>
</ul>
Migrated ccollecthttps://www.nico.schottelius.org//blog/migration-3-ccollect/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Ccollect - the simple backup tool</h2>
<p>The first software from the old
<a href="http://unix.schottelius.org">unix.schottelius.org</a> website
(previously <a href="http://linux.schottelius.org">linux.schottelius.org</a>)
has been migrated to this website: From today on, the old <strong><em>ccollect</em></strong>
home</p>
<ul>
<li> http://unix.schottelius.org/ccollect/</li>
</ul>
<p>moved to</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//software/ccollect/">http://www.nico.schottelius.org/software/ccollect/</a></li>
</ul>
<p>Please update your links.</p>
<p>More stuff will be migrated soon!</p>
Migrated gpmhttps://www.nico.schottelius.org//blog/migration-4-gpm/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Another piece of software migrated to
<a href="https://www.nico.schottelius.org//news/www.nico.schottelius.org">www.nico.schottelius.org</a>:</p>
<p><strong><em>gpm</em></strong>, the general purpose mouse daemon.</p>
<p>gpm moved from</p>
<ul>
<li> http://unix.schottelius.org/gpm/</li>
</ul>
<p>to</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//software/gpm/">http://www.nico.schottelius.org/software/gpm/</a></li>
</ul>
<p>Please update your links.</p>
Migrated small introduction to gpgmehttps://www.nico.schottelius.org//blog/migration-5-gpgme-introduction/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The small manual
<a href="https://www.nico.schottelius.org//docs/a-small-introduction-for-using-gpgme/">A small introduction for using gpgme</a>
was migrated into this site.</p>
<p>The old URL is</p>
<ul>
<li> http://nico.schottelius.org/documentations/howtos/a-small-introduction-for-using-gpgme,</li>
</ul>
<p>the new URL is</p>
<ul>
<li> <a href="https://www.nico.schottelius.org//docs/a-small-introduction-for-using-gpgme/">http://www.nico.schottelius.org/docs/a-small-introduction-for-using-gpgme</a></li>
</ul>
<p>Please update your links.</p>
How to change the tempdir for Mozilla (general) and conkerorhttps://www.nico.schottelius.org//blog/mozilla-conkeror-change-tempdir/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Some weeks ago I got a good hint from
<a href="https://www.nico.schottelius.org//dokumentationen/axel-stefan-beckert/">Axel Stefan Beckert</a> to try <a href="http://conkeror.org/">conkeror</a>
as an alternative for the
<a href="http://www.mozilla.com/en-US/firefox/firefox.html">Firefox</a> browser.</p>
<p>Although I am not used to the <a href="http://en.wikipedia.org/wiki/Emacs">emacs</a>
shortcuts, it is very usable with the keyboard alone.</p>
<p>In the last few days I have been missing one important feature,
one of the most important features of a browser:</p>
<pre><code>to be able to edit a textbox with an external editor
</code></pre>
<p>I often edit large wiki pages and rearrange them, which is a pain
without a real editor.</p>
<p>Conkeror supports <a href="http://conkeror.org/ExternalEditing">external editing</a>,
but defaults to</p>
<ul>
<li>$VISUAL</li>
<li>$EDITOR</li>
<li>or emacs</li>
</ul>
<p>None of these is usable for me, because <strong>$VISUAL</strong> and <strong>$EDITOR</strong>
are set to <a href="http://www.vim.org/">vim</a> and vim requires a terminal.</p>
<p>After I was told on <a href="irc://irc.freenode.org/#conkeror">#conkeror</a>
to modify <strong><em>~/.conkerorrc/init.js</em></strong> to include</p>
<pre><code>editor_shell_command = "urxvt -e vim";
</code></pre>
<p>it worked like a charm (after debugging it for a few more days,
until I found out that there was always one instance of conkeror
still running, so it never re-read the configuration file). I can now
edit textboxes in conkeror with vim!</p>
<p>But then I noticed that conkeror creates a temporary file below
<strong><em>/tmp</em></strong>, which I do not like, because all my data should be
put on my <a href="http://code.google.com/p/cryptsetup/">encrypted home directory</a>,
not on the unencrypted root partition.</p>
<p>So I started to search for a configuration variable in the
<a>configuration window</a>, but did not find any hint.</p>
<p>As I am running conkeror from the git source, I began to dig through it
and started in <strong>modules/external-editor.js</strong>, where I found the
function <strong>open_with_external_editor()</strong>:</p>
<pre><code>76 function open_with_external_editor (lspec) {
77 keywords(arguments);
78 let [file, temp] = yield download_as_temporary(lspec);
79 yield open_file_with_external_editor(file, $line = arguments.$line, $temporary = temp);
80 }
</code></pre>
<p>Ok, what is <strong>download_as_temporary()</strong> doing? The file <strong>modules/save.js</strong> helped
me:</p>
<pre><code>228 function download_as_temporary (lspec) {
243 var file = get_temporary_file(suggest_file_name(lspec));
</code></pre>
<p>Well, well, so what about the <strong>get_temporary_file()</strong> function? The file
<strong>modules/utils.js</strong> contains it:</p>
<pre><code>799 function get_temporary_file (name) {
800 if (name == null)
801 name = "temp.txt";
802 var file = file_locator.get("TmpD", Ci.nsIFile);
803 file.append(name);
804 // Create the file now to ensure that no exploits are possible
805 file.createUnique(Ci.nsIFile.NORMAL_FILE_TYPE, 0600);
806 return file;
807 }
</code></pre>
<p>Searching for <strong>Ci.nsIFile</strong> in conkeror's source did not reveal
much information, so I went back to my <strong>seoc</strong> (search engine of choice) and
found some hints on the <a href="https://developer.mozilla.org">mozilla developer center</a>
about <a href="https://developer.mozilla.org/en/nsIFile">nsIFile</a> and
<a href="https://developer.mozilla.org/en/Code_snippets/File_I%2F%2FO">TmpD</a>
and a reference to the IRC channel <a href="irc://irc.mozilla.org/extdev">#extdev</a>.</p>
<p>After I described my problem in that IRC channel,
<a href="http://www.kaply.com/weblog/about/">Michael Kaply</a>
told me the answer to the question
"<a href="http://mxr.mozilla.org/mozilla1.9.2/source/xpcom/io/SpecialSystemDirectory.cpp#568">What defines or where is the TmpD variable defined?</a>":</p>
<p>The temporary directory is OS specific and in my case (unix) defined by the
environment variables</p>
<ul>
<li>$TMPDIR</li>
<li>$TMP</li>
<li>$TEMP (tried in that order)</li>
</ul>
<p>After I set</p>
<pre><code>TMP=~/.tmp
</code></pre>
<p>and restarted conkeror and pressed C-i in a textbox, the file was finally saved
in the temporary directory <strong>.tmp</strong> in my home directory!</p>
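<p>The same order of environment variables is honoured by many other unix tools, so the setting is worth making permanent. A minimal sketch, assuming a POSIX shell and the standard <strong>mktemp</strong> utility (directory name as used above):</p>

```shell
# Hedged sketch: point temporary files at a private directory.
# Mozilla-based apps consult TMPDIR, TMP and TEMP (in that order) on unix;
# exporting TMPDIR from a login script such as ~/.profile makes it stick.
mkdir -p "$HOME/.tmp"
export TMPDIR="$HOME/.tmp"

# mktemp honours TMPDIR as well, so the effect can be verified directly:
f=$(mktemp)
echo "$f"        # a path below ~/.tmp
rm -f "$f"
```

<p>Note that conkeror has to be restarted (all instances!) before the change takes effect, as described above.</p>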
msad.jpghttps://www.nico.schottelius.org//blog/news-2013-01-22/msad.jpg2016-02-25T13:34:32Z2015-02-03T14:47:26ZMy Bash and Zsh prompthttps://www.nico.schottelius.org//blog/my-bash-and-zsh-prompt/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This article shows how my very simple but helpful shell prompt
is created, and explains the motivation behind it.</p>
<h2>How it looks</h2>
<p><a href="https://www.nico.schottelius.org//blog/my-bash-and-zsh-prompt/bash-zsh-prompt-screenshot-20111125.png"><img src="https://www.nico.schottelius.org//blog/my-bash-and-zsh-prompt/bash-zsh-prompt-screenshot-20111125.png" width="199" height="45" alt="Bash/Zsh Prompt" class="img" /></a></p>
<h2>How it is created</h2>
<p>Bash:</p>
<pre><code>PS1='[\t] \[\033[1m\]\h\[\033[0m\]:\W\$ '
</code></pre>
<p>Zsh:</p>
<pre><code>PS1="[%T] %B%m%b:%c%# "
</code></pre>
<h2>Motivation</h2>
<ul>
<li>I need the hostname to know on which box I am working</li>
<li>Time is helpful for copy & paste in logs (and to not waste space with <strong><em>xclock</em></strong>)</li>
<li>Short directory name (\W, %c) is helpful, long paths make the prompt
unusable and I usually know which tree I am in (if not: pwd helps)</li>
<li>No need for <strong>username@</strong> like most distros do: If I am a user,
I am <strong><em>nico</em></strong> (<strong><em>$</em></strong> in bash, <strong><em>%</em></strong> in zsh). Otherwise I am root (<strong><em>#</em></strong>).</li>
</ul>
My photo publishing approachhttps://www.nico.schottelius.org//blog/my-photo-publishing-approach/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>During a discussion about image management software like
<a href="http://live.gnome.org/gthumb">gthumb</a>,
<a href="http://f-spot.org/">f-spot</a> or <a href="http://yorba.org/shotwell/">shotwell</a>,
we came up with the question of how far a management software should go.
Should it provide basic image manipulation? Publishing to the web?
Creation of static web albums?</p>
<p>For me the feature "create static webalbum" is very important,
as I don't want to publish my images/photos on sites like
<a href="http://picasaweb.google.com/">picasa</a> or
<a href="http://www.flickr.com/">flickr</a>, because I don't want to depend
on these companies. I especially do not want to recreate
all the albums, if one of those companies stops providing the service.</p>
<p>One of the main questions is, which program should handle the gallery
creation process:</p>
<ul>
<li>the image viewing / managing utility (most likely with a gui)</li>
<li>an external command line utility</li>
</ul>
<p>After some time, I think there are some key points:</p>
<ul>
<li>a specially designed command line utility may be better suited for the job</li>
<li>a gui may be well suited to actually selecting the right images</li>
</ul>
<h2>How I create and publish static photo galleries</h2>
<p>So in the end, I decided for the following setup:</p>
<ul>
<li>Use shotwell to organise photos</li>
<li>Use shotwell to export downscaled version of photos to a directory
(one per event)</li>
<li>Create a Makefile, which utilises llgal to (re-)create the whole website</li>
</ul>
<p>Thus I can change the gui, but do not need to change the gallery format.
You can find the result at
<a href="http://photo.nico.schottelius.org">photo.nico.schottelius.org</a>.</p>
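<p>The regeneration step can be sketched as a simple loop over the exported event directories. This is a hedged illustration only: the directory names are invented, and it assumes llgal builds an album in the directory it is run from. It degrades gracefully when llgal is not installed:</p>

```shell
# Hedged sketch of the (re-)creation step: run the gallery generator
# once per exported event directory. Directory names are examples only.
mkdir -p export/2011-summer export/2011-winter
for event in export/*/; do
    if command -v llgal >/dev/null 2>&1; then
        (cd "$event" && llgal)             # build the static album in place
    else
        echo "would run llgal in $event"   # generator not installed here
    fi
done
```

<p>In practice this loop lives in a Makefile target, so only changed events need to be rebuilt.</p>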
<h2>SEE ALSO</h2>
<ul>
<li><a href="https://www.nico.schottelius.org//docs/static-image-gallery-generator-comparison/">static-image-gallery-generator-comparison</a></li>
</ul>
New comments about ccollect publishedhttps://www.nico.schottelius.org//blog/new-comments-about-ccollect/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you are a FOSS developer, like <a href="https://www.nico.schottelius.org//about/">me</a>, you'll understand
that getting feedback for your application is a great thing and
helps either to improve the software or to feel good.</p>
<p>In case of <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>, the included script
<strong>tools/report_success</strong> aids users in giving feedback.</p>
<p>If the user allows me to publish the quote, it will
appear on <a href="https://www.nico.schottelius.org//software/ccollect/quotes/">ccollect's quotes site</a>,
like it happened again today.</p>
<p>The motivation of this post is</p>
<ul>
<li>to show you that ccollect is great software</li>
<li>to show FOSS developers how to get happy with their great software</li>
</ul>
New Linux wireless gigabit Linux router for ungleich officehttps://www.nico.schottelius.org//blog/new-wireless-gigabit-linux-router-for-ungleich-office/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><a href="https://www.nico.schottelius.org//blog/new-wireless-gigabit-linux-router-for-ungleich-office/rb2011uas2hnd-in.jpg"><img src="https://www.nico.schottelius.org//blog/new-wireless-gigabit-linux-router-for-ungleich-office/rb2011uas2hnd-in.jpg" width="798" height="372" alt="Mikrotik RB2011UAS-2HnD-IN" class="img" /></a></p>
<h2>Introduction</h2>
<p>We are getting a new 150Mbit/s (down) Internet connection
in the <a href="http://www.ungleich.ch">ungleich office</a> in August 2013.
Unfortunately, our current router <strong>katze</strong>, a
<a href="http://soekris.com/products/net5501.html">Soekris net5501</a>,
is not able to process 150 Mbit/s, as it contains only
Fast Ethernet interfaces.</p>
<p>So it's time for a geeky replacement.</p>
<h2>Searching for available products</h2>
<p>What's the best platform to search for a Linux based
router that is well supported by FOSS?</p>
<p>I started to cross match devices from the local
vendor <a href="http://www.digitec.ch">digitec</a> with the list
of <a href="http://wiki.openwrt.org/toh/start">supported devices of OpenWRT</a>.</p>
<p>Essentially I was looking for devices with</p>
<ul>
<li>high cpu speed (to be able to handle gigabit traffic)</li>
<li>some memory to flash an open image like OpenWRT on it</li>
<li>support for hardware already in the Linux kernel</li>
<li>support for at least 802.11n, 802.11ac optional</li>
</ul>
<h2>Selecting a router</h2>
<p>After digging into the specs of many routers, the
<a href="http://www.tp-link.com.de/products/details/?categoryid=2872&model=Archer+C7">TP-Link Archer C7</a> looked pretty good and even supports 802.11ac.
Drawback: It has only 8 MiB of flash attached
and its <a href="https://forum.openwrt.org/viewtopic.php?id=44201">OpenWRT and 802.11ac support</a> is still work in progress.</p>
<p>Given that all our clients currently support only 802.11n,
I was considering other routers as well. Remembering that
I've recently installed a
<a href="http://routerboard.com/RB750GL">RB750GL</a>
at <a href="http://www.panter.ch">panter</a> and seen some
Mikrotik devices on the OpenWRT page, I also checked out
their website, which brought me to the router I chose:
<a href="http://routerboard.com/RB2011UAS-2HnD-IN">RB2011UAS-2HnD-IN</a>:</p>
<ul>
<li>Gigabit, 802.11n supported</li>
<li>Should be supported by OpenWRT using nand flash</li>
<li>Geeky LCD</li>
<li>11W power consumption</li>
</ul>
<p>As <a href="http://www.mikrotik.com/">Mikrotik</a>
seems to produce a lot of cool devices, the decision
was also made to support this company instead of the usual
big ones.</p>
News 2013-01-22https://www.nico.schottelius.org//blog/news-2013-01-22/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Dell sells notebook with Ubuntu on it</h2>
<p>As all of you know, Ubuntu is currently the most promoted Linux distro on
the market. Dell, on the other hand, has trouble staying in business
and rumours say that Dell is even searching for investors.</p>
<p>Thus it's interesting to hear that
<a href="http://www.dell.com/us/soho/p/xps-13-linux/pd?dgc=AF&cid=6504&lid=167784&acd=240157118117280">Dell is now selling the XPS 13" with Ubuntu on it</a></p>
<h2>Linux is (becoming) a gaming platform</h2>
<p>Various news items show an interesting movement for 2013:
Android (and thus Linux) is becoming a gaming platform:</p>
<p><a href="http://www.bit-tech.net/news/gaming/2012/07/17/valve-steam-linux/">Valve ported Steam to Linux</a>
and begins to
<a href="http://www.geek.com/articles/games/valve-starts-promoting-steam-for-linux-to-windows-users-20130121/">promote Linux to Windows users</a>.
<a href="http://www.ouya.tv/">The Ouya console is successfully being developed</a>
and even <a href="http://www.kickstarter.com/projects/872297630/gamestick-the-most-portable-tv-games-console-ever">they have a competitor, the GameStick</a>, which is also based on Android.</p>
<h2>Microsoft to die (finally)</h2>
<p>Thanks to the decreasing PC market and people focussing on swiping around on their
tablets (instead of doing real work [tm]), Microsoft is having a hard time.</p>
<p>It even got so bad that they are selling Windows 8 for almost nothing. But hey,
who wants a disease for free anyway?</p>
<p>Seeing the Nokia/Microsoft deal in which Nokia agreed to build Windows phones is
like seeing someone betting on a dying horse, but it is worse than that:
Why would you get a Windows phone, if you can buy an Android or IOS based phone?</p>
<p>Reading about
<a href="http://www.geek.com/articles/games/analyst-believes-microsoft-will-sell-off-xbox-division-maybe-even-to-sony-20130121/">Microsoft probably selling the
Xbox division</a> gives another indication that someone is totally frustrated - probably
realising that the good times are over.</p>
<p>This hasn't changed the FUD approach of Microsoft, though, which
is <a href="https://www.nico.schottelius.org//news/msad.jpg">infamously known for its anti-campaigns</a>, like the new
<a href="http://www.scroogled.com/">scroogled</a> one.</p>
<h2>Cdist gets its own domain</h2>
<p>Suggestions for FOSS I write or maintain are highly appreciated.
Getting a domain for <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> was proposed and will
soon be activated.</p>
Nginx: Use X-Accel with conflicting regular expressionshttps://www.nico.schottelius.org//blog/nginx-prioritise-x-accel-before-regular-expressions/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Background</h2>
<p>At <a href="http://www.ungleich.ch">ungleich</a> we use <a href="http://nginx.org/">nginx</a> for <a href="http://rubyonrails.org/">Ruby on Rails</a>
hostings of our customers. Nginx is configured to
deliver static files using <a href="http://wiki.nginx.org/X-accel">X-Accel</a>.
We also want a longer expiry time for static files,
which is configured separately in nginx.</p>
<h2>Configuration Options</h2>
<p>To support X-Accel, we have added this configuration block into nginx:</p>
<pre><code># Support for X-Accel
location /protected/ {
internal;
root /home/app/app/shared;
}
</code></pre>
<p>To support longer expiry times, we have added this configuration
block:</p>
<pre><code>location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
expires 1y;
# Need to enable proxying in this location as well
try_files $uri @unicorn;
}
</code></pre>
<h2>The Problem</h2>
<p>Using the configuration as stated above, we encounter the problem that
if an application wants to send a JPEG using X-Accel, the regular
expression block is selected (it has higher priority and matches
the .jpeg ending) and thus the application delivers it, instead of nginx
directly.</p>
<h2>The Solution</h2>
<p>Luckily though, <a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#location">nginx supports giving the prefix based
location block precedence</a>:
Instead of using <strong>location /protected/</strong>, we can use <strong>location ^~</strong>. Thus our
previous block can be rephrased to:</p>
<pre><code>location ^~ /protected/ {
internal;
root /home/app/app/shared;
}
</code></pre>
<p>And now the application can serve JPEG files via X-Accel.</p>
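<p>For completeness: the handover to X-Accel works via a response header. The application answers with <strong>X-Accel-Redirect</strong> pointing into the internal location, and nginx then serves the file itself. A hedged sketch of such a response (the file name is illustrative only; with the <strong>root</strong> directive above, nginx appends the full URI, so this would serve <strong>/home/app/app/shared/protected/photo.jpg</strong>):</p>

```
HTTP/1.1 200 OK
X-Accel-Redirect: /protected/photo.jpg
Content-Type: image/jpeg
```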
Known bugs of nscd with LDAPhttps://www.nico.schottelius.org//blog/nscd-bugs/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As <a href="https://www.nico.schottelius.org//blog/debian-with-ldap-forgets-users/">stated some time ago</a>,
I had the problem that users vanished after some time.</p>
<p>As I get a lot of e-mails regarding this problem, I am
documenting here the details I've found out so far:</p>
<h2>Nscd works unreliably</h2>
<pre><code>Yes, indeed, switching off nscd removes the problem.
</code></pre>
<p>But there are more problems I experienced with nscd:</p>
<ul>
<li>Sometimes it consumes 100% cpu (and does not stop that until being killed)</li>
<li>Sometimes it just crashes.</li>
<li>Sometimes it causes users to "vanish" (the original problem)</li>
<li>Sometimes it hangs and thus slows down the whole system</li>
</ul>
<h2>The alternatives</h2>
<p>In the D-INFK department at ETH we're heavily dependent on the
LDAP database, as most services use it as their primary
database. To overcome the problem, there are several solutions:</p>
<ul>
<li>Dump the ldap database into standard /etc/passwd and /etc/shadow</li>
<li>Shutdown nscd and let the LDAP-Server handle the load</li>
<li>Install <a href="http://busybox.net/~vda/unscd/">unscd</a></li>
</ul>
<p>I'm going for the last one now.</p>
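<p>The first alternative can be sketched with <strong>getent</strong>, which enumerates whatever NSS sources are configured (including LDAP). This is a hedged illustration and assumes the LDAP server permits enumeration; the output would still need review before seeding real files:</p>

```shell
# Hedged sketch: dump the NSS view (including LDAP, if configured)
# into flat files that could seed /etc/passwd and /etc/group.
# Illustrative only -- never install such dumps unreviewed.
getent passwd > /tmp/passwd.dump
getent group  > /tmp/group.dump
wc -l /tmp/passwd.dump /tmp/group.dump
```

<p>The obvious drawback is staleness: the dump has to be re-run whenever the directory changes.</p>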
Searching for a maintainer for offlineimap and hpodderhttps://www.nico.schottelius.org//blog/offlineimap-and-hpodder-need-a-maintainer/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As <a href="http://www.complete.org/JohnGoerzen">John Goerzen</a> created at
least one useful piece of software,
<a href="http://software.complete.org/software/projects/show/offlineimap">offlineimap</a>,
which I use daily, I think I owe him at least this article.</p>
<p>He
<a href="http://changelog.complete.org/archives/1463-moral-obligations-of-free-software-authors">describes an interesting problem</a>, we FOSS developers have:</p>
<pre><code>He is happy with his software, but others would like to see more features.
</code></pre>
<p>I know that situation very well: just when you think you're done, somebody
pops up with a new situation or problem, and I think:</p>
<pre><code>Yeah, how to solve that?
</code></pre>
<p>Yes, if I think it's an interesting problem, I take the time
to discuss it and maybe to develop a solution for it.</p>
<p>But sometimes I've other priorities (like family, studies or
- believe it or not - to enjoy the fine weather outside) and
then I reply:</p>
<pre><code>Dear hacker, cool problem! Here are my quality requirements,
can you just send me a patch that applies cleanly against
current development code (preferably a git source)?
</code></pre>
<p>I cannot implement every (sensible) feature in software I've written
and I don't think it's my duty. That's what FOSS is all about:</p>
<pre><code>If people care, they fork or create a patch.
If not, it's probably not worth implementing it anyway.
</code></pre>
<p>Regarding the resources you're providing, John, I handle it this
way: I always provide
<ul>
<li>the source within version control (so things don't get messed up) and</li>
<li>a public contact point with archives (i.e. a mailing list)</li>
</ul>
<p>These two points ensure that everybody who wants to contribute can do so.
I don't think a bugtracker, wiki, etc. is worth it for most projects
(you don't have bugs in your software anyway, do you? ;-), but
<pre><code>If people care, they can setup a wiki, bugtracker, etc themselves.
</code></pre>
<p>Coming back to the title: as said, John is searching for a maintainer for</p>
<ul>
<li><a href="http://software.complete.org/software/projects/show/offlineimap">offlineimap</a> and</li>
<li><a href="http://software.complete.org/software/projects/show/hpodder">hpodder</a>.</li>
</ul>
<p>If you're reading this article, you may be interested in helping him ;-)</p>
OpenSSH 6.2: Add callback functionality (using dynamic remote port forwarding)https://www.nico.schottelius.org//blog/openssh-6.2-add-callback-functionality-using-dynamic-remote-port-forwarding/2016-03-18T08:08:35Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This article describes a patch to OpenSSH 6.2 that I wrote to enable
<strong>ssh callback</strong> using dynamic ports. This is rather useful to have
for various types of software, including <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>
and <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>.</p>
<h2>Background</h2>
<p>Assume you have two hosts:</p>
<ul>
<li>A <strong>target host</strong></li>
<li>A <strong>control host</strong>
(backup server in case of <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>,
configuration server in case of <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>)</li>
</ul>
<p>Assume further that the target host can directly reach the control
host, but the control host cannot connect to the target host directly.</p>
<p>This is the case, for instance,
when the target host is hidden behind NAT
or protected by a firewall.</p>
<h2>Approaches</h2>
<h3>Create a tunnel from the target host to the control host</h3>
<p>A very simple solution is to create a static tunnel
from the target host to the control host, which allows
the control host to connect back:</p>
<pre><code>targethost% ssh -R 42523:localhost:22 controlhost
controlhost% ssh -p 42523 localhost
</code></pre>
<p>The drawback is that the remote port needs to be defined
beforehand and both sides need to know about it.</p>
<p>This is especially nasty, if you have a lot of
target hosts that need to be backed up / configured.</p>
<h3>Use dynamic port allocation</h3>
<p>The <a href="http://www.openssh.com/">OpenSSH</a> developers seem to have
spotted this problem and include an option to use a random
free port: If port 0 is chosen as the remote
forwarding port, the port is dynamically chosen by the
ssh server, which in our case runs on the controlhost.</p>
<p>Even better, the port information is also displayed on stdout:</p>
<pre><code>targethost% ssh -R 0:localhost:22 controlhost
Allocated port 59818 for remote forward to localhost:22
</code></pre>
<p>The problem here is: The shell on the remote side does not
know which port was chosen, as it is only printed on stdout
by the <strong>ssh client</strong>.</p>
<h3>Expose remote forwarding ports</h3>
<p><a href="https://www.nico.schottelius.org//blog/openssh-6.2-add-callback-functionality-using-dynamic-remote-port-forwarding/openssh-6.2p1-expose-remote-port-forwarding.diff">This patch</a>
against OpenSSH 6.2p1 creates a new environment variable
<strong><em>SSH_REMOTE_FORWARDING_PORTS</em></strong>, which contains all ports
that are used for remote forwarding:</p>
<pre><code>targethost % ssh -R 1234:localhost:22 controlhost
controlhost % echo $SSH_REMOTE_FORWARDING_PORTS
1234
</code></pre>
<p>As this works for all remotely forwarded ports, this can
also be used for dynamic port assignments:</p>
<pre><code>targethost % ssh -R 0:localhost:22 controlhost
controlhost % echo $SSH_REMOTE_FORWARDING_PORTS
54294
</code></pre>
<p>If more than one port forwarding definition is given, they are listed
space separated:</p>
<pre><code>targethost % ssh -R 0:localhost:22 -R 1234:localhost:22 controlhost
controlhost % echo $SSH_REMOTE_FORWARDING_PORTS
59056 1234
</code></pre>
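<p>Since the variable is space separated, a callback script on the controlhost can pick the port it needs with plain shell word splitting. A minimal sketch; the value is faked here, since in reality the patched sshd would export it into the session environment:</p>

```shell
# Hedged sketch of consuming SSH_REMOTE_FORWARDING_PORTS in a
# callback script on the controlhost. The value below is a stand-in
# for what the patched sshd would set.
SSH_REMOTE_FORWARDING_PORTS="59056 1234"

set -- $SSH_REMOTE_FORWARDING_PORTS   # intentional word splitting
first=$1
echo "callback port: $first"
# ssh -p "$first" localhost           # the actual connection back
```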
<h3>Use socat</h3>
<p>Adapted from a proposal by
<a href="http://lists.mindrot.org/pipermail/openssh-unix-dev/2013-May/031350.html">Philipp Marek</a>.</p>
<p>A different approach is using socat like this:</p>
<pre><code>targethost% socat TCP:localhost:22,retry=forever "EXEC:ssh controlhost"
controlhost% cat .ssh/authorized_keys
command="~/myscript 1234" ssh-rsa ...
controlhost% cat ~/myscript
socat - TCP-LISTEN:1234 &
ssh -p 1234 ...
</code></pre>
<p>The drawbacks of this solution are the pre-defined ports,
and that socat on the targethost exits after the
first connection has been closed. It works for a single-shot
callback, though.</p>
<h3>Use ProxyCommand with stdin/stdout</h3>
<p>As proposed by
<a href="http://lists.mindrot.org/pipermail/openssh-unix-dev/2013-May/031353.html">Darren Tucker</a> (some parts are copied & pasted from his original mail):</p>
<pre><code># Create fifo/named pipe for sshd
targethost% mkfifo sshd_in sshd_out
# Start ssh on the controlhost from the targethost
# and create a control socket. Use ProxyCommand=-
# to make use of stdin/stdout for proxying packets through.
targethost$ ssh <sshd_in >sshd_out -T -y controlhost "ssh -y -N -T -MS/tmp/ctl -oProxyCommand=- targethost" &
# Start a new sshd on the client, which listens on the newly
# created fifos
targethost$ /usr/sbin/sshd -i -f < sshd_in > sshd_out
# on the server, use the control socket to talk to the
# sshd running on the targethost
controlhost% ssh -S /tmp/ctl targethost
</code></pre>
<p>Drawback: A quite complicated setup is required, thus probably error-prone in day-to-day use.
Advantage: A very beautiful use of FIFOs, ssh, control sockets and ProxyCommand. A setup
every geek must love.</p>
<h2>Limitations</h2>
<p>The given patch has some known limitations:</p>
<ul>
<li>The destination of the remote forwarding is not shown.
Debugging the ssh server shows that this information was present
in ssh1, but is absent in ssh2.</li>
<li>The number of listed ports is limited by the buffer size of 256 characters</li>
<li>Includes only remote port forwardings specified at startup, not the ones added later</li>
</ul>
<h2>Future</h2>
<p>The patch
<a href="http://lists.mindrot.org/pipermail/openssh-unix-dev/2013-May/031337.html">has been submitted</a>
to the
<a href="https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev">openssh-unix-dev mailinglist</a> for discussion.</p>
openssh-6.2p1-expose-remote-port-forwarding.diffhttps://www.nico.schottelius.org//blog/openssh-6.2-add-callback-functionality-using-dynamic-remote-port-forwarding/openssh-6.2p1-expose-remote-port-forwarding.diff2016-02-25T13:34:32Z2015-02-03T14:47:26ZPublished ceofhackhttps://www.nico.schottelius.org//blog/published-ceofhack/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Yesterday I published the first version of <span class="createlink">ceofhack</span>,
the p2p onion routing chat program.</p>
<p>With this version other developers can begin to implement their own fancy
stuff to be used in the EOF network. For instance, it is possible to
create transport protocols that (mis-)use DNS, HTTP, SMTP, ...</p>
<p>Before the release we had a testing night at the
<a href="http://www.ccczh.ch/Events/Chaosdock/2009/">Chaosdock</a>, which revealed
something quite interesting:</p>
<pre><code>Mac OS X does not have a proper poll implementation.
</code></pre>
<p>The poll() call on Mac OS X led to an infinite loop, setting revents to
<strong><em>POLLNVAL</em></strong> on a file descriptor used for reading.</p>
<p>But there is a <a href="http://www.clapper.org/software/poll/">poll emulation routine</a>
available.</p>
<p>(This issue was reported and debugged by ballessay.)</p>
Published creaturehttps://www.nico.schottelius.org//blog/published-creature/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Did you have a look at current virtual machine management programs?
Do you also feel that there's no easy-to-use, straightforward
solution out there?
My attempt to fix that problem is named <a href="https://www.nico.schottelius.org//software/creature/">creature</a>.</p>
<p>It's still in its early design phase, but digging through the
README and the code can give you an impression of where it is heading.</p>
Published puppet module for efshhttps://www.nico.schottelius.org//blog/published-efsh-puppet-module/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>To help people easily deploy <a href="https://www.nico.schottelius.org//docs/efsh/">efsh</a>,
I released a puppet module, which realises the basic
directory structure.</p>
<p>It's still very incomplete, but you can get a first
impression on
<a href="http://git.sans.ethz.ch/?p=puppet-modules/efsh;a=summary">git.sans.ethz.ch</a>.</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
Published puppet modules: Collectd, Java, Prayer and ns_webmail_proxyhttps://www.nico.schottelius.org//blog/published-java-prayer-webmail-collectd-puppet-modules/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><a href="https://sans.ethz.ch">Steven and I</a> are continuing to clean up
our puppet repo, which resulted in four new
(more or less standalone) puppet modules:</p>
<ul>
<li><a href="http://git.sans.ethz.ch/?p=puppet-modules/collectd;a=summary">collectd</a>: Create monitoring and monitored hosts with puppet</li>
<li><a href="http://git.sans.ethz.ch/?p=puppet-modules/java;a=summary">java</a>: Include all java stuff you ever dreamed of</li>
<li><a href="http://git.sans.ethz.ch/?p=puppet-modules/prayer;a=summary">prayer</a>: A lightweight webmail</li>
<li><a href="http://git.sans.ethz.ch/?p=puppet-modules/prayer;a=summary">ns_webmail_proxy</a>: Using prayer and postfix to create a webmail proxy that can also send e-mails</li>
</ul>
<p>If you enjoy them, we would be happy if you drop us a mail ;-)</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
Published list of projectshttps://www.nico.schottelius.org//blog/published-list-of-projects/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>The day before yesterday I was hacking on <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>
and finished a lot of things to be done for version 0.8.</p>
<p>When I was done, I was sitting on the train and thought:</p>
<pre><code>Which project to hack on, after this release?
</code></pre>
<p>There are a lot of projects I started, many of them nowhere near finished.
And there are even more ideas for what I <strong>could</strong> work on
(a lot of my ideas can be found in the
<a href="http://git.schottelius.org/?p=nsdocuments">nsdocuments</a> repository).</p>
<p>A listing of my project directory shows over 30 different projects.
To get an overview of what I am working on already, I used the listing
as a base to create the new <a href="https://www.nico.schottelius.org//about/projects/">project list page</a>.</p>
<p>I'm now cleaning up the projects directory and may also publish some
(already finished) projects, which never made it to the public.</p>
<p>And until <strong>ccollect</strong> 0.8 is released, I will continue to think about
the next interesting project to work on.</p>
Published mbs: A machine booking system for the ETH Zurichhttps://www.nico.schottelius.org//blog/published-machine-booking-system-mbs-eth-zurich/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p><a href="https://www.nico.schottelius.org//software/mbs/">MBS</a> now has a homepage and will soon be used productively
at ETH Zurich!</p>
Published puppet module for openntpd including ETHZ integrationhttps://www.nico.schottelius.org//blog/published-openntpd-ethz-puppet-module/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Continuing our effort to provide reusable puppet modules,
<a href="http://sans.ethz.ch">we</a> published an
<a href="http://git.sans.ethz.ch/?p=puppet-modules/openntpd;a=summary">openntpd puppet module</a>.</p>
<p>Because the default configuration does not work at ETH when using
private IP addresses, which cannot reach the internet directly,
we also created an ETH-specific version, which can be found
in the
<a href="http://git.sans.ethz.ch/?p=puppet-modules/ethz;a=summary">ethz puppet module (class ntp)</a>.</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
Published puppet module for postgresqlhttps://www.nico.schottelius.org//blog/published-postgresql-puppet-module/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Yet another clean and shiny puppet-module published by
<a href="http://sans.ethz.ch">/sans/</a> that can be used by you!</p>
<p>This <a href="http://git.sans.ethz.ch/?p=puppet-modules/postgresql;a=summary">postgresql puppet module</a>
allows you to specify the version of postgresql and whether to enable or disable the
service!</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
Published smtp_loggerhttps://www.nico.schottelius.org//blog/published-smtp_logger/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Some days ago I cleaned up my
<a href="https://www.nico.schottelius.org//about/projects/">project directory</a> and found
<a href="https://www.nico.schottelius.org//software/smtp_logger/">smtp logger</a>, an SMTP debug utility.</p>
<p>I cleaned it up a bit and made it compile with <strong>-Werror</strong>.</p>
<p>So, another piece of <span class="createlink">free and open source software</span>
released, have fun with it!</p>
Published Static image gallery generator comparisonhttps://www.nico.schottelius.org//blog/published-static-image-gallery-generator-comparison/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As I tested some static image gallery generators,
I wrote down my notes and
published a
<a href="https://www.nico.schottelius.org//docs/static-image-gallery-generator-comparison/">small comparison of static image gallery generators</a>.</p>
Published Xorg terminal emulator font listhttps://www.nico.schottelius.org//blog/published-xorg-terminal-emulator-fonts/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>While I was trying to find out why
<a href="http://software.schmorp.de/pkg/rxvt-unicode.html">rxvt-unicode</a>
sometimes chose a different font, I dug into the font selection problem
(i.e. which font to choose for my terminal).
I am especially interested in small fonts, which let me fit a lot on the
screen, and in huge fonts for presentations.</p>
<p>Thus I published a <a href="https://www.nico.schottelius.org//docs/xorg-terminal-emulator-fonts/">(growing) list of fonts</a>
I tried on my terminal emulator and "rated" their usability.</p>
Puppet bugs that motivated me to migrate away from puppet and write cdisthttps://www.nico.schottelius.org//blog/puppet-bugs-motivation-for-migration-and-cdist/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>For a long time I had a "secret list" of bugs that drove me crazy when
using puppet. But as I am asked more and more often <strong><em>Why have you written
<a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> and migrated away from puppet?</em></strong>, I'm publishing the
list here, to give it a real home.</p>
<p>An early clarification, before rumors come up and bad blood is created:
this is <strong>not intended</strong> to be a <strong>bash puppet page</strong>, but an
<strong>I don't need to re-explain why I moved away from puppet and
have written cdist page</strong>.</p>
<h2>Bugs</h2>
<p>This is the initial short list, explanations may follow.</p>
<ul>
<li><p><a href="http://projects.puppetlabs.com/issues/86">Puppet cannot create directories and their parents, 2006, rejected</a></p></li>
<li><p><a href="http://projects.puppetlabs.com/issues/1565">Puppet parser order dependant, 2008, still open in 2012</a></p></li>
<li><a href="http://projects.puppetlabs.com/issues/2538">Cannot get return code of command, 2009</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3767">Local puppet != remote, 2010, >= 2 years idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3936">Service stopping broken in debian, 2010, >= 10 months idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3987">${var} does not work everywhere, 2010, >= 9 months idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3997">"magic var" only in some scopes, 2010, rejected</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3998">Mount/autorequire, 2010, >= 1 year idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/3998">Old bug marked as duplicate of a new one (see above)</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4220">Parser bug</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4680">SSL cert not submitted to a new puppetmaster, 2010, >= 1 year idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4715">Reusing defines not possible, 2010, rejected</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4780">Param a=$undef broken, 2010, fixed</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4805">Templating broken, 2010, fixed</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4922">Puppetd creates empty files, if it gets a 404, 2010, >= 11 months idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/4922">Fixed symptom, not source, rescheduling of import problem, see above</a></li>
<li><a href="http://projects.puppetlabs.com/issues/5048">"" (empty string) is not a valid resource reference, 2010, fixed</a></li>
<li><a href="http://projects.puppetlabs.com/issues/6209">Puppet changes the error message on 2nd run, 2011, >= 1 year idle</a></li>
<li><a href="http://projects.puppetlabs.com/issues/6210">Fix error messages to be meaningful, 2011, >= 1 year idle</a></li>
<li><a href="https://www.nico.schottelius.org//blog/puppet-name-is-not-as-expected-but-classname/">In puppet, $name is not always what you expect, 2012</a></li>
<li><a href="http://projects.puppetlabs.com/issues/8229">Error "regexp buffer overflow" when backing up binary data, 2011</a></li>
<li><a href="http://projects.puppetlabs.com/issues/14577">Could not intern from pson: expected (with pseudo random values afterwards), 2012</a></li>
<li><a href="http://projects.puppetlabs.com/issues/16946">Regular expressions take precedence over direct node specifications, 2012</a></li>
</ul>
<h2>Contact</h2>
<p>If you think there's something wrong here and want to discuss the listing, do not hesitate
to raise it on one of the
<a href="https://www.nico.schottelius.org//software/cdist/">cdist communication channels (irc, mailing list, mail)</a>.</p>
Puppet: Duplicate definition - on the same line!https://www.nico.schottelius.org//blog/puppet-duplicate-definition-on-the-same-line/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>I'm having a lot of fun with software,
<a href="https://www.nico.schottelius.org//blog/puppet-sometimes-loads-a-class/">again</a> with
<a href="http://reductivelabs.com/products/puppet">puppet</a>:
Puppet claims in its error message that I define a resource twice:</p>
<pre><code>err: Could not retrieve catalog: Puppet::Parser::AST::Resource failed
with error ArgumentError: Duplicate definition:
File[nss_ldap_config] is already defined in file
/etc/puppet/modules/auth/manifests/init.pp at line 62;
cannot redefine at
/etc/puppet/modules/auth/manifests/init.pp:62
on node dryad16.ethz.ch
</code></pre>
<p>Well, nice error, isn't it? Maybe you already guessed it, line 62 is the end
of a define:</p>
<pre><code>54 define ldap_config() {
55 $ou = $name
56 file { "nss_ldap_config":
57 path => $auth::nss_ldap_config,
58 mode => 644,
59 owner => root,
60 group => root,
61 content => template("auth/ldap.erb"),
62 }
</code></pre>
<p>I would be pretty happy if puppet told me:</p>
<pre><code>You are using the define ldap_config() from file_x:line_x twice:
File file_a:line_a and File file_b:line_b use it, which defines
a duplicate definition.
</code></pre>
<p>Currently I have to
<pre><code>grep -r ldap_config *
</code></pre>
<p>within my puppet config directory, to find the locations where the define
is called. Because it's not called twice within the same class, I've
to search manually through the classes that include the classes that use
the define to find out where an include is used that (probably indirectly)
includes a conflicting class.</p>
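<p>A slightly more targeted search can at least list every call site with file and line number, excluding the definition itself. A small sketch (the module layout and file names are made up for illustration):</p>

```shell
# Build a tiny fake module tree with one definition and two call sites
# of the define, then list the call sites with file and line number.
demo="$(mktemp -d)"
mkdir -p "$demo/modules/auth" "$demo/modules/other"
printf 'define ldap_config() { }\n' > "$demo/modules/auth/init.pp"
printf 'ldap_config { "ou1": }\n'   > "$demo/modules/auth/use.pp"
printf 'ldap_config { "ou2": }\n'   > "$demo/modules/other/init.pp"
# -r: recurse, -n: print line numbers; filter out the definition
grep -rn 'ldap_config' "$demo" | grep -v 'define ldap_config'
```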
<p>Dear puppet developers, would you mind including debugging help as suggested above?</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
Puppet empties new and existing fileshttps://www.nico.schottelius.org//blog/puppet-empties-new-and-existing-files/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After <a href="http://sans.ethz.ch">we</a> changed our puppetmaster to run
under unicorn plus nginx and the first 2.6.0 client connected,
I realised that puppetd created empty files on the client.
Not only newly created files are affected, but also existing
ones.</p>
<p>It seems that <a href="http://projects.puppetlabs.com/issues/4319">issue 4319</a>
is related to that problem, as the same entries are found in the
logfile.</p>
<p>The <a href="https://www.nico.schottelius.org//blog/puppet-empties-new-and-existing-files/debuglog">client reports</a> duplicate files in the filebucket:</p>
<pre><code>info: FileBucket got a duplicate file /etc/pam.d/common-password ({md5}d41d8cd98f00b204e9800998ecf8427e)
info: FileBucket got a duplicate file /etc/pam.d/common-account ({md5}d41d8cd98f00b204e9800998ecf8427e)
info: FileBucket got a duplicate file /etc/pam.d/common-auth ({md5}d41d8cd98f00b204e9800998ecf8427e)
</code></pre>
<p>But hey, they are duplicates, because they all have the same checksum!
And the checksum is everywhere the same, because all files are empty:</p>
<pre><code>% touch test
% md5sum test
d41d8cd98f00b204e9800998ecf8427e test
</code></pre>
<p>In the syslog of the puppetmaster one can see</p>
<pre><code>Oct 1 14:31:14 sans-puppetca puppetmaster_unicorn: 129.132.85.166 - - [01/Oct/2010 14:31:14] "GET /production/file_content//autofs/auto.net HTTP/1.0" 404 44 0.0012
</code></pre>
<p>So in essence, what happens is:</p>
<ul>
<li>puppetd 2.6.0 submits two slashes in the path (//)</li>
<li>puppetmaster via unicorn does not find the file, because of the double slash</li>
<li>puppetd sees the 404 error and creates an empty file</li>
</ul>
<p>Switching to WEBrick as a workaround works, because it accepts the two slashes.</p>
<p>Replacing the two slashes with one in the server does not fix the origin
of the problem, nor does it address the issue that puppetd creates
empty files, if it gets a 404 for the file content.</p>
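<p>What any server-side workaround essentially has to do is collapse the doubled slash before looking the file up. A minimal sketch of that normalisation (the real fix of course belongs in puppetd):</p>

```shell
# Collapse runs of slashes in the request path into a single one
# before the file lookup.
url="/production/file_content//autofs/auto.net"
fixed="$(printf '%s\n' "$url" | sed 's|//*|/|g')"
echo "$fixed"   # -> /production/file_content/autofs/auto.net
```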
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
In puppet, $name is not always what you expecthttps://www.nico.schottelius.org//blog/puppet-name-is-not-as-expected-but-classname/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Situation</h2>
<p>I've tried to create a smart file definition for two files that belong in one
directory using this code snippet:</p>
<pre><code>file { ["check-disk-shell-net-snmp", "check_icinga_config.sh"]:
ensure => present,
path => "${check_base}/${name}",
source => "puppet:///modules/icinga2/${name}",
owner => icinga,
group => icinga,
mode => 775,
require => File["${check_base}"];
}
</code></pre>
<p>As described in the
<a href="http://docs.puppetlabs.com/references/2.7.0/type.html">puppet documentation</a>,
the path is usually constructed by using <strong>namevar</strong>, which I interpret as
"the variable named <strong>name</strong>".</p>
<h2>The problem</h2>
<p>What happens is actually something totally different (puppet --version: 2.7.5):</p>
<pre><code>err: Failed to apply catalog: Cannot alias File[check-disk-shell-net-snmp] to
["/opt/local.ch/sys/icinga/checks/icinga2::serverchecks"] at
/etc/puppet/modules/icinga2/manifests/serverchecks.pp:25; resource
["File", "/opt/local.ch/sys/icinga/checks/icinga2::serverchecks"] already defined at
/etc/puppet/modules/icinga2/manifests/serverchecks.pp:25
</code></pre>
<p>The internal alias message is a bit confusing
(I did not intentionally create an alias), but the fact that puppet uses the classname
instead of the name supplied to file is surprising.</p>
<p><strong>Update:</strong> I've found the correct documentation part in the
<a href="http://docs.puppetlabs.com/guides/language_guide.html">puppet language guide</a>
that describes the feature I was trying to use:</p>
<pre><code>Most resources have an attribute (often called simply name) whose value
will default to the title if you don’t specify it. (Internally, this is
called the “namevar.”) For the file type, the path will default to the
title. A resource’s namevar value almost always has to be unique.
(The exec and notify types are the exceptions.)
</code></pre>
<h2>The solution</h2>
<p>Well, there are two solutions:</p>
<ul>
<li>rewrite to two file entries (simple, code redundancy, ugly)</li>
<li>switch over to using <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> (more initial effort, biased author)</li>
</ul>
<p>It is very good to be reminded from time to time of the motivations I had
when starting the cdist project. In this case, they were:</p>
<ul>
<li>Supply understandable, good error messages to the user</li>
<li>Do what the user expects</li>
<li>Consistent behaviour</li>
</ul>
<p>If you are interested, there is
<a href="http://firma.schottelius.org/english/infrastructure/">commercial support available</a> for
puppet to cdist migrations.</p>
Puppet: The quantum effect when loading classeshttps://www.nico.schottelius.org//blog/puppet-sometimes-loads-a-class/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After experimenting a bit with <a href="http://reductivelabs.com/trac/puppet">puppet</a>
I found a very interesting phenomenon: sometimes a host fails to load a class
with this error:</p>
<pre><code> err: Could not retrieve catalog: Could not find class common::nico_ethz at /etc/puppet/manifests/nodes/nico.pp:35 on node bach21.ethz.ch
</code></pre>
<p>The strange thing is that it <em>does</em> work sometimes. After describing
the situation in the <a href="irc://irc.freenode.net/#puppet">IRC channel #puppet</a>,
I got an "interesting" explanation for that behaviour:</p>
<p>First of all I made a mistake, because I placed the class
<strong><em>common::nico_ethz</em></strong> into the file
<strong>modules/common/manifests/nico.pp</strong> instead of
<strong>modules/common/manifests/nico_ethz.pp</strong>.</p>
<p>But why does it work sometimes?
It works sometimes, because the <strong><em>puppetmaster</em></strong> compiles the catalog
for <strong><em>all</em></strong> nodes and reuses the compiled catalog for different
hosts. <strong>If</strong> a previous node loaded the class <strong><em>common::nico</em></strong>,
the complete content of <strong>modules/common/manifests/nico.pp</strong>
is in the catalog, including <strong><em>common::nico_ethz</em></strong>. This is the reason
why it sometimes works and that's also the reason why I am writing
this posting:</p>
<ul>
<li>Dear other puppet users: Be aware that sometimes a class may be included
indirectly and thus things work randomly (like
<a href="http://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat">Schrödinger's cat</a>)!</li>
<li>Dear puppet developers: It would be way more helpful, if a wrong
configuration <strong>always</strong> and not only <strong>sometimes</strong> fails!</li>
</ul>
<p>If you have a comment on this blog article, please direct it to
<a href="http://reductivelabs.com/trac/puppet/wiki/GettingHelp#mailing-lists">the puppet users
mailinglist</a>,
to which I sent a notice about this article.</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
rb2011uas2hnd-in.jpghttps://www.nico.schottelius.org//blog/new-wireless-gigabit-linux-router-for-ungleich-office/rb2011uas2hnd-in.jpg2016-02-25T13:34:32Z2015-02-03T14:47:26ZReboot Linux if task blocked for more than n secondshttps://www.nico.schottelius.org//blog/reboot-linux-if-task-blocked-for-more-than-n-seconds/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you've run into the situation that your Linux box does not respond
to ssh anymore and you want it to reboot, because some processes are
taking away all the system resources, this article may be for you.</p>
<p>The usual message that can be seen on the console of such a system is</p>
<pre><code>INFO: task java:4242 blocked for more than 120 seconds.
</code></pre>
<p>According to
<a href="http://cateee.net/lkddb/web-lkddb/BOOTPARAM_HUNG_TASK_PANIC.html">cateee.net</a>
the panic-on-hung-task feature was added to Linux as of 2.6.30.
Looking at <strong>kernel/hung_task.c</strong>, around lines 96-99 and 105-106, Linux 2.6.35:</p>
<pre><code> 96 printk(KERN_ERR "INFO: task %s:%d blocked for more than "
97 "%ld seconds.\n", t->comm, t->pid, timeout);
98 printk(KERN_ERR "\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\""
99 " disables this message.\n");
[...]
105 if (sysctl_hung_task_panic)
106 panic("hung_task: blocked tasks");
</code></pre>
<p>We can see that if sysctl_hung_task_panic is true (!= 0),
the system will panic. A panic'ed system isn't of much
use to me either (similar to a hanging one), thus I would like it to
reboot.</p>
<p>Setting the sysctl <strong>kernel.panic</strong> to a value greater than
0 tells the kernel to reboot that many seconds after
a panic.</p>
<p>Furthermore the default timeout after which a task is considered hanging
is 120 seconds, which my users would like to increase to 5 minutes.
Thus the full <strong>sysctl</strong> setup to make a Linux system reboot after a process
has hung for 300 seconds, triggered through a panic, is:</p>
<pre><code># Reboot 5 seconds after panic
kernel.panic = 5
# Panic if a hung task was found
kernel.hung_task_panic = 1
# Setup timeout for hung task to 300 seconds
kernel.hung_task_timeout_secs = 300
</code></pre>
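<p>These settings can be activated at runtime with <em>sysctl -p</em>, which needs root and is therefore commented out in this sketch; here the fragment is only written to a temp file and checked:</p>

```shell
# Write the sysctl fragment to a file and count the configured knobs;
# on a real system one would place it in /etc/sysctl.conf (or a file
# under /etc/sysctl.d/) and load it with 'sysctl -p'.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
kernel.panic = 5
kernel.hung_task_panic = 1
kernel.hung_task_timeout_secs = 300
EOF
# sysctl -p "$conf"            # apply immediately (root required)
grep -c '^kernel\.' "$conf"    # -> 3
```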
Released ccollect 0.8https://www.nico.schottelius.org//blog/released-ccollect-0.8/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Finally!</p>
<p>After <a href="https://www.nico.schottelius.org//blog/ccollect-0.8-many-changes-quiet-if-down/">some</a>
<a href="https://www.nico.schottelius.org//blog/ccollect-0.8-to-be-released-soon/">previous announcements</a>,
<a href="https://www.nico.schottelius.org//software/ccollect/">ccollect 0.8</a> is released!</p>
<p>There have been about
<a href="http://git.schottelius.org/?p=cLinux/ccollect.git;a=log"><em>100</em> commits</a>
and a lot of
<span class="createlink">changes from 0.7.1 to 0.8</span>.</p>
<p>So, have fun with it and
<a href="http://l.schottelius.org/mailman/listinfo/ccollect">let me know</a> if
you spot a bug or if you like <a href="https://www.nico.schottelius.org//software/ccollect/">ccollect</a>!</p>
Released ceofhack 0.6https://www.nico.schottelius.org//blog/released-ceofhack-0.6/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>A new version of <span class="createlink">ceofhack</span> has been released,
which includes a clean UI.</p>
<p>It also includes a "sample UI", <strong>ui-cmd</strong>, which can be used
on the command line for testing.</p>
<p>In theory, all new commands can be handled asynchronously;
the implementation is still synchronous, though.</p>
<p>Have fun with it and let me know, whether it works for you or not!</p>
Released efsh 0.2https://www.nico.schottelius.org//blog/released-efsh-0.2/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After a two year break, a new version
of <a href="https://www.nico.schottelius.org//docs/efsh/">efsh</a> has been released, version 0.2.</p>
<p>This is mainly a cleanup release and I'm wondering what
you think about efsh, if you haven't heard about it before.</p>
Names of remote management systems (rmm, drac, ilom, imm, ilo)https://www.nico.schottelius.org//blog/remote-management-names-rmm-drac-ilom-imm-ilo/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After my meeting with a big hardware vendor today, I just got an additional
bunch of presentation slides and buzzwords. And yet another name for a
remote management utility.</p>
<p>Although all of the tested remote management systems comply with
<a href="http://en.wikipedia.org/wiki/IPMI">IPMI 2.0</a>, every vendor gives its
baby a "shiny" new name:</p>
<ul>
<li><a href="http://www.intel.com">Intel</a> uses
<a href="http://pixel01.cps.intel.com/design/servers/ISM/rmm2.htm">RMM2</a>
(Remote Management Module 2),</li>
<li><a href="http://www.dell.com">Dell</a> uses
<a href="http://www1.euro.dell.com/content/topics/global.aspx/power/en/ps2q02_bell">DRAC</a>
(Dell Remote Access Control),</li>
<li><a href="http://www.sun.com">Sun</a>
(now <a href="http://www.sun.com/third-party/global/oracle/">Oracle</a>) uses
<a href="http://www.sun.com/systemmanagement/ilom.jsp">ILOM</a>
(Integrated Lights Out Manager)</li>
<li><a href="http://www.ibm.com">IBM</a> uses
<a href="http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.csm16.install.doc/am7il_blademm.html">IMM</a>
(Integrated Management Module)</li>
<li>and <a href="http://www.hp.com">HP</a> uses
<a href="http://h18000.www1.hp.com/products/servers/management/ilo/">ILO</a>
(Integrated Lights-Out).</li>
</ul>
<p>What's the reason for this? Why did the car industry manage to have the same
name for the same feature
(like <a href="http://en.wikipedia.org/wiki/Anti-lock_braking_system">ABS</a>
or <a href="http://en.wikipedia.org/wiki/Electronic_stability_control">ESP</a>) and
the IT industry not?</p>
Replaced old PGP key 9885188C with 31877DF0https://www.nico.schottelius.org//blog/replaced-pgp-key-9885188C-with-31877DF0/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As the old key 9885188C expired some time ago, I replaced it.
The <a href="https://www.nico.schottelius.org//about/pgp-key-31877DF0.txt">new key</a> has the following
fingerprint</p>
<pre><code>7ED9 F7D3 6B10 81D7 0EC5 5C09 D7DC C8E4 3187 7DF0
</code></pre>
<p>and is signed by the previous one. Please re-sign the new one,
if you signed the previous one.</p>
Started to write news againhttps://www.nico.schottelius.org//blog/restart-to-write-news/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>A long time ago I stopped publishing news on the
<a href="http://nico.schottelius.org/notizbuch-blog">old newssite</a>, for several reasons.
The most important one is that</p>
<ul>
<li>there have been many other much more important things in my life</li>
</ul>
<p>As I am currently coming back to my usual technical stuff like</p>
<ul>
<li>testing software</li>
<li>finding bugs</li>
<li>writing patches</li>
<li>coding software</li>
<li>publishing and getting feedback for it</li>
<li>visiting conferences</li>
<li>etc.</li>
</ul>
<p>I thought it's a good time to start writing news again,
especially as one of my biggest software projects seems to be reaching
version 1.0...</p>
<p>By the way, I also had to migrate the plone site to a new server...</p>
Take ruby, ncurses and ceofhack, get fuihttps://www.nico.schottelius.org//blog/ruby-ncurses-ceofhack-fui/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>For my software development lessons I have to create
a nice project in some object-oriented language. After
digging around for what would interest me, I chose
<a href="http://www.ruby-lang.org/">ruby</a>, because it feels like it could be
an interesting language.</p>
<p>The next question was, what kind of software to write. As
the EOF project, namely <span class="createlink">ceofhack</span> still
needs a user interface, I decided to write one.</p>
<p>As I need a user interface that I would use myself, I wanted
something that works on the console, which led me to
<a href="http://www.gnu.org/software/ncurses/">ncurses</a>.</p>
<p>Eventually I found out that there is also support for ncurses
in ruby, <a href="http://ncurses-ruby.berlios.de/">ncurses-ruby</a>.</p>
<p>Perfect! The only thing missing was a name. As
I am a very simply thinking person, I chose <em>fui</em>, as an abbreviation
of <em>fancy user interface</em>.</p>
<p>The
<a href="http://git.schottelius.org/?p=fui;a=summary">git repository</a>
has already been published, expect more news soon!</p>
Ruby on Rails: Fix the hostname does not match the server certificate errorhttps://www.nico.schottelius.org//blog/ruby-on-rails-fix-hostname-does-not-match-the-server-certificate/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>The problem</h2>
<p>If you encounter this problem when running Ruby on Rails:</p>
<pre><code>OpenSSL::SSL::SSLError (hostname "localhost" does not match the server certificate)
</code></pre>
<p>it is likely due to
<a href="http://api.rubyonrails.org/classes/ActionMailer/Base.html">ActionMailer</a>
using <strong>smtp</strong> as the <strong><em>delivery_method</em></strong> and
your local mail server supporting TLS/SSL,
but not having a correct/valid certificate.</p>
<h2>The solution</h2>
<p>You can add a valid certificate, but if the server is just used
for sending out mails, this may not be worth the effort.
In that case, you can change the <strong><em>delivery_method</em></strong> to
<strong>sendmail</strong>, which makes ActionMailer use the sendmail
binary directly.</p>
Next /sans/ meeting with puppet as main topichttps://www.nico.schottelius.org//blog/sans-puppet-meeting-20100825/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Back from the holidays,
<a href="http://sans.ethz.ch/meetings/2010-08-25/">the sysadmins are meeting</a>
in the ETH to discuss <a href="http://www.puppetlabs.com/">Puppet</a> on</p>
<pre><code>Wednesday, 2010-08-25, 15:00, CAB/E/79 (meeting point)
</code></pre>
<p>If you're interested, feel free to join us!</p>
scp.pnghttps://www.nico.schottelius.org//blog/adobe-source-code-pro-font-not-for-small-sizes/scp.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZSearching a notebook for a digital nomadhttps://www.nico.schottelius.org//blog/searching-notebook-for-a-digital-nomad/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Who I am</h2>
<p>Dear reader of this post, my name is Nico Schottelius and I am a digital nomad searching
for a great notebook.
My requirements are (in my opinion) simple, but I struggle to find any vendor marketing
a notebook that fulfills my needs.</p>
<h2>What I search</h2>
<p>It is as easy as this:</p>
<ul>
<li>12 to 14 inch screen
<ul>
<li>Preferred something that does not have senseless big borders</li>
<li>Size should be less than or equal to 30 cm x 17 cm</li>
</ul>
</li>
<li>Resolution at least 1600x900
<ul>
<li>1600x900 allows three terminals next to each other</li>
<li>3200x1800 is ok as well, but needs some high dpi tunings under Linux</li>
</ul>
</li>
<li>Screen should be readable in sun light
<ul>
<li>Bright enough</li>
<li>Not reflecting too much (like the Lenovo Yoga 2 Pro!)</li>
</ul>
</li>
<li>At least 8 GiB RAM
<ul>
<li>16 GiB RAM is ok as well</li>
</ul>
</li>
<li>At least 512 GB SSD
<ul>
<li>1TB ok as well</li>
<li>I store pictures on the notebook</li>
</ul>
</li>
<li>Battery lifetime >= 6h
<ul>
<li>Real 6 hours, not vendor claimed 6 hours</li>
<li>Preferred 12+ hours, so I can stay without power a day long</li>
</ul>
</li>
<li>Linux supported
<ul>
<li>Not needed to be certified, but Linux should be easily installable to it</li>
<li>Reference distributions: Archlinux and Debian</li>
</ul>
</li>
<li>Good keyboard
<ul>
<li>I know, very subjective, but something that lets me type for hours without problems in my hands</li>
<li>Thinkpads are usually fine, Mac stuff usually as well</li>
</ul>
</li>
<li>Weight: Less than 2 kg (4.4 lbs)
<ul>
<li>Would be great to be in the range of 1.0 kg (2.2 lbs)</li>
</ul>
</li>
</ul>
<h2>Suitable devices</h2>
<p>none so far</p>
<h2>Unsuitable devices (tested)</h2>
<ul>
<li>Mac Book Air 13"
<ul>
<li>Resolution is only 1440x900</li>
</ul>
</li>
<li>Lenovo X1 Carbon
<ul>
<li>Broken keyboard, no capslock key
<ul>
<li>I need capslock for the <a href="http://neo-layout.org/index_en.html">Neo Keyboard Layout</a></li>
</ul>
</li>
<li>Old Models: SSD up to 256G only</li>
</ul>
</li>
<li>Lenovo Yoga 2 Pro
<ul>
<li>Unreadable in sun light</li>
<li>Poor keyboard</li>
</ul>
</li>
</ul>
<h2>Call for help</h2>
<p>If you know or own any device that may fit the given description, please
<a href="https://www.nico.schottelius.org//about/">contact me</a> or leave a comment on
<a href="https://news.ycombinator.com/item?id=7911658">hackernews</a>.</p>
Sexy and cdist @ local.chhttps://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This article describes the real world setup of
<a href="https://www.nico.schottelius.org//software/sexy/">sexy</a> and
<a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> at <a href="http://www.local.ch">local.ch</a>.
Sexy and cdist are used to configure</p>
<ul>
<li>dhcp servers</li>
<li>dns servers</li>
<li>KVM hosts</li>
</ul>
<p>at local.ch.</p>
<p>As I am soon leaving local.ch, this blog post is written for
those interested in sexy and cdist, as well as for the other
sysadmins at local.ch, to remember how things are set up.</p>
<p>The following picture will give you a general impression of how
things are set up.</p>
<p><a href="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-cdist-interaction-local.ch.png"><img src="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-cdist-interaction-local.ch.png" width="694" height="508" alt="How cdist and sexy act at local.ch" class="img" /></a></p>
<h2>Sexy installation</h2>
<p>As you may be aware, sexy is an inventory management utility. It manages
<strong>hosts</strong> and <strong>IPv4 networks</strong> (IPv6 support planned - but currently not required).</p>
<p>Sexy uses a <a href="https://www.nico.schottelius.org//docs/cconfig/">cconfig</a> database, which is stored at <strong>~/.sexy</strong>.
At local.ch, almost all important configurations are backed up at
<a href="http://www.github.com">github</a>. The sexy database is backed up there in a private
repository named <strong>sexy-database</strong>.</p>
<p>Sexy requires only Python 3 to be installed.</p>
<h3>Sexy database</h3>
<p><a href="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-database-overview.png"><img src="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-database-overview.png" width="826" height="783" alt="Overview of the sexy database" class="img" /></a></p>
<p>As you can see in the above image,
the sexy database contains three databases:</p>
<ul>
<li>host</li>
<li>mac</li>
<li>net-ipv4</li>
</ul>
<p>The <strong>host</strong> database contains all hosts, including mac addresses, host type (VM or hardware),
network cards, etc.</p>
<p>The <strong>mac</strong> database contains the prefix for generating new mac addresses (we are using 00:16:3e -
guess which vendor it is!) and the used mac addresses. The mac database is essentially used
for generating mac addresses for virtual machines.</p>
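<p>The 00:16:3e prefix, by the way, is the OUI registered to Xen. As a
self-contained sketch (this is not sexy's actual code, which additionally
records used addresses to avoid duplicates), a new address from such a
prefix could be derived like this:</p>

```shell
# Sketch: append three random octets to the 00:16:3e prefix.
# sexy additionally tracks already-used addresses; that is omitted here.
prefix="00:16:3e"
suffix=$(od -An -N3 -tx1 /dev/urandom | tr -s ' ' ':')
mac="${prefix}${suffix}"
echo "$mac"
```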
<p>The <strong>net-ipv4</strong> database contains the configured <strong>IPv4 networks</strong>.
Each IPv4 network contains</p>
<ul>
<li>the network mask</li>
<li>last used address</li>
<li>list of free addresses</li>
<li>list of hosts</li>
</ul>
<p>Every host in an <strong>IPv4 network</strong> contains</p>
<ul>
<li>an IPv4 address</li>
<li>a mac address</li>
</ul>
<h3>Sexy backends</h3>
<p>Sexy uses backends to interact with other systems. As can be seen
below, both the <strong>host</strong> and <strong>net-ipv4</strong> backends write configuration
files in cdist.</p>
<p><a href="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-backends-local.ch.png"><img src="https://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-backends-local.ch.png" width="826" height="647" alt="Sexy backends @local.ch" class="img" /></a></p>
<p>Sexy outputs the VM to KVM host mapping into a cdist manifest stored in
<strong>cdist/manifest/kvm-hosts</strong>. The cdist type <strong>__localch_kvm_vm</strong> is
being used to create VMs.</p>
<p>Sexy also generates <a href="https://www.isc.org/downloads/bind/">BIND</a> zone files
as well as <a href="https://www.isc.org/downloads/DHCP/">DHCP</a> configuration files.
These files are stored within the cdist types
<strong>__localch_bind</strong> and <strong>__localch_dhcpd</strong>.</p>
<h2>cdist installation</h2>
<p>Similar to sexy, cdist requires only Python 3, and only on the
computer you use to configure the target hosts. The target hosts themselves only
require a shell and ssh.</p>
<p>cdist normally reads its configuration from <strong>~/.cdist</strong>. As the
current installation is old-style, the custom configuration and cdist code
are both stored at <strong>~/localch/vcs/cdist</strong>, which is also backed up
as a private repository named <strong>cdist</strong> at github.</p>
<p>cdist is currently being used directly from the sysadmins' notebooks and thus
requires synchronising the repository before running.
cdist is invoked via scripts from the <strong>sysadmin-logs</strong> repository,
which are stored in the <strong>cdist</strong> folder. They mainly wrap around
<strong>cdist config -vp <hostnames...></strong>.</p>
<h2>Interaction with other systems</h2>
<h3>Sexy connection to cdist</h3>
<p>To be able to interact with cdist, the sexy backends have some paths hardcoded,
one of them being <strong>~/localch/vcs/cdist</strong>, which refers to the cdist installation.</p>
<p>On <strong>sexy host apply --all</strong>, sexy will regenerate the
cdist manifest <strong>~/localch/vcs/cdist/conf/manifest/kvm-hosts</strong>, which contains
the configuration for all kvm hosts.</p>
<h3>cdist with dhcp and dns servers</h3>
<p>To configure the dhcp servers, use the script <strong>sysadmin-logs/cdist/dhcp-servers</strong>;
to configure the dns servers, use the script
<strong>sysadmin-logs/cdist/dns-servers</strong>.
If you want to change both systems at the same time, use the
script <strong>sysadmin-logs/cdist/dhcp-dns-together</strong>.</p>
<p>All three scripts depend on sexy and the sexy database being installed, as they look up
the host names using sexy.</p>
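<p>A minimal sketch of how such a wrapper can glue the two tools together is
shown below. The function names are made up, and <strong><em>list_hosts</em></strong>
merely stands in for <strong>sexy host list | grep dhcp</strong> so that the
sketch runs without sexy or cdist installed; the real scripts differ:</p>

```shell
# Stand-in for: sexy host list | grep dhcp
list_hosts() {
    printf '%s\n' \
        dhcp-vm-inx01.intra.local.ch \
        dhcp-vm-inx02.intra.local.ch
}

# The real wrapper would exec cdist here instead of printing the command.
build_cmd() {
    echo "cdist config -vp $*"
}

build_cmd $(list_hosts)
# → cdist config -vp dhcp-vm-inx01.intra.local.ch dhcp-vm-inx02.intra.local.ch
```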
<h3>cdist creates virtual machines</h3>
<p>The KVM infrastructure is based on very simple assumptions: All files are contained
on the host, machines are started from simple shell scripts. The shell scripts are
maintained or created within cdist. Virtual machines are not started by default,
because the installation process is triggered manually at PXE bootup.</p>
<h2>Outlook</h2>
<p>In one of the next articles I'll cover the KVM VM infrastructure of local.ch.</p>
Sexy example: Small backend change and you are managing DNShttps://www.nico.schottelius.org//blog/sexy-backend-change-dns-support/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This previous article about
<a href="https://www.nico.schottelius.org//blog/sexy-network-bootstrap/">bootstrapping a network with sexy</a>
explained in detail how to manage a network and how to configure
it with cdist.</p>
<p>This article shows you what needs to be changed to support DNS resolution
in addition to the configured DHCP service.</p>
<h2>Background</h2>
<p>I am using <a href="http://www.thekelleys.org.uk/dnsmasq/doc.html">dnsmasq</a> on my
router, which can act as a DNS and DHCP server. DNS A entries can be added
to the configuration using the <strong>host-record</strong> command.</p>
<h2>The change</h2>
<p>Taking the previously described net-ipv4 backend,
<a href="http://git.schottelius.org/?p=sexy-database;a=commit;h=e7f45dccc1feace042bec1549079f073aa476739">the required change is very small</a>:</p>
<pre><code>- line="dhcp-host=${mac},$ipv4a,$hostname"
- echo "${line}" >> "${tmp}"
+ echo "dhcp-host=${mac},$ipv4a,$hostname" >> "${tmp}"
+ echo "host-record=$hostname,$fqdn,$ipv4a" >> "${tmp}"
</code></pre>
<p>Thanks to the modular configuration and the simplicity of both sexy and cdist,
this change plus a call to <strong>sexy net-ipv4 apply --all</strong> is all that is needed
to make dnsmasq serve internal DNS names.</p>
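<p>With hypothetical example values plugged into one loop iteration (in the
real backend, the mac and IPv4 address come from sexy queries), the patched
backend now emits one dhcp-host and one host-record line per host:</p>

```shell
# Example values for a single host; sexy supplies these in the backend.
fqdn="katze.intern.schottelius.org"
mac="00:00:24:c8:da:bc"
ipv4a="192.168.24.1"
hostname=$(echo "$fqdn" | sed 's/\..*//')

echo "dhcp-host=${mac},$ipv4a,$hostname"
echo "host-record=$hostname,$fqdn,$ipv4a"
# → dhcp-host=00:00:24:c8:da:bc,192.168.24.1,katze
# → host-record=katze,katze.intern.schottelius.org,192.168.24.1
```

<p>The first line tells dnsmasq which address to lease; the second adds the
matching A record, so DHCP and DNS stay in sync from one database.</p>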
<h2>The result</h2>
<p>What this article should show is that whatever you do in the backend, sexy itself is not
affected, and you can dramatically change what happens on <strong>sexy net-ipv4 apply --all</strong>.</p>
<p>You can browse
<a href="http://git.schottelius.org/?p=sexy-database;a=summary">the sexy database</a>
as well as
the <a href="http://git.schottelius.org/?p=cdist-nico;a=summary">cdist configuration</a>.</p>
sexy-backends-local.ch.pnghttps://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-backends-local.ch.png2016-02-25T13:34:32Z2015-02-03T14:47:26Zsexy-cdist-interaction-local.ch.pnghttps://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-cdist-interaction-local.ch.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZSexy and cdist interaction: Sexy chooses hosts, cdist configureshttps://www.nico.schottelius.org//blog/sexy-cdist-interaction-sexy-chooses-cdist-configures/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>Version 2 of <a href="https://www.nico.schottelius.org//software/sexy/">sexy</a>,
the Swiss Army Knife for inventory management, is already
<strong>using</strong> and <strong>usable</strong> from <span class="createlink">cdist</span>.</p>
<p>This is the first blog post of a series showing examples of
using sexy and cdist.</p>
<h2>Example</h2>
<p>Cdist is executed with a list of hosts to operate on:</p>
<pre><code>% cdist config
usage: cdist config [-h] [-d] [-v] [-c CDIST_HOME] [-i MANIFEST] [-p] [-s]
[--remote-copy REMOTE_COPY] [--remote-exec REMOTE_EXEC]
host [host ...]
</code></pre>
<p>Sexy in turn is able to manage hosts, mac addresses and networks:</p>
<pre><code>% sexy
usage: sexy [-h] [-d] [-v] [-V] {net-ipv4,host,mac} ...
sexy: error: too few arguments
</code></pre>
<p>Sexy knows about a command to list hosts, named <strong>host list</strong>.
So I can use sexy to tell cdist which hosts to configure. For instance
all dhcp servers:</p>
<pre><code>% sexy host list | grep dhcp
dhcp-vm-inx01.intra.local.ch
dhcp-vm-inx02.intra.local.ch
dhcp-vm-snr01.intra.local.ch
dhcp-vm-snr02.intra.local.ch
% ./bin/cdist config -vp $(sexy host list | grep dhcp)
INFO: dhcp-vm-inx01.intra.local.ch: Running global explorers
INFO: dhcp-vm-snr01.intra.local.ch: Running global explorers
INFO: dhcp-vm-snr02.intra.local.ch: Running global explorers
INFO: dhcp-vm-inx02.intra.local.ch: Running global explorers
...
</code></pre>
<p>Sexy, isn't it?</p>
sexy-database-overview.pnghttps://www.nico.schottelius.org//blog/sexy-and-cdist-at-local.ch/sexy-database-overview.png2016-02-25T13:34:32Z2015-02-03T14:47:26ZSexy is being renamed to cinvhttps://www.nico.schottelius.org//blog/sexy-is-being-renamed-to-cinv/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>You may have noticed that searching
for the magnificent tool named <strong>sexy</strong> on a search engine of your
choice did not easily lead you to the Swiss Army Knife for inventory management.</p>
<p>To fix this problem, sexy is currently being renamed to
<a href="https://www.nico.schottelius.org//software/cinv/">cinv</a>.</p>
Bootstrapping a network with sexyhttps://www.nico.schottelius.org//blog/sexy-network-bootstrap/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>This article explains how to begin to manage a network
with <a href="https://www.nico.schottelius.org//software/sexy/">sexy</a>. Because I just moved house,
I take my home network as an example.</p>
<h2>Prerequisites</h2>
<p>First of all, you need to have sexy installed, as described on
the <a href="https://www.nico.schottelius.org//software/sexy/">sexy homepage</a>. Secondly, if you already played
around with sexy, you should empty the sexy database, which is located
at <strong>~/.sexy</strong>:</p>
<pre><code>% rm -rf ~/.sexy
</code></pre>
<p>Or, if you are using <strong>git</strong> to manage your ~/.sexy directory, create a fresh
branch, which does not contain any files:</p>
<pre><code>% cd ~/.sexy
% git checkout -b network_bootstrap
# Ensure all (committed and non-committed) files are gone
% rm -rf db/ backend/
% git rm -r db/ backend/
% git commit -m "Empty sexy database"
</code></pre>
<h2>Add the first host</h2>
<p>First of all, let us add a host. Sexy wants to know its type (virtual machine
or hardware) and expects all names as fully qualified domain names (FQDNs):</p>
<pre><code>% sexy host add -t hw katze.intern.schottelius.org
</code></pre>
<p><strong>Hint:</strong> You can use the <strong>-h</strong> flag to get help for any command.
Using <strong>host list</strong>, we can verify that the host has been added:</p>
<pre><code>% sexy host list
katze.intern.schottelius.org
</code></pre>
<p>Now we can add network cards to this host:</p>
<pre><code>% sexy host nic-add -m 00:00:24:c8:da:bc -n eth0 katze.intern.schottelius.org
% sexy host nic-add -m 00:00:24:c8:da:bd -n eth1 katze.intern.schottelius.org
</code></pre>
<h2>Add the network</h2>
<p>Currently, sexy only allows you to manage IPv4 based networks
- IPv6 may be added in a future release. So the command to remember for now is
<strong>net-ipv4</strong>:</p>
<pre><code>% sexy net-ipv4 add --mask 22 192.168.24.0
% sexy net-ipv4 list
192.168.24.0
</code></pre>
<p>We have now created the network 192.168.24.0/22.</p>
<h2>Add a host to a network</h2>
<p>In sexy, the host and net-ipv4 areas are disconnected: You can use sexy to manage
only hosts, to manage only networks or to manage both. To allow this flexibility,
the network part does not know about any information from the host part.
Luckily enough, you don't need to re-enter the information; you can retrieve
it from the database.</p>
<p>The previously added host, <strong>katze.intern.schottelius.org</strong>, is the router of
my home network and it should use the first IPv4 address in the network.
The <strong>net-ipv4 host-add</strong> command can be used to add a host:</p>
<pre><code>% sexy net-ipv4 host-add
usage: sexy net-ipv4 host-add [-h] [-d] [-v] -m MAC_ADDRESS -f FQDN
[-i IPV4_ADDRESS]
network
</code></pre>
<p>Adding a host to a network requires at least the mac address,
which we entered before. So we can use the following lines to add the host to
our new network:</p>
<pre><code>% host=katze.intern.schottelius.org
% mac=$(sexy host nic-addr-get -n eth0 $host)
% sexy net-ipv4 host-add -m $mac -f $host 192.168.24.0
</code></pre>
<p>Sexy will by default use the next free address, and as this is the first host in
the network, it used .1:</p>
<pre><code>% sexy net-ipv4 host-ipv4-address-get 192.168.24.0 -f katze.intern.schottelius.org
192.168.24.1
</code></pre>
<h2>Making use of the entered information</h2>
<p>Sexy does not know which DNS or DHCP server you may be using.
To propagate changes to your infrastructure (probably using
software like <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>), sexy supports
<strong>backends</strong> that apply the change.</p>
<p>For my home network, I am going to use
<a href="http://www.thekelleys.org.uk/dnsmasq/doc.html">dnsmasq</a>, because the
router is a small <a href="http://soekris.com/net5501.htm">Soekris net5501</a>.</p>
<p>The backends are stored in <strong>~/.sexy/backend</strong> and for this
example tutorial, I will create <strong>~/.sexy/backend/net-ipv4/apply</strong>:</p>
<pre><code>% cat ~/.sexy/backend/net-ipv4/apply
#!/bin/sh -e
cdist_base="/home/users/nico/p/cdist/nico"
cdist_bin="$cdist_base/bin/cdist"
dst_dir="$cdist_base/conf/type/__nico_router/files/dnsmasq.d"
tmp=$(mktemp /tmp/foooooo.XXXXXXXXXXXXXXXX)
for network in "$@"; do
dstfile="${dst_dir}/${network}-dhcp.conf"
cat << eof > "$tmp"
# WARNING: sexy generated file, do *not* edit directly.
eof
for fqdn in $(sexy net-ipv4 host-list $network); do
mac=$(sexy net-ipv4 host-mac-address-get -f "$fqdn" "$network")
ipv4a=$(sexy net-ipv4 host-ipv4-address-get -f "$fqdn" "$network")
hostname=$(echo $fqdn | sed 's/\..*//')
line="dhcp-host=${mac},$ipv4a,$hostname"
echo "${line}" >> "${tmp}"
done
mv "${tmp}" "${dstfile}"
done
cd "${dst_dir}"
git add .
git commit -m "Update Sexy generated network configuration" -o -- . 2>/dev/null || true
echo "Transferring changes to git remote"
git pull --quiet
git push --quiet
"$cdist_bin" config -v zuhause.schottelius.org
</code></pre>
<p>In essence this backend creates the dnsmasq configuration and executes cdist afterwards
to apply the changes. I personally prefer a backend to be a shell script, but it can be
any kind of executable.</p>
<h2>Adding more hosts</h2>
<p>To make this tutorial useful and have my router actually provide a dhcp
server, I'll add my notebook and the fileserver to sexy:</p>
<pre><code>% sexy host add -t hw loch.intern.schottelius.org
% sexy host nic-add -m f4:6d:04:71:c5:ce loch.intern.schottelius.org
% sexy net-ipv4 host-add -m $(sexy host nic-addr-get -n nic0 loch.intern.schottelius.org) -f loch.intern.schottelius.org 192.168.24.0
% sexy host add -t hw brief.intern.schottelius.org
% sexy host nic-add -m b8:8d:12:15:fd:fa brief.intern.schottelius.org
% sexy net-ipv4 host-add -m $(sexy host nic-addr-get -n nic0 brief.intern.schottelius.org) -f brief.intern.schottelius.org 192.168.24.0
</code></pre>
<p>As you can see, if I do not specify the name of the nic, sexy automatically uses <strong>nic0</strong>
for the first nic. This decision was made because network device names vary between
operating systems and even between operating system versions.</p>
<h2>Applying the configuration</h2>
<p>The previously created backend will be executed for all existing networks
if you run the apply command with the <strong>--all</strong> parameter:</p>
<pre><code>% sexy net-ipv4 apply --all
</code></pre>
<h2>The result</h2>
<p>Using only the steps above, I've created a sexy maintained network,
<strong>192.168.24.0/22</strong>, which calls <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> to configure
the router with dnsmasq.</p>
<p>You can browse
<a href="http://git.schottelius.org/?p=sexy-database;a=summary">the real sexy database</a>
created during this tutorial, as well as
the <a href="http://git.schottelius.org/?p=cdist-nico;a=summary">cdist configuration</a>
that is used to configure the router.</p>
Solution proposal for the io select/poll problemhttps://www.nico.schottelius.org//blog/solution-proposal-for-the-io-select-poll-problem/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>The situation</h2>
<p>If you have used select(2) or poll(2) more than once, you may have noticed
the regular pattern that comes up again and again:</p>
<ul>
<li>The main task of a program is to listen and react on multiple input/output connections.</li>
<li>For each i/o connection (file descriptor), you have
<ul>
<li>a function that opens the file descriptor (let's call it <strong><em>conn_open</em></strong>) and</li>
<li>another function to be executed if an event happens on the file descriptor
(<strong><em>conn_handle</em></strong>).</li>
</ul>
</li>
<li>After <strong><em>conn_handle</em></strong> has been called, the number of connections may have changed:
<strong><em>conn_handle</em></strong> may
<ul>
<li>add (think of accept(2))</li>
<li>or remove (think of close(2)) connections.</li>
</ul>
</li>
<li><strong><em>conn_open</em></strong> and <strong><em>conn_handle</em></strong> are closely related and belong to the
same "object" or code (<strong><em>conn_object</em></strong>).</li>
</ul>
<h2>The problem</h2>
<p>Each and every time this situation occurs, you have to (re-)write
code to handle that case. I have seen it in some applications I have
been writing, for instance <span class="createlink">ceofhack</span> or <a href="https://www.nico.schottelius.org//software/fui/">fui</a>.</p>
<h2>The solution proposal</h2>
<p>Write a solution to the problem
<a href="http://c2.com/xp/OnceAndOnlyOnce.html">once and only once</a>,
so you and me
<a href="http://en.wikipedia.org/wiki/Don%27t_repeat_yourself">don't repeat ourselves</a>.</p>
<p>First of all, begin with the obvious part:</p>
<h3>What do we have?</h3>
<p>We have to bring together <strong>n times</strong></p>
<ul>
<li>open functions</li>
<li>file descriptors, on which the following events can happen:
<ul>
<li>data is ready to be read from the file descriptor</li>
<li>data can be written to the file descriptor</li>
<li>an error occurred on the file descriptor</li>
</ul>
</li>
<li>handle functions</li>
</ul>
<p>Furthermore, we have</p>
<ul>
<li><strong>one</strong> main loop that listens for events</li>
</ul>
<h3>How to connect them properly?</h3>
<p>I assume that every <strong><em>conn_object</em></strong> knows best, which function to use for
opening and handling and which type of event is interesting.</p>
<p>Thus if we create a special function <strong><em>conn_open</em></strong>
for every <strong><em>conn_object</em></strong> that</p>
<ul>
<li>returns this information to the caller, the caller can create</li>
<li>a list containing the needed information and</li>
<li>loop over the event list and call the corresponding handler.</li>
</ul>
<p>The <strong><em>conn_open</em></strong> function may look like this:</p>
<pre><code>connection_entry = conn_open();
</code></pre>
<p>Where <strong><em>connection_list</em></strong> is a list of <strong><em>connection_entries</em></strong> like this:</p>
<pre><code>struct connection_list {
struct connection_entry *next;
} list;
enum type {
IN,
OUT,
ERR
};
struct connection_entry {
int fd;
void (*handler)(struct connection_list *);
int type;
};
</code></pre>
<p>The <strong><em>conn_handle</em></strong> function can add or remove entries from the list.
Whether the list is an array, linked list, hash or whatever may be implementation specific.</p>
<h3>The main loop</h3>
<p>Before launching the main listener loop, we need to initialise
the list and run the <strong>conn_open</strong> function of every <strong>conn_object</strong>:</p>
<pre><code>struct connection_list list = init_connection_list();
connection_add(&list, a_conn_open());
connection_add(&list, b_conn_open());
</code></pre>
<p>Having done this, our main loop now looks pretty simple, doesn't it?</p>
<pre><code>while(1) {
/* create poll or select list from connection list, whatever you prefer */
poll_or_select_struct = connection_list_topoll_or_select(&connection_list);
changed_events = poll_or_select(&poll_or_select_struct);
for(event = changed_events; event != NULL; event = event->next) {
exec_handler(event->fd, &connection_list);
}
}
</code></pre>
<p>The function <strong><em>exec_handler</em></strong> would search for the registered handler
of the changed file descriptor, which could look like this:</p>
<pre><code>conn_handle(fd, &connection_list)
</code></pre>
<p>Firstly we pass the <strong>fd</strong>, because one handler may be registered for more
than one connection.
Secondly we add the <strong><em>connection_list</em></strong>, because the handler may add
more connections or remove itself.</p>
<p>Although I'm mainly speaking about <strong><em>poll</em></strong> and <strong><em>select</em></strong>, the idea also
applies to <a href="http://people.freebsd.org/~jlemon/papers/kqueue.pdf">kqueue</a>,
<a href="http://www.kernel.org/doc/man-pages/online/pages/man4/epoll.4.html">epoll</a>
and co.</p>
<h2>Further thoughts</h2>
<h3>Combine handle and open functions</h3>
<p>For a second I thought the functions <strong><em>conn_handle</em></strong> and
<strong><em>conn_open</em></strong> could be merged into a single function,
which would get a negative file descriptor when called for the first
time.
But as this means <strong><em>conn_handle</em></strong> would need to check on every
call whether to run the open part or not,
this is probably not a good idea.</p>
<h3>Creating an implementation</h3>
<p>I am currently actively (!) working on <a href="https://www.nico.schottelius.org//software/fui/">fui</a> and
think about creating an implementation in ruby and if it works
fine, another one in C for <span class="createlink">ceofhack</span>.</p>
<h3>Getting feedback</h3>
<p>I would appreciate any feedback regarding this idea: whether the
problem is no problem at all, has been solved before, or the idea
may be a solution for your problem, too. You can contact me about
this particular post at
<strong>nico-io-poll-select-idea</strong> (near) <strong>schottelius.org</strong>.</p>
Started Archive.Schottelius.Orghttps://www.nico.schottelius.org//blog/started-archive.schottelius.org/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After a long time running websites, I decided to take some websites
of mine down and to migrate the interesting parts into this website.</p>
<p>The unmodified versions have now a place to rest until I stop
running websites: <a href="http://archive.schottelius.org">archive.schottelius.org</a>
(and also the German version <a href="http://archiv.schottelius.org">archiv.schottelius.org</a>)
is the new home for my old websites.</p>
Setting static nameserver and search path with dhcpcdhttps://www.nico.schottelius.org//blog/static-nameserver-and-search-path-with-dhcpcd/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you're changing networks a lot, but want to keep some static
settings, this is <strong>one</strong> way to do it.</p>
<h2>Motivation</h2>
<p>As most wireless networks suffer from unreliable and slow connections,
I'm running my own (caching only) dns server on my notebook, to keep the
answers in my local cache. Thus I always want to have</p>
<pre><code>nameserver 127.0.0.1
</code></pre>
<p>as the first entry in my <strong><em>resolv.conf</em></strong>.</p>
<p>Additionally, I always want to have <strong>schottelius.org</strong> and <strong>ethz.ch</strong>
in my search path, resulting in</p>
<pre><code>search schottelius.org ethz.ch
</code></pre>
<p>Thus I am always able to type only the hostname, independent of my location.</p>
<h2>Implementation</h2>
<p>I am currently using <a href="http://roy.marples.name/projects/dhcpcd/">dhcpcd</a>,
which is shipped with <a href="http://www.archlinux.org/">archlinux</a> by default.</p>
<p>The package contains <strong>/usr/lib/dhcpcd/dhcpcd-hooks/20-resolv.conf</strong>,
which takes <strong><em>/etc/resolv.conf.head</em></strong> and <strong><em>/etc/resolv.conf.tail</em></strong>
into account.</p>
<p>According to <strong>resolv.conf(5)</strong>, if multiple nameservers are specified,
they will be asked in the order listed, so</p>
<pre><code>echo nameserver 127.0.0.1 > /etc/resolv.conf.head
</code></pre>
<p>ensures that my local nameserver is asked first. As the <strong>domain</strong> and
<strong>search</strong> fields override each other, the last entry wins:</p>
<pre><code>echo search schottelius.org ethz.ch > /etc/resolv.conf.tail
</code></pre>
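<p>Put together, the hook effectively assembles resolv.conf as head +
DHCP-supplied data + tail. A quick simulation with temporary files (the
paths are stand-ins for the real files under /etc):</p>

```shell
# Simulate the 20-resolv.conf hook: head and tail wrap whatever
# the current DHCP lease handed out.
dir=$(mktemp -d)
echo "nameserver 127.0.0.1"           > "$dir/resolv.conf.head"
echo "nameserver 192.0.2.53"          > "$dir/from-dhcp"   # lease data
echo "search schottelius.org ethz.ch" > "$dir/resolv.conf.tail"

cat "$dir/resolv.conf.head" "$dir/from-dhcp" "$dir/resolv.conf.tail" \
    > "$dir/resolv.conf"
cat "$dir/resolv.conf"
```

<p>The local nameserver ends up first and the static search path last,
no matter what the lease contained.</p>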
<h2>Further information</h2>
<p>The same can easily be done with other modular dhcp-clients, like udhcpc
(part of <a href="http://www.busybox.net/">busybox</a>).</p>
<p>The behaviour of your resolver library may be different; be sure to
check your local system documentation.</p>
<p>There are a lot of small caching nameservers available. I have good
experiences with <a href="http://cr.yp.to/djbdns/dnscache.html">dnscache</a>,
<a href="http://www.thekelleys.org.uk/dnsmasq/">dnsmasq</a> and
<a href="http://www.unbound.net/">unbound</a>.</p>
How to use stdin and here documents for templating in cdisthttps://www.nico.schottelius.org//blog/stdin-here-documents-templating-in-cdist/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>In the shell you can see the use of
<a href="http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_07_04">here documents</a> from time to time. They are very practical if you want to
feed some data with line breaks (also referred to as
"document") into another program at the current position
("here") in the shell.</p>
<p><a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> allows you to make use of stdin and thus
also of here documents in your types. This article gives you some
examples on how you can use them.</p>
<h2>Here documents in short</h2>
<p>For those who have found this page, but are not familiar with here documents,
here is a short example of how you can use them:</p>
<pre><code>cat << eof
Hello world,
this is a here document.
eof
</code></pre>
<p>The interesting part of here documents is that you can
use parameter expansion, command substitution, and arithmetic expansion
(as described in the <strong>here documents</strong> article linked above - opengroup
offers a great reference for shell coders/users):</p>
<pre><code>name="Nico"
cat << eof
Hello $name,
1+1 = $((1+1))
ls ~ = $(ls ~)
eof
</code></pre>
<p>Just try it - copy and paste the above code into your shell and it will
display the result of 1+1 and the contents of your home directory.</p>
<h2>Here documents and stdin in cdist</h2>
<p>Whenever you execute a type in a manifest in cdist like this:</p>
<pre><code>__file /tmp/testfile
</code></pre>
<p>cdist also reads stdin that is supplied to the type.
Not every type that is shipped with cdist makes use
of stdin, but <a href="https://www.nico.schottelius.org/software/cdist/man/latest/man7/cdist-type__file.html">__file</a>
does (always check the manpage of the cdist types - if a type makes
use of stdin, it is documented in there).</p>
<p>Indeed, if <strong>__file</strong> sees that you use "-" as the value for the
<strong>source</strong> parameter, it will use stdin for the content of the file
that it maintains:</p>
<pre><code>echo "Hello file" | __file /tmp/testfile --source -
</code></pre>
<p>Instead of using echo, we could also use the previously mentioned here document:</p>
<pre><code>__file /tmp/testfile --source - << eof
Hello world,
this is a here document.
eof
</code></pre>
<p>Beware, you could use cat like this</p>
<pre><code>cat << eof | __file /tmp/testfile --source -
Hello world,
this is a here document.
eof
</code></pre>
<p>but it is a
<a href="https://en.wikipedia.org/wiki/Cat_(Unix)#Useless_use_of_cat">useless use of cat (UUOC)</a>.</p>
<h2>Templating using here documents in cdist</h2>
<p>Here documents are very powerful and they are very useful for templating.
Indeed, the <a href="https://github.com/ungleich/cdist-examples/tree/master/type/__ungleich_nginx_site">__ungleich_nginx_site type</a> uses a template like this in its manifest:</p>
<pre><code>template_in=$__type/files/nginx-template
template_out=$__object/files/nginx-template
export www_dir="$base_dir/www"
export log_dir="$base_dir/logs"
... (including more exports)
mkdir "$__object/files"
sh -e "$template_in" > "$template_out"
</code></pre>
<p>The following code shows the template (<strong>$__type/files/nginx-template</strong>):</p>
<pre><code>cat << eof
#
# Do not change this file. Changes will be overwritten by cdist.
#
server {
# Only bind on the reserved IP address
listen $listen;
eof
servername="$name"
for a in $alias; do
servername="$servername $a"
done
servername="$servername"
cat <<eof
server_name $servername;
location / {
root $www_dir;
eof
if [ -f "$__object/parameter/locationopt" ]; then
echo " # User given location parameters"
while read line; do
echo " $line"
done < "$__object/parameter/locationopt"
fi
cat <<eof
}
access_log $log_dir/access.log;
}
eof
</code></pre>
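<p>To see the rendering step in isolation, here is a minimal, self-contained sketch of the same technique (the file name and variables are made up for illustration, this is not the real cdist type): a template is just a shell script that cats a here document, and rendering it means exporting the variables it expects and executing it.</p>

```shell
# Write a tiny template (hypothetical example file):
cat > /tmp/demo-template << 'TEMPLATE'
cat << eof
server_name $servername;
root $www_dir;
eof
TEMPLATE

# Render it: export the variables the template expects, then execute it.
# sh -e aborts rendering if anything in the template fails.
export servername="www.example.org" www_dir="/var/www"
sh -e /tmp/demo-template
# prints:
# server_name www.example.org;
# root /var/www;
```

<p>The quoted outer delimiter ('TEMPLATE') keeps the template unexpanded when writing it; expansion only happens when the template is executed.</p>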
<h2>Had fun?</h2>
<p>The shell is indeed very powerful - you just need to know how to use it.
This is why cdist was originally written in shell script and is
still configured in shell script (and will continue to be so).</p>
<p>If you are a shell junkie, you may find more addictive drugs
<a href="https://www.nico.schottelius.org//blog/">in this blog</a>.</p>
Sysadmin bootstrap - the beginninghttps://www.nico.schottelius.org//blog/sysadmin-bootstrap-1/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you read this blog regulary, you probably know that
I work as a <a href="https://www.nico.schottelius.org//tags/sysadmin/">sysadmin</a> at the
<a href="http://www.ethz.ch">ETH Zürich</a>.</p>
<p>I decided to think through all the services I provide and
to document them well enough that anybody else with
some technical background could redo what I did.</p>
<h2>Motivation</h2>
<p>In most jobs I've had, one aim was always to make myself
redundant, because that proves I did a good job.</p>
<p>Secondly, this gives me a good chance to review my infrastructure
and verify that every service can be brought up again quickly
and with minimal effort on my side.</p>
<h2>Step one: The console</h2>
<p>Probably the most important device in a sysadmin's life is their
own machine. Collecting all the cool tools I have found, scripted or
bundled together with documentation took me years.</p>
<p>Sure, most of this is also mirrored in my brain, but neither
perfectly nor in a way that is easy to share with others.</p>
<p>Thus if I had to restart my job as a sysadmin, the first steps
I would take are the following:</p>
<ul>
<li>Get a machine (whether it's a notebook, desktop or mobile phone doesn't matter)</li>
<li>Set it up with an OS</li>
<li>Copy existing data to my home folder</li>
<li>Setup company specific details</li>
</ul>
<p>Now I would be ready to start working. But from my experiences, I would
add two more things before beginning to work:</p>
<ul>
<li>Setting up regular backup</li>
<li>Get a second, mostly identical machine</li>
</ul>
<p>I guess the reason for having a backup is clear to every sysadmin reading
this. The reason for the second device may not be clear to everybody:</p>
<p>If my console dies (and hardware does that pretty often), I'm unable to
work until I set up a new machine. Thus the first safeguard to create in
my job as a sysadmin is to ensure <strong>I</strong> can resume working
as soon as possible after a failure.</p>
<h2>Upcoming posts</h2>
<p>This post is the start of a small series containing all the steps needed
to rebuild the current infrastructure I maintain in the
<a href="http://www.systems.ethz.ch">Systems Group</a>. Upcoming posts will be
tagged with both <strong>eth</strong> and <strong>sysadmin</strong>.</p>
Sysadmin bootstrap - seek for informationhttps://www.nico.schottelius.org//blog/sysadmin-bootstrap-2/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After <a href="https://www.nico.schottelius.org//blog/sysadmin-bootstrap-1/">having setup the sysadmin console</a>,
it's now time to find out</p>
<pre><code>Where to start?
</code></pre>
<h2>Background</h2>
<p>When I start a new job somewhere, the biggest challenge in the beginning
is to find out what to do, where, when and how. In most companies this
becomes clear after a few days. At ETH, it's a bit different: there are
many internal service providers, each with a very different focus. Finding out
which machines I have to take care of, where they are located
and which services are running on them took me about a year.</p>
<h2>Step one: My bosses and the local sysadmins</h2>
<p>The people who hired me should know best what needs to be done.
After getting some basic information, the next
information centre here at ETH is the ISG, the
"IT Service Group", which takes care of general IT issues
in the department. In my case this is the
<a href="http://www.isg.inf.ethz.ch/">isginf</a>.</p>
<h3>Your local ISG</h3>
<p>So, what about that ISG thingy? Why is it needed if I am hired
as a sysadmin? Aren't we competing against each other? The simple
answer is: <strong>No.</strong> While the ISG also does sysadmin related
work, its focus is broader than that of a dedicated
sysadmin. To quote from
<a href="http://www.isg.inf.ethz.ch/AboutUs">their site</a>:</p>
<pre><code>...purchasing and repairing hardware....
...operating various central services...
...administration of several hundreds of workstations...
...allow the users...to concentrate on their main tasks...
</code></pre>
<p>A sysadmin maintaining hundreds of server and cluster machines
and focussing on the group's demands is mostly working in
a different area. On the other hand, a sysadmin can have the ISG
maintain things in the group which they do well
and which the sysadmin probably does not want to do (like Windows workstations).</p>
<h2>Step two: Retrieve information from newly known sources</h2>
<p>After the initial contact with the local folks, I'm ready to
dig into the other sources I am now aware of. In this case:</p>
<ul>
<li><a href="https://wiki.systems.ethz.ch/">The Systems Group wiki</a></li>
<li><a href="https://trac.systems.inf.ethz.ch/trac/systems/ns/">The Sysadmin wiki</a></li>
<li><a href="http://sans.ethz.ch/">/sans/: sysadmins home</a></li>
<li><a href="https://www1.ethz.ch/id/services/a_z">Informatikdienste</a></li>
</ul>
<h3>Systems Group in detail</h3>
<p>Most of the information related to the sysadmin job for the Systems
Group can be found in the wiki. Though the wiki is probably always
a little bit behind the reality, it is heavily used and gives pointers
to the right places.</p>
<h3>/sans/: Sysadmins home</h3>
<p>This is still a pretty young project, but unique at ETH: it is a
regular sysadmin meeting and information exchange, without the formal
overhead or the borders of departments. You may find a lot of
sysadmin related information directly on the website or find out more
in one of the meetings.</p>
<h3>ID: Informatikdienste</h3>
<p>The ID offer basic IT services for the whole of ETH. Most of the
network infrastructure and things like the <a href="http://blogs.ethz.ch/">Blog service</a>
are run by them. Although as a sysadmin you mostly just use their
services without interacting with the people behind them, it's a good idea
to get in contact with them: in case the network stops working,
they are the right people to ask for help.</p>
<p>There's a "little" trap though: the ID combine a lot of "smaller"
departments and one has to find the right one for the right problem.</p>
<h2>Step three: Not offering your own services</h2>
<p>While after some time a sysadmin will find out that a particular service
is missing in the group, it is probably better <strong>not</strong> to offer it yourself:
although you may get the impression that this particular service is not
available at ETH, you will almost always be wrong:</p>
<pre><code>Almost every service is already being provided somewhere at ETH.
</code></pre>
<h2>Step four: Offering your own services</h2>
<p>After you have tried out the various services, you may still be unsatisfied:
either because the service is not offered the way you require, or because it
takes you a lot of time and energy to use it. While it may be worthwhile and useful
to run your own service, always triple check step three!</p>
The power of vim: mail editinghttps://www.nico.schottelius.org//blog/the-power-of-vim-mail-editing/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>In this article I will show you some of the mappings I use within
vim for mail editing.</p>
<h2>Background</h2>
<p>I am a console centric person, using
<a href="http://www.mutt.org/">mutt</a> and
<a href="http://www.vim.org/">vim</a> for mails.
I use one e-mail address per person or situation
(i.e. you can reach me at nico-thepowerofvim at schottelius.org)
to be able to filter or delete e-mail addresses easily.</p>
<h2>Macros / Mappings</h2>
<p>One of the really nice things of vim is being able to use macros
(<a href="http://vimdoc.sourceforge.net/htmldoc/usr_05.html#05.3">map</a> in
vim slang).</p>
<h3>Replace Mail Address (static)</h3>
<p>Many times I would like to replace the mail address mutt selected
for sending with my company address. For this I use a mapping assigned to F2:</p>
<pre><code>map <F2> G$/^From:<CR>/<<CR>lc/><CR>nico.schottelius --AT-- ungleich.ch<ESC>/^$<CR>
</code></pre>
<p>This essentially finds the first From: line, removes everything between < and > and inserts my company mail address.</p>
<h3>Replace Mail Address (dynamic)</h3>
<p>In most cases however, I use a mail address in the style
<strong>nico-something@schottelius.org</strong> (there is no catch-all, in case you were thinking about that). I have mutt set up to use
<strong>nico-something@schottelius.org</strong> as the from address and have a
macro mapped to F3 that allows me to replace the part after
<strong>nico-</strong> until the <strong>@</strong> easily:</p>
<pre><code><F3> G$/^From:<CR>/<nico-<CR>wwlc/@<CR>
</code></pre>
<h3>Replace everything until the signature</h3>
<p>Quite often I am done with replying to an email, but have
leftovers from the original mail. To be able to delete everything
easily until the end of the mail (i.e. until -- and a space),
I use F4:</p>
<pre><code>map <F4> c/^-- <CR>
</code></pre>
Treat Virtual Machines like Hardwarehttps://www.nico.schottelius.org//blog/treat-virtual-machines-like-hardware/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>This report is based on my experiences as a system engineer
who learned a sensible way to run small to large infrastructures:
consistently.</p>
<h2>Appeal</h2>
<p>Treat virtual machines like hardware - this is an appeal to everyone
in the virtualisation area (providers, vendors, sysadmins, you name it).</p>
<p>Before I go into detail, let me first explain the situation.</p>
<h2>Once upon a time...</h2>
<p>... virtualisation was new and slow. <a href="http://www.qemu.org">qemu</a> was the only
emulator that was kind of usable. Slow, but working. Then,
some time later, the CPU vendors began to add support for virtualisation
in hardware and soon virtual machines were kind of running smoothly.
Xen and VMWare appeared and took their share in the market, while qemu
had its closed source driver, kqemu.</p>
<p>Then lightweight virtualisation had its time with software like
<a href="http://user-mode-linux.sourceforge.net/">User-mode Linux</a> and OpenVZ. All of a sudden somebody propagated the buzzword
cloud and with it the use of virtualisation spread, as well as the
problem I am addressing in this article:</p>
<h2>Virtualisation is treated differently from hardware</h2>
<p>With the mass creation of virtual machines, new problems have arisen:
How to manage those virtual machines? How to manage the networks?
What about their IP addresses? Where to store, define and assign them?
What are the virtual properties of the VM? How many disks, how much memory and
how many CPUs (cores) are utilised?
To solve these problems, specialised tools like
<a href="http://www.libvirt.org">libvirt</a>, <a href="https://www.openstack.org/">openstack</a> and many others are deployed.
Problem solved?</p>
<h2>Problem created: (D)RY</h2>
<p>Interestingly, contrary to common belief, inventing tools specific to
virtualisation management has created new problems:
the domain specific tools can only be used for the management of VMs
(sic!) and thus require the sysadmin to learn a new tool with different
characteristics from the existing tools used to manage hardware (I am excluding the effort to run
and maintain a second tool, because I assume that in an automated environment this is negligible).</p>
<p>So instead of using the existing tools, the <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRY principle</a> was violated.
You may claim that virtual machines are different from hardware and thus require
a different utility, but ...</p>
<h2>... Virtual machines are made to closely resemble hardware</h2>
<p>Indeed, the idea of virtual machines is that
<strong><em>a virtual machine should behave like its hardware equivalent</em></strong>.
As such I postulate</p>
<pre><code>Treat virtual machines like real machines
</code></pre>
<p>Some of you may now be wondering, ...</p>
<h2>... How to treat VMs and hardware the same</h2>
<p>In my opinion (I believe in the <a href="https://en.wikipedia.org/wiki/KISS_principle">KISS principle</a>), managing large scale infrastructures can be as easy
as managing small infrastructures - given you take the right approach. From a technical
point of view, to manage an infrastructure you need</p>
<ul>
<li>an inventory management tool (like <a href="https://www.nico.schottelius.org//software/cinv/">cinv</a>) that
<ul>
<li>is the central tool to record all your hosts</li>
<li>defines IP address mappings (mac<->ip, f.i. <a href="https://en.wikipedia.org/wiki/DHCP">DHCP</a> and ip<->name, like <a href="https://en.wikipedia.org/wiki/DNS">DNS</a>)</li>
<li>assists you with lifecycle management of your hosts</li>
</ul>
</li>
<li>a configuration management system (like <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>) that
<ul>
<li>realises your centrally defined configurations</li>
<li>manages all your configurations (including VMs!)</li>
</ul>
</li>
</ul>
<h2>Summary</h2>
<p>So why treat virtual machines like hardware? Because they can easily be managed the same way
and they are supposed to be very similar.
We at <a href="http://www.ungleich.ch">ungleich</a> take this approach for every customer infrastructure
and so far succeed very well with it. We do in fact eat our own dogfood
and manage the inventories of our customers (HW and VM!) with <a href="https://www.nico.schottelius.org//software/cinv/">cinv</a>
and configure their infrastructures with <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a>.
To support multiple customers, we keep their configurations under version
control in different branches using <a href="http://git-scm.com/">git</a> (logically, cinv and
cdist don't support this themselves, because git can do it much better - we
follow the <a href="http://en.wikipedia.org/wiki/Unix_philosophy">UNIX philosophy</a> when developing software).</p>
<h2>Future</h2>
<p>There is a lot of work going on at ungleich in the area of virtualisation using
cdist and cinv. We plan to publish more cdist examples and documentation about
this soon - so stay tuned if you are interested in seeing the
<strong><em>world's simplest virtualisation infrastructure [tm]</em></strong> soon.</p>
Tunneling the qemu or kvm vnc unix socket via sshhttps://www.nico.schottelius.org//blog/tunneling-qemu-kvm-unix-socket-via-ssh/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>Perhaps you were searching for a way to tunnel the
<a href="http://en.wikipedia.org/wiki/Unix_domain_socket">unix socket</a>
provided by
<a href="http://www.qemu.org/">qemu</a>
or <a href="http://www.linux-kvm.org/page/Main_Page">kvm</a>
for <a href="http://en.wikipedia.org/wiki/Virtual_Network_Computing">vnc</a>
through an <a href="http://www.openssh.com/">ssh</a> tunnel, too?
When I was searching for an answer, I found
<a href="http://www.karlrunge.com/x11vnc/ssvnc.html">ssvnc</a>, which
I did not try, because I wanted to solve the problem
with <a href="http://www.dest-unreach.org/socat/">socat</a> and ssh.</p>
<p>I started kvm on the <strong>remote</strong> machine using the following command:</p>
<pre><code>kvm -vnc unix:/home/services/vms/vnc-socket ...
</code></pre>
<p>Then I connected socat locally on the remote machine to test the settings:</p>
<pre><code>socat STDIO UNIX-CONNECT:/home/services/vms/vnc-socket
</code></pre>
<p>This worked fine. On the local side we need a listener on a
TCP port around 5500+ (not a must, but the standard vnc port range), which
can be created like this:</p>
<pre><code>socat STDIO TCP-LISTEN:5500
</code></pre>
<p>As reading and writing is not possible with a single pipe, one <strong>cannot</strong> just
pipe into socat like this:</p>
<pre><code>ssh root@tee.schottelius.org "socat STDIO UNIX-CONNECT:/home/services/vms/vnc-socket" | socat STDIO TCP-LISTEN:5500
</code></pre>
<p>But socat has another nice option, the <strong>EXEC</strong> parameter, which solves the problem:</p>
<pre><code>socat TCP-LISTEN:5500 EXEC:'ssh root@tee.schottelius.org "socat STDIO UNIX-CONNECT:/home/services/vms/vnc-socket"'
</code></pre>
<p>And now I can connect locally via <a href="http://www.tightvnc.com/">tightvnc</a>:</p>
<pre><code>xtightvncviewer -encodings "copyrect tight hextile zlib corre rre raw" localhost:5500
</code></pre>
<p>I specify a different order for the encodings because xtightvncviewer
prefers raw "encoding" if it connects to localhost, which is not desired
here.</p>
<p>And that's it, vnc connected to a unix socket from kvm tunneled through ssh!</p>
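<p>A side note for readers with a newer OpenSSH: since version 6.7, ssh itself can forward a local TCP port directly to a remote unix domain socket, which would replace both socat invocations. A sketch (same hypothetical socket path as above, untested in this setup):</p>

```shell
# OpenSSH 6.7+ only: forward local TCP port 5500 straight to the
# remote unix socket (-N: open the tunnel without a remote command)
ssh -N -L 5500:/home/services/vms/vnc-socket root@tee.schottelius.org &

# then connect as before
xtightvncviewer localhost:5500
```

<p>The socat variant remains useful on systems with older ssh versions.</p>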
ungleich is blogginghttps://www.nico.schottelius.org//blog/ungleich-blog/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>A pointer for everyone reading this blog:
We started to blog at <a href="http://www.ungleich.ch">ungleich</a> -
crazy sysadmin topics can now be found
in the <a href="http://www.ungleich.ch/blog/">ungleich blog</a>
as well!</p>
Puppet: Update the puppetmaster before the puppet clientshttps://www.nico.schottelius.org//blog/update-puppetmaster-before-puppet-clients/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After I updated one server today from <a href="http://www.debian.org/">Debian</a>
Lenny to Squeeze, puppetd stopped to work and printed the following error:</p>
<pre><code>sgssr240003:~# puppetd --server puppet.inf.ethz.ch --test --ca_port 8141
warning: peer certificate won't be verified in this SSL session
err: Could not request certificate: Error 405 on SERVER: Method Not Allowed
Exiting; failed to retrieve certificate and watiforcert is disabled
</code></pre>
<p>I was a bit confused and did not find useful hints regarding that error message.
In the IRC channel <a href="irc://irc.freenode.org/#puppet">#puppet</a> I was told that this
can happen if the puppet client (<strong>puppetd</strong>) is newer than the puppetmaster.</p>
<p>And indeed, when I compared the versions, puppetmasterd was running version
<strong>0.24.8</strong>, whereas puppetd was <strong>0.25.1</strong>.</p>
<p>After I upgraded puppetmasterd to <strong>0.25.1</strong>, it runs fine again.</p>
<p>If you also have been running into this problem, the article is for you!</p>
<h1>Update #1</h1>
<p>I switched over to use <a href="https://www.nico.schottelius.org//software/cdist/">cdist</a> instead of Puppet.</p>
How to change the font in urxvt (rxvt-unicode) dynamicallyhttps://www.nico.schottelius.org//blog/urxvt-change-font-dynamically/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>After <a href="https://www.nico.schottelius.org//blog/published-xorg-terminal-emulator-fonts/">I had a look at some fonts for terminal emulators</a>,
I chose some of the fonts to be used for my terminal.
<a href="http://software.schmorp.de/pkg/rxvt-unicode.html">rxvt-unicode</a> has excellent support for
dynamic font changes, as described
in the <a href="http://pod.tst.eu/http://cvs.schmorp.de/rxvt-unicode/doc/rxvt.7.pod#Can_I_switch_the_fonts_at_runtime">urxvt faq</a>.
I decided to write a tiny script around the printf call
named <a href="https://github.com/telmich/nsbin/blob/master/urxvt-font-change">urxvt-font-change</a>.</p>
<p>This resulted in a clean <a href="https://www.nico.schottelius.org//configs/dot-Xresources">.Xresources</a> file, which allows me to change the font
using <strong><em>Control-Alt-{1-6,0}</em></strong>.</p>
Use smart passphrases - stop enforcing weak and complicated passwordshttps://www.nico.schottelius.org//blog/use-smart-passphrases-stop-enforcing-weak-and-complicated-passwords/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>As a <a href="http://en.wikipedia.org/wiki/Netizen">netizen</a>
you may have defined one or more passwords for several situations.
As a sysadmin you may have to setup password policies for your infrastructure.
Sometimes you will encounter the requirement to use
lower case characters, capitalisation, numbers and special characters,
or various combinations of the previous.</p>
<p>I urge you both to drop this behaviour <strong>now</strong>. This article explains
why it is more sensible to use passphrases instead of complicated
passwords.</p>
<h2>Passphrase vs. Passwords</h2>
<p>A <a href="http://en.wikipedia.org/wiki/Passphrase">passphrase</a> is a combination
of <strong>words</strong> that is used to secure access:</p>
<pre><code>iamathepassphrasedefinedbydaniel
</code></pre>
<p>A <a href="http://en.wikipedia.org/wiki/Password">password</a> on the other hand
is usually a combination of <strong>characters</strong>:</p>
<pre><code>7z/+tt38
</code></pre>
<p>There are at least 4 very simple reasons to prefer passphrases over passwords:</p>
<ul>
<li>passphrases are easier to remember (try yourself with the previous examples)</li>
<li>passphrases are more secure</li>
<li>passphrases can be typed faster than passwords (and thus enhance security even more)</li>
<li>passphrases are easier to type on foreign keyboards</li>
</ul>
<h2>How secure are passphrases really?</h2>
<p>Let's take the common constraints of passwords:</p>
<ul>
<li>Upper and lower case (26+26 characters)</li>
<li>Number (10 characters)</li>
<li>Special characters (some - depends on how you count)</li>
<li>Length about 8-10 characters</li>
</ul>
<p>Let us assume we have 128 possibilities for each character.
With 10 characters this would result in 128<sup>10</sup> possible passwords:</p>
<pre><code>1180591620717411303424 (1.18059e+21)
</code></pre>
<p>Let us take a look at the possible combinations of passphrases.
Passphrases are a bit more difficult to quantify, as it is
<a href="http://en.wikipedia.org/wiki/English_language#Number_of_words_in_English">not strictly defined how many words the English language contains</a>. I will use 600000 as one of the lower numbers given in the linked article,
which gives us the following number of possibilities:</p>
<pre><code>1 word = 600000
2 words = 360000000000 (3.6e+11)
3 words = 216000000000000000 (2.16e+17)
4 words = 129600000000000000000000 (1.296e+23)
5 words = 77760000000000000000000000000 (7.776e+28)
6 words = 46656000000000000000000000000000000 (4.6656e+34)
7 words = 27993600000000000000000000000000000000000 (2.79936e+40)
</code></pre>
<p>As you can easily see, <em>when you use only 4 words, your passphrase is
more secure than most passwords</em>. The passphrase example above counted
7 words and is still easy to remember.</p>
<h2>What now</h2>
<p>Let us make the world easier.</p>
<p>If you are a user and you have to create weak and complicated passwords
due to some policy, give the provider a link to this article so she
can understand why changing their policy is sensible.</p>
<p>If you are a sysadmin or provider you can change your password policy
to require 15 or more characters, which would result in
1677259342285725925376 or 1.67726e+21 possibilities for lower case
letters alone - even more than in your previous policy.</p>
<h2>For the geeks</h2>
<p>I am aware of <a href="http://en.wikipedia.org/wiki/Unicode">Unicode</a>,
but most characters are not found on common keyboards - at least
the ones I use do not exceed 200 keys. Even if you could
enter all Unicode characters
(for instance using <a href="http://en.wikipedia.org/wiki/Unicode_input">ISO 14755</a>),
it still remains questionable
<a href="http://www.fileformat.info/info/unicode/char/1f3e9/index.htm">whether the application accepts all unicode characters</a>.</p>
<p>If that wasn't enough: You can also use other languages to write
your passphrase. Learned a sentence on your last holidays?
Use it (as a base) for your passphrase.</p>
<p>Yes, there are some words that are more common in some languages. On the
other hand, if fantasy words that only you know about are included,
the attacker is required to guess the full string, which means quite
a lot of guesses, even if she assumes all characters are lower case.</p>
<p>...by the way, if you consider
the example passphrase from above as a string of 32 lower case
characters,
it would give you 1901722457268488241418827816020396748021170176
or 1.90172e+45 possible passwords.</p>
<p><a href="http://xkcd.com/936/">XKCD</a> also has a nice cartoon describing
this solution.</p>
How to use the volume keys on the Lenovo X200 and X201https://www.nico.schottelius.org//blog/volume-keys-on-lenovo-x200-x201-xbindkeys/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you want to make use of the volume up/down buttons on your
X201, you can simply install
<a href="http://www.nongnu.org/xbindkeys/xbindkeys.html">xbindkeys</a> and
add the following to your <strong>~/.xbindkeysrc</strong>:</p>
<pre><code>#VolDown
"amixer -q set Master 5%-"
m:0x0 + c:122
NoSymbol
#VolUp
"amixer -q set Master 5%+"
m:0x0 + c:123
NoSymbol
</code></pre>
<p>Beware: The keycodes are different on the X200 and the X201! For the
X200, use the following configuration:</p>
<pre><code>#VolDown
"amixer -q set Master 5%-"
m:0x0 + c:174
NoSymbol
#VolUp
"amixer -q set Master 5%+"
m:0x0 + c:176
NoSymbol
</code></pre>
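<p>If your model reports yet other keycodes, xbindkeys can identify them for you - it waits for one key press and prints the matching configuration entry (state and keycode) ready to paste into <strong>~/.xbindkeysrc</strong>:</p>

```shell
# Opens a small grab window, waits for a single key press and
# prints the corresponding xbindkeys configuration lines:
xbindkeys --key
```
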
<p>For testing, you can launch</p>
<pre><code>xbindkeys --verbose -n
</code></pre>
<p>Afterwards, you can include <strong>xbindkeys</strong> into your <strong>.xinitrc</strong> without any parameters,
as it automatically forks into the background:</p>
<pre><code>xbindkeys
</code></pre>
<p>The <strong>volume mute</strong> button also generates an event on the X201
(none on the X200), but it seems like muting is done in hardware,
so no need for a mapping.</p>
<p>Have fun!</p>
What is configuration management?https://www.nico.schottelius.org//blog/what-is-configuration-management/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>As I'm currently thinking about writing a configuration management tool,
I'm sitting in the train philosophising about
the general question: <strong><em>What is configuration management (CM)?</em></strong></p>
<h2>View 1: The System</h2>
<p>Let's imagine I am a computer system.
What happens to me if I am put under configuration management?
Somebody (<strong>a process</strong>) running on me changes
stuff on me, so that she (the process) is happy afterwards.</p>
<p>I recognise that some files are added, changed or deleted.
Some processes are being run, some killed.</p>
<p>That's probably it, because I don't own anything else she
can change.</p>
<h2>View 2: The Sysadmin</h2>
<p>I'm one of those guys who were told to achieve world domination,
but got bored. I'm a sysadmin. I am very, very lazy.</p>
<p>I installed a lot of systems, just for fun.
Now somebody (probably even a user!) tells me he wants
something to be changed, because he wants to actually
use the system (a pretty awkward idea, but I have heard
about such situations).</p>
<p>As my boss told me that we cannot exist without users,
I even consider doing the change, though I'm afraid:</p>
<ul>
<li>What happens if the user requests more changes?</li>
<li>What if the machine crashes?</li>
<li>What if another sysadmin needs to add changes?</li>
<li>What if two systems should look very identical, though not completely?</li>
</ul>
<p>This leads to some easy objectives:</p>
<ul>
<li>CM must be easy to read and understand, so I can understand tomorrow what I did today.</li>
<li>CM must be able to redo the work</li>
<li>CM must provide a way to have multiple committers</li>
<li>Having a way to reuse already defined stuff is helpful</li>
</ul>
<p>Oh, there's another interesting point:
to be able to communicate with a user and to understand him,
it would be very helpful if I could tell my CM <strong>this is what
the user wants</strong> instead of <strong>do x, y and z</strong>, which neither
the user understands, nor do I remember why I did it.</p>
<h2>View 3: The manager</h2>
<p>Yes, it can get even worse: there may be managers or bosses
around who pay the poor sysadmin. The sysadmin claims to do
her best job, but as a manager, I don't understand what she's
doing. Nor do I really care. I care about the users
(who could be customers as well!) and their demands.
And about how many users and how many demands my sysadmin
fulfilled. And I want fancy graphics, 24 bit coloured pie
charts in 3D and whatever comes to my crazy manager mind.
And numbers. Many numbers.</p>
<h2>View 4: Merging the views</h2>
<p>Assuming these are the players in my first round of CM
brainstorming, there are some outcomes:</p>
<ul>
<li>CM must be easy to use, so the lazy sysadmin will use it</li>
<li>CM includes ideas from users</li>
<li>The configuration management is maintained by the sysadmin</li>
<li>Your managers are happy if the CM outputs "manager readable data"</li>
</ul>
<h2>More stuff</h2>
<p>I'll add more ideas about CM here soon. If you (dis)agree with me,
just let me know so I can include your criticism in the next article.</p>
Why CentOS does not stop your init scripthttps://www.nico.schottelius.org//blog/why-centos-does-not-stop-your-init-script/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<h2>Introduction</h2>
<p>If you created a simple init script below <strong>/etc/init.d</strong>, which
gets started, but not stopped on reboot or system halt, this
article is for you.</p>
<h2>Background</h2>
<p>I assume you ensured that the <strong>chkconfig</strong> information in the
script is correct and that you ran chkconfig $name on. The output
of chkconfig should look like this:</p>
<pre><code>[root@kvm-hw-snr01 ~]# chkconfig --list | grep kvm-vms
kvm-vms 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@kvm-hw-snr01 ~]#
</code></pre>
<p>Although this looks correct, there is a small block in <strong>/etc/rc.d/rc</strong>
that prevents your init script from being called on stop:</p>
<pre><code># First, run the KILL scripts.
for i in /etc/rc$runlevel.d/K* ; do
# Check if the subsystem is already up.
subsys=${i#/etc/rc$runlevel.d/K??}
[ -f /var/lock/subsys/$subsys -o -f /var/lock/subsys/$subsys.init ] || continue
check_runlevel "$i" || continue
# Bring the subsystem down.
[ -n "$UPSTART" ] && initctl emit --quiet stopping JOB=$subsys
$i stop
[ -n "$UPSTART" ] && initctl emit --quiet stopped JOB=$subsys
done
</code></pre>
<p>So your script will only be called on stop if it creates
/var/lock/subsys/<strong>yourscriptname</strong> or /var/lock/subsys/<strong>yourscriptname</strong>.init.</p>
<h2>Solution</h2>
<p>You can include the following three lines into your script to get
your script stopped:</p>
<pre><code>broken_lock_file_for_centos=/var/lock/subsys/kvm-vms
# In the start block
touch "$broken_lock_file_for_centos"
# In the stop block
rm -f "$broken_lock_file_for_centos"
</code></pre>
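<p>Put together, a minimal init script honouring the subsys convention could look like this - a sketch for the hypothetical <strong>kvm-vms</strong> service from above, with the actual start/stop commands omitted:</p>

```shell
#!/bin/sh
# chkconfig: 2345 99 01
# description: Start and stop the kvm virtual machines

lockfile=/var/lock/subsys/kvm-vms

case "$1" in
    start)
        # ... start the VMs here ...
        # tell /etc/rc.d/rc that the subsystem is up; without this
        # file the K* script is skipped on shutdown
        touch "$lockfile"
        ;;
    stop)
        # ... stop the VMs here ...
        rm -f "$lockfile"
        ;;
    *)
        echo "Usage: $0 {start|stop}" >&2
        exit 1
        ;;
esac
```
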
How to disable the touchpad in Xorg with xinputhttps://www.nico.schottelius.org//blog/xorg-disable-touchpad-with-xinput/2016-02-25T13:34:32Z2015-02-03T14:47:26Z
<p>If you, like me, think that evdev and xinput should somehow help you
to disable the touchpad without the need of some daemon or gui tool,
you're right!</p>
<p>Having a look at the output of <strong>xinput list</strong> already looks promising:</p>
<pre><code>[18:11] kr:~% xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ Macintosh mouse button emulation id=7 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=8 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=9 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=6 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]
↳ Integrated Camera id=11 [slave keyboard (3)]
↳ Sleep Button id=12 [slave keyboard (3)]
↳ Video Bus id=13 [slave keyboard (3)]
↳ Power Button id=14 [slave keyboard (3)]
[18:11] kr:~%
</code></pre>
<p>Having a deeper look at device 9, the touchpad, with <strong>xinput list-props 9</strong>
reveals an interesting setting:</p>
<pre><code>[18:12] kr:~% xinput list-props 9
Device 'SynPS/2 Synaptics TouchPad':
Device Enabled (125): 0
Device Accel Profile (244): 0
Device Accel Constant Deceleration (245): 1.000000
Device Accel Adaptive Deceleration (247): 1.000000
Device Accel Velocity Scaling (248): 10.000000
Evdev Reopen Attempts (242): 10
Evdev Axis Inversion (249): 0, 0
Evdev Axis Calibration (250): <no items>
Evdev Axes Swap (251): 0
Axis Labels (252): "Abs X" (262), "Abs Y" (263), "Abs Pressure" (264), "Abs Tool Width" (265)
Button Labels (253): "Button Left" (126), "Button Unknown" (243), "Button Right" (128), "Button Wheel Up" (129), "Button Wheel Down" (130)
Evdev Middle Button Emulation (254): 2
Evdev Middle Button Timeout (255): 50
Evdev Wheel Emulation (256): 0
Evdev Wheel Emulation Axes (257): 0, 0, 4, 5
Evdev Wheel Emulation Inertia (258): 10
Evdev Wheel Emulation Timeout (259): 200
Evdev Wheel Emulation Button (260): 4
Evdev Drag Lock Buttons (261): 0
[18:12] kr:~%
</code></pre>
<p>If you now expect a matching <strong>xinput set-prop</strong> command, even
without reading the manpage, you are right:</p>
<pre><code>[18:12] kr:~% xinput set-prop 9 125 0
</code></pre>
<p>This disables the touchpad and can be integrated into .xinitrc very well!</p>
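<p>One caveat: the device id (9 here) and the property number (125) are not
guaranteed to be stable across reboots or X server restarts. <strong>xinput</strong>
also accepts the device and the property by name, which is more robust in
<strong>.xinitrc</strong>; the device name below is the one from my laptop and will
differ on other hardware:</p>
<pre><code>xinput set-prop "SynPS/2 Synaptics TouchPad" "Device Enabled" 0
</code></pre>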
Solution for Postfix with Postgres: SASL error: authentication failed: authentication failurehttps://www.nico.schottelius.org//blog/postfix-postgres-sasl-error-authentication-failed-authentication-failure/2016-02-25T13:34:32Z2008-10-06T22:00:00Z
<p>It took me some hours to find the origin of the error,
so I'll document some hints here for you.
I am running Postfix with PostgreSQL on Debian,
but it should apply more or less to MySQL, FreeBSD and other operating systems, too.</p>
<p>Some error messages you may have seen:</p>
<pre><code>Oct 7 18:01:38 tee postfix/smtpd[7741]: warning: SASL authentication failure: no secret in database
Oct 7 18:01:38 tee postfix/smtpd[7741]: warning: ikn.inf.ethz.ch[129.132.130.3]: SASL DIGEST-MD5 authentication failed: authentication failure
</code></pre>
<p>or on the client side postfix:</p>
<pre><code>Oct 7 17:56:39 ikn postfix/smtp[30807]: 777134113A: to=<nicosc@inf.ethz.ch>, relay=mx3.schottelius.org[77.109.138.221]:25, delay=0.08, delays=0.02/0/0.06/0, dsn=4.7.8, status=deferred (SASL authentication failed; server mx3.schottelius.org[77.109.138.221] said: 535 5.7.8 Error: authentication failed: authentication failure)
</code></pre>
<p>Some hints and at the end my final solution:</p>
<ul>
<li>make sure postfix can connect to the postgresql database</li>
<li>check one, two and three times that <strong><em>pg_hba.conf</em></strong> is right</li>
<li>try to login manually via psql</li>
<li>check postgresql logs, raise debuglevels,
add <strong><em>log_connections = on</em></strong> and <strong><em>log_statement = 'all'</em></strong></li>
<li>make sure you do not have whitespace at the end of lines in
<strong><em>/etc/postfix/sasl/smtpd.conf</em></strong> (that was my problem here, due to copy and paste!)</li>
<li><p>You are missing <strong><em>sasl/smtpd.conf</em></strong> if you get the following infamous error (i.e. cyrus-sasl found no configuration):</p>
<p> warning: SASL authentication problem: unable to open Berkeley db /etc/sasldb2: No such file or directory</p></li>
</ul>
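<p>The trailing-whitespace problem is easy to miss by eye; a quick grep makes
it visible. A small self-contained sketch (the sample file here is made up;
on a real system, point the grep at <strong>/etc/postfix/sasl/smtpd.conf</strong>):</p>
<pre><code># create a sample config with a trailing space on the first line
conf=$(mktemp)
printf 'pwcheck_method: auxprop \nsql_engine: pgsql\n' > "$conf"
# -n prints the offending line numbers; [[:space:]]$ catches trailing
# spaces and tabs
grep -nE '[[:space:]]$' "$conf"
rm -f "$conf"
</code></pre>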
<p>For reference, here are the important parts of my working configuration:</p>
<p><strong><em>main.cf:</em></strong></p>
<pre><code>smtp_use_tls = yes
smtp_tls_note_starttls_offer = yes
smtpd_use_tls=yes
smtpd_tls_cert_file=/etc/ssl/mx3.schottelius.org.crt
smtpd_tls_key_file=/etc/ssl/mx3.schottelius.org.key
smtpd_sasl_auth_enable = yes
smtpd_sasl2_auth_enable = yes
smtpd_sasl_security_options = noanonymous
broken_sasl_auth_clients = no
smtpd_tls_auth_only = yes
#smtpd_sasl_path = smtpd # not needed
smtpd_client_restrictions = permit_mynetworks
        permit_sasl_authenticated
</code></pre>
<p><strong><em>sasl/smtpd.conf:</em></strong></p>
<pre><code>pwcheck_method: auxprop
auxprop_plugin: sql
mech_list: plain login cram-md5 digest-md5
sql_engine: pgsql
sql_hostnames: 127.0.0.1
sql_user: postfix
sql_passwd: thepassword
sql_database: mail
sql_select: select password from mailboxes where name='%u' and domain='%r' and smtp_enabled=1
</code></pre>