Planet ALUG

June 19, 2021

Chris Lamb

Raiders of the Lost Ark: 40 Years On

"Again, we see there is nothing you can possess which I cannot take away."

The cinema was a rare and expensive treat in my youth, so I first came across Raiders of the Lost Ark by recording it from television onto a poor-quality VHS. I only mention this as it meant I watched a slightly different film to the one intended, as my copy somehow missed off the first 10 minutes. For those not as intimately familiar with the film as me, this is just in time to see Belloq demand that Dr. Jones hand over the Peruvian head, just in time to learn that Indy loathes snakes, and just in time to see the inadvertent reproduction of two Europeans squabbling over the spoils of a foreign land.

What this truncation did to my interpretation of the film (released forty years ago today on June 19th 1981) is interesting to explore. Without Jones' physical and moral traits being demonstrated on-screen (as well as missing the weighing of the gold head and the rollercoaster boulder scene), it actually made the idea of 'Indiana Jones' even more of a mythical archetype. The film wisely withholds Jones' backstory, but my director's cut deprived him of even more, and counterintuitively imbued him with an even more legendary hue, as the elision made his qualities an assumption beyond question. Indiana Jones, if you can excuse the cliché, needed no introduction at all.

§

Good artists copy, great artists steal. And oh boy, does Raiders steal. I've watched this film about twenty times over the past two decades and it has now firmly entered my personal canon. But watching it on its fortieth anniversary was different, not least because I could situate it in a broader cinematic context. For example, I now see Major Strasser from Casablanca (1942) in the film's Gestapo officer, just as I can place many of Raiders' other orientalist tendencies: not only its breezy depictions of backwards sand people, but also its North Africa as an entrepôt and playground for a certain kind of Western gangster. The opening, too, set in an equally reductionist pseudo-Peru, now feels like Werner Herzog's Aguirre, the Wrath of God (1972) — but without, of course, any self-conscious colonial critique.

The imagery of the ark appears to be borrowed from James Tissot's The Ark Passes Over the Jordan, part of the fin de siècle fascination with the occult and (ironically enough, given the background of Raiders' director) a French Catholic revival.

I can now also appreciate some of the finer edges that make this film just so much damn fun to watch. For instance, the comic-book conceit that Jones and Belloq are a 'shadowy reflection' of each other and that they need 'only a nudge' to make one like the other. So too is the idea that Belloq seems to be actually enjoying being evil. I also spotted Jones rejecting the martini on the plane. This feels less like a comment on the corrupting effect of alcohol (he drinks rather heavily elsewhere in the film) and more like a subtle distancing from James Bond. This feels especially important given that the action-packed cold open is, let us be honest for a second, ripped straight from the 007 franchise.

John Williams' soundtracks are always worth mentioning. The corny Raiders March does almost nothing for me, but the highly underrated 'Ark theme' certainly does. I delight in its allusions to Gregorian chant, the diabolus in musica and the Hungarian minor scale, fusing the Christian doctrine of the Holy Trinity (the stacked thirds, get it?) and the ars antiqua of the Middle Ages with an 'exotic' twist that the Russian Five associated with central European Judaism.

The best use of the ark leitmotif is, of course, when it is opened. Here, Indy and Marion are saved by not opening their eyes whilst the 'High Priest' Belloq and the rest of the Nazis are all melted away. I'm no Biblical scholar, but I'm almost certain they were alluding to Leviticus 16:2 here:

The Lord said to Moses: “Tell your brother Aaron that he is not to come whenever he chooses into the Most Holy Place behind the curtain in front of the atonement cover on the ark, or else he will die, for I will appear in the cloud above the mercy seat.”

But would it be too much of a stretch to see the myth of Orpheus and Eurydice here too? Orpheus's wife would only be saved from the underworld if he did not turn around until he came to his own house; but he turned round to look at his wife, and she instantly slipped back into the depths:

For he who overcome should turn back his gaze
Towards the Tartarean cave,
Whatever excellence he takes with him
He loses when he looks on those below.

Perhaps not, given that Marion and the ark are not lost in quite the same way. But whilst touching on gender, it was interesting to update my view of archaeologist René Belloq. To counteract his slight queer coding (a trope of Disney villains such as Scar, Jafar, Cruella, etc.), there is a rather clumsy subplot involving Belloq repeatedly (and half-heartedly) failing to seduce Marion. This heads off any suggestion that Belloq isn't firmly heterosexual, essential for the film's mainstream audience, but it is especially important in Raiders if we recall the relationship between Belloq and Jones: 'it would take only a nudge to make you like me'. (This would definitely put a new slant on 'Top men'.)

However, my favourite moment is when the Nazis place the ark in a crate in order to transport it to the deserted island. En route, the swastikas on the side of the crate spontaneously burn away, and a disturbing noise is heard in the background. This short scene has always fascinated me, partly because it's the first time in the film that the power of the ark is demonstrated first-hand, but also because it gives the object an other-worldly nature that, to the best of my knowledge, has no parallel in the rest of cinema.

Still, I had always assumed that the ark disfigured the swastikas because of their association with the Nazis, interpreting the act as God's condemnation of the Third Reich. But now I catch myself wondering whether the ark would have disfigured any iconography as a matter of principle, or whether the treatment was specific to the swastika. We later get a partial answer to this question, as the 'US Army' inscriptions in the Citizen Kane warehouse remain untouched.

Far from being an insignificant concern, the filmmakers appear to have wandered into a highly contested theological debate. After all, if the burning of the swastika is God's moral judgement of the Nazi regime, then God is clearly both willing and able to intervene in human affairs. So why did he not, to put it mildly, prevent Auschwitz? From this perspective, Spielberg appears to be limbering up for some of the academic critiques surrounding Holocaust representations that would follow Schindler's List (1993).

§

Given my nostalgic and somewhat ironic attachment to Raiders, it will always be difficult for me to appraise the film objectively. Even so, it feels underpinned by an earnest attempt to entertain the viewer, one largely absent in the affected cynicism of contemporary cinema. And when considered against the totality of Hollywood's output, its tonal and technical flaws are not actually that bad — Marion's muddled characterisation and the film's breezy chauvinism, for example, clearly have far worse counterparts elsewhere.

Perhaps the most remarkable thing about the film in 2021 is that it hasn't changed that much at all. It spawned one good sequel (The Last Crusade), one bad one (The Temple of Doom), and one hardly worth mentioning at all, yet these adventures haven't affected the original Raiders in any meaningful way. In fact, if anything has affected the original text it is, once again, George Lucas himself, as knowledge of the impending backlash around the Star Wars prequels adds an inadvertent paratext to all his earlier works.

Yet in a 1978 discussion prior to the creation of Raiders, you can get a keen sense of how Lucas' childlike enthusiasm will always result in something either extremely good or extremely bad — somehow no middle ground is quite possible. Yes, it's easy to rubbish his initial ideas ('We'll call him Indiana Smith!'), but hasn't Lucas actually captured the essence of a heroic 'Americana' here, the final result differing only in degree, not in kind?

June 19, 2021 05:01 PM

June 03, 2021

Jonathan McDowell

Digging into Kubernetes containers

Having built a single node Kubernetes cluster and had a poke at what it’s doing in terms of networking, the next thing I want to do is figure out what it’s doing in terms of containers. You might argue this should have come before networking, but to me the networking piece is more non-standard than the container piece, so I wanted to understand that first.

Let’s start with a process listing on the host.

ps faxno user,stat,cmd

There are a number of processes from the host kernel we don’t care about:

kernel processes
    USER STAT CMD
       0 S    [kthreadd]
       0 I<    \_ [rcu_gp]
       0 I<    \_ [rcu_par_gp]
       0 I<    \_ [kworker/0:0H-events_highpri]
       0 I<    \_ [mm_percpu_wq]
       0 S     \_ [rcu_tasks_rude_]
       0 S     \_ [rcu_tasks_trace]
       0 S     \_ [ksoftirqd/0]
       0 I     \_ [rcu_sched]
       0 S     \_ [migration/0]
       0 S     \_ [cpuhp/0]
       0 S     \_ [cpuhp/1]
       0 S     \_ [migration/1]
       0 S     \_ [ksoftirqd/1]
       0 I<    \_ [kworker/1:0H-kblockd]
       0 S     \_ [cpuhp/2]
       0 S     \_ [migration/2]
       0 S     \_ [ksoftirqd/2]
       0 I<    \_ [kworker/2:0H-events_highpri]
       0 S     \_ [cpuhp/3]
       0 S     \_ [migration/3]
       0 S     \_ [ksoftirqd/3]
       0 I<    \_ [kworker/3:0H-kblockd]
       0 S     \_ [kdevtmpfs]
       0 I<    \_ [netns]
       0 S     \_ [kauditd]
       0 S     \_ [khungtaskd]
       0 S     \_ [oom_reaper]
       0 I<    \_ [writeback]
       0 S     \_ [kcompactd0]
       0 SN    \_ [ksmd]
       0 SN    \_ [khugepaged]
       0 I<    \_ [kintegrityd]
       0 I<    \_ [kblockd]
       0 I<    \_ [blkcg_punt_bio]
       0 I<    \_ [edac-poller]
       0 I<    \_ [devfreq_wq]
       0 I<    \_ [kworker/0:1H-kblockd]
       0 S     \_ [kswapd0]
       0 I<    \_ [kthrotld]
       0 I<    \_ [acpi_thermal_pm]
       0 I<    \_ [ipv6_addrconf]
       0 I<    \_ [kstrp]
       0 I<    \_ [zswap-shrink]
       0 I<    \_ [kworker/u9:0-hci0]
       0 I<    \_ [kworker/2:1H-kblockd]
       0 I<    \_ [ata_sff]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/39-mmc0]
       0 I<    \_ [sdhci]
       0 S     \_ [irq/42-mmc1]
       0 S     \_ [scsi_eh_0]
       0 I<    \_ [scsi_tmf_0]
       0 S     \_ [scsi_eh_1]
       0 I<    \_ [scsi_tmf_1]
       0 I<    \_ [kworker/1:1H-kblockd]
       0 I<    \_ [kworker/3:1H-kblockd]
       0 S     \_ [jbd2/sda5-8]
       0 I<    \_ [ext4-rsv-conver]
       0 S     \_ [watchdogd]
       0 S     \_ [scsi_eh_2]
       0 I<    \_ [scsi_tmf_2]
       0 S     \_ [usb-storage]
       0 I<    \_ [cfg80211]
       0 S     \_ [irq/130-mei_me]
       0 I<    \_ [cryptd]
       0 I<    \_ [uas]
       0 S     \_ [irq/131-iwlwifi]
       0 S     \_ [card0-crtc0]
       0 S     \_ [card0-crtc1]
       0 S     \_ [card0-crtc2]
       0 I<    \_ [kworker/u9:2-hci0]
       0 I     \_ [kworker/3:0-events]
       0 I     \_ [kworker/2:0-events]
       0 I     \_ [kworker/1:0-events_power_efficient]
       0 I     \_ [kworker/3:2-events]
       0 I     \_ [kworker/1:1]
       0 I     \_ [kworker/u8:1-events_unbound]
       0 I     \_ [kworker/0:2-events]
       0 I     \_ [kworker/2:2]
       0 I     \_ [kworker/u8:0-events_unbound]
       0 I     \_ [kworker/0:1-events]
       0 I     \_ [kworker/0:0-events]

There are various basic host processes, including my SSH connections and Docker. I note it’s using containerd. We also see kubelet, the Kubernetes node agent.

host processes
    USER STAT CMD
       0 Ss   /sbin/init
       0 Ss   /lib/systemd/systemd-journald
       0 Ss   /lib/systemd/systemd-udevd
     101 Ssl  /lib/systemd/systemd-timesyncd
       0 Ssl  /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf /var/lib/dhcp/dhclient.enx00e04c6851de.leases -I -df /var/lib/dhcp/dhclient6.enx00e04c6851de.leases enx00e04c6851de
       0 Ss   /usr/sbin/cron -f
     104 Ss   /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
       0 Ssl  /usr/sbin/dockerd -H fd://
       0 Ssl  /usr/sbin/rsyslogd -n -iNONE
       0 Ss   /usr/sbin/smartd -n
       0 Ss   /lib/systemd/systemd-logind
       0 Ssl  /usr/bin/containerd
       0 Ss+  /sbin/agetty -o -p -- \u --noclear tty1 linux
       0 Ss   sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
       0 Ss    \_ sshd: root@pts/1
       0 Ss    |   \_ -bash
       0 R+    |       \_ ps faxno user,stat,cmd
       0 Ss    \_ sshd: noodles [priv]
    1000 S         \_ sshd: noodles@pts/0
    1000 Ss+           \_ -bash
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
    1000 Ss   /lib/systemd/systemd --user
    1000 S     \_ (sd-pam)
       0 Ssl  /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.4.1

And that just leaves a bunch of container related processes:

container processes
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62 -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --port=0 --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc -address /run/containerd/containerd.sock
       0 Ssl   \_ kube-apiserver --advertise-address=192.168.53.147 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456 -address /run/containerd/containerd.sock
       0 Ssl   \_ etcd --advertise-client-urls=https://192.168.53.147:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.53.147:2380 --initial-cluster=udon=https://192.168.53.147:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.53.147:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.53.147:2380 --name=udon --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=udon
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4 -address /run/containerd/containerd.sock
       0 Ssl   \_ /usr/bin/weave-npc
       0 S<        \_ /usr/sbin/ulogd -v
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/sh /home/weave/launch.sh
       0 Sl        \_ /home/weave/weaver --port=6783 --datapath=datapath --name=12:82:8f:ed:c7:bf --http-addr=127.0.0.1:6784 --metrics-addr=0.0.0.0:6782 --docker-api= --no-dns --db-prefix=/weavedb/weave-net --ipalloc-range=192.168.0.0/24 --nickname=udon --ipalloc-init consensus=0 --conn-limit=200 --expect-npc --no-masq-local
       0 Sl        \_ /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -peer-name=12:82:8f:ed:c7:bf -log-level=debug
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c -address /run/containerd/containerd.sock
       0 Ssl   \_ /coredns -conf /etc/coredns/Corefile
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30 -address /run/containerd/containerd.sock
       0 Ss    \_ /bin/bash /usr/local/bin/run.sh
       0 S         \_ nginx: master process nginx -g daemon off;
   65534 S             \_ nginx: worker process
       0 Ss   /lib/systemd/systemd --user
       0 S     \_ (sd-pam)
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082 -address /run/containerd/containerd.sock
       0 Ss    \_ /pause
       0 Sl   /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074 -address /run/containerd/containerd.sock
     101 Ss    \_ /usr/bin/dumb-init -- /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 Ssl       \_ /nginx-ingress-controller --publish-service=ingress-nginx/ingress-nginx-controller --election-id=ingress-controller-leader --ingress-class=nginx --configmap=ingress-nginx/ingress-nginx-controller --validating-webhook=:8443 --validating-webhook-certificate=/usr/local/certificates/cert --validating-webhook-key=/usr/local/certificates/key
     101 S             \_ nginx: master process /usr/local/nginx/sbin/nginx -c /etc/nginx/nginx.conf
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 Sl                \_ nginx: worker process
     101 S                 \_ nginx: cache manager process

There’s a lot going on there. Some bits are obvious: we can see the nginx ingress controller, our echoserver (the other nginx process hanging off /usr/local/bin/run.sh), and some things that look related to weave. The rest appears to be Kubernetes-related infrastructure.

kube-scheduler, kube-controller-manager, kube-apiserver, kube-proxy all look like core Kubernetes bits. etcd is a distributed, reliable key-value store. coredns is a DNS server, with plugins for Kubernetes and etcd.
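As a quick sanity check that those pieces are healthy, here is a sketch (the etcd endpoint comes from the --listen-metrics-urls flag visible in the process listing above):

# ask the apiserver for its aggregate health over the authenticated API
kubectl get --raw /healthz
# etcd serves a plaintext health endpoint on its local metrics listener
curl -s http://127.0.0.1:2381/health

The first asks the apiserver for its overall health; the second hits etcd’s local health endpoint directly.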

What does Docker claim is happening?

docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED      STATUS      PORTS     NAMES
d5fa78fa31f1   k8s.gcr.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   3 days ago   Up 3 days             k8s_controller_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
6669168db70d   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_ingress-nginx-controller-5b74bc9868-bczdr_ingress-nginx_4d7d3d81-a769-4de9-a4fb-04763b7c1605_0
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run.…"   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
62b369de8d8c   k8s.gcr.io/pause:3.4.1                "/pause"                 3 days ago   Up 3 days             k8s_POD_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
649a507d4583   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
4a30785f9187   296a6d5035e2                          "/coredns -conf /etc…"   4 days ago   Up 4 days             k8s_coredns_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
9ffd6b668ddf   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-flrfq_kube-system_f8b2b52e-6673-4966-82b1-3fbe052a0297_0
534c0a698478   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_coredns-558bd4d5db-4nvrg_kube-system_1976f4d6-647c-45ca-b268-95f071f064d5_0
36b418e69ae7   df29c0a4002c                          "/home/weave/launch.…"   4 days ago   Up 4 days             k8s_weave_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_1
48d735f7f44e   weaveworks/weave-npc                  "/usr/bin/launch.sh"     4 days ago   Up 4 days             k8s_weave-npc_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
7104f65b5d92   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_weave-net-mchmg_kube-system_b9af9615-8cde-4a18-8555-6da1f51b7136_0
26d92a720c56   4359e752b596                          "/usr/local/bin/kube…"   4 days ago   Up 4 days             k8s_kube-proxy_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
73fae81715b6   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-proxy-6d8kg_kube-system_8bf2d7ec-4850-427f-860f-465a9ff84841_0
89f35bf7a825   771ffcf9ca63                          "kube-apiserver --ad…"   4 days ago   Up 4 days             k8s_kube-apiserver_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
afa9798c9f66   a4183b88f6e6                          "kube-scheduler --au…"   4 days ago   Up 4 days             k8s_kube-scheduler_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
2dabff6e4f59   0369cf4303ff                          "etcd --advertise-cl…"   4 days ago   Up 4 days             k8s_etcd_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0
4b3708b62f4d   e16544fd47b0                          "kube-controller-man…"   4 days ago   Up 4 days             k8s_kube-controller-manager_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
fd95c597ff31   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-scheduler-udon_kube-system_629dc49dfd9f7446eb681f1dcffe6d74_0
589c1545d9e0   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-controller-manager-udon_kube-system_1d1b9018c3c6e7aa2e803c6e9ccd2eab_0
6f417fd8a8c5   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_kube-apiserver-udon_kube-system_1af8c5f362b7b02269f4d244cb0e6fbf_0
c2ff2c50f0bc   k8s.gcr.io/pause:3.4.1                "/pause"                 4 days ago   Up 4 days             k8s_POD_etcd-udon_kube-system_c2a3008c1d9895f171cd394e38656ea0_0

Ok, that’s interesting. Before we dig into it, what does Kubernetes say? (I’ve trimmed the RESTARTS + AGE columns to make things fit a bit better here; they weren’t interesting).

noodles@udon:~$ kubectl get pods --all-namespaces
NAMESPACE       NAME                                        READY   STATUS
default         hello-node-59bffcc9fd-8hkgb                 1/1     Running
ingress-nginx   ingress-nginx-admission-create-8jgkt        0/1     Completed
ingress-nginx   ingress-nginx-admission-patch-jdq4t         0/1     Completed
ingress-nginx   ingress-nginx-controller-5b74bc9868-bczdr   1/1     Running
kube-system     coredns-558bd4d5db-4nvrg                    1/1     Running
kube-system     coredns-558bd4d5db-flrfq                    1/1     Running
kube-system     etcd-udon                                   1/1     Running
kube-system     kube-apiserver-udon                         1/1     Running
kube-system     kube-controller-manager-udon                1/1     Running
kube-system     kube-proxy-6d8kg                            1/1     Running
kube-system     kube-scheduler-udon                         1/1     Running
kube-system     weave-net-mchmg                             2/2     Running

So there are a lot more Docker instances running than Kubernetes pods. What’s happening there? Well, it turns out that Kubernetes builds pods from multiple different Docker instances. If you think of a traditional container as being comprised of a set of namespaces (process, network, hostname etc.) and a cgroup, then a pod is made up of those namespaces, with each Docker instance within the pod getting its own cgroup. Ian Lewis has a much deeper discussion in What are Kubernetes Pods Anyway?, but my takeaway is that a pod is a set of sort-of containers that are coupled.
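One way to see that coupling directly is via Docker itself. As a sketch (the container ID comes from the docker ps output above, and the expected output is an assumption about how the dockershim wires pods together), the echoserver container should report that it joins its pod’s network namespace rather than having one of its own:

# NetworkMode of the echoserver container
docker inspect -f '{{.HostConfig.NetworkMode}}' 7cbb177bee18

If so, that prints something like container:62b369de8d8c…, pointing at the pod’s pause container. We can see this more clearly if we ask systemd for the cgroup breakdown: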

systemd-cgls
Control group /:
-.slice
├─user.slice 
│ ├─user-0.slice 
│ │ ├─session-29.scope 
│ │ │ ├─ 515899 sshd: root@pts/1
│ │ │ ├─ 515913 -bash
│ │ │ ├─3519743 systemd-cgls
│ │ │ └─3519744 cat
│ │ └─user@0.service …
│ │   └─init.scope 
│ │     ├─515902 /lib/systemd/systemd --user
│ │     └─515903 (sd-pam)
│ └─user-1000.slice 
│   ├─user@1000.service …
│   │ └─init.scope 
│   │   ├─2564011 /lib/systemd/systemd --user
│   │   └─2564012 (sd-pam)
│   └─session-110.scope 
│     ├─2564007 sshd: noodles [priv]
│     ├─2564040 sshd: noodles@pts/0
│     └─2564041 -bash
├─init.scope 
│ └─1 /sbin/init
├─system.slice 
│ ├─containerd.service …
│ │ ├─  21383 /usr/bin/containerd-shim-runc-v2 -namespace moby -id fd95c597ff31…
│ │ ├─  21408 /usr/bin/containerd-shim-runc-v2 -namespace moby -id c2ff2c50f0bc…
│ │ ├─  21432 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 589c1545d9e0…
│ │ ├─  21459 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6f417fd8a8c5…
│ │ ├─  21582 /usr/bin/containerd-shim-runc-v2 -namespace moby -id afa9798c9f66…
│ │ ├─  21607 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4b3708b62f4d…
│ │ ├─  21640 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 89f35bf7a825…
│ │ ├─  21648 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 2dabff6e4f59…
│ │ ├─  22343 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 73fae81715b6…
│ │ ├─  22391 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 26d92a720c56…
│ │ ├─  26992 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7104f65b5d92…
│ │ ├─  27405 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 48d735f7f44e…
│ │ ├─  27531 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 36b418e69ae7…
│ │ ├─  27941 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 534c0a698478…
│ │ ├─  27960 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 9ffd6b668ddf…
│ │ ├─  28131 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 4a30785f9187…
│ │ ├─  28159 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 649a507d4583…
│ │ ├─ 514667 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 62b369de8d8c…
│ │ ├─ 514976 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 7cbb177bee18…
│ │ ├─ 698904 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6669168db70d…
│ │ ├─ 699284 /usr/bin/containerd-shim-runc-v2 -namespace moby -id d5fa78fa31f1…
│ │ └─2805479 /usr/bin/containerd
│ ├─systemd-udevd.service 
│ │ └─2805502 /lib/systemd/systemd-udevd
│ ├─cron.service 
│ │ └─2805474 /usr/sbin/cron -f
│ ├─docker.service …
│ │ └─528 /usr/sbin/dockerd -H fd://
│ ├─kubelet.service 
│ │ └─2805501 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap…
│ ├─systemd-journald.service 
│ │ └─2805505 /lib/systemd/systemd-journald
│ ├─ssh.service 
│ │ └─2805500 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
│ ├─ifup@enx00e04c6851de.service 
│ │ └─2805675 /sbin/dhclient -4 -v -i -pf /run/dhclient.enx00e04c6851de.pid -lf…
│ ├─rsyslog.service 
│ │ └─2805488 /usr/sbin/rsyslogd -n -iNONE
│ ├─smartmontools.service 
│ │ └─2805499 /usr/sbin/smartd -n
│ ├─dbus.service 
│ │ └─527 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile…
│ ├─systemd-timesyncd.service 
│ │ └─2805513 /lib/systemd/systemd-timesyncd
│ ├─system-getty.slice 
│ │ └─getty@tty1.service 
│ │   └─536 /sbin/agetty -o -p -- \u --noclear tty1 linux
│ └─systemd-logind.service 
│   └─533 /lib/systemd/systemd-logind
└─kubepods.slice 
  ├─kubepods-burstable.slice 
  │ ├─kubepods-burstable-pod1af8c5f362b7b02269f4d244cb0e6fbf.slice 
  │ │ ├─docker-6f417fd8a8c573a2b8f792af08cdcd7ce663457f0f7218c8d55afa3732e6ee94.scope …
  │ │ │ └─21493 /pause
  │ │ └─docker-89f35bf7a825eb97db7035d29aa475a3a1c8aaccda0860a46388a3a923cd10bc.scope …
  │ │   └─21699 kube-apiserver --advertise-address=192.168.53.147 --allow-privi…
  │ ├─kubepods-burstable-podf8b2b52e_6673_4966_82b1_3fbe052a0297.slice 
  │ │ ├─docker-649a507d45831aca1de5231b49afc8ff37d90add813e7ecd451d12eedd785b0c.scope …
  │ │ │ └─28187 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-9ffd6b668ddfbf3c64c6783bc6f4f6cc9e92bfb16c83fb214c2cbb4044993bf0.scope …
  │ │   └─27987 /pause
  │ ├─kubepods-burstable-podc2a3008c1d9895f171cd394e38656ea0.slice 
  │ │ ├─docker-c2ff2c50f0bc052feda2281741c4f37df7905e3b819294ec645148ae13c3fe1b.scope …
  │ │ │ └─21481 /pause
  │ │ └─docker-2dabff6e4f59c96d931d95781d28314065b46d0e6f07f8c65dc52aa465f69456.scope …
  │ │   └─21701 etcd --advertise-client-urls=https://192.168.53.147:2379 --cert…
  │ ├─kubepods-burstable-pod629dc49dfd9f7446eb681f1dcffe6d74.slice 
  │ │ ├─docker-fd95c597ff3171ff110b7bf440229e76c5108d5d93be75ffeab54869df734413.scope …
  │ │ │ └─21491 /pause
  │ │ └─docker-afa9798c9f663b21df8f38d9634469e6b4db0984124547cd472a7789c61ef752.scope …
  │ │   └─21680 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/sche…
  │ ├─kubepods-burstable-podb9af9615_8cde_4a18_8555_6da1f51b7136.slice 
  │ │ ├─docker-48d735f7f44e3944851563f03f32c60811f81409e7378641404035dffd8c1eb4.scope …
  │ │ │ ├─27424 /usr/bin/weave-npc
  │ │ │ └─27458 /usr/sbin/ulogd -v
  │ │ ├─docker-36b418e69ae7076fe5a44d16cef223d8908016474cb65910f2fd54cca470566b.scope …
  │ │ │ ├─27549 /bin/sh /home/weave/launch.sh
  │ │ │ ├─27629 /home/weave/weaver --port=6783 --datapath=datapath --name=12:82…
  │ │ │ └─27825 /home/weave/kube-utils -run-reclaim-daemon -node-name=udon -pee…
  │ │ └─docker-7104f65b5d92a56a2df93514ed0a78cfd1090ca47b6ce4e0badc43be6c6c538e.scope …
  │ │   └─27011 /pause
  │ ├─kubepods-burstable-pod4d7d3d81_a769_4de9_a4fb_04763b7c1605.slice 
  │ │ ├─docker-6669168db70db4e6c741e8a047942af06dd745fae4d594291d1d6e1077b05082.scope …
  │ │ │ └─698925 /pause
  │ │ └─docker-d5fa78fa31f11a4c5fb9fd2e853a00f0e60e414a7bce2e0d8fcd1f6ab2b30074.scope …
  │ │   ├─ 699303 /usr/bin/dumb-init -- /nginx-ingress-controller --publish-ser…
  │ │   ├─ 699316 /nginx-ingress-controller --publish-service=ingress-nginx/ing…
  │ │   ├─ 699405 nginx: master process /usr/local/nginx/sbin/nginx -c /etc/ngi…
  │ │   ├─1075085 nginx: worker process
  │ │   ├─1075086 nginx: worker process
  │ │   ├─1075087 nginx: worker process
  │ │   ├─1075088 nginx: worker process
  │ │   └─1075089 nginx: cache manager process
  │ ├─kubepods-burstable-pod1976f4d6_647c_45ca_b268_95f071f064d5.slice 
  │ │ ├─docker-4a30785f91873a7e6a191e86928a789760a054e4fa6dcd7048a059b42cf19edf.scope …
  │ │ │ └─28178 /coredns -conf /etc/coredns/Corefile
  │ │ └─docker-534c0a698478599277482d97a137fab8ef4d62db8a8a5cf011b4bead28246f70.scope …
  │ │   └─27995 /pause
  │ └─kubepods-burstable-pod1d1b9018c3c6e7aa2e803c6e9ccd2eab.slice 
  │   ├─docker-589c1545d9e0cdf8ea391745c54c8f4db49f5f437b1a2e448e7744b2c12f8856.scope …
  │   │ └─21489 /pause
  │   └─docker-4b3708b62f4d427690f5979848c59fce522dab6c62a9c53b806ffbaef3f88e62.scope …
  │     └─21690 kube-controller-manager --authentication-kubeconfig=/etc/kubern…
  └─kubepods-besteffort.slice 
    ├─kubepods-besteffort-podc7111c9e_7131_40e0_876d_be89d5ca1812.slice 
    │ ├─docker-62b369de8d8cece4d33ec9fda4d23a9718379a8df8b30173d68f20bff830fed2.scope …
    │ │ └─514688 /pause
    │ └─docker-7cbb177bee18dbdeed21fb90e74378e2081436ad5bf116b36ad5077fe382df30.scope …
    │   ├─514999 /bin/bash /usr/local/bin/run.sh
    │   ├─515039 nginx: master process nginx -g daemon off;
    │   └─515040 nginx: worker process
    └─kubepods-besteffort-pod8bf2d7ec_4850_427f_860f_465a9ff84841.slice 
      ├─docker-73fae81715b670255b66419a7959798b287be7bbb41e96f8b711fa529aa02f0d.scope …
      │ └─22364 /pause
      └─docker-26d92a720c560caaa5f8a0217bc98e486b1c032af6c7c5d75df508021d462878.scope …
        └─22412 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.c…

Again, there’s a lot going on here, but if you look for the kubepods.slice piece then you can see our pods are divided into two sets, kubepods-burstable.slice and kubepods-besteffort.slice. Under those you can see the individual pods, all of which have at least two separate cgroups, one of which is running /pause. It turns out this is a generic Kubernetes image which basically performs the process reaping that an init process would do on a normal system; it just sits and waits for processes to exit and cleans them up. Again, Ian Lewis has more details on the pause container.
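The shared-namespace/separate-cgroup split can also be checked straight from /proc. A sketch, using the pause and echoserver PIDs from the listing above (substitute your own):

# same net namespace inode => shared network namespace
readlink /proc/514688/ns/net /proc/514999/ns/net
# but each process sits in its own docker-<id>.scope cgroup
head -1 /proc/514688/cgroup /proc/514999/cgroup

The two ns/net symlinks should resolve to the same net:[…] inode, confirming the shared network namespace, while the cgroup entries should name two different docker-….scope units.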

Finally let’s dig into the actual containers. The pause container seems like a good place to start. We can examine the details of where the filesystem is (this may differ if you’re not using the overlay2 storage driver). The hex string is the container ID listed by docker ps.

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 6669168db70d
/var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# cd /var/lib/docker/overlay2/5a2d76012476349e6b58eb6a279bac400968cefae8537082ea873b2e791ff3c6/merged
# find . | sed -e 's;^./;;'
pause
proc
.dockerenv
etc
etc/resolv.conf
etc/hostname
etc/mtab
etc/hosts
sys
dev
dev/shm
dev/pts
dev/console
# file pause
pause: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 3.2.0, BuildID[sha1]=d35dab7152881e37373d819f6864cd43c0124a65, stripped

This is a nice, minimal container. The pause binary is statically linked, so there are no extra libraries required, and it’s just a basic set of support devices and files. I doubt the pieces in /etc are even required.
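The overlay mount also separates the read-only image layers from the container’s writable layer, so as a sketch (same inspect syntax as above, just a different field) we’d expect the pause container’s writable layer to be essentially empty:

# list whatever the container has written since it started
find "$(docker inspect --format='{{.GraphDriver.Data.UpperDir}}' 6669168db70d)"

Let’s try the echoserver next: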

# docker inspect --format='{{.GraphDriver.Data.MergedDir}}' 7cbb177bee18
/var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# cd /var/lib/docker/overlay2/09042bc1aff16a9cba43f1a6a68f7786c4748e989a60833ec7417837c4bfaacb/merged
# find . | wc -l
3358

Wow. That’s a lot more stuff. Poking /etc/os-release shows why:

# grep PRETTY etc/os-release
PRETTY_NAME="Ubuntu 16.04.2 LTS"

Aha. It’s an Ubuntu-based image. We can cut straight to the chase with the nginx ingress container:

# docker exec d5fa78fa31f1 grep PRETTY /etc/os-release
PRETTY_NAME="Alpine Linux v3.13"

That’s a bit more reasonable an image for a container; Alpine Linux is a much smaller distro.
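A rough size comparison backs that up; as a sketch (the image names are the ones shown by docker ps above):

docker image ls k8s.gcr.io/pause
docker image ls k8s.gcr.io/echoserver
docker image ls k8s.gcr.io/ingress-nginx/controller

pause should weigh in at well under a megabyte, with the Ubuntu-based echoserver orders of magnitude larger.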

I don’t feel there’s a lot more poking to do here. It’s not something I’d expect to do on a normal Kubernetes setup, but I wanted to dig under the hood to make sure it really was just a normal container situation. I think the next steps involve adding a bit more complexity - that means building a cluster with more than a single node, and then running an application that’s a bit more complicated. That should help explore two major advantages of running this sort of setup; resilience to a node dying, and the ability to scale out beyond what a single node can do.
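Even on this single node, the scaling primitive itself is easy to poke at. A sketch (the deployment name comes from the hello-node pod seen earlier; the app=hello-node label is an assumption based on kubectl create deployment’s defaults):

kubectl scale deployment hello-node --replicas=3
kubectl get pods -l app=hello-node

Real resilience testing, of course, will need that second node.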

June 03, 2021 08:20 PM

May 31, 2021

Chris Lamb

Free software activities in May 2021

Here's my monthly update covering what I have been doing in the free software world for May 2021 (previous month):

§

Reproducible Builds

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during the compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I also made the following changes to diffoscope, including preparing and uploading versions 174, 175 and 176 to Debian:

§

Debian

Finally, I also made a sponsored upload of adminer (4.7.9-2) for Alexandre Rossi.

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the project via the following video:

May 31, 2021 04:32 PM

May 28, 2021

Jonathan McDowell

Trying to understand Kubernetes networking

I previously built a single node Kubernetes cluster as a test environment to learn more about it. The first thing I want to try to understand is its networking. In particular, the IP addresses that are listed are all 10.* while my host’s network is 192.168.53.0/24. I understand each pod gets its own virtual ethernet interface and associated IP address, and that these are generally private within the cluster (and firewalled off other than for exposed services). What does that actually look like?

$ ip route
default via 192.168.53.1 dev enx00e04c6851de
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev weave proto kernel scope link src 192.168.0.1
192.168.53.0/24 dev enx00e04c6851de proto kernel scope link src 192.168.53.147

Huh. No sign of any way to get to 10.107.66.138 (the IP on which my echoserver from the previous post is available directly from the host). What about network interfaces? (under the cut because it’s lengthy)

ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enx00e04c6851de: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:e0:4c:68:51:de brd ff:ff:ff:ff:ff:ff
    inet 192.168.53.147/24 brd 192.168.53.255 scope global dynamic enx00e04c6851de
       valid_lft 41571sec preferred_lft 41571sec
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 74:d8:3e:70:3b:18 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:18:04:9e:08 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether d2:5a:fd:c1:56:23 brd ff:ff:ff:ff:ff:ff
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
    link/ether 12:82:8f:ed:c7:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global weave
       valid_lft forever preferred_lft forever
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
    link/ether b6:49:88:d6:6d:84 brd ff:ff:ff:ff:ff:ff
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 6e:6c:03:1d:e5:0e brd ff:ff:ff:ff:ff:ff
11: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
    link/ether 9a:af:c5:0a:b3:fd brd ff:ff:ff:ff:ff:ff
13: vethwepl534c0a6@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 1e:ac:f1:85:61:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: vethwepl9ffd6b6@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether 56:ca:71:2a:ab:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethwepl62b369d@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether e2:a0:bb:ee:fc:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: vethwepl6669168@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
    link/ether f2:e7:e6:95:e0:61 brd ff:ff:ff:ff:ff:ff link-netnsid 3

That looks like a collection of virtual ethernet devices being managed by the weave networking plugin, presumably each partnered with an interface inside a pod. They’re bridged to the weave interface (the master weave bit). Still no clues about the 10.* range.
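We can at least confirm the bridging relationship. A sketch (the nsenter line needs a PID from a process inside a pod, which we haven’t identified yet, so treat <pause-pid> as a placeholder):

# every vethwepl… device enslaved to the weave bridge
ip link show master weave
# with a pod process PID to hand, show the peer end inside its netns
nsenter -t <pause-pid> -n ip addr

The @ifN suffix on each vethwepl… device names its peer’s interface index inside the pod’s network namespace, which the nsenter invocation would show from the other side. What about ARP?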

ip neigh
192.168.53.1 dev enx00e04c6851de lladdr e4:8d:8c:35:98:d5 DELAY
192.168.0.4 dev datapath lladdr da:22:06:96:50:cb STALE
192.168.0.2 dev weave lladdr 66:eb:ce:16:3c:62 REACHABLE
192.168.53.136 dev enx00e04c6851de lladdr 00:e0:4c:39:f2:54 REACHABLE
192.168.0.6 dev weave lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.3 dev datapath lladdr f2:42:c9:c3:08:71 STALE
192.168.0.3 dev weave lladdr f2:42:c9:c3:08:71 REACHABLE
192.168.0.2 dev datapath lladdr 66:eb:ce:16:3c:62 STALE
192.168.0.6 dev datapath lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.4 dev weave lladdr da:22:06:96:50:cb STALE
192.168.0.5 dev datapath lladdr fe:6f:1b:14:56:5a STALE
192.168.0.5 dev weave lladdr fe:6f:1b:14:56:5a REACHABLE

Nope. That just looks like addresses on the weave-managed bridge.
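One more negative check before moving on: a sketch confirming the routing table has no special handling for the service IP:

# which route would plain routing pick for the service IP?
ip route get 10.107.66.138

That should just pick the default route, so whatever gets packets to the echoserver must be rewriting them first. Alright. What about firewalling?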

nft list ruleset
table ip nat {
	chain DOCKER {
		iifname "docker0" counter packets 0 bytes 0 return
	}

	chain POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		 counter packets 531750 bytes 31913539 jump KUBE-POSTROUTING
		oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 84 masquerade 
		counter packets 525600 bytes 31544134 jump WEAVE
	}

	chain PREROUTING {
		type nat hook prerouting priority dstnat; policy accept;
		 counter packets 180 bytes 12525 jump KUBE-SERVICES
		fib daddr type local counter packets 23 bytes 1380 jump DOCKER
	}

	chain OUTPUT {
		type nat hook output priority -100; policy accept;
		 counter packets 527005 bytes 31628455 jump KUBE-SERVICES
		ip daddr != 127.0.0.0/8 fib daddr type local counter packets 285425 bytes 17125524 jump DOCKER
	}

	chain KUBE-MARK-DROP {
		counter packets 0 bytes 0 meta mark set mark or 0x8000 
	}

	chain KUBE-MARK-MASQ {
		counter packets 0 bytes 0 meta mark set mark or 0x4000 
	}

	chain KUBE-POSTROUTING {
		mark and 0x4000 != 0x4000 counter packets 4622 bytes 277720 return
		counter packets 0 bytes 0 meta mark set mark xor 0x4000 
		 counter packets 0 bytes 0 masquerade 
	}

	chain KUBE-KUBELET-CANARY {
	}

	chain INPUT {
		type nat hook input priority 100; policy accept;
	}

	chain KUBE-PROXY-CANARY {
	}

	chain KUBE-SERVICES {
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 9153 counter packets 0 bytes 0 jump KUBE-SVC-JD5MR3NA4I4DYORP
		meta l4proto tcp ip daddr 10.107.66.138  tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip daddr 10.111.16.129  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EZYNCFY2F7N6OQA2
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 443 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip daddr 10.96.9.41  tcp dport 80 counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 192.168.53.147  tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
		meta l4proto tcp ip daddr 10.96.0.1  tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-NPX46M4PTMTKRN6Y
		meta l4proto udp ip daddr 10.96.0.10  udp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-TCOU7JCQXEZGVUNU
		meta l4proto tcp ip daddr 10.96.0.10  tcp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-ERIFXISQEP7F7OF4
		 fib daddr type local counter packets 3312 bytes 198720 jump KUBE-NODEPORTS
	}

	chain KUBE-NODEPORTS {
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 31529 counter packets 0 bytes 0 jump KUBE-SVC-666FUMINWJLRRQPD
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 30894 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
		meta l4proto tcp ip saddr 127.0.0.0/8  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp  tcp dport 32740 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
	}

	chain KUBE-SVC-NPX46M4PTMTKRN6Y {
		 counter packets 0 bytes 0 jump KUBE-SEP-Y6PHKONXBG3JINP2
	}

	chain KUBE-SEP-Y6PHKONXBG3JINP2 {
		ip saddr 192.168.53.147  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.53.147:6443
	}

	chain WEAVE {
		# match-set weaver-no-masq-local dst  counter packets 135966 bytes 8160820 return
		ip saddr 192.168.0.0/24 ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ip saddr != 192.168.0.0/24 ip daddr 192.168.0.0/24 counter packets 0 bytes 0 masquerade 
		ip saddr 192.168.0.0/24 ip daddr != 192.168.0.0/24 counter packets 33 bytes 2941 masquerade 
	}

	chain WEAVE-CANARY {
	}

	chain KUBE-SVC-JD5MR3NA4I4DYORP {
		  counter packets 0 bytes 0 jump KUBE-SEP-6JI23ZDEH4VLR5EN
		 counter packets 0 bytes 0 jump KUBE-SEP-FATPLMAF37ZNQP5P
	}

	chain KUBE-SEP-6JI23ZDEH4VLR5EN {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:9153
	}

	chain KUBE-SVC-TCOU7JCQXEZGVUNU {
		  counter packets 0 bytes 0 jump KUBE-SEP-JTN4UBVS7OG5RONX
		 counter packets 0 bytes 0 jump KUBE-SEP-4TCKAEJ6POVEFPVW
	}

	chain KUBE-SEP-JTN4UBVS7OG5RONX {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	}

	chain KUBE-SVC-ERIFXISQEP7F7OF4 {
		  counter packets 0 bytes 0 jump KUBE-SEP-UPZX2EM3TRFH2ASL
		 counter packets 0 bytes 0 jump KUBE-SEP-KPHYKKPVMB473Z76
	}

	chain KUBE-SEP-UPZX2EM3TRFH2ASL {
		ip saddr 192.168.0.2  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.2:53
	}

	chain KUBE-SEP-4TCKAEJ6POVEFPVW {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto udp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	}

	chain KUBE-SEP-KPHYKKPVMB473Z76 {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:53
	}

	chain KUBE-SEP-FATPLMAF37ZNQP5P {
		ip saddr 192.168.0.3  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.3:9153
	}

	chain KUBE-SVC-666FUMINWJLRRQPD {
		 counter packets 1 bytes 60 jump KUBE-SEP-LYLDBZYLHY4MT3AQ
	}

	chain KUBE-SEP-LYLDBZYLHY4MT3AQ {
		ip saddr 192.168.0.4  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 1 bytes 60 dnat to 192.168.0.4:8080
	}

	chain KUBE-XLB-EDNDUDH2C75GIR6O {
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	}

	chain KUBE-XLB-CG5I4G2RS3ZVWGLK {
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		 fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	}

	chain KUBE-SVC-EDNDUDH2C75GIR6O {
		 counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
	}

	chain KUBE-SEP-BLQHCYCSXY3NRKLC {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:443
	}

	chain KUBE-SVC-CG5I4G2RS3ZVWGLK {
		 counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
	}

	chain KUBE-SEP-5XVRKWM672JGTWXH {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:80
	}

	chain KUBE-SVC-EZYNCFY2F7N6OQA2 {
		 counter packets 0 bytes 0 jump KUBE-SEP-JYW326XAJ4KK7QPG
	}

	chain KUBE-SEP-JYW326XAJ4KK7QPG {
		ip saddr 192.168.0.5  counter packets 0 bytes 0 jump KUBE-MARK-MASQ
		meta l4proto tcp   counter packets 0 bytes 0 dnat to 192.168.0.5:8443
	}
}
table ip filter {
	chain DOCKER {
	}

	chain DOCKER-ISOLATION-STAGE-1 {
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
		counter packets 0 bytes 0 return
	}

	chain DOCKER-ISOLATION-STAGE-2 {
		oifname "docker0" counter packets 0 bytes 0 drop
		counter packets 0 bytes 0 return
	}

	chain FORWARD {
		type filter hook forward priority filter; policy drop;
		iifname "weave"  counter packets 213 bytes 54014 jump WEAVE-NPC-EGRESS
		oifname "weave"  counter packets 150 bytes 30038 jump WEAVE-NPC
		oifname "weave" ct state new counter packets 0 bytes 0 log group 86 
		oifname "weave" counter packets 0 bytes 0 drop
		iifname "weave" oifname != "weave" counter packets 33 bytes 2941 accept
		oifname "weave" ct state related,established counter packets 0 bytes 0 accept
		 counter packets 0 bytes 0 jump KUBE-FORWARD
		ct state new  counter packets 0 bytes 0 jump KUBE-SERVICES
		ct state new  counter packets 0 bytes 0 jump KUBE-EXTERNAL-SERVICES
		counter packets 0 bytes 0 jump DOCKER-USER
		counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-1
		oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
		oifname "docker0" counter packets 0 bytes 0 jump DOCKER
		iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
		iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
	}

	chain DOCKER-USER {
		counter packets 0 bytes 0 return
	}

	chain KUBE-FIREWALL {
		 mark and 0x8000 == 0x8000 counter packets 0 bytes 0 drop
		ip saddr != 127.0.0.0/8 ip daddr 127.0.0.0/8  ct status dnat counter packets 0 bytes 0 drop
	}

	chain OUTPUT {
		type filter hook output priority filter; policy accept;
		ct state new  counter packets 527014 bytes 31628984 jump KUBE-SERVICES
		counter packets 36324809 bytes 6021214027 jump KUBE-FIREWALL
		meta l4proto != esp  mark and 0x20000 == 0x20000 counter packets 0 bytes 0 drop
	}

	chain INPUT {
		type filter hook input priority filter; policy accept;
		 counter packets 35869492 bytes 5971008896 jump KUBE-NODEPORTS
		ct state new  counter packets 390938 bytes 23457377 jump KUBE-EXTERNAL-SERVICES
		counter packets 36249774 bytes 6030068622 jump KUBE-FIREWALL
		meta l4proto tcp ip daddr 127.0.0.1 tcp dport 6784 fib saddr type != local ct state != related,established  counter packets 0 bytes 0 drop
		iifname "weave" counter packets 907273 bytes 88697229 jump WEAVE-NPC-EGRESS
		counter packets 34809601 bytes 5818213726 jump WEAVE-IPSEC-IN
	}

	chain KUBE-KUBELET-CANARY {
	}

	chain KUBE-PROXY-CANARY {
	}

	chain KUBE-EXTERNAL-SERVICES {
	}

	chain KUBE-NODEPORTS {
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
		meta l4proto tcp  tcp dport 32196 counter packets 0 bytes 0 accept
	}

	chain KUBE-SERVICES {
	}

	chain KUBE-FORWARD {
		ct state invalid counter packets 0 bytes 0 drop
		 mark and 0x4000 == 0x4000 counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
		 ct state related,established counter packets 0 bytes 0 accept
	}

	chain WEAVE-NPC-INGRESS {
	}

	chain WEAVE-NPC-DEFAULT {
		# match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst  counter packets 14 bytes 840 accept
		# match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst  counter packets 0 bytes 0 accept
		# match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst  counter packets 0 bytes 0 accept
		# match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst  counter packets 0 bytes 0 accept
		# match-set weave-iLgO^}{o=U/*%KE[@=W:l~|9T dst  counter packets 9 bytes 540 accept
	}

	chain WEAVE-NPC {
		ct state related,established counter packets 124 bytes 28478 accept
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 accept
		# PHYSDEV match --physdev-out vethwe-bridge --physdev-is-bridged counter packets 3 bytes 180 accept
		ct state new counter packets 23 bytes 1380 jump WEAVE-NPC-DEFAULT
		ct state new counter packets 0 bytes 0 jump WEAVE-NPC-INGRESS
	}

	chain WEAVE-NPC-EGRESS-ACCEPT {
		counter packets 48 bytes 3769 meta mark set mark or 0x40000 
	}

	chain WEAVE-NPC-EGRESS-CUSTOM {
	}

	chain WEAVE-NPC-EGRESS-DEFAULT {
		# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src  counter packets 0 bytes 0 return
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src  counter packets 31 bytes 2749 return
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src  counter packets 0 bytes 0 return
		# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src  counter packets 0 bytes 0 return
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 jump WEAVE-NPC-EGRESS-ACCEPT
		# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src  counter packets 17 bytes 1020 return
	}

	chain WEAVE-NPC-EGRESS {
		ct state related,established counter packets 907425 bytes 88746642 accept
		# PHYSDEV match --physdev-in vethwe-bridge --physdev-is-bridged counter packets 0 bytes 0 return
		fib daddr type local counter packets 11 bytes 640 return
		ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
		ct state new counter packets 50 bytes 3961 jump WEAVE-NPC-EGRESS-DEFAULT
		ct state new mark and 0x40000 != 0x40000 counter packets 2 bytes 192 jump WEAVE-NPC-EGRESS-CUSTOM
	}

	chain WEAVE-IPSEC-IN {
	}

	chain WEAVE-CANARY {
	}
}
table ip mangle {
	chain KUBE-KUBELET-CANARY {
	}

	chain PREROUTING {
		type filter hook prerouting priority mangle; policy accept;
	}

	chain INPUT {
		type filter hook input priority mangle; policy accept;
		counter packets 35716863 bytes 5906910315 jump WEAVE-IPSEC-IN
	}

	chain FORWARD {
		type filter hook forward priority mangle; policy accept;
	}

	chain OUTPUT {
		type route hook output priority mangle; policy accept;
		counter packets 35804064 bytes 5938944956 jump WEAVE-IPSEC-OUT
	}

	chain POSTROUTING {
		type filter hook postrouting priority mangle; policy accept;
	}

	chain KUBE-PROXY-CANARY {
	}

	chain WEAVE-IPSEC-IN {
	}

	chain WEAVE-IPSEC-IN-MARK {
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	}

	chain WEAVE-IPSEC-OUT {
	}

	chain WEAVE-IPSEC-OUT-MARK {
		counter packets 0 bytes 0 meta mark set mark or 0x20000
	}

	chain WEAVE-CANARY {
	}
}

Wow. That’s a lot of nftables entries, but it explains what’s going on. We have a nat entry for:

meta l4proto tcp ip daddr 10.107.66.138 tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD

which ends up going to KUBE-SEP-LYLDBZYLHY4MT3AQ and:

meta l4proto tcp counter packets 1 bytes 60 dnat to 192.168.0.4:8080

So packets headed for our echoserver are eventually ending up in a container that has a local IP address of 192.168.0.4, which we can see in our routing table via the weave interface. Mystery explained. We can see the ingress for the externally visible HTTP service as well:

meta l4proto tcp ip daddr 192.168.33.147 tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK

which ends up redirected to:

meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:80

So from that we’d expect the IP inside the echoserver pod to be 192.168.0.4 and the IP address inside our nginx ingress pod to be 192.168.0.5. Let’s look:

root@udon:/# docker ps | grep echoserver
7cbb177bee18   k8s.gcr.io/echoserver                 "/usr/local/bin/run.…"   3 days ago   Up 3 days             k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
root@udon:/# docker exec -it 7cbb177bee18 /bin/bash
root@hello-node-59bffcc9fd-8hkgb:/# awk '/32 host/ { print f } {f=$2}' <<< "$(</proc/net/fib_trie)" | sort -u
127.0.0.1
192.168.0.4

It’s a slightly awkward method of determining the local IP addresses, forced on us by the stripped down nature of the container, but it clearly shows the expected 192.168.0.4 address.
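
For comparison, on a less stripped-down container the same information is a one-liner - a convenience only, assuming the iproute2 tools are present in the image:

ip -o -4 addr show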

I’ve touched here upon the ability to actually enter a container and have a poke around its running environment by using docker directly. Next step is to use that to investigate what containers have actually been spun up and what they’re doing. I’ll also revisit networking when I get to the point of building a multi-node cluster, to examine how the bridging between different hosts is done.
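
As an aside, you don’t need to dump the whole ruleset to follow a chain hop by hop; individual chains can be listed by name (assuming the nft tool is available on the host):

nft list chain ip nat KUBE-SVC-666FUMINWJLRRQPD
nft list chain ip nat KUBE-SEP-LYLDBZYLHY4MT3AQ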

May 28, 2021 06:43 AM

May 27, 2021

Mick Morgan

nothing to hide, nothing to fear

I recently came across this rather nice (spoof) NSA site describing the work of the Agency’s “Domestic Surveillance Directorate”. That Directorate supposedly exists to protect the citizen from the usual suspects (terrorists, paedophiles, criminals) and is tasked with data collection and analysis to support that end.

The site says:

“Our value is founded on a unique and deep understanding of risks, vulnerabilities, mitigations, and threats. Domestic Surveillance plays a vital role in our national security by using advanced data mining systems to “connect the dots” to identify suspicious patterns.”

and

“In the past, domestic law enforcement agencies collected data AFTER a suspect had been identified. This often resulted in lost intelligence and missed opportunities. But what if data could be collected in advance, BEFORE the target was known? What if the mere act of collecting data could result in the identification of new targets?

What if we could build a national data warehouse containing information about every person in the United States? Thanks to secret interpretations of the PATRIOT ACT, top-secret Fourth Amendment exceptions allowed by the Foreign Intelligence Surveillance Court, and broad cooperation at the local, state, and federal level, we can!”

It continues its explanation of how and why data collection on citizens’ activity is so important:

“Every day, people leave a digital trail of electronic breadcrumbs as they go about their daily routine. They go to work using electronic fare cards; drive through intersections with traffic cameras; walk down the street past security cameras; surf the internet; pay for purchases with credit/debit cards; text or call their friends; and on and on.

There is no way to predict in advance which crucial piece of data will be the key to revealing a potential plot. The standard operating procedure for the Domestic Surveillance Directorate is to “collect all available information from all available sources all the time, every time, always”.

So, in shades of Philip K Dick’s “precogs”, which allow the Police Precrime Division to arrest suspects /before/ they can commit any actual crime: just because the information is there, we should collect it, analyse it, and look for potential activities which might, just might, allow us to prevent that criminal action.

Who could possibly object to that?

Only the criminals. After all, no-one else has anything to hide.

Back in 2014, at about the time of the Snowden revelations, Simon Jenkins wrote a nice piece for the Guardian about an (again spoof) intended extension to the (emergency) Data Retention and Investigatory Powers Act 2014 (Drip). In that article, Jenkins had the Home Secretary say:

“A serious shortcoming has now emerged in the 2014 act. Many terrorists, criminals and paedophiles are no longer using the internet, and we need to follow them more closely. While their activities in the open are subject to surveillance cameras, they are also meeting in private houses and other premises. We thus need visual capability for complete coverage. The proposed bill supplies a vital link in the chain.”

In order to address that shortcoming the Home Secretary proposed:

“In consultation with colleagues I intend to amend building regulations to ensure that all new and converted properties have fish-eye lenses installed in ceiling cavities, with Wi-Fi cameras and appropriate power supply. A discreet camera in every room would be unnoticed and, in my view, unobjectionable.

Existing properties will be required to install them over a four-year period. These would supply real-time images of terrorists, criminals and paedophiles at any time of day and night. Any disconnection of a camera would immediately alert the police as prima facie evidence of wrongdoing. I have held talks with the industry on whether the cameras should be in bathrooms and bedrooms. It would clearly be nonsensical to exclude them, as terrorists and paedophiles often make use of these rooms.”

So, HMG would mandate spy cameras in all rooms of every private home in order to intercept data which might, just might, be of use in crime prevention.

Of course, whilst these references may be ironic and (supposedly) completely unrealistic, in fact they are not. Governments (most specifically including our own in the UK) continue to press for increased surveillance powers, the crippling or evasion of encryption, and longer retention of personal data.

Snowden himself said in his 2014 interview with Alan Rusbridger and Ewen MacAskill:

“Of course we can imagine hypotheticals in which some sort of mass surveillance system, facial recognition system, would be effective in preventing crime. In the same way we can imagine hypotheticals in which, if we allowed police to enter our homes freely and search them when we’re gone at work, we’d be able to discover elements of crime and drug use and any kind of social ill. But we draw the line, and we have to draw that line somewhere. The question is, why are our private details that are transmitted online, why are our private details that are stored on our personal devices, any different [from] the details and private records of our lives that are stored in our private journals?

There shouldn’t be this distinction between digital information and printed information. But governments, in the United States and many other countries around the world, increasingly seek to make that distinction because they recognise that it actively increases their powers of investigation.”

No, there should not be such a distinction. And just because the data is there does not mean it should be collected, stored and analysed on the off chance that it might be useful to a surveillance agency. HMG and other Governments may not like end-to-end encryption (because it hampers their collection capability), but that encryption protects the citizen in his or her daily interaction with the modern world. There is no way to break encryption without making everyone less safe. And any rhetoric to the contrary is simply a lie.

by Mick at May 27, 2021 03:25 PM

May 15, 2021

Mick Morgan

fastboot oem get_unlock_data hangs on moto g7 plus

I am posting this in the hope it may help others who find themselves in a similar position to myself.

I have recently upgraded my mobile ‘phone (from a Motorola Moto X4) to a Moto G7 plus. I chose this particular phone because I like Motorolas. I like the fact that they are relatively cheap for a well specced device. I like the fact that they are (usually) easy to reflash with lineageos, and I like the fact that Motorola is quite supportive of that process and gives you some assistance in doing so. Of course the more paranoid of the tin hat brigade might see that support (from a Chinese manufacturer) as somewhat suspect, but that aside, I much prefer my ‘phones to be Google free and all of my past mobiles have been reflashed to either lineageos or its predecessor CyanogenMod.

I chose the G7 plus based on its specification, price, and the availability of an image for lineageos 18.1 (effectively Android 11). At the time of writing this post, later Moto Gs (the G8, G9 and G10) are not supported by lineageos. In theory, my X4 should have been capable of being reflashed to 18.1, but my earlier (three) attempts last year to upgrade it to 17.1 all failed and ended with the phone stuck in a boot loop and I had to revert to 16.0. I was not, therefore, prepared to attempt a jump to 18.1 without the fallback of a separate phone if all went wrong. The X4 is now four years old and I figured I could afford the (modest) cost of a newer model so plumped for the G7. The plus variant is very well specced indeed for a low to mid range phone and certainly meets my modest requirements (messaging, phone calls and a pretty good camera).

I have successfully flashed all three of my earlier Motorolas (and a couple of Samsung devices) in the past so I’m reasonably confident (and experienced) in the process. Besides, the lineageos wiki gives plenty of advice and further help is available on the lineageos subreddit to those who may need it. This time, I needed it.

All went well in the beginning. I already had adb and fastboot loaded on my desktop so I went through the usual process on the phone: enabling Developer options (by tapping “Build number” seven times in Settings), then turning on “OEM unlocking” and “USB debugging” within those options.

Then, having connected the phone to my PC with a USB cable, I ran “adb reboot bootloader” and sure enough, the phone rebooted into fastboot mode. Note, however, that in multiple subsequent attempts I found it much more consistently successful to boot using the hardware option (with the device powered off, simultaneously holding the volume down + power buttons).

With the phone in fastboot mode I could verify that it was found with “fastboot devices” which would successfully report the device ID. Hereafter, however, everything went wrong.

At this stage you need to contact Motorola support to get an unlock code before you can flash a new ROM. Once you are signed in (if you have a motorola account) the support page instructs you to get the device id by typing “fastboot oem get_unlock_data”. The response to this command is supposed to be of the form:

$ fastboot oem get_unlock_data
(bootloader) 0A40040192024205#4C4D3556313230
(bootloader) 30373731363031303332323239#BD00
(bootloader) 8A672BA4746C2CE02328A2AC0C39F95
(bootloader) 1A3E5#1F53280002000000000000000
(bootloader) 0000000

Motorola instructs you to concatenate the five lines of output data (without any of the “bootloader” headers and with no spaces) into one continuous string and paste that string into a field on the web page. If your device can be unlocked, you will then be emailed the unlock code which you can apply to your phone.
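
Rather than hand-assembling that string, the concatenation is easily scripted - a small sketch, assuming the output format shown above:

fastboot oem get_unlock_data 2>&1 | sed -n 's/^(bootloader) //p' | tr -d '\n'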

My big problem here was the complete failure of the command “fastboot oem get_unlock_data”. No matter how many times I rebooted into fastboot mode that last command would simply hang with no response. I tried waiting for several minutes, I checked the phone boot log (which recorded the command as having successfully been received) but nothing happened. Without the unlock code, you can go no further, so I spent a few (less than happy) hours searching various on-line fora for answers. Try searching for “fastboot oem get_unlock_data hangs on moto g7” to see queries from lots of people with similar problems (though of course you may find references to this post after publication). The best hints I got were at the XDA developers site which pointed to a possible hardware problem – not encouraging.

So – I tried all of the following in various combinations:

All to no avail.

Eventually I tried a completely different PC running a completely different distro (coincidentally a distro entirely free of systemd which I have been experimenting with because I loathe and detest systemd and its vampire-squid-like attempts to take over the entire linux system) and bingo, it worked. I successfully received the unlock code and could move on to reflashing the phone as outlined on the lineageos wiki page.

Now, I cannot be sure what was wrong, or why the change of PC should have made any difference; there are too many variables at play here. The second PC runs an AMD processor, not Intel, so it has a completely different architecture from the motherboard up. It runs a version of linux without systemd (which I may move to entirely when I have finished my testing), and it runs a completely different kernel (version 5.10.0-5mx-amd64 rather than 5.4.0-73-generic), so it is probably unfair to blame systemd – but I’m going to anyway. It gives me one more reason to hate it.

I don’t like not knowing why something fails; having something else work without finding out exactly what was wrong leaves me with an itch I can’t scratch. But if you are faced with a similar problem (and searching suggests that this is not an uncommon problem with the G7 models) then please do try a different distro on a different machine. Preferably one without systemd.

One final point. Following the flash of 18.1 to my new phone, I successfully flashed my older X4 to the same version – with no problem whatsoever.

Go figure.

by Mick at May 15, 2021 06:16 PM

December 03, 2020

Ben Francis

A New Future for the WebThings IoT Platform


Originally posted on Medium.

After four years of incubation at Mozilla, Krellian is proud to become the new commercial sponsor of WebThings, an open platform for monitoring and controlling devices over the web.

Today we are announcing the release of WebThings Gateway 1.0 and setting out a vision for the future of the WebThings project.

WebThings

WebThings is an open source implementation of emerging W3C Web of Things standards and consists of three main components:

  1. WebThings Gateway — a software distribution for smart home gateways
  2. WebThings Framework — a collection of reusable software components for building your own web things
  3. WebThings Cloud — cloud services providing secure remote access and software updates

Flying the Nest

Following a company restructuring in August, Mozilla was looking for a new home for the WebThings community to continue their work.

Having co-founded the project whilst working at Mozilla, I joined discussions with two of my former colleagues, Michael Stegeman and David Bryant, about spinning out WebThings as an independent open source project. We worked with Mozilla on an agreement to transition the project to a new community-run home at webthings.io, and have spent the last three months working together on that transition.

WebThings Gateway 1.0

Today marks the public release of WebThings Gateway 1.0 and the formal transition of the WebThings platform to its new home at webthings.io. Going forward, Krellian will be sponsoring the new WebThings website and replacement cloud infrastructure, to continue to provide automatic software updates and a secure remote access service for WebThings gateways around the world.

You can read more about the 1.0 release and the transition of existing gateways to the new infrastructure on the Mozilla Hacks blog.

Krellian & WebThings

Krellian’s mission is to “extend the World Wide Web into physical spaces to make our built environment smarter, safer and more sustainable.” WebThings provides an ideal open source platform, built on web standards, to help achieve that mission.

In the short term Krellian will be leveraging the WebThings Cloud remote access service as part of our new digital signage platform. In the longer term we plan to explore other enterprise use cases for the WebThings platform, to help make buildings smarter, safer and more sustainable.

These commercial applications of WebThings will help provide revenue streams to support the long term sustainability of the open source project and allow it to continue to develop and grow.

The WebThings Community

Krellian highly values the thriving community who have supported the WebThings project over the last four years. From hackers and makers to educators and hobbyists, the community have been pivotal in building, testing and promoting WebThings around the world.

Amongst their achievements are the translation of WebThings Gateway into 34 spoken languages, the creation of over a hundred gateway add-ons, and the building of countless DIY projects in a dozen different programming languages. Community members have contributed their time and effort to help build and promote WebThings and support other members in using it in thousands of private smart homes around the world.

We intend to support the community to continue with their great work, and have put in place an open governance structure to distribute decision making and foster leadership amongst the global WebThings community.

Future Roadmap

The following are some ideas about where to take the platform next, but we’d also very much like to hear from the community about what they would like to see from the project going forward.

W3C Compliance

WebThings has been developed in parallel with, and has contributed to, the standardisation of the Web of Things at the W3C. Since the last release of WebThings Gateway in April, the W3C Thing Description specification has reached “recommendation” status and is now an international standard.

We’d like to work towards making WebThings compliant with this standard, as there are still a number of remaining differences between the W3C and Mozilla specifications. In order to fill in the gaps between Mozilla’s Web Thing API and the W3C’s Thing Description standard, we plan to continue to lead work on standardising the Web Thing Protocol as a concrete protocol for communicating with devices over the web.

Production Gateway OS

The main WebThings Gateway software image is currently built on top of the Raspbian Linux distribution. This served the project well for its initial target of DIY smart home users, using the popular Raspberry Pi single board computer.

As the platform matures, we would like to explore a more production-quality IoT operating system like Ubuntu Core or Balena OS on which to base the WebThings Gateway distribution.

This will have the following benefits:

  1. A smaller footprint, reducing the minimum system requirements for running the gateway
  2. Enabling the targeting of a wider range of hardware for consumer and enterprise use cases
  3. Better security, through containerisation and automatic software updates for the underlying operating system

WebThings Controller

There was previously a project to build controller software for WebThings, to run on a controller device such as a smart speaker or smart display. The initial prototype was built on Android Things, but was discontinued when Google locked down the Android Things platform to specific OEMs and introduced restrictions on how it could be used.

Krellian would like to explore new controller software built on our open source Krellian Kiosk web runtime, which could allow for touch and voice input. This software would be designed so that it could either run on the same device as the gateway software, or on a separate controller device.

WebThings App

A native WebThings mobile app could act as a general purpose Web of Things client. This could potentially:

  1. Help to streamline the setup process of a WebThings Gateway
  2. Act as a client for native web things which don’t need a gateway
  3. Help with the standardisation process by providing a user friendly reference implementation of a Web of Things client

WebThings Cloud

Finally, we would like to explore expanding the WebThings Cloud offering. This could include an online dashboard for monitoring and controlling devices across multiple premises, and cloud to cloud integrations with other IoT platforms and voice assistants.


We’re excited about this new chapter in the WebThings story, and look forward to working closely with the community on our vision of a connected world where technology is seamlessly woven into the spaces around us and improves the lives of those who use it.

You can find out more about WebThings at its new home of webthings.io, follow @WebThingsIO on Twitter and sign up for the email newsletter to keep up to date with all the latest news.

by tola at December 03, 2020 05:15 PM

November 19, 2020

Daniel Silverstone (Kinnison)

Withdrawing Gitano from support

Unfortunately, in Debian in particular, libgit2 is undergoing a transition which is blocked by gall. Despite having had over a month to deal with this, I've not managed to summon the tuits to update Gall to the new libgit2 which means, nominally, I ought to withdraw it from testing and possibly even from unstable given that I'm not really prepared to look after Gitano and friends in Debian any longer.

However, I'd love for Gitano to remain in Debian if it's useful to people. Gall isn't exactly a large piece of C code, and so probably won't be a huge job to do the port, I simply don't have the desire/energy to do it myself.

If someone wanted to do the work and provide a patch / "pull request" to me, then I'd happily take on the change and upload a new package, or if someone wanted to NMU the gall package in Debian I'll take the change they make and import it into upstream. I just don't have the energy to reload all that context and do the change myself.

If you want to do this, email me and let me know, so I can support you and take on the change when it's done. Otherwise I probably will go down the lines of requesting Gitano's removal from Debian in the next week or so.

by Daniel Silverstone at November 19, 2020 08:49 AM

September 10, 2020

Daniel Silverstone (Kinnison)

Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), Lars, Mark, Vince and I discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting as a way to move storage compression cost out to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and since server CPU and bandwidth are expensive, spending client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to make full use of the available bandwidth (i.e. not be CPU throttled) while also gaining the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests that the gain is worth putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically, if the data is already compressed then there's little hope Brotli will gain much.
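
Their level comparison is easy enough to reproduce at home with the brotli command-line tool - a quick sketch, where sample.dat stands in for a file of your choosing:

# compare compressed sizes at increasing effort levels
for q in 1 3 5; do
    printf 'level %s: ' "$q"
    brotli -q "$q" -c sample.dat | wc -c
done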

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code, which was interesting.

Vince noted that in their deployment challenges it seemed like a very poor general strategy to cope with crasher errors; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage and that while they make it clear in the article that encryption, client identification etc. are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

by Daniel Silverstone at September 10, 2020 07:49 AM

March 12, 2020

Ben Francis

Introducing Krellian — Interactive Digital Signage

Originally posted on Medium.

For the last six weeks I’ve been enrolled in YCombinator’s Startup School, working towards the launch of a new technology startup.

Today I’m excited to introduce Krellian, a software platform for interactive digital signage, built on web standards.

If you walk around any large town or city today you’ll notice that you’re surrounded by screens. Digital billboards, information kiosks, self-service kiosks, interactive exhibits, digital menus and departure boards.

A lot of those screens are running outdated consumer-grade operating systems and proprietary content runtimes. They’re unreliable, inefficient and insecure. How many times have you seen a digital billboard that’s gone dark, departure boards with a Windows error message on the screen or even ATMs with a blue screen of death?

With Krellian, I am building a software platform for interactive digital signage, built on web standards. A simple, reliable web-based operating system and a secure cloud service for monitoring, controlling and deploying content to connected displays over the internet.

Krellian’s products will build on everything I have learnt over the last decade working on Webian, Firefox OS and Mozilla WebThings. I believe the web technologies I’ve helped standardise around installable web apps and the Web of Things could have enormous potential if applied to the digital signage market.

For example, a purpose-built operating system for displaying modern web content, which can be remotely managed over the internet, could significantly reduce content creation costs and technician call-out fees.

I’m also delighted that Krellian has been accepted onto the High Potential Startups programme, powered by the North East Local Enterprise Partnership. Over the next six months we’ll be working together to better understand customer needs, bring products to market and ultimately grow the business to create more high-tech jobs in the North East of England.

Are you interested in providing in-person digital experiences to your customers? Perhaps you work in advertising, hospitality, healthcare, museums, travel, retail or entertainment? Or do you sell digital signage to your own customers? I’d love to hear more about your needs and the problems you are trying to solve.

Register your interest today at krellian.com and follow Krellian on Twitter and Facebook.

by tola at March 12, 2020 05:23 PM

November 13, 2019

Steve Engledow (stilvoid)

Maur - A minimal AUR helper

This post is about the Arch User Repository. If you’re not an Arch user, probably just move along ;)

There are lots of AUR helpers in existence already but, in the best traditions of open source, none of them work exactly how I want an AUR helper to work, so I created a new one.

Here it is: https://github.com/stilvoid/maur

maur (pronounced like “more”) is tiny. At the time of writing, it’s 49 lines of bash. It also has very few features.

Here is the list of features:

  1. List every package in the AUR (pipe the output through grep to search)
  2. Install a package (which is also how you upgrade one)

The “help” when installing a package is this, and nothing more:

If you think maur needs more features, use a different AUR helper.

If you find bugs, please submit an issue or, even better, a pull request.

Example usage

Searching the AUR

If you want to search for a package in the AUR, you can grep for it ;)

maur | grep maur

Installing a package

If you want to install a package, for example yay:

maur yay
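
For context, this boils down to roughly the same steps you’d run by hand for any AUR package - a generic sketch of the manual process, not maur’s actual code:

git clone https://aur.archlinux.org/yay.git
cd yay
less PKGBUILD    # always review the build script before running it
makepkg -si      # build and install, resolving dependencies with pacman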

Upgrading a package

Upgrading a package is the same as installing one. This will upgrade maur:

maur maur

by Steve Engledow at November 13, 2019 12:00 AM

April 06, 2019

Richard Lewis

e-Research on Texts and Images

I went to a colloquium on e-Research on Texts and Images at the British Academy yesterday; very, very swanky. Lunch was served on triangular plates, triangular! Big chandeliers, paintings, grand staircase. Well worth investigating for post-doc fellowships one day.

There were also some good papers. Just one or two things that really stuck out for me. There seems to be quite a lot of interest in e-research now around formalising, encoding, and analysing scholarly process. The motivation seems to be that, in order to design software tools to aid scholarship, it's necessary to identify what scholarly processes are engaged in and how they may be re-figured in software manifestations. This is the same direction that my research has been taking, and relates closely to the study of tacit knowledge in which Purcell Plus is engaged.

Ségoléne Tarte presented a very useful diagram in her talk explaining why this line of investigation is important. It showed a continuum of activity which started with "signal" and ended with "meaning". Running along one side of this continuum were the scholarly activities and conceptions that occur as raw primary sources are interpreted, and along the other were the computational processes which may aid these human activities. Her particular version of this continuum was describing the interpretation of images of Roman writing tablets, so the kinds of activities described included identification of marks, characters, and words, and boundary and shape detection in images. She described some of the common aspects of this process, including: oscillation of activity and understanding; dealing with noise; phase congruency; and identifying features (a term which has become burdened with assumed meaning but which should also be considered at its most general sometimes). But I'm sure the idea extends to other humanities disciplines and other kinds of "signal" or primary sources.

Similarly, Melissa Terras talked about her work on knowledge elicitation from expert papyrologists. This included various techniques (drawn from social science and clinical psychology) such as talk-aloud protocols and concept sorting. She was able to show nice graphs of how an expert's understanding of a particular source switches between different levels continuously during the process of working with it. It's this cyclical, dynamic process of coming to understand an artifact which we're attempting to capture and encode with a view to potentially providing decision support tools whose design is informed by this encoded procedure.

A few other odd notes I made. David DeRoure talked about the importance of social science methods in e-Humanities. Amongst other things, he also made an interesting point that it's probably a better investment to teach scholars and researchers about understanding data (representation, manipulation, management) than it is to buy lots of expensive and powerful hardware. Annamaria Carusi said lots of interesting things which I'm annoyed with myself for not having written down properly. (There was something about warning of the non-neutrality of abstractions; interpretation as arriving at a hypothesis, and how this potentially aligns humanistic work with scientific method; and how use of technologies can make some things very easy, but at the expense of making other things very hard.)

April 06, 2019 09:04 PM

New baby, new house, new job

A great deal of time has passed since I last wrote a blog post. During that time my partner and I have had a baby (who's now 20 months old) and bought a house, I've started a new job, finished that new job, and started another new job.

The first new job was working for an open source consultancy firm called credativ which is based in Rugby but which, at the time I started, had recently opened a London office. Broadly, they consult on open source software for business. In practice most of the work is using OpenERP, an open source enterprise resource planning (ERP) system written in Python. I was very critical of OpenERP when I started, but I guess this was partly because my unfamiliarity with it led to me often feeling like a n00b programmer again and this was quite frustrating. By the time I finished at credativ I'd learned to understand how to deal with this quite large software system and I now have a better understanding of its real deficiencies: code quality in the core system is generally quite poor, although it has a decent test suite and is consequently functionally fairly sound, the code is scrappy and often quite poorly designed; the documentation is lacking and not very organised; its authors, I find, don't have a sense of what developers who are new to the framework actually need to know. I also found that, during the course of my employment, it took a long time to gain experience of the system from a user's perspective (because I had to spend time doing development work with it); I think earlier user experience would have helped me to understand it sooner. Apart from those things, it seems like a fairly good ERP. Although one other thing I learned working with it (and with business clients in general) is the importance of domain knowledge: OpenERP is about business applications (accounting, customer relations, sales, manufacture) and, it turns out, I don't know anything about any of these things. That makes trying to understand software designed to solve those problems doubly hard. (In all my previous programming experience, I've been working in domains that are much more familiar.)

As well as OpenERP, I've also learned quite a lot about the IT services industry and about having a proper job in general. Really, this was the first proper job I've ever had; I've earned money for years, but always in slightly off-the-beaten-track ways. I've found that team working skills (that great CV cliché) are actually not one of my strong points; I had to learn to ask for help with things, and to share responsibilities with my colleagues. I've learned a lot about customers. It's a very different environment where a lot of your work is reactive; I've previously been used to long projects where the direction is largely self-determined. A lot of the work was making small changes requested by customers. In such cases it's so important to push them to articulate as clearly as possible what they are actually trying to achieve; too often customers will describe a requirement at the wrong level of detail, that is, they'll describe a technical level change. What's much better is if you can get them to describe the business process they are trying to implement so you can be sure the technical change they want is appropriate or specify something better. I've learned quite a bit about managing my time and being productive. We undertook a lot of fixed-price work, where we were required to estimate the cost of the work beforehand. This involves really knowing how long things take which is quite a skill. We also needed to be able to account for all our working time in order to manage costs and stick within budgets for projects. So I learned some more org-mode tricks for managing effort estimates and for keeping more detailed time logs.

My new new job is working back at Goldsmiths again, with mostly the same colleagues. We're working on an AHRC-funded project called Transforming Musicology. We have partners at Queen Mary, the Centre for e-Research at Oxford, Oxford Music Faculty, and the Lancaster Institute for Contemporary Arts. The broad aim of the project can be understood as the practical follow-on from Purcell Plus: how does the current culture of pervasive networked computing affect what it means to study music and how music gets studied? We're looking for evidence of people using computers to do things which we would understand as musicology, even though they may not. We're also looking at how computers can be integrated into the traditional discipline. And we're working on extending some existing tools for music and sound analysis, and developing frameworks for making music resources available on the Semantic Web. My role is as project manager. I started work at the beginning of October so we've done four days so far. It's mainly been setting up infrastructure (website, wiki, mailing list) and trying to get a good high-level picture of how the two years should progress.

I've also moved my blog from livejournal to here which I manage using Ikiwiki. Livejournal is great; I just liked the idea of publishing my blog using Ikiwiki, writing it in Emacs, and managing it using git. Let's see if I stick to it...

April 06, 2019 09:04 PM

February 12, 2019

Steve Engledow (stilvoid)

Using Git with AWS CodeCommit Across Multiple AWS Accounts

(Cross-posted from the AWS DevOps blog)

I use AWS CodeCommit to host all of my private Git repositories. My repositories are split across several AWS accounts for different purposes: personal projects, internal projects at work, and customer projects.

The CodeCommit documentation shows you how to configure and clone a repository from one place, but in this blog post I want to share how I manage my Git configuration across multiple AWS accounts.

Background

First, I have profiles configured for each of my AWS environments. I connect to some of them using IAM user credentials and others by using cross-account roles.

I intentionally do not have any credentials associated with the default profile. That way I must always be sure I have selected a profile before I run any AWS CLI commands.

Here’s an anonymized copy of my ~/.aws/config file:

[profile personal]
region = eu-west-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile work]
region = us-east-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile customer]
region = eu-west-2
source_profile = work
role_arn = arn:aws:iam::123456789012:role/CrossAccountPowerUser

If I am doing some work in one of those accounts, I run export AWS_PROFILE=work and use the AWS CLI as normal.

The problem

I use the Git credential helper so that the Git client works seamlessly with CodeCommit. However, because I use different profiles for different repositories, my use case is a little more complex than the average.

In general, to use the credential helper, all you need to do is place the following options into your ~/.gitconfig file, like this:

[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true

I could make this work across accounts by setting the appropriate value for AWS_PROFILE before I use Git in a repository, but there is a much neater way to deal with this situation using a feature released in Git version 2.13, conditional includes.
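
(Working that way would mean remembering something like this before every Git operation, which is exactly the friction I wanted to avoid:)

export AWS_PROFILE=work
git pull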

A solution

First, I separate my work into different folders. My ~/code/ directory looks like this:

code
    personal
        repo1
        repo2
    work
        repo3
        repo4
    customer
        repo5
        repo6

Using this layout, each folder that is directly underneath the code folder has different requirements in terms of configuration for use with CodeCommit.

Solving this has two parts; first, I create a .gitconfig file in each of the three folder locations. The .gitconfig files contain any customization (specifically, configuration for the credential helper) that I want in place while I work on projects in those folders.

For example:

[user]
    # Use a custom email address
    email = sengledo@amazon.co.uk

[credential]
    # Note the use of the --profile switch
    helper = !aws --profile work codecommit credential-helper $@
    UseHttpPath = true

I also make sure to specify the AWS CLI profile to use in the .gitconfig file, which means that, when I am working in the folder, I don’t need to set AWS_PROFILE before I run git push, etc.

Secondly, to make use of these folder-level .gitconfig files, I need to reference them in my global Git configuration at ~/.gitconfig

This is done through the includeIf section. For example:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

This example specifies that if I am working with a Git repository that is located anywhere under ~/code/personal/, Git should load additional configuration from ~/code/personal/.gitconfig. That additional file specifies the appropriate credential helper invocation with the corresponding AWS CLI profile selected as detailed earlier.

The contents of the new file are treated as if they are inserted into the main .gitconfig file at the location of the includeIf section. This means that the included configuration will only override any configuration specified earlier in the config.
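
For completeness, the global file for the three-folder layout described above ends up with one includeIf section per folder (the same pattern repeated):

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

[includeIf "gitdir:~/code/work/"]
    path = ~/code/work/.gitconfig

[includeIf "gitdir:~/code/customer/"]
    path = ~/code/customer/.gitconfig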

by Steve Engledow at February 12, 2019 12:00 AM

June 07, 2018

Brett Parker (iDunno)

The Psion Gemini

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship - not bad for an indiegogo project! Out of the box, I flashed it, using the non-approved linux flashing tool at that time, and failed to backup the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey). After a few more hours / days of playing around I got the IMEI number back in to the Gemini and put back on the stock android image. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too; turns out the mac addresses for those are also stored in the nvram (doh!). That's now mostly working through a bit of collaboration with another Gemini owner: my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll have a mac address collision, probably.

Overall, it's not a bad machine. The keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call (but not great until you're on a call), and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full time phone. It is, however, really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you: the keyboard is better than using the on screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes; I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

by Brett Parker (iDunno@sommitrealweird.co.uk) at June 07, 2018 01:04 PM

February 21, 2018

MJ Ray

How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international (which also enables alt i, u, n and so on to do other accents), I still can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

by mjr at February 21, 2018 04:14 PM

March 01, 2017

Brett Parker (iDunno)

Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6-only connectivity and the file storage is NFS over a private v4 network. The proxy will happily redirect requests to either http or https on the Pi, but this results (without turning on the Proxy Protocol) in your logs showing the remote addresses of the proxy servers, which is not entirely useful.

I've cheated a bit, because the turning on of ProxyProtocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).

So, first step first, we get our RPi and we make sure that we can login to it via ssh (I'm nearly always on a v6 connection anyways, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them, with apache2 I changed it to listen to localhost only and on ports 8080 and 4443, I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, now we deploy haproxy infront of it, the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy protocol'd setup from the proxy servers, and you can still talk directly to the Pi over ipv6. You're not yet logging the right remote ips, but we're a step closer. Next enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed, you'll probably want to tweak it a bit; I have some magic extra files that I use, and I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

www.srwpi.hostedpi.com srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c
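
To run it from cron for real, a daily crontab entry along these lines does the job (the time and path are a matter of taste):

0 4 * * * /usr/bin/dehydrated -c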

That's then generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: just edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:443 and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 06:35 PM

October 18, 2016

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

by mjr at October 18, 2016 04:28 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules then we do from marketing modules. We’re able to segregate operational and content from personally identifiable information – PII having much higher regulation on who can (and auditing of who does) access.

Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it's a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh. If you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running. As a sketch of that split (the repository names are purely illustrative):
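
payments-app.git       # the Django application: code, urls, module definitions
payments-deploy.git    # per-environment configuration: nginx config, settings overrides

The deployment repo can then be locked down to the systems administrators, while developers keep full access to the application repo.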

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot; then a second tool takes that AMI (and all the others needed) and builds the VPC inside of AWS. It's a step away from the continual deployment strategy, but it is mostly automated.
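
A very rough sketch of that two-tool shape using the AWS CLI (these aren't our actual tools, and every ID and name below is a placeholder):

# Tool one: boot a base instance, configure it, then snapshot it as a 'gold' AMI
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m3.medium --key-name build-key
# ... install and configure everything on the instance here ...
aws ec2 create-image --instance-id i-xxxxxxxx --name "payments-gold-20140710"

# Tool two: build the VPC and launch the gold AMIs into it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24
aws ec2 run-instances --image-id ami-gggggggg --subnet-id subnet-xxxxxxxx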


July 10, 2014 02:09 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but is largely ignored because there is rarely enough information for a new user to get going, I decided it might be worth putting together a howto type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages, they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with. This can be an application – for example, Outlook or Thunderbird would be mail clients, and IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing: if you go to twitter.com and log in, you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or, if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

[screenshot: the freenode website]

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

[screenshot: the freenode webchat login form]

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank.

First off there’s the Nickname. This can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short, and so long as nobody else is already using it you should be fine; if it doesn’t work, try another.

Channels is the awkward one: how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there, and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box.

Now all you need to do is type in the captcha, ignore the tick boxes and click Connect, and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. the topic for the channel) and, depending on how busy it is, anything from nothing to a fast scrolling screen of text.

[screenshot: the #phph channel in webchat]

If you’ve mistyped there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, which is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.
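
As a small taster of those commands for next time, the other one you’re most likely to want straight away is the one to enter a channel (it works in the webchat too; #phph is just the example from above):

/join #phph

/join drops you into a channel, creating it on the fly if it doesn’t already exist, which is exactly how the mistyping mishap mentioned earlier happens.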

The post Beginning irc appeared first on Linuxlore.

by Paul Tansom at June 12, 2014 04:27 PM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised; a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club. I wish Tina Booth a swift recovery from her ill health.

The three stand-out albums, in no particular order, for me were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also, Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen: hard going at first, but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form, and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985, but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zélande, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind: Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils’ work stored on a network drive so they always have access whichever machine they sit at, this isn’t exactly helpful. It also isn’t ideal that they can explore the C: drive in spite of profile restrictions (although it isn’t the end of the world, as there is little they can do from Scratch).

[screenshot: the default Scratch save dialogue]

After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

[screenshot: the original Scratch.ini]

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

[screenshot: Scratch.ini with the two extra lines]

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

[screenshot: the save dialogue restricted to the home drive]

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

[screenshot: the save dialogue showing only the visible drives]
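
For example, to let pupils see both their home drive and a shared resources drive, the pair of settings might look like this (the drive letters are just examples):

Home=U:
VisibleDrives=U:,S: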

You can do the same with a Mac (for the home drive); just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.
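
So, as a sketch, if each pupil had their own folder under a shared scratch area (this layout is hypothetical), something like:

Home=U:\scratch\*

would point each logged-on user at their own folder.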

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven’t tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]

The post Scratch in a network environment appeared first on Linuxlore.

by Paul Tansom at December 01, 2013 07:00 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn’t found by the printers dialog in system settings. I thought I’d done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane bcast host lmhosts wins (having host and wins, neither of which I’m using, first in the order cocks things up somewhat). Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!
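
For reference, that name resolve order tweak lives in the [global] section of /etc/samba/smb.conf and looks something like this (a minimal sketch; the rest of the section is omitted):

[global]
   name resolve order = bcast host lmhosts wins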

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora-specific, so it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: http://localhost:631/ in your favourite browser. It should be there as long as CUPS is installed, which it is in LinuxMint by default.

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects Gnome3, so as long as it’s not affecting Unity that will be okay, will it, Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time; it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks, weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical musical releases on CD or Vinyl online and have it delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers, or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to: because I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying, but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me, curing this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM


June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM