Planet ALUG

September 21, 2020

Jonathan McDowell

Mainline Linux on the MikroTik RB3011

I upgraded my home internet connection to fibre (FTTP) last October. I’m still on an 80M/20M service, so it’s no faster than my old VDSL FTTC connection was, and as a result for a long time I continued to use my HomeHub 5A running OpenWRT. However the FTTP ONT meant I was using up an additional ethernet port on the router, and I was already short, so I ended up with a GigE switch in use as well. Also my wifi is handled by a UniFi, which takes its power via Power-over-Ethernet. That meant I had a router, a switch and a PoE injector all in close proximity. I wanted to reduce the number of devices, and ideally upgrade to something that could scale once I decide to upgrade my FTTP service speed.

Looking around I found the MikroTik RB3011UiAS-RM, which is a rack mountable device with 10 GigE ports (plus an SFP slot) and a dual core Qualcomm IPQ8064 ARM powering it. There’s 1G RAM and 128MB NAND flash, as well as a USB3 port. It also has PoE support. On paper it seemed like an ideal device. I wasn’t particularly interested in running RouterOS on it (the provided software), but that’s based on Linux and there was some work going on within OpenWRT to add support, so it seemed like a worthwhile platform to experiment with (what, you expected this to be about me buying an off the shelf device and using it with only the supplied software?). As an added bonus a friend said he had one he wasn’t using, and was happy to sell it to me for a bargain price.

RB3011 router in use

I did try out RouterOS to start with, but I didn’t find it particularly compelling. I’m comfortable configuring firewalling and routing at a Linux command line, and I run some additional services on the router like my MQTT broker, and mqtt-arp, my wifi device presence monitor. I could move things around such that they ran on the house server, but I consider them core services and as a result am happier with them on the router.

The first step was to get something booting on the router. Luckily it has an RJ45 serial console port on the back, and a reasonably featured bootloader that can manage to boot via tftp over the network. It wants an ELF binary rather than a plain kernel, but Sergey Sergeev had done the hard work of getting u-boot working for the IPQ8064, which meant I could just build normal u-boot images to try out.

Linux upstream already had basic support for a lot of the pieces I was interested in. There’s a slight fudge around AUTO_ZRELADDR because the network coprocessors want a chunk of memory at the start of RAM, but there are ongoing discussions about how to handle this cleanly that I’m hopeful will eventually mean I can drop that hack. Serial, ethernet, the QCA8337 switches (2 sets of 5 ports, tied to different GigE devices on the processor) and the internal NOR all had drivers, so it was a matter of crafting an appropriate DTB to get them working. That left a few niggles.

First, the second switch is hooked up via SGMII. It turned out the IPQ806x stmmac driver didn’t initialise the clocks in this mode correctly, and neither did the qca8k switch driver. So I needed to fix up both of those (Sergey had handled the stmmac driver, so I just had to clean up and submit his patch). Next it turned out the driver for talking to the Qualcomm firmware (SCM) had been updated in a way that broke the old method needed on the IPQ8064. Some git archaeology figured that one out and provided a solution. Ansuel Smith helpfully provided the DWC3 PHY driver for the USB port. That got me to the point I could put a Debian armhf image onto a USB stick and mount that as root, which made debugging much easier.

At this point I started to play with configuring up the device to actually act as a router. I make use of a number of VLANs on my home network, so I wanted to make sure I could support those. Turned out the stmmac driver wasn’t happy reconfiguring its MTU because the IPQ8064 driver doesn’t configure the FIFO sizes. I found what seem to be the correct values and plumbed them in. Then the qca8k driver only supported port bridging. I wanted the ability to have a trunk port to connect to the upstairs switch, while also having ports that only had a single VLAN for local devices. And I wanted the switch to handle this rather than requiring the CPU to bridge the traffic. Thankfully it’s easy to find a copy of the QCA8337 datasheet and the kernel Distributed Switch Architecture is pretty flexible, so I was able to implement the necessary support.
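To give a flavour of what that enables (the interface names and VLAN IDs below are invented for the example rather than my actual layout), with DSA the switch ports show up as ordinary network interfaces that can be added to a VLAN-aware Linux bridge:

# Sketch only: port names and VLAN IDs are made up for illustration
ip link add name br0 type bridge vlan_filtering 1
ip link set br0 up

# Trunk port to the upstairs switch, carrying VLANs 10 and 20 tagged
ip link set lan1 master br0
bridge vlan add dev lan1 vid 10
bridge vlan add dev lan1 vid 20

# Access port for a local device, untagged on VLAN 10 only
ip link set lan2 master br0
bridge vlan add dev lan2 vid 10 pvid untagged

With the switch driver implementing the relevant DSA operations, those bridge VLAN entries get pushed down into the QCA8337 itself, so traffic between the switch ports never has to be bridged by the CPU.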

I stuck with Debian on the USB stick for actually putting the device into production. It makes it easier to fix things up if necessary, and the USB stick allows for a full Debian install which would be tricky on the 128M of internal NAND. That means I can use things like nftables for my firewalling, and use the standard Debian packages for things like collectd and mosquitto. Plus for debug I can fire up things like tcpdump or tshark. Which ended up being useful because when I put the device into production I started having weird IPv6 issues that turned out to be a lack of proper Ethernet multicast filter support in the IPQ806x ethernet device. The driver would try and set up the multicast filter for the IPv6 NDP related packets, but it wouldn’t actually work. The fix was to fall back to just receiving all multicast packets - this is what the vendor driver does.
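As a rough sketch of what that looks like (interface names and ports here are placeholders rather than my actual ruleset), a skeletal nftables configuration for a router like this can be loaded from the shell:

# Minimal illustrative ruleset only - "wan0" and "lan0" are made-up names
nft -f - <<'EOF'
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iifname "lo" accept
        ip protocol icmp accept
        ip6 nexthdr icmpv6 accept                      # keep NDP/ICMPv6 working
        iifname "lan0" tcp dport { 22, 1883 } accept   # ssh and MQTT from the LAN
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept
        iifname "lan0" oifname "wan0" accept           # LAN out to the internet
    }
}
EOF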

Most of this work will be present once the 5.9 kernel is released - the basics are already in 5.8. The pieces I can think of that aren’t yet queued up are the following:

Overall I consider the device a success, and it’s been entertaining getting it working properly. I’m running a mostly mainline kernel, it’s handling my house traffic without breaking a sweat, and the fact it’s running Debian makes it nice and easy to throw more things on it as I desire. However it turned out the RB3011 isn’t as perfect a device as I’d hoped. The PoE support is passive, and the UniFi wants 802.3af. So I was going to end up with 2 devices. As it happened I picked up a cheap D-Link DGS-1210-10P switch, which provides the PoE support as well as some additional switch ports. Plus it runs Linux, so more on that later…

September 21, 2020 05:53 PM

September 14, 2020

Jonathan McDowell

onak 0.6.1 released

Yesterday I did the first release of my OpenPGP compatible keyserver, onak, in 4 years. Actually, 2 releases because I discovered my detection for various versions of libnettle needed some fixing.

It was largely driven by the need to get an updated package sorted for Debian due to the removal of dh-systemd, but it should have come sooner. This release has a number of clean-ups for dealing with the hostility shown to the keyserver network in recent years. In particular it implements some of dkg’s Abuse-Resistant OpenPGP Keystores, and finally adds support for verifying signatures fully. That opens up the ability to run a keyserver that will only allow verifiable updates to keys. This won’t suit folk who want to run PGP based systems for the anonymity, but for those of us who think PGP’s strength is in the web of trust it’s pretty handy. And it’s all configurable to taste; you can turn off all the verification if you want, or verify everything but not require any signatures, or even enable v3 keys if you feel like it.

The main reason this release didn’t come sooner is that I’m painfully aware of the bits that are missing. In particular:

Anyway. Available locally or via GitHub.

0.6.0 - 13th September 2020

0.6.1 - 13th September 2020

September 14, 2020 06:09 PM

September 11, 2020

Mick Morgan

laws are made to be broken

We here in England must now limit our social interaction to the “rule of six”. Our dear, delusional, demented, certifiable, mendacious, law breaking Prime Minister says that I cannot meet in my home with more than six people – despite the fact that I can go to the Gym, or a Supermarket, or a Pub, or a Restaurant which will be inside, and with more than six people present.

Well, guess what, my grandchildren come to dinner with my wife and I and visit fairly regularly. They love their Nana. Together with their parents that makes seven people in my home.

So, I tell you now Prime Minister, I fully intend to break the law in a very “specific and limited way” (TM).

by Mick at September 11, 2020 03:52 PM

September 10, 2020

Daniel Silverstone (Kinnison)

Broccoli Sync Conversation

Broccoli Sync Conversation

A number of days ago (I know, I'm an awful human who failed to post this for over a week), myself, Lars, Mark, and Vince discussed Dropbox's article about Broccoli Sync. It wasn't quite what we'd expected but it was an interesting discussion of compression and streamed data.

Vince observed that it was interesting in that it was a way to move storage compression cost to the client edge. This makes sense because decompression (to verify the uploaded content) is cheaper than compression; and also since CPU and bandwidth are expensive, spending the client CPU to reduce bandwidth is worthwhile.

Lars talked about how even in situations where everyone has gigabit data connectivity with no limit on total transit, bandwidth/time is a concern, so it makes sense.

We liked how they determined the right compression level to use the available bandwidth (i.e. not be CPU throttled) while also gaining the most compression possible. Their diagram showing relative compression sizes for level 1 vs. 3 vs. 5 suggests it’s worth putting the effort in for 5 rather than 1. It's interesting in that diagram that 'documents' don't compress well, but then again it is notable that such documents are likely DEFLATE'd zip files. Basically if the data is already compressed then there's little hope Brotli will gain much.
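If you want a feel for this yourself, the brotli command line tool (assuming you have it installed; the file names here are arbitrary) makes it easy to compare quality levels, and to see how little already-compressed input shrinks:

# Compare Brotli quality levels on some sample text
brotli -q 1 -k -o sample.q1.br sample.txt
brotli -q 5 -k -o sample.q5.br sample.txt
brotli -q 11 -k -o sample.q11.br sample.txt
ls -l sample.txt sample.q*.br

# Already-compressed input, e.g. a zip file, barely shrinks at any level
brotli -q 11 -k -o archive.zip.br archive.zip
ls -l archive.zip archive.zip.br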

I raised that it was interesting that they chose Brotli, in part, due to the availability of a pure Rust implementation of Brotli. Lars mentioned that Microsoft and others talk about how huge quantities of C code have unexpected memory safety issues and so perhaps that is related. Daniel mentioned that the document talked about Dropbox having a policy of not running unconstrained C code, which was interesting.

Vince noted that their approach to coping with crasher errors, as described in the deployment challenges, seemed like a very poor general strategy; but Daniel pointed out that it might be an over-simplified description, and Mark suggested that it might be sufficient until a fix can be pushed out. Vince agreed that it's plausible this is a tiered/sharded deployment process and thus a good way to smoke out problems.

Daniel found it interesting that their block storage sounds remarkably like every other content-addressable storage and that while they make it clear in the article that encryption, client identification etc are elided, it looks like they might be able to deduplicate between theoretically hostile clients.

We think that the compressed-data plus type plus hash (which we assume also contains length) is an interesting and nice approach to durability and integrity validation in the protocol. And the compressed blocks can then be passed to the storage backend quickly and effectively which is nice for latency.

Daniel raised that he thought it was fun that their rust-brotli library is still workable on Rust 1.12 which is really quite old.

We ended up on a number of tangential discussions, about Rust, about deployment strategies, and so on. While the article itself was a little thin, we certainly had a lot of good chatting around topics it raised.

We'll meet again in a month (on the 28th Sept) so perhaps we'll have a chunkier article next time. (Possibly this and/or related articles)

by Daniel Silverstone at September 10, 2020 07:49 AM

August 31, 2020

Chris Lamb

Free software activities in August 2020

Here is another monthly update covering what I have been doing in the free software world during August 2020 (previous month):

I uploaded Lintian versions 2.86.0, 2.87.0, 2.88.0, 2.89.0, 2.90.0, 2.91.0 and 2.92.0, as well as made the following changes:


§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


diffoscope

I made the following changes to diffoscope, including preparing and uploading versions 155, 156, 157 and 158 to Debian:


§


Debian

Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

You can find out more about the project via the following video:


§


Uploads to Debian

August 31, 2020 08:05 PM

August 06, 2020

Chris Lamb

The Bringers of Beethoven

This is a curiously poignant work to me that I doubt I will ever be able to communicate. I found it about fifteen years ago, along with a friend who I am quite regrettably no longer in regular contact with, so there was some deeply complicated nostalgia entangled with rediscovering it today.

So what might I say about it instead? For me, one tell-tale sign of 'good' art is that you can find something new in it — or yourself — each time. In this sense, despite The Bringers of Beethoven being more than a little ridiculous, it is somehow 'good' music to me. For instance, it only really dawned on me today that the whole poem is an allegory for a GDR-like totalitarianism.

But I also realised that it is not an accident that it's Beethoven himself (quite literally the soundtrack for Enlightenment humanism) who is being weaponised here, rather than some fourth-rate composer of military marches or even one with an already-problematic past. That is to say, not only is the poem arguing that something generally recognised as an unalloyed good can be subverted for propagandistic ends, but that is precisely the point being made by the poem's regime. An inverted Clockwork Orange, if you will.

Yet when I listen to it again, I can't help but laugh. I think of the 18th-century poet Alexander Pope, who first used the word bathos to refer to those abrupt and often absurd transitions from the elevated to the ordinary, in contrast to the concept of pathos, the sincere feeling of sadness and tragedy. Today, I can't think of two better words.

August 06, 2020 09:48 PM

June 06, 2020

Mick Morgan

encrypting DNS on android

My previous two posts discussed the need for encrypted DNS and then how to do it on a linux desktop. I do not have any Microsoft systems so I have no idea how to approach the problem if you use any form of Windows OS, nor do I have any Apple devices so I can’t provide advice for iOS either, but I do use Android devices. All my mobile phones for some time have been re-flashed to run a version of Android’s OS software. My current mobile (and two previous ones) run lineageOS, earlier phones used the (now discontinued) cyanogenmod ROM. I dislike google’s cavalier approach to user privacy, but I do like the flexibility and usability provided by modern smartphones. Apple devices strike me as hugely overpriced and I have never been convinced by the “oooh shiny” adoration shown by some of Apple’s hardened admirers. But each to their own.

I tend to buy mid-range smartphones, SIM free and unlocked, costing around the £100-£200 mark or higher specification models second hand. When choosing a new phone I always check whether it will support a current version of lineageos (preferably the latest which is at 17.1, or Android 10). In fact the only reason I have to upgrade my phone is if it becomes unsupported by the OS of my choice. Of course, this happens with mainstream Android on many devices simply because the Telco providing the service stops providing OTA upgrades to the OS (they want you to buy a new “shiny, shiny” and sign a new contract). Fortunately for people like me this often means that we can then get fairly well specced phones second hand and then re-flash them with our preferred OS.

All that aside, prior to Android 9, it was quite difficult to reliably change the DNS settings on such a device. You could, of course, have used a VPN app, but several of those actually leak DNS and there was no single central setting which enabled the user to change DNS settings for both WiFi and mobile data. Of course, changing the DNS settings on your home DHCP server (often your WiFi router) would help, but that still left you exposed to the default settings offered by your mobile provider or the WiFi hotspot in your local cafe when out and about. Several apps are (still) available to allow users to change DNS settings but some of those need the phone to be rooted, and this sort of change is often beyond the average user. A search for “DNS changer apps for android” will give you several lists of such apps if you are stuck on a device with an android version lower than 9. However, bear in mind that whilst these apps may allow you to change the DNS away from the defaults provided by your ISP or Telco, they will not offer any form of encryption, and many of the apps provide default settings to services which are less than privacy conscious – google on 8.8.8.8 is a common default.

To their credit, in Android 9, Google introduced a “private DNS” option to the central settings. And of course, lineageos 16 and later offers the same functionality. Here’s how:

1. Open “settings”

2. Select “Network & Internet”

3. Select “Advanced” then “Private DNS”

4. On the pop-up screen change “Off” (or “Automatic”) to “Private DNS Provider hostname” and add your preferred provider.

5. Save and exit.

The DNS provider I chose is “anycast.censurfridns.dk” – a service provided by UncensoredDNS. The owner of that system has this to say about his service.

UncensoredDNS was started in November 2009 shortly after I began working at an ISP where one of my responsibilities was to administer the censored DNS servers that all Danish ISPs run for their customers.

I had never been a fan of the Danish DNS censorship system, and working with it first hand didn’t exactly help. At the time OpenDNS was kind of new and Google DNS didn’t exist yet. Friends and family were coming to me asking for a recommendation for an alternative to the censored ISP DNS servers.

Back then OpenDNS did NXDOMAIN redirection, an advertising trick where misspelled nonexistant domains are redirected to a search page with ads instead of returning an NXDOMAIN error. To a DNS purist like myself this made OpenDNS as appealing as a turd on a stick. They also didn’t have ipv6 or DNSSEC, both of which were mandatory on a modern DNS server, even back in 2009.

Even if Google DNS had existed back then I wouldn’t have felt comfortable recommending them either. While they do run a stable, fast and uncensored service, I am not convinced that it is a good idea to hand over all your DNS lookups to Google. They already have all your searches, you might have one of their phones in your pocket, and so on.

All this prompted me to do something about the situation so I started UncensoredDNS with help from friends. After a while I also started giving talks about the service at various conferences, since people naturally have an easier time trusting a service if they know who is behind it. You can see some of my old talks online, for example from RIPE65.

You could of course choose any provider offering DoT services and you can find suggestions on both the dnsprivacy and privacytools sites. Whichever provider you choose, I recommend you stay away from any which log you, or which have less than satisfactory privacy policies (I’m looking at you google).
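Before pointing a phone at a provider it's worth confirming from a desktop that the service really does answer DNS-over-TLS on port 853. kdig (from the knot-dnsutils package on Debian-ish systems) can do that; for example, against the provider mentioned above:

# +tls makes kdig query over TLS, defaulting to port 853
kdig @anycast.censurfridns.dk +tls example.com A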

One final point, for users of Apple’s iphone, the privacytools site recommends an encrypted DNS client for iOS called “DNSCloak”. I have no personal experience of that client, but I am impressed overall by the guys at privacytools.

by Mick at June 06, 2020 03:56 PM

May 12, 2020

Daniel Silverstone (Kinnison)

The Lars, Mark, and Daniel Club

Last night, Lars, Mark, and I discussed Jeremy Kun's The communicative value of using Git well post. While a lot of our discussion was spawned by the article, we did go off-piste a little, and I hope that my notes below will enlighten you all as to a bit of how we see revision control these days. It was remarkably pleasant to read an article where the comments section wasn't a cesspool of horror, so if this posting encourages you to go and read the article, don't stop when you reach the bottom -- the comments are good and useful too.


This was a fairly non-contentious article for us though each of us had points we wished to bring up and chat about it turned into a very convivial chat. We saw the main thrust of the article as being about using the metadata of revision control to communicate intent, process, and decision making. We agreed that it must be possible to do so effectively with Mercurial (thus deciding that the mention of it was simply a bit of clickbait / red herring) and indeed Mark figured that he was doing at least some of this kind of thing way back with CVS.

We all discussed how knowing the fundamentals of Git's data model improved our ability to work with the tool. Lars and I mentioned how jarring it had originally been to come to Git from revision control systems such as Bazaar (bzr) but how over time we came to appreciate Git for what it is. For Mark this was less painful because he came to Git early enough that there was little more than the fundamental data model, without much of the porcelain which now exists.

One point which we all, though Mark in particular, felt was worth considering was that of distinguishing between published and unpublished changes. The article touches on it a little, but one of the benefits of the workflow which Jeremy espouses is that of using the revision control system as an integral part of the review pipeline. This is perhaps done very well with Git based workflows, but can be done with other VCSs.

With respect to the points Jeremy makes regarding making commits which are good for reviewing, we had a general agreement that things were good and sensible, to a point, but that some things were missed out on. For example, I raised that commit messages often need to be more thorough than one-liners, but Jeremy's examples (perhaps through expedience for the article?) were all pretty trite one-liners which perhaps would have been entirely obvious from the commit content. Jeremy makes the point that large changes are hard to review, and Lars pointed out that Greg Wilson did research in this area, and at least one article mentions 200 lines as a maximum size of a reviewable segment.

I had a brief warble at this point about how reviewing needs to be able to consider the whole of the intended change (i.e. a diff from base to tip) not just individual commits, which is also missing from Jeremy's article, but that such a review does not need to necessarily be thorough and detailed since the commit-by-commit review remains necessary. I use that high level diff as a way to get a feel for the full shape of the intended change, a look at the end-game if you will, before I read the story of how someone got to it. As an aside at this point, I talked about how Jeremy included a 'style fixes' commit in his example, but I loathe seeing such things and would much rather it was either not in the series because it's unrelated to it; or else the style fixes were folded into the commits they were related to.

We discussed how style rules, commit-bisectability, and the other rules which may exist for a codebase (adherence to which would form part of the checks a code reviewer performs) are there to be held to when they help the project, and to be broken when they get in the way of good communication between humans.

In this, Lars talked about how revision control histories provide valuable high-level communication between developers. Communication between humans is fraught with error and the rules are not always clear on what will work and what won't, since this depends on the communicators, the context, etc. However whatever communication rules are in place should be followed. We often say that it takes two people to communicate information, but when you're writing commit messages or arranging your commit history, the second party is often a nebulous "other", and so the code reviewer fulfils that role to concretise it for the purpose of communication.

At this point, I wondered a little about what value there might be (if any) in maintaining the metachanges (pull request info, mailing list discussions, etc) for historical context of decision making. Mark suggested that this is useful for design decisions etc but not for the style/correctness discussions which often form a large section of review feedback. Maybe some of the metachange tracking is done automatically by the review causing the augmentation of the changes (e.g. by comments, or inclusion of design documentation changes) to explain why changes are made.

We discussed how the "rebase always vs. rebase never" feeling flip-flopped in us for many years until, as an example, what finally won Lars over was that he wants the history of the project to tell the story, in the git commits, of how the software has changed and evolved in an intentional manner. Lars said that he doesn't care about the meanderings, but rather about a clear story which can be followed and understood.

I described this as the switch from "the revision history is about what I did to achieve the goal" to being more "the revision history is how I would hope someone else would have done this". Mark further refined that to "The revision history of a project tells the story of how the project, as a whole, chose to perform its sequence of evolution."

We discussed how project history must necessarily then contain issue tracking, mailing list discussions, wikis, etc. There exist free software projects where part of their history is forever lost because, for example, the project moved from Sourceforge to Github, but made no effort (or was unable) to migrate issues or somesuch. Linkages between changes and the issues they relate to can easily be broken, though at least with mailing lists you can often rewrite URLs if you have something consistent like a Message-Id.

We talked about how cover notes, pull request messages, etc. can thus also be lost to some extent. Is this an argument to always use merges whose message bodies contain those details, rather than always fast-forwarding? Or is it a reason to encapsulate all those discussions into git objects which can be forever associated with the changes in the DAG?
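As a sketch of the first of those options (the branch name is invented for the example), a non-fast-forward merge keeps the cover note in the history itself:

# Create an explicit merge commit and paste the cover note / request
# description into its message body, rather than fast-forwarding it away
git checkout main
git merge --no-ff --edit feature/frobnicate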

We then diverted into discussion of CI, testing every commit, and the benefits and limitations of automated testing vs. manual testing; though I think that's a little too off-piste for even this summary. We also talked about how commit message audiences perhaps now include software too, with the recent movement toward conventional commits, and how, with respect to commit messages for machine readability, it can become very complex/tricky to craft good commit messages once there are multiple disparate audiences. For projects the size of the Linux kernel this kind of thing would be nearly impossible, but for smaller projects, perhaps there's value.

Finally, we all agreed that we liked the quote at the end of the article, and so I'd like to close out by repeating it for you all...

Hal Abelson famously said:

Programs must be written for people to read, and only incidentally for machines to execute.

Jeremy agrees, as do we, and extends that to the metacommit information as well.

by Daniel Silverstone at May 12, 2020 07:00 AM

November 13, 2019

Steve Engledow (stilvoid)

Maur - A minimal AUR helper

This post is about the Arch User Repository. If you’re not an Arch user, probably just move along ;)

There are lots of AUR helpers in existence already but, in the best traditions of open source, none of them work exactly how I want an AUR helper to work, so I created a new one.

Here it is: https://github.com/stilvoid/maur

maur (pronounced like “more”) is tiny. At the time of writing, it’s 49 lines of bash. It also has very few features.

Here is the list of features:

The “help” when installing a package is this, and nothing more:

If you think maur needs more features, use a different AUR helper.

If you find bugs, please submit an issue or, even better, a pull request.

Example usage

Searching the AUR

If you want to search for a package in the AUR, you can grep for it ;)

maur | grep maur

Installing a package

If you want to install a package, for example yay:

maur yay

Upgrading a package

Upgrading a package is the same as installing one. This will upgrade maur:

maur maur

by Steve Engledow at November 13, 2019 12:00 AM

April 06, 2019

Richard Lewis

e-Research on Texts and Images

I went to a colloquium on e-Research on Texts and Images at the British Academy yesterday; very, very swanky. Lunch was served on triangular plates, triangular! Big chandeliers, paintings, grand staircase. Well worth investigating for post-doc fellowships one day.

There were also some good papers. Just one or two things that really stuck out for me. There seems to be quite a lot of interest in e-research now around formalising, encoding, and analysing scholarly process. The motivation seems to be that, in order to design software tools to aid scholarship, it's necessary to identify what scholarly processes are engaged in and how they may be re-figured in software manifestations. This is the same direction that my research has been taking, and relates closely to the study of tacit knowledge in which Purcell Plus is engaged.

Ségoléne Tarte presented a very useful diagram in her talk explaining why this line of investigation is important. It showed a continuum of activity which started with "signal" and ended with "meaning". Running along one side of this continuum were the scholarly activities and conceptions that occur as raw primary sources are interpreted, and along the other were the computational processes which may aid these human activities. Her particular version of this continuum was describing the interpretation of images of Roman writing tablets, so the kinds of activities described included identification of marks, characters, and words, and boundary and shape detection in images. She described some of the common aspects of this process, including: oscillation of activity and understanding; dealing with noise; phase congruency; and identifying features (a term which has become burdened with assumed meaning but which should also be considered at its most general sometimes). But I'm sure the idea extends to other humanities disciplines and other kinds of "signal" or primary sources.

Similarly, Melissa Terras talked about her work on knowledge elicitation from expert papyrologists. This included various techniques (drawn from social science and clinical psychology) such as talk-aloud protocols and concept sorting. She was able to show nice graphs of how an expert's understanding of a particular source switches between different levels continuously during the process of working with it. It's this cyclical, dynamic process of coming to understand an artifact which we're attempting to capture and encode with a view to potentially providing decision support tools whose design is informed by this encoded procedure.

A few other odd notes I made. David DeRoure talked about the importance of social science methods in e-Humanities. Amongst other things, he also made an interesting point that it's probably a better investment to teach scholars and researchers about understanding data (representation, manipulation, management) than it is to buy lots of expensive and powerful hardware. Annamaria Carusi said lots of interesting things which I'm annoyed with myself for not having written down properly. (There was something about warning of the non-neutrality of abstractions; interpretation as arriving at a hypothesis, and how this potentially aligns humanistic work with scientific method; and how use of technologies can make some things very easy, but at the expense of making other things very hard.)

April 06, 2019 09:04 PM

New baby, new house, new job

A great deal of time has passed since I last wrote a blog post. During that time my partner and I have had a baby (who's now 20 months old) and bought a house, I've started a new job, finished that new job, and started another new job.

The first new job was working for an open source consultancy firm called credativ which is based in Rugby but which, at the time I started, had recently opened a London office. Broadly, they consult on open source software for business. In practice most of the work is using OpenERP, an open source enterprise resource planning (ERP) system written in Python. I was very critical of OpenERP when I started, but I guess this was partly because my unfamiliarity with it led to me often feeling like a n00b programmer again and this was quite frustrating. By the time I finished at credativ I'd learned to understand how to deal with this quite large software system and I now have a better understanding of its real deficiencies: code quality in the core system is generally quite poor, although it has a decent test suite and is consequently functionally fairly sound, the code is scrappy and often quite poorly designed; the documentation is lacking and not very organised; its authors, I find, don't have a sense of what developers who are new to the framework actually need to know. I also found that, during the course of my employment, it took a long time to gain experience of the system from a user's perspective (because I had to spend time doing development work with it); I think earlier user experience would have helped me to understand it sooner. Apart from those things, it seems like a fairly good ERP. Although one other thing I learned working with it (and with business clients in general) is the importance of domain knowledge: OpenERP is about business applications (accounting, customer relations, sales, manufacture) and, it turns out, I don't know anything about any of these things. That makes trying to understand software designed to solve those problems doubly hard. (In all my previous programming experience, I've been working in domains that are much more familiar.)

As well as OpenERP, I've also learned quite a lot about the IT services industry and about having a proper job in general. Really, this was the first proper job I've ever had; I've earned money for years, but always in slightly off-the-beaten-track ways. I've found that team working skills (that great CV cliché) are actually not one of my strong points; I had to learn to ask for help with things, and to share responsibilities with my colleagues. I've learned a lot about customers. It's a very different environment where a lot of your work is reactive; I've previously been used to long projects where the direction is largely self-determined. A lot of the work was making small changes requested by customers. In such cases it's so important to push them to articulate as clearly as possible what they are actually trying to achieve; too often customers will describe a requirement at the wrong level of detail, that is, they'll describe a technical level change. What's much better is if you can get them to describe the business process they are trying to implement so you can be sure the technical change they want is appropriate or specify something better. I've learned quite a bit about managing my time and being productive. We undertook a lot of fixed-price work, where we were required to estimate the cost of the work beforehand. This involves really knowing how long things take which is quite a skill. We also needed to be able to account for all our working time in order to manage costs and stick within budgets for projects. So I learned some more org-mode tricks for managing effort estimates and for keeping more detailed time logs.

My new new job is working back at Goldsmiths again, with mostly the same colleagues. We're working on an AHRC-funded project called Transforming Musicology. We have partners at Queen Mary, the Centre for e-Research at Oxford, Oxford Music Faculty, and the Lancaster Institute for Contemporary Arts. The broad aim of the project can be understood as the practical follow-on from Purcell Plus: how does the current culture of pervasive networked computing affect what it means to study music and how music gets studied? We're looking for evidence of people using computers to do things which we would understand as musicology, even though they may not. We're also looking at how computers can be integrated into the traditional discipline. And we're working on extending some existing tools for music and sound analysis, and developing frameworks for making music resources available on the Semantic Web. My role is as project manager. I started work at the beginning of October so we've done four days so far. It's mainly been setting up infrastructure (website, wiki, mailing list) and trying to get a good high-level picture of how the two years should progress.

I've also moved my blog from livejournal to here which I manage using Ikiwiki. Livejournal is great; I just liked the idea of publishing my blog using Ikiwiki, writing it in Emacs, and managing it using git. Let's see if I stick to it...

April 06, 2019 09:04 PM

February 12, 2019

Steve Engledow (stilvoid)

Using Git with AWS CodeCommit Across Multiple AWS Accounts

(Cross-posted from the AWS DevOps blog)

I use AWS CodeCommit to host all of my private Git repositories. My repositories are split across several AWS accounts for different purposes: personal projects, internal projects at work, and customer projects.

The CodeCommit documentation shows you how to configure and clone a repository from one place, but in this blog post I want to share how I manage my Git configuration across multiple AWS accounts.

Background

First, I have profiles configured for each of my AWS environments. I connect to some of them using IAM user credentials and others by using cross-account roles.

I intentionally do not have any credentials associated with the default profile. That way I must always be sure I have selected a profile before I run any AWS CLI commands.

Here’s an anonymized copy of my ~/.aws/config file:

[profile personal]
region = eu-west-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile work]
region = us-east-1
aws_access_key_id = ABCDEFGHIJKLMNOPQRST
aws_secret_access_key = uvwxyz0123456789abcdefghijklmnopqrstuvwx

[profile customer]
region = eu-west-2
source_profile = work
role_arn = arn:aws:iam::123456789012:role/CrossAccountPowerUser

If I am doing some work in one of those accounts, I run export AWS_PROFILE=work and use the AWS CLI as normal.
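For example, a quick way to double-check which account and role a profile actually resolves to is to ask STS:

    export AWS_PROFILE=work
    # Prints the account ID and the IAM user/role ARN this profile resolves to
    aws sts get-caller-identity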

The problem

I use the Git credential helper so that the Git client works seamlessly with CodeCommit. However, because I use different profiles for different repositories, my use case is a little more complex than the average.

In general, to use the credential helper, all you need to do is place the following options into your ~/.gitconfig file, like this:

[credential]
    helper = !aws codecommit credential-helper $@
    UseHttpPath = true

I could make this work across accounts by setting the appropriate value for AWS_PROFILE before I use Git in a repository, but there is a much neater way to deal with this situation using a feature released in Git version 2.13, conditional includes.

A solution

First, I separate my work into different folders. My ~/code/ directory looks like this:

code
    personal
        repo1
        repo2
    work
        repo3
        repo4
    customer
        repo5
        repo6

Using this layout, each folder that is directly underneath the code folder has different requirements in terms of configuration for use with CodeCommit.

Solving this has two parts; first, I create a .gitconfig file in each of the three folder locations. The .gitconfig files contain any customization (specifically, configuration for the credential helper) that I want in place while I work on projects in those folders.

For example:

[user]
    # Use a custom email address
    email = sengledo@amazon.co.uk

[credential]
    # Note the use of the --profile switch
    helper = !aws --profile work codecommit credential-helper $@
    UseHttpPath = true

I also make sure to specify the AWS CLI profile to use in the .gitconfig file which means that, when I am working in the folder, I don’t need to set AWS_PROFILE before I run git push, etc.

Secondly, to make use of these folder-level .gitconfig files, I need to reference them in my global Git configuration at ~/.gitconfig

This is done through the includeIf section. For example:

[includeIf "gitdir:~/code/personal/"]
    path = ~/code/personal/.gitconfig

This example specifies that if I am working with a Git repository that is located anywhere under ~/code/personal/, Git should load additional configuration from ~/code/personal/.gitconfig. That additional file specifies the appropriate credential helper invocation with the corresponding AWS CLI profile selected as detailed earlier.

The contents of the new file are treated as if they are inserted into the main .gitconfig file at the location of the includeIf section. This means that the included configuration will only override any configuration specified earlier in the config.
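If you want to verify that the right configuration is being picked up in a particular repository, you can ask Git where a value came from (the path below follows the example layout above):

    cd ~/code/work/repo3
    # Shows which file supplied the value; expect the folder-level .gitconfig here
    git config --show-origin --get credential.helper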

by Steve Engledow at February 12, 2019 12:00 AM

June 07, 2018

Brett Parker (iDunno)

The Psion Gemini

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship, not bad for an indiegogo project! Out of the box, I flashed it, using the non-approved linux flashing tool at that time, and failed to backup the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey) - after a few more hours / days of playing around I got the IMEI number back in to the Gemini and put back on the stock android image. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too, turns out the mac addresses for those are also stored in the nvram (doh!), that's now mostly working through a bit of collaboration with another Gemini owner, my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll have a mac address collision, probably.

Overall, it's not a bad machine, the keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call, but not great until you're on a call, and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full time phone. It is however really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you, the keyboard is better than using the on screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes, I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

by Brett Parker (iDunno@sommitrealweird.co.uk) at June 07, 2018 01:04 PM

February 21, 2018

MJ Ray

How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

by mjr at February 21, 2018 04:14 PM

March 01, 2017

Brett Parker (iDunno)

Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6 only connectivity and the file storage is NFS over a private v4 network. The proxy will happily redirect requests to either http or https to the Pi, but (without turning on the Proxy Protocol) this results in the proxy servers' addresses appearing as the remote address in your logs, which is not entirely useful.

I've cheated a bit, because turning on ProxyProtocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).

So, first step first, we get our RPi and we make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyways, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. With apache2 I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, now we deploy haproxy in front of it; the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy protocol'd setup from the proxy servers, and you can still talk directly to the Pi over ipv6. You're not yet logging the right remote ips, but we're a step closer. Next enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed, you'll probably want to tweak it a bit; I have some magic extra files that I use, and I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

www.srwpi.hostedpi.com srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: just edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:443 and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 06:35 PM

October 18, 2016

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

by mjr at October 18, 2016 04:28 AM

March 09, 2015

Ben Francis

Pinned Apps – An App Model for the Web

(re-posted from a page I created on the Mozilla wiki on 17th December 2014)

Problem Statement

The per-OS app store model has resulted in a market where a small number of OS companies have a large amount of control, limiting choice for users and app developers. In order to get things done on mobile devices users are restricted to using apps from a single app store which have to be downloaded and installed on a compatible device in order to be useful.

Design Concept

Concept Overview

The idea of pinned apps is to turn the apps model on its head by making apps something you discover simply by searching and browsing the web. Web apps do not have to be installed in order to be useful, “pinning” is an optional step where the user can choose to split an app off from the rest of the web to persist it on their device and use it separately from the browser.

Pinned_apps_overview

”If you think of the current app store experience as consumers going to a grocery store to buy packaged goods off a shelf, the web is more like a hunter-gatherer exploring a forest and discovering new tools and supplies along their journey.”

App Discovery

A Web App Manifest linked from a web page says “I am part of a web app you can use separately from the browser”. Users can discover web apps simply by searching or browsing the web, and use them instantly without needing to install them first.

Pinned_apps_discovery

”App discovery could be less like shopping, and more like discovering a new piece of inventory while exploring a new level in a computer game.”

App Pinning

If the user finds a web app useful they can choose to split it off from the rest of the web to persist it on their device and use it separately from the browser. Pinned apps can provide a more app-like experience for that part of the web with no browser chrome and get their own icon on the homescreen.

Pinned_apps_pinning

”For the user pinning apps becomes like collecting pin badges for all their favourite apps, rather than cluttering their device with apps from an app store that they tried once but turned out not to be useful.”

Deep Linking

Once a pinned app is registered as managing its own part of the web (defined by URL scope), any time the user navigates to a URL within that scope, it will open in the app. This allows deep linking to a particular page inside an app and seamlessly linking from one app to another.

Pinned_apps_linking

”The browser is like a catch-all app for pages which don’t belong to a particular pinned app.”

Going Offline

Pinning an app could download its contents to the device to make it work offline, by registering a Service Worker for the app’s URL scope.

Pinned_apps_offline

“Pinned apps take pinned tabs to the next level by actually persisting an app on the device. An app pin is like an anchor point to tether a collection of web pages to a device.”

Multiple Pages

A web app is a collection of web pages dedicated to a particular task. You should be able to have multiple pages of the app open at the same time. Each app could be represented in the task manager as a collection of sheets, pinned together by the app.

Pinned_app_pages

“Exploding apps out into multiple sheets could really differentiate the Firefox OS user experience from all other mobile app platforms which are limited to one window per app.”

Travel Guide

Even in a world without app stores there would still be a need for a curated collection of content. The Marketplace could become less of a grocery store, and more of a crowdsourced travel guide for the web.

Pinned_apps_guide

“If a user discovers an app which isn’t yet included in the guide, they could be given the opportunity to submit it. The guide could be curated by the community with descriptions, ratings and tags.”

3 Questions

Pinnged_apps_pinned

What value (the importance, worth or usefulness of something) does your idea deliver?

The pinned apps concept makes web apps instantly useful by making “installation” optional. It frees users from being tied to a single app store and gives them more choice and control. It makes apps searchable and discoverable like the rest of the web and gives developers the freedom of where to host their apps and how to monetise them. It allows Mozilla to grow a catalogue of apps so large and diverse that no walled garden can compete, by leveraging its user base to discover the apps and its community to curate them.

What technological advantage will your idea deliver and why is this important?

Pinned apps would be implemented with emerging web standards like Web App Manifests and Service Workers which add new layers of functionality to the web to make it a compelling platform for mobile apps. Not just for Firefox OS, but for any user agent which implements the standards.

Why would someone invest time or pay money for this idea?

Users would benefit from a unique new web experience whilst also freeing themselves from vendor lock-in. App developers can reduce their development costs by creating one searchable and discoverable web app for multiple platforms. For Mozilla, pinned apps could leverage the unique properties of the web to differentiate Firefox OS in a way that is difficult for incumbents to follow.

UI Mockups

App Search

Pinned_apps_search

Pin App

Pin_app

Pin Page

Pin_page

Multiple Pages

Multiple_pages

App Directory

App_directory

Implementation

Web App Manifest

A manifest is linked from a web page with a link relation:

  <link rel="manifest" href="/manifest.json">

A manifest can specify an app name, icon, display mode and orientation:

 {
   "name": "GMail",
   "icons": {...},
   "display": "standalone",
   "orientation": "portrait",
   ...
 }

There is a proposal for a manifest to be able to specify an app scope:

 {
   ...
   "scope": "/"
   ...
 }

Service Worker

There is also a proposal to be able to reference a Service Worker from within the manifest:

 {
   ...
   "service_worker": {
     "src": "app.js",
     "scope": "/"
   },
   ...
 }
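Independently of that proposal, a page can also register a Service Worker directly from script. A minimal sketch (the worker filename /app.js and the origin-wide "/" scope are just illustrative assumptions):

 if ('serviceWorker' in navigator) {
   // Register the worker for the app's URL scope ("/" here), so it can
   // handle requests for every page under that scope.
   navigator.serviceWorker.register('/app.js', { scope: '/' })
     .then(function(registration) {
       console.log('service worker registered for scope', registration.scope);
     })
     .catch(function(error) {
       console.error('service worker registration failed', error);
     });
 }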

A Service Worker has an install event which can be used to populate a cache with a web app’s resources when it is registered:

 this.addEventListener('install', function(event) {
   event.waitUntil(
     caches.create('v1').then(function(cache) {
       return cache.add(
         '/index.html',
         '/style.css',
         '/script.js',
         '/favicon.ico'
       );
     }, function(error) {
       console.error('error populating cache ' + error);
     })
   );
 });

So that the app can then respond to requests for resources when offline:

 this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).catch(function() {
      return event.default();
    })
  );
 });
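It's worth noting that event.default() and the multi-argument cache.add() above come from early drafts of these proposals; with the Cache API as it eventually shipped, a cache-first fetch handler would look more like this sketch:

 this.addEventListener('fetch', function(event) {
   event.respondWith(
     // Serve a cached response if there is one, otherwise fall back to the network.
     caches.match(event.request).then(function(cached) {
       return cached || fetch(event.request);
     })
   );
 });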

by tola at March 09, 2015 03:54 PM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run up to the “Mozlandia” work week in Portland, and in reflection of the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along 😉

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyway, back on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational data and content from personally identifiable information – PII has much tighter regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh – if you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
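As a purely illustrative sketch of that split (all the repository names here are made up):

 # one repository per Django application
 apps/banking-module
 apps/marketing-site

 # one repository per deployment, holding only configuration
 deploy/banking-production      (nginx config, certificates, Django settings)
 deploy/marketing-production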

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


July 10, 2014 02:09 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support and discussion, but is largely ignored because there is rarely enough information for a new user to get going, I decided it might be worth putting together a howto-type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages; they get published, and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to’ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

freenode

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

webchat

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well, first off there’s the Nickname: this can be pretty much anything you like, with no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short, and so long as nobody else is already using it you should be fine; if it doesn’t work, try another. Channels is the awkward one: how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there, and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect, and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. the topic for the channel) and, depending on how busy it is, anything from nothing to a fast-scrolling screen of text.

phph

If you’ve mistyped, there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, which is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.
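For reference, a few other commands that most clients (including the webchat) understand once you want to go a little further – the channel and nicknames below are just placeholders:

/join #channelname – join another channel
/nick newnickname – change the name you appear as
/msg nickname message – send a private message to one person
/quit – disconnect from the server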

The post Beginning irc appeared first on Linuxlore.

by Paul Tansom at June 12, 2014 04:27 PM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of past year.

Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised; a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also, Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen, hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig-wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.
Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind: Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils’ work stored on a network drive so they always have access whichever machine they sit at, this isn’t exactly helpful. It also isn’t ideal that they can explore the C: drive in spite of profile restrictions (although it isn’t the end of the world as there is little they can do from Scratch).

save-orig

After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

ini-orig

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

ini-new

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

save-new1

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

save-new2

You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven’t tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]
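Putting those options together, a hypothetical Scratch.ini for a network where pupils’ home directories are on U: and shared Code Club resources are on S: (the drive letters and proxy details are made up for illustration) might contain:

Home=U:
VisibleDrives=U:,S:
ProxyServer=192.168.0.10
ProxyPort=8080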

The post Scratch in a network environment appeared first on Linuxlore.

by Paul Tansom at December 01, 2013 07:00 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn’t found by the printers dialog in system settings. I thought I’d done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane bcast host lmhosts wins – having host and wins (neither of which I’m using) first in the order cocks things up somewhat. Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!
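For reference, that name resolve order tweak is a single line in the [global] section of /etc/samba/smb.conf:

[global]
   name resolve order = bcast host lmhosts wins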

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora-specific, so it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why ship a new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: point your favourite browser at http://localhost:631/. It should be there as long as CUPS is installed, which it is in LinuxMint by default.

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects GNOME 3, so as long as it’s not affecting Unity that will be okay, will it, Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time; it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist” I would spend some time browsing the racks weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical music releases on CD or vinyl online and have them delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp, or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to – because I don’t experience it in the same way. My life has changed; I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM


June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of  Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM