Planet ALUG

March 31, 2015

Jonathan McDowell

Shipping my belongings across the globe

I previously wrote about tracking a ship around the world, but never followed up with the practical details involved with shipping my life from the San Francisco Bay Area back to Belfast. So here they are, in the hope they provide a useful data point for anyone considering a similar move.

Firstly, move out. I was in a one bedroom apartment in Fremont, CA. At the time I was leaving the US I didn’t have anywhere for my belongings to go - the hope was I’d be back in the Bay Area, but there was a reasonable chance I was going to end up in Belfast or somewhere in England. So on January 24th 2014 I had all of my belongings moved out and put into storage, pending some information about where I might be longer term. When I say all of my belongings I mean that; I took 2 suitcases and everything else went into storage. That means all the furniture for probably a 2 bed apartment (I’d moved out of somewhere a bit larger) - the US doesn’t really seem to go in for the concept of a furnished lease the same way as the UK does.

I had deliberately picked a moving company that could handle the move out, the storage and the (potential) shipping. They handed off to a 3rd party for the far end bit, but that was to be expected. Having only one contact to deal with throughout the process really helped.

Fast forward 8 months and on September 21st I contacted my storage company to ask about getting some sort of rough shipping quote and timescales to Belfast. The estimate came back as around a 4-6 week shipping time, which was a lot faster than I was expecting. However it turned out this was the slow option. On October 27th (the delay largely due to waiting for confirmation of when I’d definitely have keys to the new place) I gave the go-ahead.

Container pickup (I ended up with exclusive use of a 20ft container - not quite full, but not worth part shipment) from the storage location was originally due on November 7th. Various delays at the Port of Oakland meant this didn’t happen until November 17th. It then sat in Oakland until December 2nd. At that point the ETA into Southampton was January 8th. Various other delays, including a week off the coast of LA (yay West Coast Port Backups) meant that the ship finally arrived in Southampton on January 13th. It then had to get to Belfast and clear customs. On January 22nd 2015, 2 days shy of a year since I’d seen them, my belongings and I were reunited.

So, on the face of it, the actual time on the ship was only slightly over 6 weeks, but all of the extra bits meant that the total time from “Ship it” to “I have it” was nearly 3 months. Which to be honest is more like what I was expecting. The lesson: don’t forget to factor in delays at every stage.

The relocation cost in the region of US$8000. It was more than I’d expected, but far cheaper than the cost of buying all my furniture again (plus the fact there were various things I couldn’t easily replace that were in storage). That cost didn’t cover the initial move into storage or the storage fees - it covered taking things out, packing them up for shipment and everything after that. Including delivery to a (UK) 3rd floor apartment at the far end and insurance. It’s important to note that I’d included this detail before shipment - the quote specifically mentioned it, which was useful when the local end tried to levy an additional charge for the 3rd floor aspect. They were fine once I showed them the quote as including that detail.

Getting an entire apartment worth of things I hadn’t seen in so long really did feel a bit like a second Christmas. I’d forgotten a lot of the things I had, and it was lovely to basically get a “home in a container” delivered.

March 31, 2015 02:35 PM

March 30, 2015

Mick Morgan

the russians are back

About four years ago I was getting a huge volume of backscatter email to the non-existent address info@baldric.net. After a month or so it started to go quiet and eventually I got hardly any hits on that (or any other) address. A couple of weeks or so ago they came back. My logs for weeks ending 15 March, 22 March and 29 March show that 92%, 96% and 94% respectively of all email to my main mail server was failed connection attempts from Russian domains to dear old non-existent “info”. Out of curiosity I decided to capture some of the inbound mails. Most were in Russian, but the odd one or two were in (broken) English. Below is a typical example:

From: “Olga”
To:
Subject: Are you still looking for love? Look at my photos!
Date: Thu, 12 Mar 2015 15:22:08 +0300
X-Mailer: Microsoft Windows Live Mail 16.4.3528.331

Sunshine!
Are you still looking for love? I will be very pleased to become your half and save you from loneliness. My name is Olga, 25 years old.
For now I live in Russia, but it’s a bad time in my country, and I think about moving to another state.
I need a safer place for life, is your country good for that?
If you are interested and want to get in touch with me, just look at this international dating site.
Hope to see you soon!
Just click here!

Sadly, I believe that many recipients of such emails will indeed, “click here”. Certainly enough to further propagate whatever malware was used to compromise the end system which actually sent the above email.

by Mick at March 30, 2015 08:50 AM

kidnapped by aliens

An old friend of mine has expressed some concern at the lack of activity on trivia of late. In his most recent email to me he said:

“You really should revive Baldric you know. Everyone will believe it if you just say you were kidnapped by aliens, and then you can just resume where you left off.”

So Peter, this one is just for you. Oh, and Happy Birthday too.

Mick

by Mick at March 30, 2015 08:21 AM

March 12, 2015

Steve Engledow (stilvoid)

Cleaning out my closet

Or: Finding out what crud you installed that's eating all of your space in Arch Linux

I started running out of space on one of my Arch boxes and wondered (beyond what was in my home directory) what I'd installed that was eating up all the space.

A little bit of bash-fu does the job:

# List every installed package, look up its installed size, then sort the
# "size|package" pairs so the biggest packages end up at the bottom.
for pkg in $(pacman -Qq); do
    size=$(pacman -Qi $pkg | grep "Installed Size" | cut -d ":" -f 2)
    echo "$size | $pkg"
done | sed -e 's/ //g' | sort -h

This outputs a list of packages with those using the most disk space at the bottom:

25.99MiB|llvm-libs
31.68MiB|raspberrypi-firmware-examples
32.69MiB|systemd
32.86MiB|glibc
41.88MiB|perl
54.31MiB|gtk2
62.13MiB|python2
73.27MiB|gcc
77.93MiB|python
84.21MiB|linux-firmware

The above is from my pi; not much I can uninstall there ;)

by Steve Engledow (steve@offend.me.uk) at March 12, 2015 02:59 PM

March 11, 2015

Steve Engledow (stilvoid)

Devicive

With all the tech world moving towards the idea that you have a single device that does everything, I've found myself suckered into the convergence ideal in recent months. I was even genuinely excited by the recent video about Unity 8.

I have an Android phone that I use for a lot of purposes (nothing unusual: music, podcasts, messaging, web, phone calls) and a tablet that I use for more or less the same set of things with a bigger screen.

Yesterday, I managed to break my phone's screen, leaving me with the horrifying prospect of a ride to work without being able to catch up on Linux Luddites (cha-ching), so I dug out my old mp3 player and it reminded me how clean and efficient the interface was.

Until a lot more work has happened, I've officially woken up from the convergence dream.

I'm left feeling a little unsure of what I need a phone for now. Breaking it and subsequently not missing it one iota today speaks volumes. I barely use SMS (I mostly stay in contact through various other means) and I don't make or receive enough phone calls for it to seem worth carting an expensive oblong around.

I'm sure I'll change my mind soon.

by Steve Engledow (steve@offend.me.uk) at March 11, 2015 11:53 PM

March 09, 2015

Ben Francis

Pinned Apps – An App Model for the Web

(re-posted from a page I created on the Mozilla wiki on 17th December 2014)

Problem Statement

The per-OS app store model has resulted in a market where a small number of OS companies have a large amount of control, limiting choice for users and app developers. In order to get things done on mobile devices, users are restricted to apps from a single app store, which have to be downloaded and installed on a compatible device before they are useful.

Design Concept

Concept Overview

The idea of pinned apps is to turn the apps model on its head by making apps something you discover simply by searching and browsing the web. Web apps do not have to be installed in order to be useful; “pinning” is an optional step where the user can choose to split an app off from the rest of the web to persist it on their device and use it separately from the browser.

Pinned_apps_overview

”If you think of the current app store experience as consumers going to a grocery store to buy packaged goods off a shelf, the web is more like a hunter-gatherer exploring a forest and discovering new tools and supplies along their journey.”

App Discovery

A Web App Manifest linked from a web page says “I am part of a web app you can use separately from the browser”. Users can discover web apps simply by searching or browsing the web, and use them instantly without needing to install them first.

Pinned_apps_discovery

”App discovery could be less like shopping, and more like discovering a new piece of inventory while exploring a new level in a computer game.”

App Pinning

If the user finds a web app useful they can choose to split it off from the rest of the web to persist it on their device and use it separately from the browser. Pinned apps can provide a more app-like experience for that part of the web with no browser chrome and get their own icon on the homescreen.

Pinned_apps_pinning

”For the user pinning apps becomes like collecting pin badges for all their favourite apps, rather than cluttering their device with apps from an app store that they tried once but turned out not to be useful.”

Deep Linking

Once a pinned app is registered as managing its own part of the web (defined by URL scope), any time the user navigates to a URL within that scope, it will open in the app. This allows deep linking to a particular page inside an app and seamlessly linking from one app to another.

Pinned_apps_linking

”The browser is like a catch-all app for pages which don’t belong to a particular pinned app.”

Going Offline

Pinning an app could download its contents to the device to make it work offline, by registering a Service Worker for the app’s URL scope.

Pinned_apps_offline

”Pinned apps take pinned tabs to the next level by actually persisting an app on the device. An app pin is like an anchor point to tether a collection of web pages to a device.”

Multiple Pages

A web app is a collection of web pages dedicated to a particular task. You should be able to have multiple pages of the app open at the same time. Each app could be represented in the task manager as a collection of sheets, pinned together by the app.

Pinned_app_pages

”Exploding apps out into multiple sheets could really differentiate the Firefox OS user experience from all other mobile app platforms which are limited to one window per app.”

Travel Guide

Even in a world without app stores there would still be a need for a curated collection of content. The Marketplace could become less of a grocery store, and more of a crowdsourced travel guide for the web.

Pinned_apps_guide

”If a user discovers an app which isn’t yet included in the guide, they could be given the opportunity to submit it. The guide could be curated by the community with descriptions, ratings and tags.”

3 Questions

Pinnged_apps_pinned

What value (the importance, worth or usefulness of something) does your idea deliver?

The pinned apps concept makes web apps instantly useful by making “installation” optional. It frees users from being tied to a single app store and gives them more choice and control. It makes apps searchable and discoverable like the rest of the web and gives developers the freedom of where to host their apps and how to monetise them. It allows Mozilla to grow a catalogue of apps so large and diverse that no walled garden can compete, by leveraging its user base to discover the apps and its community to curate them.

What technological advantage will your idea deliver and why is this important?

Pinned apps would be implemented with emerging web standards like Web App Manifests and Service Workers which add new layers of functionality to the web to make it a compelling platform for mobile apps. Not just for Firefox OS, but for any user agent which implements the standards.

Why would someone invest time or pay money for this idea?

Users would benefit from a unique new web experience whilst also freeing themselves from vendor lock-in. App developers can reduce their development costs by creating one searchable and discoverable web app for multiple platforms. For Mozilla, pinned apps could leverage the unique properties of the web to differentiate Firefox OS in a way that is difficult for incumbents to follow.

UI Mockups

App Search

Pinned_apps_search

Pin App

Pin_app

Pin Page

Pin_page

Multiple Pages

Multiple_pages

App Directory

App_directory

Implementation

Web App Manifest

A manifest is linked from a web page with a link relation:

  <link rel="manifest" href="/manifest.json">

A manifest can specify an app name, icon, display mode and orientation:

 {
   "name": "GMail",
   "icons": {...},
   "display": "standalone",
   "orientation": "portrait",
   ...
 }

There is a proposal for a manifest to be able to specify an app scope:

 {
   ...
   "scope": "/"
   ...
 }

Service Worker

There is also a proposal to be able to reference a Service Worker from within the manifest:

 {
   ...
   "service_worker": {
     "src": "app.js",
     "scope": "/"
   },
   ...
 }

A Service Worker receives an install event when it is registered, which can be used to populate a cache with a web app’s resources:

 this.addEventListener('install', function(event) {
  event.waitUntil(
    caches.open('v1').then(function(cache) {
      // Cache the app's core resources at install time
      return cache.addAll([
        '/index.html',
        '/style.css',
        '/script.js',
        '/favicon.ico'
      ]);
    }).catch(function(error) {
      console.error('error populating cache ' + error);
    })
  );
 });

So that the app can then respond to requests for resources when offline:

 this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(response) {
      // Serve from the cache when possible, fall back to the network
      return response || fetch(event.request);
    })
  );
 });

by tola at March 09, 2015 03:54 PM

February 20, 2015

MJ Ray

Rebooting democracy? The case for a citizens constitutional convention.

I’m getting increasingly cynical about our largest organisations and their voting-centred approach to democracy. You vote once, for people rather than programmes, then you’re meant to leave them to it for up to three years until they stand for re-election, and in most systems their actions aren’t compared with what they said they’d do in any way.

I have this concern about Cooperatives UK too, but then its CEO publishes http://www.uk.coop/blog/ed-mayo/2015-02-18/rebooting-democracy-case-citizens-constitutional-convention and I think there may be hope for it yet. Well worth a read if you want to organise better groups.

by mjr at February 20, 2015 04:03 AM

January 30, 2015

Chris Lamb

Calculating the ETA to zero in shell

< Faux> I have a command which emits a number. This number is heading towards zero. I want to know when it will arrive at zero, and how close to zero it has got.

Damn right you can.

eta2zero () {
    # Take an initial reading; this is also what pv uses below as the
    # 100% figure, since ${A} is expanded when the pipeline starts.
    A=$(eval ${@})

    while [ ${A} -gt 0 ]
    do
        B=$(eval ${@})
        # Emit one space per unit of progress so that pv's byte count
        # tracks how far the number has fallen so far.
        printf %$((${A} - ${B}))s
        A=${B}
        sleep 1
    done | pv -s ${A} >/dev/null
}

In action:

$ rm -rf /big/path &
[1] 4895
$ eta2zero find /big/path \| wc -l
10 B 0:00:14 [   0 B/s] [================================>    ] 90% ETA 0:00:10

(Sincere apologies for the lack of strace...)

January 30, 2015 08:49 PM

January 29, 2015

Daniel Silverstone (Kinnison)

Caius -- A hierarchical delegable password safe

A long while ago I, Rob Kendrick, Clive Jones (and possibly others) sat down and tried to come up with a way to store passwords à la Password Safe. However, being us, we wanted to ensure a number of properties which password safes commonly don't have. We wanted to allow the delegation of access to some subset of the passwords. We also wanted it to be reasonable to deny that there is content which has not been decrypted.

I was reminded of this work when I was discussing the concept of deniable storage of secrets with a colleague (an idea I'll expand upon in another blog post at another time). I am therefore presenting, with little change other than formatting, the design from years ago. I would be very interested if anyone knows of software which meets the properties of the Caius system, since I would like to have one but simply don't trust myself (see another future posting) to write it right now.


Caius

The following concepts are assumed to be understood:

The 'Caius' system is a password-safe type system sporting hierarchical delegable access to the data it stores. The 'Caius Fob' is the data-store for the system.

The 'Caius Fob' is a file which consists of a header and then three sections. The header identifies it as such a file, the first section lists a number of 'External IDs' which can be used to access portions of the file. The second section lists ACL entries as defined below. The third section of the file is the encrypted data looked after by this file. It is not intended that the holder of a CaiusFob be able to deny it is a CaiusFob, but it is expected that it be possible to deny an ability to decrypt (perhaps by lacking a password) any ACL entries. Given that the structure of the file is known, it is necessary that there be external IDs for which the password or GPG key is not valid or cannot decrypt an ACL entry, and ACL entries which even if decrypted may not be valid, and ACL entries which even if decrypted and valid may not be used to encode any data blocks.

An External ID

External ID ::=
   LENGTH
   TYPE
   DATA

Where TYPE is one of:

 * 0: Unused (ID slot placeholder)
 * 1: GPG key, where DATA is the keyid of the GPG key
 * 2: Password, where DATA is some hash of a password which can be used to derive a key for decrypting an ACL entry.

The list of external IDs forms a numbered sequence where the index (0-based) into the sequence is the External ID number (EIDnr).

An ACL Entry

ACL Entry ::=
   LENGTH
   EIDnr
   DATA
   HMAC

The EIDnr is the number of the External ID as explained above. The LENGTH is the length of the DATA section which is a key-pair as explained below, encrypted to the external id. The HMAC uses the authentication key in the key-pair in the DATA section, and authenticates the EIDnr, LENGTH, DATA tuple.

One possibility for increasing deniability is to remove the EIDnr from this part of the file, and assume that for every external ID you try to decrypt all ACLs you've not succeeded in decrypting thus-far. This has the benefit of being able to deny that an ACL entry ought to be decryptable with the credentials you hold, but also an increased inability to know if you have successfully unlocked everything appropriate to being able to fully manipulate a CaiusFob. This tradeoff is currently set in favour of better understanding of the content, but a future design feature might suggest EIDnr should always be -1 to indicate "unknown, try every EID".

A key pair

Key Pair ::=
   ENCRYPTIONKEY
   AUTHENTICATIONKEY

The ENCRYPTIONKEY is used to initialise the stream cipher for the data section. The AUTHENTICATIONKEY is used to compute HMACs for the appropriate ACL entries or data blocks (as defined below).
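Taken together, the three structures above could be transcribed into plain data types. The following is purely an illustrative sketch, not part of the original design; the LENGTH fields are left implicit in the byte strings:

import qualified Data.ByteString as BS

-- Illustrative transcription only; field widths and encodings are assumptions.
data ExternalID
  = UnusedSlot                   -- TYPE 0: ID slot placeholder
  | GpgKey BS.ByteString         -- TYPE 1: DATA is the keyid of the GPG key
  | PasswordHash BS.ByteString   -- TYPE 2: DATA is a hash of a password

type EIDnr = Int                 -- 0-based index into the list of External IDs

data ACLEntry = ACLEntry
  { aclEIDnr :: EIDnr            -- which External ID can decrypt this entry
  , aclData  :: BS.ByteString    -- a KeyPair, encrypted to that External ID
  , aclHMAC  :: BS.ByteString    -- HMAC over the (EIDnr, LENGTH, DATA) tuple
  }

data KeyPair = KeyPair
  { encryptionKey     :: BS.ByteString  -- initialises the data-section stream cipher
  , authenticationKey :: BS.ByteString  -- keys the HMACs on ACL entries and data blocks
  }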

The data section

First consider a set of stream ciphers. There always exists one cipher, which we will call the NULL cipher. It is defined such that Cipher(Byte, Offset) == Byte and is always available. Then there is a cipher initialised for each key pair we can successfully extract from the ACL entry section of the file.

Each of these ciphers is initialised and made ready such that the data section can be xored with the bytes as they come out of the stream ciphers in an attempt to decrypt the blocks. Essentially this is equivalent to decrypting the entire data section with each cipher in turn to produce N proposed cleartexts which can then be examined to find decrypted blocks.

Whenever a cipher, combined with the data stream in the file, manages to produce a sequence of eight bytes of value zero, we have reached a synchronisation point and what follows is a data block enciphered with whichever cipher managed to reveal the synchronisation sequence.

Since it is vanishingly unlikely that you will find eight zeros in a row when playing about with arbitrary cipher initialisation, we consider this to be an acceptable synchronisation sequence.

Once we have found a sync sequence, we can know the length of this block and thus we do not look for sync markers again until after the block we have just found.
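As an illustrative sketch of that scanning step (again not from the original design, and assuming the keystream is simply XORed with the raw bytes of the data section), finding the next synchronisation point for one candidate cipher might look like this:

import Data.Bits (xor)
import qualified Data.ByteString as BS

-- XOR the data section with one candidate cipher's keystream and look for
-- eight consecutive zero bytes; the offset returned is the first byte after
-- the marker, i.e. the start of a candidate data block for that cipher.
findSync :: BS.ByteString    -- keystream from one candidate cipher
         -> BS.ByteString    -- raw bytes of the data section
         -> Maybe Int
findSync keystream raw = go 0 (BS.pack (BS.zipWith xor keystream raw))
  where
    marker = BS.replicate 8 0
    go off bs
      | BS.length bs < 8       = Nothing
      | BS.take 8 bs == marker = Just (off + 8)
      | otherwise              = go (off + 1) (BS.drop 1 bs)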

A data block

Data block ::=
   DATALENGTH
   PADLENGTH
   TYPE
   DATA
   PAD
   HMAC

Each field is the obvious: DATA is DATALENGTH bytes of data, the texture of which is defined by TYPE, and PAD is PADLENGTH arbitrary bytes which pad this block. HMAC is keyed using the authentication key associated with the stream cipher which managed to decrypt this block, and is over the (DATALENGTH, PADLENGTH, TYPE, DATA, PAD) tuple.

If TYPE is zero then this is a "free-space" block and will typically contain zero bytes of DATA and some number of padding bytes. This is, however, arbitrary and not enforced; the free space can be DATA if the implementer prefers, and implementations are encouraged, where possible, to randomise the distribution of the consumed space between the DATA and PAD sections.

A node block

TYPE == 1 (Node)
DATA ::=
   MY_ID
   PARENT_ID
   NAME
   NOTES

MY_ID is a unique ID for this node (generally a random number, probably 64 bits long, perhaps a UUID). PARENT_ID, if not all NULLs, is the ID of the parent node. If all NULLs then this is the root of a hierarchy. NAME is a NULL terminated byte string of one or more characters which is the name of this node. It may consist of any characters other than NULL and the forward-slash character. NOTES is a byte string of zero or more characters, NULL terminated. Note that the DATALENGTH of the data block clearly delimits this field but the NULL is present to aid parsing.

A system block

TYPE == 2 (System)
DATA ::=
   PARENT_ID
   USERNAME
   PASSWORD
   EXPIRYDATE
   NOTES

PARENT_ID is the node to which this block belongs. It is required that any system blocks you succeed in decrypting can be placed within a node you succeed in decrypting. If the library encounters a system block which belongs to a node it cannot find then this is considered to be a corrupt system block and will be treated as though it could not be decrypted.

USERNAME is a byte string of one or more characters terminated by a NULL, ditto for PASSWORD and as for a node block, the NOTES are NULL terminated also.

EXPIRYDATE is a byte string of characters in RFC-822 date format.
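Continuing the illustrative transcription from earlier (same caveats, and reusing the ByteString import), the decrypted block payloads might be sketched as:

-- TYPE 0, 1 and 2 respectively; a free-space block carries no meaningful payload.
data Block
  = FreeSpace
  | Node   { nodeId     :: BS.ByteString  -- MY_ID: random, e.g. 64 bits or a UUID
           , nodeParent :: BS.ByteString  -- PARENT_ID: all NULLs marks a hierarchy root
           , nodeName   :: String         -- NAME: anything except NULL and forward slash
           , nodeNotes  :: String
           }
  | System { sysParent   :: BS.ByteString -- PARENT_ID of the node this entry belongs to
           , sysUsername :: String
           , sysPassword :: String
           , sysExpiry   :: String        -- EXPIRYDATE, an RFC-822 date string
           , sysNotes    :: String
           }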

Implementation notes

It is expected that any implementation of Caius will adhere to the following guidelines in order to enhance the security of content over time.

  1. Any time a block is invalidated (such as by the changing of a password, the obsoleting of an entry, the changing of notes, names, or reparenting a node) anywhere from one to all of the bits in the block can be changed. Trivially, this includes the synchronisation sequence preceding the block, since if the synchronisation sequence isn't present then the block does not exist.

  2. Every time a CaiusFob is altered in any way, some amount of known intra-block padding must be altered in a random way. Ideally this will be done so that it looks like number 1 has happened somewhere else in the file as well. Anywhere from zero to all of the padding can be thusly altered in any given change.

  3. No attempt will be made to write to any part of the file which cannot be positively identified as padding unless the user has explicitly stated that they will accept damage to data they cannot currently decrypt.

  4. No indication will be given to the user if any part of the file was unable to be decrypted or failed an HMAC check. Such data is simply incorrectly decrypted and thus ignored.

  5. Intrablock padding can be positively identified if you have two consecutive blocks in a CaiusFob such that the number of bytes between them could not possibly hold the simplest of free space blocks.

  6. When appending a block to a CaiusFob it is encouraged to place up to 50% of the size of the intrablock spacing before it as random padding, and up to 50% afterwards also. Naturally anywhere between zero and the full amount is acceptable, ideally the implementation would choose at random each time.

by Daniel Silverstone at January 29, 2015 01:08 PM

January 27, 2015

Daniel Silverstone (Kinnison)

I promise that I...

A friend and ex-colleague, Francis Irving (@frabcus on Twitter), has recently been on a bit of an anti-C/C++ kick, including tweeting about the problems which happen in software written in so-called "insecure" languages, and culminating in his promise website, which boldly calls for people to promise not to use C/C++ for new projects.

Recently I've not been programming enough. I'm still a member of the NetSurf browser project, and I'm still (slowly) working on Gitano from time to time. I am still (in theory) an upstream on the Cherokee Webserver project (and I really do need to sit down and fix some bugs in logging) and any number of smaller projects as well. I am still part of Debian and would like to start making positive contributions beyond voting and egging others on, but I have been somewhat burned out by everything going on in my life, including both home and work. While I am hardly in any kind of mental trouble, I've simply not had any tuits of late.

I find it very hard to make public promises which I know I am going to break. Francis suggested that the promise can be broken, which, while it might not devalue it for him (or you), does for me. I do however think that public promises are a good thing, especially when they foster useful discussion in the communities I am part of, so from that point of view I very much support Francis in his efforts.

Even given all of the above, I'd like to make a promise statement of my own. I'd like to make it in public and hopefully that'll help me to keep it. I know I could easily fail to live up to this promise, but I'm hoping I'll do well and to some extent I'm relying on all you out there to help me keep it. Given we're almost at the end of the month, I am making the promise now and I want it to take effect starting on the 1st of February 2015.

I hereby promise that I will do better at contributing to all the projects I am nominally part of, making at least one useful material contribution to at least one project every week. I also promise to be more mindful of the choices I make when choosing how to implement solutions to problems, selecting appropriate languages and giving full consideration to how my solution might be attacked if appropriate.

I can't and won't promise not to use C/C++ but if you honestly feel you can make that promise, then I'm certain Francis would love for you to head over to his promise website and pledge. Also regardless of your opinions, please do join in the conversation, particularly regarding being mindful of attack vectors whenever you write something.

by Daniel Silverstone at January 27, 2015 10:27 PM

January 25, 2015

Chris Lamb

Recent Redis hacking

I've done a bunch of hacking on the Redis key/value database server recently:


I also made the following changes to the Debian packaging:

January 25, 2015 08:52 PM

January 22, 2015

MJ Ray

Outsourcing email to Google means SPF allows phishing?

I expect this is obvious to many people, but bahumbug's To Phish, or Not to Phish? just woke me up to the fact that if Google hosts your company email then its Sender Policy Framework (SPF) record might make other Google-sent emails look legitimate for your domain. When combined with the unsupportive support of the big free webmail hosts, is this another black mark against SPF?
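To make that concrete: a domain whose mail is outsourced to Google typically publishes an SPF record along the lines of

v=spf1 include:_spf.google.com ~all

which authorises every Google outbound mail server for that domain, not just the ones that happen to carry your own company's mail.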

by mjr at January 22, 2015 03:57 AM

January 21, 2015

Jonathan McDowell

Moving to Jekyll

I’ve been meaning to move away from Movable Type for a while; they no longer provide the “Open Source” variant, I’ve had some issues with the commenting side of things (more the fault of spammers than Movable Type itself) and there are a few minor niggles that I wanted to resolve. Nothing has been particularly pressing me to move and I haven’t been blogging as much, so while I’ve been keeping an eye open for a replacement I haven’t put a lot of energy into the process. I have a little bit of time at present so I asked around on IRC for suggestions. One was ikiwiki, which I use as part of helping maintain the SPI website (and think is fantastic for that); the other was Jekyll. Both are available as part of Debian Jessie.

Jekyll looked a bit fancier out of the box (I’m no web designer so pre-canned themes help me a lot), so I decided to spend some time investigating it a bit more. I’d found a Movable Type to ikiwiki converter which provided a starting point for exporting from the SQLite3 DB I was using for MT. Most of my posts are in markdown, the rest (mostly from my Blosxom days) are plain HTML, so there wasn’t any need to do any conversion on the actual content. A minor amount of poking convinced Jekyll to use the same URL format (permalink: /:year/:month/:title.html in the _config.yml did what I wanted) and I had to do a few bits of fix up for some images that had been uploaded into MT, but overall fairly simple stuff.

Next I had to think about comments. My initial thought was to just ignore them for the moment; they weren’t really working on the MT install that well so it’s not a huge loss. I then decided I should at least see what the options were. Google+ has the ability to embed in your site, so I had a play with that. It worked well enough but I didn’t really want to force commenters into the Google ecosystem. Next up was Disqus, which I’ve seen used in various places. It seems to allow logins via various 3rd parties, can cope with threading and deals with the despamming. It was easy enough to integrate to play with, and while I was doing so I discovered that it could cope with importing comments. So I tweaked my conversion script to generate a WXR based file of the comments. This then imported easily into Disqus (and also I double checked that the export system worked).

I’m sure the use of a third party to handle comments will put some people off, but given the ability to export I’m confident if I really feel like dealing with despamming comments again at some point I can switch to something locally hosted. I do wish it didn’t require Javascript, but again it’s a trade off I’m willing to make at present.

Anyway. Thanks to Tollef for the pointer (and others who made various suggestions). Hopefully I haven’t broken (or produced a slew of “new” posts for) any of the feed readers pointed at my site (but you should update to use feed.xml rather than any of the others - I may remove them in the future once I see usage has died down).

(On the off chance it’s useful to someone else the conversion script I ended up with is available. There’s a built in Jekyll importer that may be a better move, but I liked ending up with a git repository containing a commit for each post.)

January 21, 2015 10:00 AM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run up to the “Mozlandia” work week in Portland, and in reflection of the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along ;)

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

October 09, 2014

Wayne Stallwood (DrJeep)

Hosting Update2

Well, after a year the SD card on the Raspberry Pi has failed; I noticed /var was unhappy when I tried to apply the recent Bash updates. Attempts at repair only made things worse and I suspect there is some physical issue. I had minimised writes, with logs in tmpfs, the frequently updated weather site sitting in tmpfs too, logging to remote systems, etc. So I'm not quite sure what happened. Of course this is all very inconvenient when your kit lives in another country, so at some point I guess I will have to build a new SD card and ship it out... for now we are back on Amazon EC2... yay for the elastic cloud \o/

October 09, 2014 09:31 PM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational data and content from personally identifiable information – PII having much higher regulation on who can (and auditing of who does) access it.

Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh – if you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


July 10, 2014 02:09 PM

June 28, 2014

Brett Parker (iDunno)

Sony Entertainment Networks Insanity

So, I have a SEN account (it's part of the PSN), I have 2 videos with SEN, I have a broken PS3 so I can no longer deactivate video (you can only do that from the console itself, yes, really)... and the response from SEN has been abysmal, specifically:

As we take the security of SEN accounts very seriously, we are unable to provide support on this matter by e-mail as we will need you to answer some security questions before we can investigate this further. We need you to phone us in order to verify your account details because we're not allowed to verify details via e-mail.

I mean, seriously, they're going to verify my details over the phone better than over e-mail how exactly? All the contact details are tied to my e-mail account, I have logged in to their control panel and renamed the broken PS3 to "Broken PS3", I have given them the serial number of the PS3, and yet they insist that I need to call them, because apparently they're fucking stupid. I'm damned glad that I only ever got 2 videos from SEN, both of which I own on DVD now anyways; this kind of idiotic tie-in to a system is badly wrong.

So, you phone the number... and now you get stuck with hold music for ever... oh, yeah, great customer service here guys. I mean, seriously, WTF.

OK - 10 minutes on the phone, and still being told "One of our advisors will be with you shortly". I get the feeling that I'll just be writing off the 2 videos that I no longer have access to.

I'm damned glad that I didn't decide to buy more content from that - at least you can reset the games entitlement once every six months without jumping through all these hoops (you have to reactivate each console that you still want to use, but hey).

by Brett Parker (iDunno@sommitrealweird.co.uk) at June 28, 2014 03:54 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto-type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages; they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web-based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

freenode

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

webchat

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well first off there’s the Nickname, this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short and so long as nobody else is already using it you should be fine; if it doesn’t work try another. Channels is the awkward one, how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. topic for the channel) and depending on how busy it is anything from nothing to a fast scrolling screen of text.

phph

If you’ve mistyped there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, this is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.

by Paul Tansom at June 12, 2014 04:27 PM

May 06, 2014

Richard Lewis

Refocusing Ph.D

Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I failed to make that clear to myself. It was only at the writing up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, nevermind that there was work to be done in examining music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and also allows me to work with the actual Purcell Plus project materials using the toolkit.

May 06, 2014 11:16 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

{-# LANGUAGE InstanceSigs #-} -- needed for the type signatures inside the instance declarations below

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
--(including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components, MusicalSymbols are semantic and Glyphs are
--syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.
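For what it's worth, at least one of the stubs falls straight out of the types, using the maybeToList that the existing Data.Maybe import already provides:

paginate :: Opening -> [Page]
paginate (Opening (p, mp)) = p : maybeToList mp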

March 27, 2014 11:13 PM

February 22, 2014

Wayne Stallwood (DrJeep)

Outlook 2003, Cutting off Emails

I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licensing they have, they are running Outlook 2003 on Windows 7 64-bit machines.

After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook: typed emails often get cut off mid-sentence when you click Send, so only part of the email gets sent!

What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.

Removing the IE11 update would probably fix it but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.

Navigate to:-
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.

February 22, 2014 08:43 AM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002. I had managed to make 6 or 7 aborted attempts at reading it to completion; each time life had suddenly got busy and just taken over, which meant that I put the book down and didn't pick it up again until things were less hectic some time later, when I started again.

Anyhow, I finally beat the book a few nights ago; my comprehension of it was pretty low anyhow, but at least it is done. Just shows I need to read lots more, given how little went in.





February 06, 2014 10:40 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising, and as a result I've gained loads of weight over the winter and, it turns out, also become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become. This is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500 but Amazon put the price up from £149.99 the day I was going to buy it to £199.99.

I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, so I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit, which is still my longer term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and a new set of map data were available for download. So I installed it, thinking I wasn't going to get any new features from it as Mio had released some new models, but it turns out that the new firmware actually enables a single feature (amongst other things, they also tidied up the UI and sorted a few other bugs along with some other features) that makes the device massively more useful, as it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me because, although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you had to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in all, it turns out that buying a Mio – about which the reviews and forums I read were full of doom and gloom – means I can wait even longer before considering a replacement with a Garmin.


February 01, 2014 02:11 PM

January 04, 2014

Brett Parker (iDunno)

Wow, I do believe Fasthosts have outdone themselves...

So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:

brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
   Name Server: NS1.FASTHOSTS.NET.UK
   Name Server: NS2.FASTHOSTS.NET.UK
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
    Name servers:
        ns1.fasthosts.net.uk      213.171.192.252
        ns2.fasthosts.net.uk      213.171.193.248
brettp@laptop:~$  host -t ns fasthosts.net.uk 213.171.192.252
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 213.171.193.248
;; connection timed out; no servers could be reached
brettp@laptop:~$

So, that's Fasthosts' core nameservers not responding – good start! They also provide livedns.co.uk, so let's have a look at that:

brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
    Name servers:
        ns1.livedns.co.uk         213.171.192.250
        ns2.livedns.co.uk         213.171.193.250
        ns3.livedns.co.uk         213.171.192.254
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.193.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.254
;; connection timed out; no servers could be reached

So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!

by Brett Parker (iDunno@sommitrealweird.co.uk) at January 04, 2014 10:24 AM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised – a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant – not as good as their previous album, but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums, in no particular order, for me were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen – hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig-wise, again, Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as when he is singing the songs themselves.

The March Marillion Convention at Port Zélande, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind, Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows), projects are saved to the C: drive. For a network environment, with pupils' work stored on a network drive so they always have access whichever machine they sit at, this isn't exactly helpful. It also isn't ideal that they can explore the C: drive in spite of profile restrictions (although it isn't the end of the world, as there is little they can do from Scratch).

[screenshot: the default Scratch save dialogue, saving to the C: drive]

After a bit of time with Google I found the answer, and since it didn't immediately leap out at me when I was searching I thought I'd post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the Scratch.ini file. This is, as would be expected, located at:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

[screenshot: the original contents of Scratch.ini]

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

[screenshot: Scratch.ini with the Home and VisibleDrives lines added]

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

[screenshot: the save dialogue after the change, with Home now pointing at the U: drive]

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

[screenshot: the Computer view now showing only the specified drive]

You can do the same with a Mac (for the home drive); just use the appropriate directory format (i.e. no drive letter, and forward slashes instead of backslashes).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.
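
Putting those options together, a Scratch.ini along these lines (the U: and S: drive letters and the path are just an example, so adjust them to suit your own network) drops each pupil straight into their own area and also exposes a shared resources drive:

Home=U:\Scratch\*
VisibleDrives=U:,S: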

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven't tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]

by Paul Tansom at December 01, 2013 07:00 PM

February 22, 2013

Joe Button

Sampler plugin for the baremetal LV2 host

I threw together a simple sampler plugin for kicks. Like the other plugins it sounds fairly underwhelming. Next challenge will probably be to try plugging in some real LV2 plugins.

February 22, 2013 11:22 PM

February 21, 2013

Joe Button

Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of this one here, messed with the code a bit and voila (and potentially viola): I can now play LV2 instrument plugins using a MIDI keyboard.

When I say "LV2 synth plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point but it will be a while before you can directly plug LV2s into this and expect them to just work.

February 21, 2013 04:05 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


I wanted to print from my LinuxMint 14 (Cinnamon) PC to a shared Windows printer on my network. Problem is, it isn't found by the printers dialog in System Settings. I thought I'd done all the normal things to get Samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane "bcast host lmhosts wins"; having host and wins, neither of which I'm using, first in the order cocks things up somewhat. Every time I tried to search for the printer in the System Settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain't no daemon by that name available!
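
For reference, that name resolve order tweak boils down to a single line in the [global] section of /etc/samba/smb.conf, something like this (the exact ordering is a matter of taste and network layout):

[global]
    name resolve order = bcast host lmhosts wins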

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It's due to that particular daemon (for Windows people, a daemon is pretty much the equivalent of a service) being Fedora-specific, and it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why ship a new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: point your favourite browser at http://localhost:631/. It should be there as long as CUPS is installed, which it is in LinuxMint by default.
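
In short, either of these routes gets you to a working printer setup (assuming system-config-printer is installed, which it should be on a stock Mint; xdg-open just hands the URL to your default browser):

# the old, working printer dialog
system-config-printer

# or the CUPS web interface
xdg-open http://localhost:631/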

So come on, Minty people, get your bug squashing boots on and stamp on this one, please.

Update

Bug #871985 only affects GNOME 3, so as long as it's not affecting Unity that will be okay, will it, Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately, I think it's been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seemed to. That is not to say that I have not heard any music that I like in that time; it just doesn't seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks, weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist's back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn't know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio and advertising, and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists, so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical releases on CD or vinyl online and have them delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp, or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn't exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can't remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn't seem to affect me nowadays as much as it used to - because I don't experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this 'instant music' should be instantly satisfying, but for some reason it doesn't seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I'm hoping I'm not alone in this; alternatively, I'm hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM


June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of  Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take a photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM