Planet ALUG

August 14, 2018

Steve Engledow (stilvoid)

Heroes: Building some old code

For the end result of this post, see my AUR package of Heroes.


The other day, something reminded me of a game I used to really enjoy playing back in my early days of getting to know Linux. That game was Heroes. It’s a clone of Snake/Tron/Nibbles but with some fun additions, a nice graphical style, and some funky visual effects.

Heroes screenshot

So, of course, I immediately decided to install it.

$ pacman -Ss heroes

No results. Nothing in the AUR either. There is only one other course of action: I’m going to create an AUR package for it!

It looks like the last change to the game was 16 years ago so it could be fun getting it to compile with a modern toolchain.

Getting Heroes to compile in 2018

I put together a basic PKGBUILD that pulls down the source and data files from the Heroes SourceForge page and then runs:

./configure
make
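
For illustration, a stripped-down PKGBUILD along those lines might look something like the following. This is only a sketch: the real package pulls several tarballs (game, data, music, sound effects) from SourceForge and carries a patch, the version number is illustrative, and the maintainer address is a placeholder.

# Maintainer: Steve Engledow <steve@example.invalid>
pkgname=heroes
pkgver=0.21
pkgrel=1
pkgdesc="Snake/Nibbles clone with fun additions and funky visual effects"
arch=('x86_64')
url="http://heroes.sourceforge.net/"
license=('GPL')
source=("https://downloads.sourceforge.net/heroes/heroes-$pkgver.tar.gz")
sha256sums=('SKIP')

build() {
    cd "$srcdir/heroes-$pkgver"
    ./configure --prefix=/usr
    make
}

package() {
    cd "$srcdir/heroes-$pkgver"
    make DESTDIR="$pkgdir" install
}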

Here’s the first of what I’m sure are many failure messages:

hedlite.c:48:20: error: static declaration of ‘tile_set_img’ follows non-static declaration 
 static a_pcx_image tile_set_img;
                    ^~~~~~~~~~~~
In file included from hedlite.c:44:
const.h:52:20: note: previous declaration of ‘tile_set_img’ was here                        
 extern a_pcx_image tile_set_img, font_deck_img;                                            
                    ^~~~~~~~~~~~

Some forewarning: it’s been quite some time since I wrote anything serious in C and I was never an expert in it anyway. But I think I know enough to fix this, so I just commented out the static declaration; after poking around in the code a bit, it doesn’t seem to be necessary anyway.
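
If you'd rather script a fix like that in the PKGBUILD than carry it in a patch, a prepare() step can do it; here's a sketch (the path to hedlite.c inside the tarball is an assumption):

prepare() {
    cd "$srcdir/heroes-$pkgver"
    # comment out the duplicate declaration; const.h already declares it extern
    sed -i 's|^ *static a_pcx_image tile_set_img;|/* & */|' src/hedlite.c
}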

Now the compilation succeeds but I get the following error during linking:

/usr/bin/ld: camera.o: undefined reference to symbol 'sin@@GLIBC_2.2.5'
/usr/bin/ld: /usr/lib/libm.so.6: error adding symbols: DSO missing from command line

Turns out the build never explicitly linked against the math(s) library. I’m guessing it used to get away with that because older toolchains would happily resolve symbols like sin through indirectly-loaded shared libraries, whereas modern linkers require -lm to be passed explicitly.

LDFLAGS=-lm ./configure
make
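
A quick way to confirm the fix took is to check that the resulting binary now lists libm as a dependency (binary path and output illustrative):

$ ldd src/heroes | grep libm
        libm.so.6 => /usr/lib/libm.so.6 (0x...)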

Now it at least compiles and links correctly! Next up, compiling the data, music, and sound effects packages.

Amazingly, those all worked correctly and I was able to play the game!

However, this game was written a while ago and originally targeted MS-DOS so it has a window size of 320x200 which looks rather ridiculous on my 1920x1080 desktop ;)

Tiny Heroes window screenshot

So I set about trying to set the default screen mode so that the game starts in full screen…

Fortunately, it looks like this is relatively easy. I just modified a few variables and changed a command line flag from -F | --full-screen to -W | --windowed.

Next up, rather than rely on SDL’s built-in scaling (it looks blurry and weird), I need to enable Heroes’ quadruple flag -4 by default. In fact, I removed all the scaling options and just left it to default to scaling 4-fold as that leaves the game with a resolution of 1280x800 which seems a reasonable default these days. I’m sure I’ll receive bug reports if it’s not ;)

The very last thing I’ve done is to enable the high quality mixer by default and remove the command line option from the game. CPU is a little more abundant now than it was in 2002 ;)

Here’s my final patch file.

Submitting the AUR package

Things have changed since I last submitted a package to the AUR so here’s a brief writeup - if only to remind myself in future ;)

First step was to update the SSH key in my AUR account as it contained a key from my old machine.

Next up, I added a remote to my repository:

$ git remote add aur ssh://aur@aur.archlinux.org/heroes.git
$ git fetch aur

The next step is to generate the .SRCINFO file that the AUR requires and rewrite it into every commit:

$ git filter-branch --tree-filter "makepkg --printsrcinfo > .SRCINFO"
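
A quick sanity check that the rewrite worked is to confirm the file now exists in older commits too (the revision here is picked arbitrarily):

$ git show HEAD~3:.SRCINFO | head -n 3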

And then push it to the AUR repository:

$ git push -u aur master

Testing it out

I use packer to make using AUR easier (I’m lazy).

$ packer -S heroes

SUCCESS!

All in all, this wasn’t anywhere near as painful as I’d expected. Time to play some Heroes :D

by Steve Engledow at August 14, 2018 12:00 AM

August 06, 2018

Jonathan McDowell

DebConf18 writeup

I’m just back from DebConf18, which was held in Hsinchu, Taiwan. I went without any real concrete plans about what I wanted to work on - I had some options if I found myself at a loose end, but no preconceptions about what would pan out. In the end I felt I had a very productive conference and I did bits on all of the following:

I managed to catch the DebConf bug towards the end of the conference, which was unfortunate - I had been eating the venue food at the start of the week and it would have been nice to explore the options in Hsinchu itself for dinner, but a dodgy tummy makes that an unwise idea. Thanks to Stuart Prescott I squeezed in a short daytrip to Taipei yesterday as my flight was in the evening and I was going to have to miss all the closing sessions anyway. So at least I didn’t completely avoid seeing some of Taiwan when I was there.

As usual thanks to all the organisers for their hard work, and looking forward to DebConf19 in Curitiba, Brazil!

August 06, 2018 02:29 PM

July 31, 2018

Chris Lamb

Free software activities in July 2018

Here is my monthly update covering what I have been doing in the free software world during July 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:


Debian LTS


This month I have worked 18 hours on Debian Long Term Support (LTS) and 11.75 hours on its sister Extended LTS project:

  • "Frontdesk" duties, triaging CVEs, responding to user questions/queries, etc.
  • Hopefully final updates to various scripts — both local and shared — to accommodate and support the introduction of the new "Extended LTS" initiative.
  • Issued DLA 1417-1 for ca-certificates, updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by systems.
  • Issued DLA 1419-1 for ruby-sprockets to fix a path traversal issue exploitable via file:// URIs.
  • Issued DLA 1420-1 for the Cinnamon Desktop Environment where a symlink attack could permit an attacker to overwrite an arbitrary file on the filesystem.
  • Issued DLA 1427-1 for znc to address a path traversal vulnerability via ../ filenames in "skin" names as well as to fix an issue where insufficient validation could allow writing of arbitrary values to the znc.conf config file.
  • Issued DLA 1443-1 for evolution-data-server to fix an issue where rejected requests to upgrade to a secure connection did not result in the termination of the connection.
  • Issued DLA 1448-1 for policykit-1, uploading Abhijith PA's fix for a denial of service vulnerability.
  • Issued ELA-13-1 for ca-certificates, also updating the set of Certificate Authority (CA) certificates that are considered "valid" or otherwise should be trusted by wheezy systems.

Uploads


Finally, I also sponsored elpy (1.22.0-1) & wolfssl (3.15.3+dfsg-1) and I orphaned dbus-cpp (#904426) and process-cpp (#904425) as they were no longer required as build-dependencies of Anbox.


Debian bugs filed

  • cod-tools: Missing build-depends. (#903689)
  • network-manager-openvpn: "Cannot specify device when activating VPN" error when connecting. (#903109)
  • ukwm: override_dh_auto_test doesn't respect nocheck build profile. (#904889)
  • ITP: gpg-encrypted-root — Encrypt root volumes with an OpenPGP smartcard. (#903163)
  • gnumeric: ssconvert segmentation faults. (#903194)

FTP Team


As a Debian FTP assistant I ACCEPTed 213 packages: ahven, apache-mode-el, ats2-lang, bar-cursor-el, bidiui, boxquote-el, capstone, cargo, clevis, cockpit, crispy-doom, cyvcf2, debian-gis, devscripts-el, elementary-xfce, emacs-pod-mode, emacs-session, eproject-el, feedreader, firmware-nonfree, fwupd, fwupdate, gmbal, gmbal-commons, gmbal-pfl, gnome-subtitles, gnuastro, golang-github-avast-retry-go, golang-github-gdamore-encoding, golang-github-git-lfs-gitobj, golang-github-lucasb-eyer-go-colorful, golang-github-smira-go-aws-auth, golang-github-ulule-limiter, golang-github-zyedidia-clipboard, graphviz-dot-mode, grub2, haskell-iwlib, haskell-lzma, hyperscan, initsplit-el, intel-ipsec-mb, intel-mkl, ivulncheck, jaxws-api, jitterentropy-rngd, jp, json-c, julia, kitty, leatherman, leela-zero, lektor, libanyevent-fork-perl, libattribute-storage-perl, libbio-tools-run-alignment-clustalw-perl, libbio-tools-run-alignment-tcoffee-perl, libcircle-be-perl, libconvert-color-xterm-perl, libconvert-scalar-perl, libfile-copy-recursive-reduced-perl, libfortran-format-perl, libhtml-escape-perl, libio-fdpass-perl, libjide-oss-java, libmems, libmodule-build-pluggable-perl, libmodule-build-pluggable-ppport-perl, libnet-async-irc-perl, libnet-async-tangence-perl, libnet-cidr-set-perl, libperl-critic-policy-variables-prohibitlooponhash-perl, libppix-quotelike-perl, libpqxx, libproc-fastspawn-perl, libredis-fast-perl, libspatialaudio, libstring-tagged-perl, libtickit-async-perl, libtickit-perl, libtickit-widget-scroller-perl, libtickit-widget-tabbed-perl, libtickit-widgets-perl, libu2f-host, libuuid-urandom-perl, libvirt-dbus, libxsmm, lief, lightbeam, limesuite, linux, log4shib, mailscripts, mimepull, monero, mutter, node-unicode-data, octavia, octavia-dashboard, openstack-cluster-installer, osmo-iuh, osmo-mgw, osmo-msc, pg-qualstats, pg-stat-kcache, pgzero, php-composer-xdebug-handler, plasma-browser-integration, powerline-gitstatus, ppx-tools-versioned, pyside2, python-certbot-dns-gehirn, python-certbot-dns-linode, python-certbot-dns-sakuracloud, python-cheroot, python-django-dbconn-retry, python-fido2, python-ilorest, python-ipfix, python-lupa, python-morph, python-pygtrie, python-stem, pywws, r-cran-callr, r-cran-extradistr, r-cran-pkgbuild, r-cran-pkgload, r-cran-processx, rawtran, ros-ros-comm, ruby-bindex, ruby-marcel, rust-ar, rust-arrayvec, rust-atty, rust-bitflags, rust-bytecount, rust-byteorder, rust-chrono, rust-cloudabi, rust-crossbeam-utils, rust-csv, rust-csv-core, rust-ctrlc, rust-dns-parser, rust-dtoa, rust-either, rust-encoding-rs, rust-filetime, rust-fnv, rust-fuchsia-zircon, rust-futures, rust-getopts, rust-glob, rust-globset, rust-hex, rust-httparse, rust-humantime, rust-idna, rust-indexmap, rust-is-match, rust-itoa, rust-language-tags, rust-lazy-static, rust-libc, rust-memoffset, rust-nodrop, rust-num-integer, rust-num-traits, rust-openssl-sys, rust-os-pipe, rust-rand, rust-rand-core, rust-redox-termios, rust-regex, rust-regex-syntax, rust-remove-dir-all, rust-same-file, rust-scoped-tls, rust-semver-parser, rust-serde, rust-sha1, rust-sha2-asm, rust-shared-child, rust-shlex, rust-string-cache-shared, rust-strsim, rust-tar, rust-tempfile, rust-termion, rust-time, rust-try-lock, rust-ucd-util, rust-unicode-bidi, rust-url, rust-vec-map, rust-void, rust-walkdir, rust-winapi, rust-winapi-i686-pc-windows-gnu, rust-winapi-x86-64-pc-windows-gnu, rustc, simavr, tabbar-el, tarlz, ukui-media, ukui-menus, ukui-power-manager, ukui-window-switch, ukwm, vanguards, weevely & xml-security-c.

I also filed wishlist-level bugs against the following packages with potential licensing improvements:

  • pgzero: Please inline/summarise web-based licensing discussion in debian/copyright. (#904674)
  • plasma-browser-integration: "This_file_is_part_of_KDE" in debian/copyright? (#903713)
  • rawtran: Please split out debian/copyright. (#904589)
  • tabbar-el: Please inline web-based comments in debian/copyright. (#904782)
  • feedreader: Please use wildcards in debian/copyright. (#904631)

Lastly, I filed 10 RC bugs against packages that had potentially-incomplete debian/copyright files against: ahven, ats2-lang, fwupd, ivulncheck, libmems, libredis-fast-perl, libtickit-widget-tabbed-perl, lief, rust-humantime & rust-try-lock.

July 31, 2018 12:02 PM

Jonathan McDowell

(Badly) cloning a TEMPer USB

Digispark/DS18B20

Having setup a central MQTT broker I’ve wanted to feed it extra data. The study temperature was a start, but not the most useful piece of data when working towards controlling the central heating. As it happens I have a machine in the living room hooked up to the TV, so I thought about buying something like a TEMPer USB so I could sample the room temperature and add it as a data source. And then I realised that I still had a bunch of Digispark clones and some Maxim DS18B20 1-Wire temperature sensors and I should build something instead.

I decided to try and emulate the TEMPer device rather than doing something unique. V-USB was pressed into service and some furious Googling took place to try and find out the details of how the TEMPer appears to the host in order to craft the appropriate USB/HID descriptors to present - actually finding some lsusb output was the hardest part. Looking at the code of various tools designed to talk to the device provided details of the different init commands that needed to be recognised and a basic skeleton framework (reporting a constant 15°C temperature) was crafted. Once that was working with the existing client code knocking up some 1-Wire code to query the DS18B20 wasn’t too much effort (I seem to keep implementing this code on various devices).
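
For anyone else hunting for it: the common TEMPer devices enumerate as a Microdia HID device, and from memory the interesting lsusb line looks something like this (bus and device numbers made up):

$ lsusb
Bus 001 Device 004: ID 0c45:7401 Microdia TEMPer Temperature Sensor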

At this point things became less reliable. The V-USB code is an evil (and very clever) set of interrupt driven GPIO bit banging routines, working around the fact that the ATTiny doesn’t have a USB port. 1-Wire is a timed protocol, so the simple implementation involves a bunch of delays. To add to this the temper-python library decides to do a USB device reset if it sees a timeout. And does a double read to work around some behaviour of the real hardware. Doing a 1-Wire transaction directly in response to these requests causes lots of problems, so I implemented a timer to do a 1-Wire temperature check once every 10 seconds, and then the request from the host just returns the last value read. This is a lot more reliable, but still sees a few resets a day. It would be nice to fix this, but for the moment it’s good enough for my needs - I’m reading temperature once a minute to report back to the MQTT server, but it offends me to see the USB resets in the kernel log.
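
The host-side reporting can stay very simple; here's a sketch of a once-a-minute polling loop, assuming temper-python's temper-poll tool and the mosquitto clients are installed (broker hostname and topic are made up):

#!/bin/sh
# read the temperature once a minute and publish it to the MQTT broker
while true; do
    temp=$(temper-poll -c)    # -c prints just the Celsius value
    mosquitto_pub -h mqtt.example.local -t sensors/livingroom/temperature -m "$temp"
    sleep 60
done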

Additionally I had some problems with accuracy. Firstly it seems the batch of DS18B20s I have can vary by 1-2°C, so I ended up adjusting for this in the code that runs on the host. Secondly I mounted the DS18B20 on the Digispark board, as in the picture. The USB cable ensures it’s far enough away from the host (rather than sitting plugged directly into the back of the machine and measuring the PSU fan output temperature), but the LED on the board turned out to be close enough that it affected the reading. I have no need for it so I just ended up removing it.

The code is available locally and on GitHub in case it’s of use/interest to anyone else.

(I’m currently at DebConf18 but I’ll wait until it’s over before I write it up, and I’ve been meaning to blog about this for a while anyway.)

July 31, 2018 12:31 AM

July 07, 2018

Mick Morgan

re-encrypting trivia

Back in June 2015 I decided to force all connections to trivia over TLS rather than allow plain unencrypted connections. I decided to do this for the obvious reason that it was (and still is) a “good thing” (TM). In my view, all transactions over the ‘net should be encrypted, preferably using strong cyphers offering perfect forward secrecy – just to stop all forms of “bad guys” snooping on what you are doing. Of course, even in such cases there are still myriad ways said “bad guys” can get some idea what you are doing (unencrypted DNS tells them where you are going for example) but hey, at least we can make the buggers work a bit harder.

Unfortunately, as I soon discovered, my self-signed X509 certificates were not well received by RSS aggregators or by some spiders. And as Brett Parker at ALUG pointed out to me, the algorithms used by some (if not all) of the main web spiders (such as Google) would down rank my site on the (in my view laughably specious) grounds that the site could not be trusted.

As I have said before, I’m with Michael Orlitzky, both in his defence of self-signed certificates and his distaste for the CA “terrorists”. I think the CA model is fundamentally broken and I dislike it intensely. It is also, in my view, completely wrong to confuse encryption with identification and authentication. Admittedly, you might care about the (claimed) identity of an email correspondent using encryption (which is why PGP’s “web of trust” exists – even though that too is flawed) or whether the bank you are connecting to is actually who it says it is. But why trust the CA to verify that? Seriously, why? How did the CA verify that the entity buying the certificate is actually entitled to identify itself in that way? Why do you trust that CA as a third party verifier of that identity? How do you know that the certificate offered to your browser is a trustworthy indicator of the identity of the site you are visiting? How do you know that the certificate exchange has not been subject to a MITM attack? How do you know that your browser has not been compromised?

You don’t know. You can’t be sure. You simply trust the nice big green padlock.

Interestingly, banks, and I am sure other large organisations which are heavily regulated, are now beginning to add features which give more feedback to the end user on their identity during transactions. I recently applied for a new zero interest credit card (I like the idea of free money). In addition to the usual UID, password and security number requested of me (in order to identify me to them) the bank providing that card asked me to pick a “personal image” together with a personally chosen secure phrase known only to me in order that they could present those back to me to identify them to me. I am instructed not to proceed with any transaction unless that identification is satisfactory.

So even the banks recognise that the CA model is inadequate as a means of trusted identification. But we still use it to provide encryption.

For some time now browsers have thrown all sorts of overblown warnings about “untrusted” sites which offer self-signed certificates such as the ones I have happily used for years (and which I note that Mike Orlitzky still uses). As I have said in the past, that is simply daft when the same browser will happily connect to the same site over an unencrypted plain HTTP channel with no warning whatsoever. Now, however, there is a concerted effort (started by Google – yes them again) to move to warning end users that plain HTTP sites are “insecure”. Beginning in July 2018 (that’s now) with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure” (sigh). And where Google goes with Chrome, Mozilla, Microsoft and Apple will surely follow with Firefox, Edge and Safari. As much as I may applaud the move to a more fully encrypted web, I deplore the misuse of the word “secure” in this context. Many small sites will now face balkanisation as their viewers fall away in the face of daft warnings from their browsers. Worse, the continued use of warnings which may be ignored by end users (who, let’s face it, often just carry on clicking until they get what they want to see) will surely desensitise those same users to /real/ security warnings that they should pay attention to. Better I feel to simply warn the user that “access to this site is not encrypted”. But what do I know?

I write articles on trivia in the expectation that someone, somewhere, will read them. Granted, blogging is the ultimate form of vanity publishing, but I flatter myself that some people genuinely may find some of my “how-to” style articles of some use. Indeed, I know from my logs and from email correspondence that my articles on VPN usage for example are used and found to be useful. It would be a shame (and largely pointless) to continue to write here if no-one except the hardiest of souls persistent enough to ignore their browsers ever read it. Worse, of course, is the fact that for many people, Google /is/ the internet. They turn to Google before all else when searching for something. If that search engine doesn’t even index trivia, then again I am wasting my time. So, reluctantly, I have decided now is the time to bite the bullet and apply a CA provided TLS certificate to trivia. Some of my more perceptive readers may have already noticed that trivia now defaults to HTTPS rather than plain HTTP. Fortunately, letsencrypt offers free (as in beer) certificates and the EFF provides an automated system of both installation and renewal of the necessary certificates. So I have deployed and installed a letsencrypt certificate here.
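
For anyone wanting to do the same, here is a sketch of one way to do it with EFF's certbot using the webroot method (domain and paths are placeholders; older lighttpd expects the key and certificate concatenated into a single pemfile):

# obtain the certificate
certbot certonly --webroot -w /var/www/blog -d blog.example.org

# combine key and cert for lighttpd's ssl.pemfile
cat /etc/letsencrypt/live/blog.example.org/privkey.pem \
    /etc/letsencrypt/live/blog.example.org/cert.pem \
    > /etc/lighttpd/ssl/blog.example.org.pem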

I still don’t like the CA model but, like Cnut the Great (and unlike his courtiers), I recognise my inability to influence the tides around me.

[Postscript]

Note that in order to ensure that I do not get a browser warning about “mixed content”, in addition to the necessary blog and lighttpd configuration changes I have run a global search and replace of all “http://” by “https://” on trivia. Whilst this now gives me a satisfyingly good clear green A+ on the SSL Labs site, it means that all off-site references which may have previously pointed to “http://somewhere.other” will now necessarily point to “https://somewhere.other”. This may break some links where the site in question has not yet moved to TLS support. If that happens, you may simply remove the trailing “s” from the link to get to the original site. Of course, if that still doesn’t work, then the link (or indeed entire site) may have moved or disappeared. It happens.
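
For a WordPress-backed blog, one way to script that global replacement is wp-cli's search-replace command, which has the advantage of handling serialised PHP data correctly where a naive SQL UPDATE would corrupt it (a sketch, assuming wp-cli is installed and run from the blog's directory):

# see what would change first
wp search-replace 'http://' 'https://' --dry-run

# then run it for real across all tables
wp search-replace 'http://' 'https://' --all-tables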

by Mick at July 07, 2018 04:03 PM

July 03, 2018

Daniel Silverstone (Kinnison)

Docker Compose

I glanced back over my shoulder to see the Director approaching. Zhe stood next to me, watched me intently for a few moments, before turning and looking out at the scape. The water was preternaturally calm, above it only clear blue. A number of dark, almost formless, shapes were slowly moving back and forth beneath the surface.

"Is everything in readiness?" zhe queried, sounding both impatient and resigned at the same time. "And will it work?" zhe added. My predecessor, and zir predecessor before zem, had attempted to reach the same goal now set for myself.

"I believe so" I responded, sounding perhaps slightly more confident than I felt. "All the preparations have been made, everything is in accordance with what has been written". The director nodded, zir face pinched, with worry writ across it.

I closed my eyes, took a deep breath, opened them, raised my hand and focussed on the scape, until it seemed to me that my hand was almost floating on the water. With all of my strength of will I formed the incantation, repeating it over and over in my mind until I was sure that I was ready. I released it into the scape and dropped my arm.

The water began to churn, the blue above darkening rapidly, becoming streaked with grey. The shapes beneath the water picked up speed and started to grow, before resolving to what appeared to be stylised Earth whales. Huge arcs of electricity speared the water, a screaming, crashing, wall of sound rolled over us as we watched, a foundation rose up from the depths on the backs of the whale-like shapes wherever the lightning struck.

Chunks of goodness-knows-what rained down from the grey streaked morass, thumping into place seamlessly onto the foundations, slowly building what I had envisioned. I started to allow myself to feel hope, things were going well, each tower of the final solution was taking form, becoming the slick and clean visions of function which I had painstakingly selected from among the masses of clamoring options.

Now and then, the whale-like shapes would surface momentarily near one of the towers, stringing connections like bunting across the water, until the final design was achieved. My shoulders tightened and I raised my hand once more. As I did so, the waters settled, the grey bled out from the blue, and the scape became calm and the towers shone, each in its place, each looking exactly as it should.

Chanting the second incantation under my breath, over and over, until it seemed seared into my very bones, I released it into the scape and watched it flow over the towers, each one ringing out as the command reached it, until all the towers sang, producing a resonant and consonant chord which rose of its own accord, seeming to summon creatures from the very waters in which the towers stood.

The creatures approached the towers, reached up as one, touched the doors, and screamed in horror as their arms caught aflame. In moments each and every creature was reduced to ashes, somehow fundamentally unable to make use of the incredible forms I had wrought. The Director sighed heavily, turned, and made to leave. The towers I had sweated over the design of for months stood proud, beautiful, worthless.

I also turned, made my way out of the realisation suite, and with a sigh hit the scape-purge button on the outer wall. It was over. The grand design was flawed. Nothing I created in this manner would be likely to work in the scape and so the most important moment of my life was lost to ruin, just as my predecessor, and zir predecessor before zem.

Returning to my chambers, I snatched up the book from my workbench. The whale-like creature winked at me from the cover, grinning, as though it knew what I had tried to do and relished my failure. I cast it into the waste chute and went back to my drafting table to design new towers, towers which might be compatible with the creatures which were needed to inhabit them and breathe life into their very structure, towers which would involve no grinning whales.

by Daniel Silverstone at July 03, 2018 03:13 PM

June 30, 2018

Chris Lamb

Free software activities in June 2018

Here is my monthly update covering what I have been doing in the free software world during June 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:



Debian

Patches contributed


Debian LTS


This month I worked 18 hours on Debian Long Term Support (LTS) and 7 hours on its sister Extended LTS project. In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.
  • A fair amount of initial setup and administration to accommodate the introduction of the new "Extended LTS" initiative as well as the transition of LTS from supporting Debian wheezy to jessie:
    • Fixing various shared scripts, including adding pushing to the remote repository for ELAs [...] and updating hard-coded wheezy references [...]. I also added instructions on exactly how to use the kernel offered by Extended LTS [...].
    • Updating, expanding and testing my personal scripts and workflow to also work for the new "Extended" initiative.
  • Provided some help on updating the Mercurial packages. [...]
  • Began work on updating/syncing the ca-certificates packages in both LTS and Extended LTS.
  • Issued DLA 1395-1 to fix two remote code execution vulnerabilities in php-horde-image, the image processing library for the Horde (https://www.horde.org/) groupware tool. The original fix applied upstream has a regression in that it ignores the "force aspect ratio" option, which I have fixed upstream.
  • Issued ELA 9-1 to correct an arbitrary file write vulnerability in the archiver plugin for the Plexus compiler system — a specially-crafted .zip file could overwrite any file on disk, leading to a privilege escalation.
  • During the overlap time between the support of wheezy and jessie I took the opportunity to address a number of vulnerabilities in all suites for the Redis key-value database, including CVE-2018-12326, CVE-2018-11218 & CVE-2018-11219 (via #902410 & #901495).

Uploads

  • redis:
    • 4.0.9-3 — Make /var/log/redis, etc. owned by the adm group. (#900496)
    • 4.0.10-1 — New upstream security release (#901495). I also uploaded this to stretch-backports and backported the packages to stretch.
    • Proposed 3.2.6-3+deb9u2 for inclusion in the next Debian stable release to address an issue in the systemd .service file. (#901811, #850534 & #880474)
  • lastpass-cli (1.3.1-1) — New upstream release, taking over maintainership and completely overhauling the packaging. (#898940, #858991 & #842875)
  • python-django:
    • 1.11.13-2 — Fix compatibility with Python 3.7. (#902761)
    • 2.1~beta1-1 — New upstream release (to experimental).
  • installation-birthday (11) — Fix an issue in calculating the age of the system by always preferring the oldest mtime we can find. (#901005)
  • bfs (1.2.2-1) — New upstream release.
  • libfiu (0.96-4) — Apply upstream patch to make the build more robust with --as-needed. (#902363)
  • I also sponsored an upload of yaml-mode (0.0.13-1) for Nicholas Steeves.

Debian bugs filed

  • cryptsetup-initramfs: "ERROR: Couldn't find sysfs hierarchy". (#902183)
  • git-buildpackage: Assumes capable UTF-8 locale. (#901586)
  • kitty: Render and ship HTML versions of asciidoc. (#902621)
  • redis: Use the system Lua to avoid an embedded code copy. (#901669)

June 30, 2018 06:11 PM

June 07, 2018

Brett Parker (iDunno)

The Psion Gemini

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship, not bad for an indiegogo project! Out of the box, I flashed it, using the non-approved linux flashing tool at that time, and failed to back up the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was pretty much unusable (which was marginally upsetting, but hey) - after a few more hours / days of playing around I got the IMEI number back in to the Gemini and put back on the stock android image. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too, turns out the mac addresses for those are also stored in the nvram (doh!), that's now mostly working through a bit of collaboration with another Gemini owner, my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll have a mac address collision, probably.

Overall, it's not a bad machine, the keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call, but not great until you're on a call, and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full time phone. It is however really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you, the keyboard is better than using the on screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes, I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

by Brett Parker (iDunno@sommitrealweird.co.uk) at June 07, 2018 01:04 PM

June 03, 2018

Steve Engledow (stilvoid)

Shue

I finally got around to releasing a tool I wrote a while back (git says I started it in November 2015).

It's called Shue and you can find it on github.

If you dig back in the commit history, you'll see that Shue was originally intended as a tool for converting rgb colour values into their nearest equivalent bash colour codes.

Shue doesn't do that now as I haven't really needed anything that does it since that one time :) I might bring back that functionality at some point but for now, here's what Shue does do:

I wrote this at the time because I was fiddling with a few websites and frequently needed the above functionality.

It's written in Go and there are binaries for Linux, Mac, and Windows on the releases page.

Let me know if you find it useful.

by Steve Engledow (steve@engledow.me) at June 03, 2018 08:52 PM

May 21, 2018

Daniel Silverstone (Kinnison)

Runtime typing

I have been wrestling with a problem for a little while now and thought I might send this out into the ether for others to comment upon. (Or, in other words, Dear Lazyweb…)

I am writing a system which collects data from embedded computers in my car (ECUs) over the CAN bus, using the on-board diagnostics port in the vehicle. This requires me to generate packets on the CAN bus, listen to responses, including managing flow control, and then interpret the resulting byte arrays.

I have sorted everything but the last little bit of that particular data pipeline. I have a prototype which can convert the byte arrays into "raw" values by interpreting them either as bitfields and producing booleans, or as anything from an unsigned 8 bit integer to a signed 32 bit integer in either endianness. Fortunately none of the fields I'd need to interpret are floats.

This is, however, pretty clunky and nasty. Since I asked around and a majority of people would prefer that I keep the software configurable at runtime rather than doing meta-programming to describe these fields, I need to develop a way to have the data produced by reading these byte arrays (or by processing results already interpreted out of the arrays) type-checked.

As an example, one field might be the voltage of the main breaker in the car. It's represented as a 16 bit big-endian unsigned field, in tenths of a volt. So the field must be divided by ten and then given the type "volts". Another field is the current passing through that main breaker. This is a 16 bit big-endian signed value measured in tenths of an amp, so must be interpreted as such, divided by ten, and then given the type "amps". I intend for all values handled beyond the raw byte arrays themselves to simply be floats, so there'll be signedness available regardless.

What I'd like, is to later have a "computed" value, let's call it "power flow", which is the voltage multiplied by the current. Naturally this would need to be given the type 'watts'. What I'd dearly love is to build into my program the understanding that volts times amps equals watts, and then have the reader of the runtime configuration type-check the function for "power flow".

I'm working on this in Rust, though for now the language is less important than the algorithms involved in doing this (unless you know of a Rust library which will help me along). I'd dearly love it if someone out there could help me to understand the right way to handle such expression type checking without having to build up a massively complex type system.

Currently I am considering things (expressed for now in yaml) along the lines of:

- name: main_voltage
  type: volts
  expr: u16_be(raw_bmc, 14) / 10
- name: main_current
  type: amps
  expr: i16_be(raw_bmc, 12) / 10
- name: power_flow
  type: watts
  expr: main_voltage * main_current

What I'd like is for each expression to be type-checked. I'm happy for untyped scalars to end up auto-labelled (so the u16_be() function would return an untyped number which then ends up marked as volts since 10 is also untyped). However when power_flow is typechecked, it should be able to work out that the type of the expression is volts * amps which should then typecheck against watts and be accepted. Since there's also consideration needed for times, distances, booleans, etc. this is not a completely trivial thing to manage. I will know the set of valid types up-front though, so there's that at least.
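
One observation that might help (this is just standard dimensional analysis, nothing specific to the design above): if each unit type is stored as a vector of integer exponents over a set of base dimensions, then multiplying values is just adding their vectors, division subtracts them, and the watts check falls out arithmetically:

volts        = kg m^2 s^-3 A^-1
amps         = A
volts * amps = kg m^2 s^-3     (which is exactly watts)

Booleans and other non-dimensional types can sit alongside as a separate kind that simply refuses arithmetic.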

If you have any ideas, ping me on IRC or perhaps blog a response and then drop me an email to let me know about it.

Thanks in advance.

by Daniel Silverstone at May 21, 2018 02:53 PM

February 21, 2018

MJ Ray

How hard can typing æ, ø and å be?

Petter Reinholdtsen's post "How hard can æ, ø and å be?" comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

by mjr at February 21, 2018 04:14 PM

February 18, 2018

Mick Morgan

database failure

In 1909, Franz Kafka wrote the “Inclusion of Private Automobile Firms in the Compulsory Insurance Program” as part of “The Office Writings”. His experience of tortuous bureaucracy in Insurance and elsewhere was later reflected in one of his most famous novels “Der Process” (known in English translation as “The Trial”).

Back in October last year I bought another motorcycle to go with my GSX 1250. I’d just sold three other older bikes and felt the need to fill up the resultant hole in my garage. Besides, a man can never have too many motorcycles. At the time I bought the new Yamaha I spoke to my insurers about getting it added to my existing policy. Unfortunately they had recently changed their systems and I could no longer have one policy covering both bikes. So I took out a new separate policy. Oddly enough, that policy cost me twice as much as I paid for cover on the GSX, a bike with over twice the power and a lot more grunt than my new Yamaha. I was told that whilst /I/ was still the same risk, the underwriters assumed that my Yamaha was a riskier vehicle to insure. The ways of insurers are odd indeed and beyond the ken of mortal man.

For the past few months, both my bikes have been wrapped up warm and dry in my garage awaiting a change in the weather so that I no longer have to use the car for everything. This turns out to be a very good thing indeed.

A couple of days ago I received a letter from the Motor Insurer’s Bureau and DVLA. That letter, headed “Stay Insured, Stay Legal” gave the registration number of my Yamaha and stated, in red, “Do not ignore this letter” and went on to say “To avoid a penalty, you will need to take action immediately”. “The record of insurance for your vehicle [REG NO] does not appear on the Motor Insurance Database (MID) and this means if you take no action, you will get a fine.”

The letter also explained that it was my responsibility, as registered keeper, to ensure that my bike was insured. If I was certain that my bike was insured, I was instructed to “contact [my] Insurance provider” since “MIB and DVLA cannot update your records on the MID”.

Pretty worrying and very specific about what I needed to do. So, firstly I checked the MID at “askmid.com” and sure enough, my bike did not appear.

askmid database query result

I then ‘phoned my Insurers who confirmed that I was insured and had been since October of last year when I took out the policy. I explained that I knew that was the case because I had the policy in front of me. But that didn’t help me because both DVLA and the MIB believed otherwise. Worse, the MID is used by the Police who will therefore similarly believe otherwise. Worse even than that, is the fact that an extract of the MIB database is supplied for use by ANPR cameras across the UK (See www.mib.org.uk). This means that I only have to pass an ANPR (which I do – a lot) whilst riding that particular bike to almost guarantee a police stop. I therefore asked my insurers to do what the MIB suggested and update my records. No can do, say my insurers. According to their systems I /am/ already on the MIB. After several, rather fruitless conversations (they called me back, I called them again) they suggested that I call the MIB. I explained again that the MIB had clearly stated that /they/ could do nothing, it was down to my insurer and them alone to ensure that my records were correct. Furthermore, the askmid website reinforces the message that “askMID and MIB do not sell insurance nor can we update the Motor Insurance Database (MID). These services are provided by your chosen insurer or broker”.

askmid contact info

Nevertheless, since I was getting nowhere with my insurer, I agreed to try to speak to the MIB and, if necessary, get them to talk to my insurer. Here, dear reader, is where the situation spirals further into the absurd. The letter from the MIB gives a contact telephone number which is completely automated. That advice line (you know the type, “press 1 for this option, 2 for that” etc.) eventually gave me the advice I had already received from the MIB letter and the askmid website – viz: “We cannot do anything, you must talk to your insurer”. So I went back to my insurer. You will not be surprised to read that my insurer, whilst sympathetic and understanding, felt that they had done their bit and the fault lay elsewhere.

Now, as a paying customer of a (compulsory) service I don’t care where the fault lies. My only point of leverage is with my insurer. I pay them for a service which does not simply stop with them issuing cover. They must also ensure that the relevant databases are kept up to date. This requirement is laid upon them by Statutory Instrument no 37 of 2003 – “The Motor Vehicles (Compulsory Insurance) (Information Centre and Compensation Body) Regulations 2003”.

The person I spoke to on my third, or possibly fourth, conversation with my Insurer suggested that in order to show that I /was/ fully insured I should carry a copy of my policy with me at all times when riding my bike.

This completely misses the point. It is a legal requirement for my bike’s records on the MIB database to be correct. Only my Insurer can do that. If those records are not correct, I face the almost certain chance of being stopped by the police. Now whilst I can (if I remember to “carry my papers” in the correct Orwellian manner) show the Officers stopping me that I /am/ insured, that will have wasted my time and the Police Officers’ time.

Not good. Not good at all. I’m sure Kafka would have understood my frustration.

And guess what may happen when the time comes for me to renew my insurance – on all my vehicles.

by Mick at February 18, 2018 03:31 PM

March 01, 2017

Brett Parker (iDunno)

Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6 only connectivity and the file storage is NFS over a private v4 network. The proxy will happily redirect requests for either http or https to the Pi, but (without turning on the Proxy Protocol) this results in the proxy servers' addresses appearing as the remote addresses in your logs, which is not entirely useful.

I've cheated a bit, because turning on the Proxy Protocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).

So, first step first, we get our RPi and we make sure that we can login to it via ssh (I'm nearly always on a v6 connection anyways, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them, with apache2 I changed it to listen to localhost only and on ports 8080 and 4443, I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.
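
For reference, the changed line ends up as (the rest of the vhost is untouched):

<VirtualHost [::1]:8080>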

So, with that in place, now we deploy haproxy in front of it; the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy protocol'd setup from the proxy servers, and you can still talk directly to the Pi over ipv6, you're not yet logging the right remote ips, but we're a step closer. Next enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!), once you've got dehydrated installed, you'll probably want to tweak it a bit, I have some magic extra files that I use, I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

www.srwpi.hostedpi.com srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: just edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:443 and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2
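
For reference, the edited directives in default-ssl.conf end up looking something like this (a sketch of just the changed lines, not the whole file):

<VirtualHost [::1]:443>
        ...
        CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined
        SSLCertificateFile /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem
        SSLCertificateKeyFile /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem
</VirtualHost>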

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 06:35 PM

October 18, 2016

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

by mjr at October 18, 2016 04:28 AM

March 09, 2015

Ben Francis

Pinned Apps – An App Model for the Web

(re-posted from a page I created on the Mozilla wiki on 17th December 2014)

Problem Statement

The per-OS app store model has resulted in a market where a small number of OS companies have a large amount of control, limiting choice for users and app developers. In order to get things done on mobile devices users are restricted to using apps from a single app store which have to be downloaded and installed on a compatible device in order to be useful.

Design Concept

Concept Overview

The idea of pinned apps is to turn the apps model on its head by making apps something you discover simply by searching and browsing the web. Web apps do not have to be installed in order to be useful, “pinning” is an optional step where the user can choose to split an app off from the rest of the web to persist it on their device and use it separately from the browser.

Pinned_apps_overview

“If you think of the current app store experience as consumers going to a grocery store to buy packaged goods off a shelf, the web is more like a hunter-gatherer exploring a forest and discovering new tools and supplies along their journey.”

App Discovery

A Web App Manifest linked from a web page says “I am part of a web app you can use separately from the browser”. Users can discover web apps simply by searching or browsing the web, and use them instantly without needing to install them first.

Pinned_apps_discovery

“App discovery could be less like shopping, and more like discovering a new piece of inventory while exploring a new level in a computer game.”

App Pinning

If the user finds a web app useful they can choose to split it off from the rest of the web to persist it on their device and use it separately from the browser. Pinned apps can provide a more app-like experience for that part of the web with no browser chrome and get their own icon on the homescreen.

Pinned_apps_pinning

“For the user pinning apps becomes like collecting pin badges for all their favourite apps, rather than cluttering their device with apps from an app store that they tried once but turned out not to be useful.”

Deep Linking

Once a pinned app is registered as managing its own part of the web (defined by URL scope), any time the user navigates to a URL within that scope, it will open in the app. This allows deep linking to a particular page inside an app and seamlessly linking from one app to another.

Pinned_apps_linking

“The browser is like a catch-all app for pages which don’t belong to a particular pinned app.”

Going Offline

Pinning an app could download its contents to the device to make it work offline, by registering a Service Worker for the app’s URL scope.

Pinned_apps_offline

“Pinned apps take pinned tabs to the next level by actually persisting an app on the device. An app pin is like an anchor point to tether a collection of web pages to a device.”

Multiple Pages

A web app is a collection of web pages dedicated to a particular task. You should be able to have multiple pages of the app open at the same time. Each app could be represented in the task manager as a collection of sheets, pinned together by the app.

Pinned_app_pages

“Exploding apps out into multiple sheets could really differentiate the Firefox OS user experience from all other mobile app platforms which are limited to one window per app.”

Travel Guide

Even in a world without app stores there would still be a need for a curated collection of content. The Marketplace could become less of a grocery store, and more of a crowdsourced travel guide for the web.

Pinned_apps_guide

“If a user discovers an app which isn’t yet included in the guide, they could be given the opportunity to submit it. The guide could be curated by the community with descriptions, ratings and tags.”

3 Questions

Pinned_apps_pinned

What value (the importance, worth or usefulness of something) does your idea deliver?

The pinned apps concept makes web apps instantly useful by making “installation” optional. It frees users from being tied to a single app store and gives them more choice and control. It makes apps searchable and discoverable like the rest of the web and gives developers the freedom of where to host their apps and how to monetise them. It allows Mozilla to grow a catalogue of apps so large and diverse that no walled garden can compete, by leveraging its user base to discover the apps and its community to curate them.

What technological advantage will your idea deliver and why is this important?

Pinned apps would be implemented with emerging web standards like Web App Manifests and Service Workers which add new layers of functionality to the web to make it a compelling platform for mobile apps. Not just for Firefox OS, but for any user agent which implements the standards.

Why would someone invest time or pay money for this idea?

Users would benefit from a unique new web experience whilst also freeing themselves from vendor lock-in. App developers can reduce their development costs by creating one searchable and discoverable web app for multiple platforms. For Mozilla, pinned apps could leverage the unique properties of the web to differentiate Firefox OS in a way that is difficult for incumbents to follow.

UI Mockups

App Search

Pinned_apps_search

Pin App

Pin_app

Pin Page

Pin_page

Multiple Pages

Multiple_pages

App Directory

App_directory

Implementation

Web App Manifest

A manifest is linked from a web page with a link relation:

  <link rel="manifest" href="/manifest.json">

A manifest can specify an app name, icon, display mode and orientation:

 {
   "name": "GMail",
   "icons": {...},
   "display": "standalone",
   "orientation": "portrait",
   ...
 }

There is a proposal for a manifest to be able to specify an app scope:

 {
   ...
   "scope": "/"
   ...
 }
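
A user agent could then decide whether a navigated URL belongs to a pinned app with a simple origin and path-prefix check against that scope. A minimal sketch (the app object and its manifestUrl property here are hypothetical, purely for illustration):

 // Sketch: does `url` fall within a pinned app's scope?
 // The scope is resolved relative to the URL of the app's manifest.
 function belongsToApp(url, app) {
   var scope = new URL(app.scope, app.manifestUrl);
   var target = new URL(url);
   return target.origin === scope.origin &&
          target.pathname.indexOf(scope.pathname) === 0;
 }

 // belongsToApp('https://mail.example.com/inbox/1', {
 //   manifestUrl: 'https://mail.example.com/manifest.json',
 //   scope: '/'
 // }); // => true, so the URL opens in the pinned app rather than the browser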

Service Worker

There is also a proposal to be able to reference a Service Worker from within the manifest:

 {
   ...
   "service_worker": {
     "src": "app.js",
     "scope": "/"
   },
   ...
 }
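
In the meantime, a page can register a Service Worker for a scope itself using the standard script API – a minimal sketch:

 // Register a Service Worker to manage the app's URL scope
 if ('serviceWorker' in navigator) {
   navigator.serviceWorker.register('/app.js', { scope: '/' })
     .then(function(registration) {
       console.log('Service Worker registered with scope', registration.scope);
     })
     .catch(function(error) {
       console.error('Service Worker registration failed:', error);
     });
 }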

A Service Worker receives an install event, whose handler can populate a cache with a web app’s resources when the worker is registered:

 this.addEventListener('install', function(event) {
   event.waitUntil(
     caches.open('v1').then(function(cache) {
       // pre-cache the app's core resources
       return cache.addAll([
         '/index.html',
         '/style.css',
         '/script.js',
         '/favicon.ico'
       ]);
     }).catch(function(error) {
       console.error('error populating cache ' + error);
     })
   );
 });

So that the app can then respond to requests for resources when offline:

 this.addEventListener('fetch', function(event) {
   event.respondWith(
     // serve from the cache, falling back to the network on a miss
     caches.match(event.request).then(function(response) {
       return response || fetch(event.request);
     })
   );
 });

by tola at March 09, 2015 03:54 PM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run-up to the “Mozlandia” work week in Portland, and reflecting on the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along 😉

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it's been a few years since I've posted, because it's been so much hard work, and we've been pushing really hard on some projects which I just can't talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we've grown as a company, and as we've become more ‘Enterprise’, we've brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We're able to segregate operational data and content from personally identifiable information – PII has much higher regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, things like SSL keys of the servers shouldn't be kept in the development repo. Now, of course not, I hear you yell, but it's a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh. If you keep *that* in, then you would keep your SSL certs in…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server or cluster of servers up and running.

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It's a step away from the continual deployment strategy, but it is mostly automated.
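
Roughly, the two stages might look something like this with the AWS CLI (a minimal sketch – the AMI ID, instance type, and stack/template names are placeholders, not the real tooling):

# Stage 1: boot a base instance, provision it, then snapshot it as a 'gold' AMI
instance_id=$(aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m3.medium \
    --query 'Instances[0].InstanceId' --output text)
# ... provisioning (package installs, configuration management) happens here ...
aws ec2 create-image --instance-id "$instance_id" --name "gold-webapp-$(date +%Y%m%d)"

# Stage 2: build the VPC and its servers from the baked AMIs,
# e.g. from a CloudFormation template that references them
aws cloudformation create-stack --stack-name secure-service-vpc \
    --template-body file://vpc-template.json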


July 10, 2014 02:09 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto-type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages; they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to’ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

freenode

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

webchat

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well, first off there’s the Nickname; this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short, and so long as nobody else is already using it you should be fine; if it doesn’t work, try another.

Channels is the awkward one: how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there, and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box.

Now all you need to do is type in the captcha, ignore the tick boxes and click Connect, and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. the topic for the channel) and, depending on how busy it is, anything from nothing to a fast-scrolling screen of text.

phph

If you’ve mistyped there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, which is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.

The post Beginning irc appeared first on Linuxlore.

by Paul Tansom at June 12, 2014 04:27 PM

May 06, 2014

Richard Lewis

Refocusing Ph.D

Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I had failed to make that clear to myself. It was only at the writing-up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, never mind that there was work to be done in examining the music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and it also allows me to work with the actual Purcell Plus project materials using the toolkit.

May 06, 2014 11:16 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

{-# LANGUAGE InstanceSigs #-} -- needed for the type signatures in the instance declarations below

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components; MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.

March 27, 2014 11:13 PM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002. I had managed to make 6 or 7 aborted attempts at reading it to completion, where life had suddenly got busy and just took over. This meant that I put the book down and didn't pick it up again until things were less hectic some time later, and I started again.

Anyhow, I finally beat the book a few nights ago; my comprehension of it was pretty low anyhow, but at least it is done. Just shows I need to read lots more given how little went in.


February 06, 2014 10:40 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising, and as a result I've gained loads of weight over the winter and, it turns out, also become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become; this is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500, but Amazon put the price up from £149.99 to £199.99 the day I was going to buy it.

I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, and I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit – which is still my longer-term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and a new set of map data available for download. So I installed it, thinking I wasn't going to get any new features from it as Mio had released some new models, but it turns out that the new firmware actually enables a feature (amongst other things, they also tidied up the UI and sorted a few other bugs) that makes the device massively more useful: it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me because, although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you had to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in all, it turns out that buying a Mio – for which the reviews and forums were full of doom and gloom – means I can wait even longer before considering a replacement with a Garmin.


February 01, 2014 02:11 PM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised – a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant – not as good as their previous album, but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones, of Nine Stones Close fame, pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen – hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig-wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes; that glorious night it came to pass. I am now one very happy Progger – or should that be Proggie? Never mind; viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils’ work stored on a network drive so they always have access whichever machine they sit at, this isn’t exactly helpful. It also isn’t ideal that they can explore the C: drive in spite of profile restrictions (although it isn’t the end of the world as there is little they can do from Scratch).

save-orig

After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

ini-orig

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

ini-new

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

save-new1

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

save-new2

You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven’t tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]
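
Putting the documented options together, a complete Scratch.ini for a network environment might end up looking something like this (the drive letters, server name and port are just examples):

Home=U:
VisibleDrives=U:,S:
ProxyServer=proxy.school.local
ProxyPort=8080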

The post Scratch in a network environment appeared first on Linuxlore.

by Paul Tansom at December 01, 2013 07:00 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


 LinuxMint 14 Add Printer Issue



 

I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn’t found by the printers dialog in system settings. I thought I’d done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane “bcast host lmhosts wins”. Having host and wins – neither of which I’m using – first in the order cocks things up somewhat. Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora-specific, and it should have no place in a Debian/Ubuntu-based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by inputting system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: http://localhost:631/ in your favourite browser, which should be there as long as CUPS is installed – which it is in LinuxMint by default.

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects Gnome 3, so as long as it’s not affecting Unity that will be okay, Canonical, will it!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time; it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is if that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist” I would spend some time browsing the racks, weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical musical releases on CD or Vinyl online and have it delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to – because I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM

June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM