Planet ALUG

December 13, 2014

Mick Morgan

solidarity with the tor project

On Thursday 11 December, Roger Dingledine of the Tor project posted the following email to the “tor-talk” mail list (to which I am subscribed).

I’d like to draw your attention to

https://blog.torproject.org/blog/solidarity-against-online-harassment
https://twitter.com/torproject/status/543154161236586496

One of our colleagues has been the target of a sustained campaign of harassment for the past several months. We have decided to publish this statement to publicly declare our support for her, for every member of our organization, and for every member of our community who experiences this harassment. She is not alone and her experience has catalyzed us to action. This statement is a start.

Roger asked those who deplored on-line harassment (of any person, for any reason) and who supported the Tor project’s action in publicly condemning the harassment of one of the Tor developers to add their name and voice to the blog post.

I am proud to have done so.

by Mick at December 13, 2014 07:16 PM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run up to the “Mozlandia” work week in Portland, and in reflection of the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along ;)

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

December 10, 2014

Chris Lamb

Starting IPython automatically from zsh

Instead of a calculator, I tend to use IPython for those quotidian bits of "mental" arithmetic:

In  [1]: 17 * 22.2
Out [1]: 377.4

However, I often forget to actually start IPython, resulting in me running the following in my shell:

$ 17 * 22.2
zsh: command not found: 17

Whilst I could learn to do this maths within Zsh itself, I would prefer to dump myself into IPython instead — being able to use "_" and Python modules generally is just too useful.

After following this pattern too many times, I put together the following snippet that will detect whether I have prematurely attempted a calculation inside zsh and pretend that I ran it in IPython all along:

zmodload zsh/pcre

math_regex='^[\d\-][\d\.\s\+\*\/\-]*$'

function math_precmd() {
    # If the last command succeeded, it wasn't a failed calculation
    if [ "${?}" = 0 ]
    then
        return
    fi

    # preexec never captured a command, so there is nothing to check
    if [ -z "${math_command}" ]
    then
        return
    fi

    # Skip anything that is actually a real command, silencing whence's output
    if whence -- "$math_command" >/dev/null 2>&1
    then
        return
    fi

    # [[ ... ]] is needed for the -pcre-match operator from zsh/pcre
    if [[ "${math_command}" -pcre-match "${math_regex}" ]]
    then
        echo
        ipython -i -c "_=${math_command}; print _"
    fi
}

function math_preexec() {
    typeset -g math_command="${1}"
}

typeset -ga precmd_functions
typeset -ga preexec_functions

precmd_functions+=math_precmd
preexec_functions+=math_preexec

For example:

lamby@seriouscat:~% 17 * 22.2
zsh: command not found: 17

377.4

In  [1]: _ + 1
Out [1]: 378.4

(Canonical version from my zshrc.d)

December 10, 2014 06:07 PM

December 09, 2014

Steve Engledow (stilvoid)

Testing a Django app with Docker

I've been playing around with Docker a fair bit and recently hit upon a configuration that works nicely for me when testing code at work.

The basic premise is that I run a docker container that pretty well emulates the exact environment that the code will run in, right down to the OS, so I don't need to care that I'm not running the same distribution as the servers we deploy to, and I can test my code at any time without having to rebuild the docker image.

Here's an annotated Dockerfile with the project-specific details removed.

# We start with ubuntu 14.04

FROM ubuntu:14.04
MAINTAINER Steve Engledow <steve@offend.me.uk>

USER root

# Install OS packages
# This list of packages is what gets installed by default
# on Amazon's Ubuntu 14.04 AMI plus python-virtualenv

RUN apt-get update \
    && apt-get -y install software-properties-common git \
    ssh python-dev python-virtualenv libmysqlclient-dev \
    libqrencode-dev swig libssl-dev curl screen

# Configure custom apt repositories
# and install project-specific packages

COPY apt-key.list apt-repo.list apt.list /tmp/

# Not as nice as this could be as docker defaults to sh rather than bash
RUN while read key; do curl --silent "$key" | apt-key add -; done < /tmp/apt-key.list
RUN while read repo; do add-apt-repository -y "$repo"; done < /tmp/apt-repo.list
RUN apt-get -qq update
RUN while read package; do apt-get -qq -y install "$package"; done < /tmp/apt.list

# Now we create a normal user and switch to it

RUN useradd -s /bin/bash -m ubuntu \
    && chown -R ubuntu:ubuntu /home/ubuntu \
    && passwd -d ubuntu

USER ubuntu
WORKDIR /home/ubuntu
ENV HOME /home/ubuntu

# Set up a virtualenv and install python packages
# from the requirements file

COPY requirements.txt /tmp/

RUN mkdir .myenv \
    && virtualenv -p /usr/bin/python2.7 ~/.myenv \
    && . ~/.myenv/bin/activate \
    && pip install -r /tmp/requirements.txt

# Set PYTHONPATH and activate the virtualenv in .bashrc

RUN echo "export PYTHONPATH=~/myapp/src" > .bashrc \
    && echo ". ~/.myenv/bin/activate" >> .bashrc

# Copy the entrypoint script

COPY entrypoint.sh /home/ubuntu/

EXPOSE 8000

ENTRYPOINT ["/bin/bash", "entrypoint.sh"]

And here's the entrypoint script that nicely wraps up running the django application:

#!/bin/bash

. ./.bashrc

cd myapp/src

# Pass all command-line arguments through to manage.py
./manage.py "$@"

You generate the base docker image from these files with docker build -t myapp ./.

Then, when you're ready to run a test suite, you need the following invocation:

docker run -ti --rm -P -v ~/code/myapp:/home/ubuntu/myapp myapp test

This mounts ~/code/myapp at /home/ubuntu/myapp within the Docker container, meaning that you're running the exact code that you're working on from inside the container :)

I have an alias that expands that for me so I only need to type docked myapp test.
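For reference, something along these lines does the job (a minimal sketch, assuming the image shares the name of the project directory under ~/code, as above):

docked() {
    # Usage: docked <app> [manage.py arguments...]
    # Expands to the docker run invocation above, mounting ~/code/<app>
    local app="$1"
    shift
    docker run -ti --rm -P -v ~/code/"$app":/home/ubuntu/"$app" "$app" "$@"
}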

Obviously, you can substitute test for runserver, syncdb or whatever :)

This is all a bit rough and ready but it's working very well for me now and is repeatable enough that I can use more-or-less the same script for a number of different django projects.

by Steve Engledow (steve@offend.me.uk) at December 09, 2014 01:00 AM

December 04, 2014

Chris Lamb

Don't ask your questions in private

(If I've linked you to this page, it is my feeble attempt to provide a more convincing justification.)


I often receive instant messages or emails requesting help or guidance at work or on one of my various programming projects.

When asked why they asked privately, the responses vary; mostly along the lines of it simply being an accident, not knowing where else to ask, as well as not wishing to "disturb" others with their bespoke question. Some will be more candid and simply admit that they were afraid of looking unknowledgeable in front of others.

It is always tempting to simply reply with the answer, especially as helping another human is inherently rewarding unless one is a psychopath. However, one can actually do more good overall by insisting that the question is re-asked in a more public forum.


This is for many reasons. Most obviously, public questions are simply far more efficient as soon as more than one person asks that question — the response can be found in a search engine or linked to in the future. These time savings soon add up, meaning that simply more stuff can be done in any given day. After all, most questions are not as unique as people think.

Secondly, a private communication cannot be corrected or elaborated on if someone else notices it is incorrect or incomplete. Even this rather banal point is more subtle than it first appears — the lack of possible corrections deprives both the person asking and the person responding of the true and correct answer.

Lastly, conversations that happen in private are depriving others of the answer as well. Perhaps someone was curious but hadn't got around to asking? Maybe the answer—or even the question!—contains a clue to solving some other issue. None of this can happen if it occurs behind closed doors.

(There are lots of subtler reasons too — in a large organisation or team, simply knowing what other people are curious about can be curiously valuable information.)


Note that this is not—as you might immediately suspect—simply a way of ensuring that one gets the public recognition or "kudos" from being seen helping others.

I wouldn't deny that technical communities work on a gift economy basis to some degree, but to attribute all acts of assistance as "selfish" and value-extracting would be to take the argument too far in the other direction. Saying that, the lure and appeal of public recognition should not be understated and can certainly provide an incentive to elaborate and provide a generally superior response.


More philosophically, there's also something fundamentally "honest" about airing issues in an appropriately public and transparent manner. I feel it promotes a culture of egoless conversations, of being able to admit one's mistakes and ultimately a healthy personal mindset.

So please, take care not only in the way you phrase and frame your question, but also consider the wider context in which you are asking it. And don't take it too personally if I ask you to re-ask elsewhere...

December 04, 2014 12:55 PM

MJ Ray

Autumn Statement #AS2014, the Google tax and how it relates to Free Software

One of the attention-grabbing measures in the Autumn Statement by Chancellor George Osborne was the google tax on profits going offshore, which may prove unworkable (The Independent). This is interesting because a common mechanism for moving the profits around is so-called transfer pricing, where the business in one country pays an inflated price to its sibling in another country for some supplies. It sounds like the intended way to deal with that is by inspecting company accounts and assessing the underlying profits.

So what’s this got to do with Free Software? Well, one thing the company might buy from itself is a licence to use some branding, paying a fee for each use. The main reason this is possible is because copyright is usually a monopoly, so there is no supplier of a replacement product, which makes it hard to assess how much the price has been inflated.

One possible method of assessing the overpayment would be to compare with how much other businesses pay for their branding licences. It would be interesting if Revenue and Customs decide that there’s lots of Royalty Free licensing out there – including Free Software – and so all licence fees paid to related companies are a tax avoidance ruse. Similarly, any premium for a particular self-branded product over a generic equivalent could be classed as profit transfer.

This could have amusing implications for proprietary software producers who sell to sister companies but I doubt that the government will be that radical, so we’ll continue to see absurdities like Starbucks buying all their coffee from famous coffee producing countries Switzerland and the Netherlands. Shouldn’t this be stopped, really?

by mjr at December 04, 2014 04:34 AM

December 01, 2014

Steve Engledow (stilvoid)

Just call me Anneka

I had an idea a few days ago to create a Pebble watchface that works like an advent calendar; you get a new Christmas-themed picture every day.

Here it is :)

The fun part, however, was that I completely forgot about the idea until today. Family life and my weekly squash commitment meant that I didn't have a chance to start work on it until around 22:00 and I really wanted to get it into the Pebble store by midnight (in time for the 1st of December).

I submitted the first release at 23:55!

Enjoy :)

I'll put the source on GitHub soon. Before that, it's time for some sleep.

by Steve Engledow (steve@offend.me.uk) at December 01, 2014 12:16 AM

November 27, 2014

Mick Morgan

independent hit

On trying to reach the website of the Independent newspaper today (the Grauniad is trying my patience of late), I received the following response:

[Screenshot: www.independent.co.uk - Chromium]

Closing the popup takes you to this page:

[Screenshot: Hacked by SEA - Chromium]

I haven’t checked whether this is simply a DNS redirect or an actual compromise of the Indy site, but however the graffiti was added, it indicates that the Indy has a problem.

by Mick at November 27, 2014 11:33 AM

October 09, 2014

Wayne Stallwood (DrJeep)

Hosting Update2

Well, after a year the SD card on the Raspberry Pi has failed; I noticed /var was unhappy when I tried to apply the recent Bash updates. Attempts at repair only made things worse and I suspect there is some physical issue. I had minimised writes, with logs in tmpfs, the frequently updated weather site sitting in tmpfs too, logging to remote systems, etc., so I'm not quite sure what happened. Of course this is all very inconvenient when your kit lives in another country, so at some point I guess I will have to build a new SD card and ship it out... for now we are back on Amazon EC2... yay for the elastic cloud \o/

October 09, 2014 09:31 PM

October 03, 2014

Ben Francis

What is a Web App?

What is a web app? What is the difference between a web app and a web site? What is the difference between a web app and a non-web app?

In terms of User Experience there is a long continuum between “web site” and “web app” and the boundary between the two is not always clear. There are some characteristics that users perceive as being more “app like” and some as more “web like”.

The presence of web browser-like user interface elements like a URL bar and navigation controls are likely to make a user feel like they’re using a web site rather than an app for example, whereas content which appears to run independently of the browser feels more like an app. Apps are generally assumed to have at least limited functionality without an Internet connection and tend to have the concept of residing in a self-contained way on the local device after being “installed”, rather than being navigated to somewhere on the Internet.

From a technical point of view there is in fact usually very little difference between a web site and a web app. Different platforms currently deal with the concept of “web apps” in all sorts of different, incompatible ways, but very often the main difference between a web site and web app is simply the presence of an “app manifest”. The app manifest is a file containing a collection of metadata which is used when “installing” the app to create an icon on a homescreen or launcher.

At the moment pretty much every platform has its own proprietary app manifest format, but the W3C has the beginnings of a proposed specification for a standard “Manifest for web applications” which is starting to get traction with multiple browser vendors.

Web Manifest – Describing an App

Below is an example of a web app manifest following the proposed standard format.

http://example.com/myapp/manifest.json:

{
  "name": "My App",
  "icons": [{
    "src": "/myapp/icon.png",
    "sizes": "64x64",
    "type": "image/png"
  }],
  "start_url": "/myapp/index.html"
}

The manifest file is referenced inside the HTML of a web page using a link relation. This is cool because with this approach a web app doesn’t have to be distributed through a centrally controlled app store, it can be discovered and installed from any web page.

http://example.com/myapp/index.html:

<!DOCTYPE html>
<html>
  <head>
    <title>My App - Welcome</title>
    <link rel="manifest" href="manifest.json">
    <meta name="application-name" content="My App">
    <link rel="icon" sizes="64x64" href="icon.png">
...

As you can see from the example, these basic pieces of metadata which describe things like a name, an icon and a start URL are not that interesting in themselves because these things can already be expressed in HTML in a web standard way. But there are some other proposed properties which could be much more interesting.

Display Modes – Breaking out of the Browser

We said above that one thing that makes a web app feel more app like is when it runs outside of the browser, without common browser UI elements like the URL bar and navigation controls. The proposed “display” property of the manifest allows authors of web content which is designed to function without the need for these UI elements to express that they want their content to run outside of the browser.

http://example.com/myapp/manifest.json:

{
  "name": "My App",
  "icons": [{
    "src": "/myapp/icon.png",
    "sizes": "64x64",
    "type": "image/png"
  }],
  "start_url": "/myapp/index.html"
  "scope": "/myapp"
  "display": "standalone"
}

The proposed display modes are “fullscreen”, “standalone”, “minimal-ui” and “browser”. The “browser” display mode opens the content in the user agent’s conventional method (e.g. a browser tab), but all of the other display modes open the content separate from the browser, with varying levels of browser UI.

There’s also a proposed “orientation” property which allows the content author to specify the default orientation (i.e. portrait/landscape) of their content.
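For example (a sketch in the same proposed format; the precise set of allowed values was still being settled at the time of writing):

http://example.com/myapp/manifest.json:

{
  "name": "My App",
  "start_url": "/myapp/index.html",
  "display": "standalone",
  "orientation": "landscape"
}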

App Scope – A Slice of the Web

In order for a web app to be treated separately from the rest of the web, we need to be able to define which parts of the web are part of the app, and which are not. The proposed “scope” property of the manifest defines the URL scope to which the manifest applies.

By default the scope of a web app is anything from the same origin as its manifest, but a single origin can also be sliced up into multiple apps or into app and non-app content.

Below is an example of a web app manifest with a defined scope.

http://example.com/myapp/manifest.json:

{
  "name": "My App",
  "icons": [{
    "src": "/myapp/icon.png",
    "sizes": "64x64",
    "type": "image/png"
  }],
  "start_url": "/myapp/index.html"
  "scope": "/myapp"
}

From the user’s point of view they can browse around the web, seamlessly navigating between web apps and web sites until they come across something they want to keep on their device and use often. They can then slice off that part of the web by “bookmarking” or “installing” it on their device to create an icon on their homescreen or launcher. From that point on, that slice of the web will be treated separately from the browser in its own “app”.

Without a defined scope, a web app is just a web page opened in a browser window which can then be navigated to any URL. If that window doesn’t have any browser-like navigation controls or a URL bar then the user can get stranded at a dead end on the web with no way to go back, or worse still can be fooled into thinking that a web page they thought was part of a web app they trust is actually from another, malicious, origin.

The web browser is like a catch-all app for browsing all of the parts of the web which the user hasn’t sliced off to use as a standalone app. Once a web app is registered with the user agent as managing a defined slice of the web, the user can seamlessly link into and out of installed web apps and the rest of the web as they please.

Service Workers – Going Offline

We said above that another characteristic users often associate with “apps” is their ability to work offline, in the absence of a connection to the Internet. This is historically something the web has done pretty badly at. AppCache was a proposed standard intended for this purpose, but there are many common problems and limitations of that technology which make it difficult or impractical to use in many cases.

A new, much more versatile, proposed standard is called Service Workers. Service Workers allow a script to be registered as managing a slice of the web, even whilst offline, by intercepting HTTP requests to URLs within a specified scope. A Service Worker can keep an offline cache of web resources and decide when to use the offline version and when to fetch a new version over the Internet.

The programmable nature of Service Workers makes them an extremely versatile tool in adding app-like capabilities to web content and getting rid of the notion that using the web requires a persistent connection to the Internet. Service Workers have lots of support from multiple browser vendors and you can expect to see them coming to life soon.
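To give a flavour, a minimal Service Worker script might look something like this (a sketch only, as the API was still being finalised at the time of writing): it caches a couple of resources at install time and serves them from the cache, falling back to the network.

self.addEventListener('install', function(event) {
  // Populate an offline cache when the Service Worker is installed
  event.waitUntil(caches.open('myapp-v1').then(function(cache) {
    return cache.addAll(['/myapp/index.html', '/myapp/icon.png']);
  }));
});

self.addEventListener('fetch', function(event) {
  // Serve cached responses where we have them, otherwise hit the network
  event.respondWith(caches.match(event.request).then(function(cached) {
    return cached || fetch(event.request);
  }));
});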

The proposed “service_worker” property of the manifest allows a content author to define a Service Worker which should be registered with a specified URL scope when a web app is installed or bookmarked on a device. That means that in the process of installing a web app, an offline cache of web resources can be populated and other installation steps can take place.

Below is our example web app manifest with a Service Worker defined.

http://example.com/myapp/manifest.json:

{
  "name": "My App",
  "icons": [{
    "src": "/myapp/icon.png",
    "sizes": "64x64",
    "type": "image/png"
  }],
  "start_url": "/myapp/index.html"
  "scope": "/myapp"
  "service_worker": {
    "src": "app.js",
    "scope": "/myapp"
  }
}

Packages – The Good, the Bad and the Ugly

There’s a whole category of apps which many people refer to as “web apps” but which are delivered as a package of resources to be downloaded and installed locally on a device, separate from the web. Although these resources may use web technologies like HTML, CSS and Javascript, if those resources are not associated with real URLs on the web, then in my view they are by definition not part of a web app.

The reason this approach is commonly taken is that it allows operating system developers and content authors to side-step some of the current shortcomings of the web platform. Packaging all of the resources of an app into a single file which can be downloaded and installed on a device is the simplest way to solve the offline problem. It also has the convenience that the contents of that package can easily be reviewed and cryptographically signed by a trusted party in order to safely give the app privileged access to system functions which would currently be unsafe to expose to the web.

Unfortunately the packaged app approach misses out on many of the biggest benefits of the web, like its universal and inter-linked nature. You can’t hyperlink into a packaged app, and providing an updated version of the app requires a completely different mechanism to that of web content.

We have seen above how Service Workers hold some promise in finally solving the offline problem, but packages as a concept may still have some value on the web. The proposed “Packaging on the Web” specification is exploring ways to take advantage of some of the benefits of packages, whilst retaining all the benefits of URLs and the web.

This specification does not explore a new security model for exposing more privileged APIs to the web however, which in my view is the single biggest unsolved problem we now have left on the web as a platform.

Conclusions

In conclusion, a look at some of the latest emerging web standards tells us that the answer to the question “what is a web app?” is that a web app is simply a slice of the web which can be used separately from the browser.

With that in mind, web authors should design their content to work just as well inside and outside the browser and just as well offline as online.

Packaged apps are not web apps and are always a platform-specific solution. They should only be considered as a last resort for apps which need access to privileged functionality that can’t yet be safely exposed to the web. New web technologies will help negate the need for packages for offline functionality, but packages as a concept may still have a role on the web. A security model suitable for exposing more privileged functionality to the web is one of the last remaining unsolved challenges for the web as a platform.

The web is the biggest ecosystem of content that exists, far bigger than any proprietary walled garden of curated content. Lots of cool stuff is possible using web technologies to build experiences which users would consider “app like”, but creating a great user experience on the web doesn’t require replicating all of the other trappings of proprietary apps. The web has lots of unique benefits over other app platforms and is unrivalled in its scale, ubiquity, openness and diversity.

It’s important that as we invent cool new web technologies we remember to agree on standards for them which work cross-platform, so that we don’t miss out on these unique benefits.

The web as a platform is here to stay!

by tola at October 03, 2014 04:50 PM

September 18, 2014

Jonathan McDowell

Automatic inline signing for mutt with RT

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean that PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

Update: Apparently Mutt in Jessie and beyond doesn't have the pgp_autosign option; you want crypt_autosign instead (and maybe crypt_autopgp but that defaults to yes so unless you've configured your setup to do S/MIME by default you should be fine). Thanks to Luca Capello for pointing this out.
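So on those newer versions the equivalent lines would presumably be (untested, and assuming pgp_autoinline is unchanged):

send-hook . "unset pgp_autoinline; unset crypt_autosign"
send-hook rt.debian.org "set crypt_autosign; set pgp_autoinline"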

September 18, 2014 10:39 AM

September 12, 2014

Jonathan McDowell

Back from DebConf 14

I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating + drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the physicalities of DebConf this year.

This year the conference format was a bit different; previous years have had a week long DebCamp before the week of the conference itself. This year went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.

Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.

For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!

September 12, 2014 02:03 PM

July 22, 2014

MJ Ray

Three systems

There are three basic systems:

The first is slick and easy to use, but fiddly to set up correctly and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible and requiring complete reinitialisation.

The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.

The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.

So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?

by mjr at July 22, 2014 03:59 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII having much higher regulation on who can (and auditing of who does) access it.

Several other key things had to change: for instance, things like SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh… if you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos. One repo per application (django wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server or cluster of servers up and running.

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


July 10, 2014 02:09 PM

June 28, 2014

Brett Parker (iDunno)

Sony Entertainment Networks Insanity

So, I have a SEN account (it's part of the PSN), I have 2 videos with SEN, I have a broken PS3 so I cannot deactivate video (you can only do that from the console itself, yes, really)... and the response from SEN has been abysmal, specifically:

As we take the security of SEN accounts very seriously, we are unable to provide support on this matter by e-mail as we will need you to answer some security questions before we can investigate this further. We need you to phone us in order to verify your account details because we're not allowed to verify details via e-mail.

I mean, seriously, they're going to verify my details over the phone better than over e-mail how exactly? All the contact details are tied to my e-mail account, I have logged in to their control panel and renamed the broken PS3 to "Broken PS3", I have given them the serial number of the PS3, and yet they insist that I need to call them, because apparently they're fucking stupid. I'm damned glad that I only ever got 2 videos from SEN, both of which I own on DVD now anyways, this kind of idiotic tie in to a system is badly wrong.

So, you phone the number... and now you get stuck with hold music for ever... oh, yeah, great customer service here guys. I mean, seriously, WTF.

OK - 10 minutes on the phone, and still being told "One of our advisors will be with you shortly". I get the feeling that I'll just be writing off the 2 videos that I no longer have access to.

I'm damned glad that I didn't decide to buy more content from that - at least you can reset the games entitlement once every six months without jumping through all these hoops (you have to reactivate each console that you still want to use, but hey).

by Brett Parker (iDunno@sommitrealweird.co.uk) at June 28, 2014 03:54 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages, they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

[Screenshot: freenode.net]

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

[Screenshot: freenode webchat]

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well first off there’s the Nickname, this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short and so long as nobody else is already using it you should be fine; if it doesn’t work try another. Channels is the awkward one, how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. topic for the channel) and depending on how busy it is anything from nothing to a fast scrolling screen of text.

[Screenshot: the #phph channel]

If you’ve mistyped there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, this is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.

by Paul Tansom at June 12, 2014 04:27 PM

May 06, 2014

Richard Lewis

Refocusing Ph.D

Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I failed to make that clear to myself. It was only at the writing up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, never mind that there was work to be done in examining the music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and also allows me to work with the actual Purcell Plus project materials using the toolkit.

May 06, 2014 11:16 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

{-# LANGUAGE InstanceSigs #-} -- needed for the signatures in the instances below

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components; MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.
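For instance, the paginate stub could be filled in directly from the types (a sketch, using maybeToList from the already-imported Data.Maybe), though whether this is what the workflow actually needs is another matter:

paginate :: Opening -> [Page]
-- An opening always has a first page, and sometimes a second
paginate (Opening (page, rest)) = page : maybeToList rest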

March 27, 2014 11:13 PM

February 22, 2014

Wayne Stallwood (DrJeep)

Outlook 2003, Cutting off Emails

I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licencing they have, they are running Outlook 2003 on Windows 7 64bit machines.

After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook: typed emails often get cut off mid-sentence when you click Send, so only part of the email gets sent!

What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.

Removing the IE11 update would probably fix it but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.

Navigate to:-
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.

February 22, 2014 08:43 AM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002, and had managed 6 or 7 aborted attempts at reading it to completion, where life had suddenly got busy and just took over. This meant that I put the book down and didn't pick it up again until things were less hectic some time later and I started again.

Anyhow, I finally beat the book a few nights ago, my comprehension of it was pretty low anyhow but at least it is done. Just shows I need to read lots more given how little went in.





February 06, 2014 10:40 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November which meant I had to stop exercising and as a result I've gained loads of weight over the winter and it turns out also become very unfit which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become, this is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500 but Amazon put the price up from £149.99 the day I was going to buy it to £199.99.

I knew when I got the Mio that it had a few issues surrounding usability and features but it was cheap enough at under £150 that I figured that even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor so I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit which is still my longer term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and new set of map data was available for download. So I installed it, thinking I wasn't going to get any new features from it as Mio had released some new models, but it turns out that the new firmware actually enables a single feature (amongst other things, they also tidied up the UI and sorted a few other bugs along with some other features) that makes the device massively more useful, as it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me because, although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you would have to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in all, it turns out that buying a Mio (for which reviews and forums were full of doom and gloom) means I can wait even longer before considering replacement with a Garmin.


February 01, 2014 02:11 PM

January 04, 2014

Brett Parker (iDunno)

Wow, I do believe Fasthosts have outdone themselves...

So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:

brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
   Name Server: NS1.FASTHOSTS.NET.UK
   Name Server: NS2.FASTHOSTS.NET.UK
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
    Name servers:
        ns1.fasthosts.net.uk      213.171.192.252
        ns2.fasthosts.net.uk      213.171.193.248
brettp@laptop:~$  host -t ns fasthosts.net.uk 213.171.192.252
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 213.171.193.248
;; connection timed out; no servers could be reached
brettp@laptop:~$

So, that's Fasthosts' core nameservers not responding, good start! They also provide livedns.co.uk, so let's have a look at that:

brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
    Name servers:
        ns1.livedns.co.uk         213.171.192.250
        ns2.livedns.co.uk         213.171.193.250
        ns3.livedns.co.uk         213.171.192.254
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.193.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.254
;; connection timed out; no servers could be reached

So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!

by Brett Parker (iDunno@sommitrealweird.co.uk) at January 04, 2014 10:24 AM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised, but a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums in no particular order for me were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen, hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig wise again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year goes again to Marillion at Aylesbury Friars in November. I had waited thirty years and forty odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind, Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils' work stored on a network drive so they always have access whichever machine they sit at, this isn't exactly helpful. It also isn't ideal that they can explore the C: drive in spite of profile restrictions (although it isn't the end of the world, as there is little they can do from Scratch).

[screenshot: save-orig, the default save dialogue]

After a bit of time with Google I found the answer, and since it didn't immediately leap out at me when I was searching, I thought I'd post it here (perhaps my Google fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine: just edit the scratch.ini file. This is, as would be expected, located at:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

[screenshot: ini-orig, the default Scratch.ini]

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

[screenshot: ini-new, Scratch.ini with the two extra lines]

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

[screenshot: save-new1, the Home button now restricted to U:]

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

[screenshot: save-new2, the Computer button showing only the visible drives]

You can do the same on a Mac (for the home drive); just use the appropriate directory format (i.e. no drive letter, and forward slashes rather than backslashes).

There is more that you can do, so take a look at the Scratch documentation. For example, if you use a * in the directory path it is replaced by the name of the currently logged-on user, as in the sketch below.
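
For instance, something like this would drop each pupil into their own folder (a hypothetical layout; adjust the path to suit your own network):

Home=U:\pupils\*\Scratch
VisibleDrives=U: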

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in VisibleDrives. One I haven't tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]
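
Putting it all together, a complete Scratch.ini for a school network might end up looking something like this (the drive letters and proxy details below are made up; substitute your own):

Home=U:
VisibleDrives=U:,S:
ProxyServer=proxy.school.local
ProxyPort=8080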

by Paul Tansom at December 01, 2013 07:00 PM

February 22, 2013

Joe Button

Sampler plugin for the baremetal LV2 host

I threw together a simple sampler plugin for kicks. Like the other plugins it sounds fairly underwhelming. The next challenge will probably be to try plugging in some real LV2 plugins.

February 22, 2013 11:22 PM

February 21, 2013

Joe Button

Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of this one here, messed with the code a bit and voila (and potentially viola): I can play LV2 instrument plugins using a MIDI keyboard.

When I say "LV2 instrument plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point, but it will be a while before you can directly plug LV2s into this and expect them to just work.

February 21, 2013 04:05 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue

I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. The problem is that it isn't found by the printers dialog in System Settings. I thought I'd done all the normal things to get Samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane "bcast host lmhosts wins"; having host and wins first in the order, neither of which I'm using, cocks things up somewhat. Every time I tried to search for the printer in the System Settings dialog it told me "FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall." Much scratching of the head there, then, because as far as I can tell there ain't no daemon by that name available!

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It's due to that particular daemon (for Windows people, daemon pretty much = service) being Fedora-specific, so it should have no place in a Debian/Ubuntu-based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works: point your favourite browser at http://localhost:631/. It should be there as long as CUPS is installed, which it is in LinuxMint by default.
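
If you would rather avoid dialogs altogether, CUPS can also be driven from the command line with lpadmin. A rough sketch, assuming a Windows host called winbox sharing a printer called OfficePrinter (both names are made up; the smb backend comes from the smbclient package on Debian/Ubuntu):

$ sudo lpadmin -p OfficePrinter -E \
      -v smb://winbox/OfficePrinter \
      -m drv:///sample.drv/generic.ppd   # generic PostScript driver
$ lpstat -p OfficePrinter                # confirm the queue exists
$ echo "test" | lp -d OfficePrinter      # send a test job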

So come on, Minty people, get your bug-squashing boots on and stamp on this one please.

Update

Bug #871985 only affects Gnome 3, so as long as it's not affecting Unity that will be okay, Canonical, will it!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it's been the same for a few years now) I have been finding it very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time; it just doesn't seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted "something else by this artist", I would spend some time browsing the racks, weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist's back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn't know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio and advertising, and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available that catalogue releases by artists, so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick-and-mortar store to hand over my cash. I can now not only buy physical releases on CD or vinyl online and have them delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp, or even stream it straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists being able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would sit myself down and listen to it. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn't exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can't remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers, or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn't seem to affect me as much nowadays as it used to: because I don't experience it in the same way. My life has changed; I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this "instant music" should be instantly satisfying, but for some reason it doesn't seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I'm hoping I'm not alone in this; alternatively, I'm hoping someone might read this, recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM

June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

October 15, 2011

David Reynolds

Git Workflow

I’ve been using this for a while and had it recorded on a private wiki. I was just tidying up my hosting account and thought I’d get rid of the wiki and store any useful info from it on my blog.

Clone full subversion history into git repository (warning, may take a long time depending on how many commits you have in your Subversion repository).

$ git-svn clone -s http://example.com/my_subversion_repo local_dir

-s signifies that trunk/, branches/ and tags/ exist in the svn repo (the standard repository layout)

Create branch for local changes and check it out

$ git checkout -b XXXX-description # where XXXX is a ticket number

Make my changes in the branch… Make my commits in the branch…

Change back to master branch

$ git checkout master

Merge branch as one commit to master

$ git merge --squash XXXX-description

Commit changes to master branch:

$ git commit -a

Push changes back to svn:

$ git svn dcommit

Resync the local branch with master:

$ git checkout XXXX-description
$ git rebase master
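
Once the changes have been dcommitted and the branch rebased, the local branch can be deleted if it's no longer needed. An optional tidy-up step, not part of the workflow above (-D because, after a squash merge, git can't tell the branch has been merged):

$ git checkout master
$ git branch -D XXXX-description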

October 15, 2011 02:12 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun: a collection of comments found in code, from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM