Planet ALUG

May 25, 2017

Mick Morgan

monday in manchester

At around 22.30 last Monday, Manchester was subjected to an horrific attack at a pop concert. As the world now knows, a suicide bomber deliberately targeted young people and their friends and families as they were leaving a concert by the young pop singer Ariana Grande. In that attack, 22 people, including children as young as 8 years old, lost their lives. Many, many more received life-changing injuries.

This is the first confirmed suicide bombing attack in the UK since 7 July 2005. On that day, 12 years ago, I was working in London. I can vividly recall the aftermath of that attack. Shock, horror, disbelief, later turning to anger. But I also vividly recall the reactions of Londoners and visitors to London I met, talked to or simply listened to over the days that followed. Only a few days after the 7th I was travelling by bus to a meeting when, quite unbidden, a middle-aged American couple, obviously tourists, told me and everyone else on the bus that they shared our pain and that they were praying for us. I am not a religious man, indeed, I have no faith whatsoever, but I was deeply moved by that couple’s sincerity. Later, towards the end of July, my wife and I were travelling by Tube towards St Pancras on our way to Paris for our wedding anniversary. The driver of that Tube welcomed us (and everyone else) aboard the “up yours al-Qaeda express”. This show of defiance in the face of horror actually raised a number of smiles from those around us. London survived, Londoners endured.

The citizens of Manchester are now all facing profound shock and grief. That shock and grief will also be felt by anyone who has any shred of humanity within them. London was bad – 52 people lost their lives in that series of co-ordinated attacks. But somehow, Manchester feels worse, much worse. The London bombers targeted morning Tube and bus travellers – mainly commuters, some of whom were late for work because of earlier rail disruption that day. They were a soft target. But the Manchester bombing was callously and deliberately aimed at the ultimate soft target – kids; youngsters and their families emerging from what should have been a wonderful night out. Kids simply enjoying themselves at a concert many would have been planning for and looking forward to for months. Ariana Grande’s fanbase is primarily young women and girls. The attacker would have known that and yet he deliberately chose to detonate his bomb at that time and that place. He, and any accomplices he may have had, deserve nothing but our contempt. Manchester will survive, and Mancunians will endure. They have faced this before in the IRA truck bombing in June 1996. That attack didn’t break them. This one won’t either.

Meanwhile, everyone must grieve for the loss of so many young lives in such a pointless, pitiless attack. My thoughts, and those of my family, are with Manchester.

by Mick at May 25, 2017 03:37 PM

May 12, 2017

Mick Morgan

using a VPN to take back your privacy

With the passage into law of the iniquitous Investigatory Powers (IP) Bill in the UK at the end of November last year, it is way past time for all those who care about civil liberties in this country to exercise their right to privacy.

The new IP Act permits HMG and its various agencies to surveil the entire online population. The Act actually formalises (or in reality, legalises) activity which has long gone on in this country (as in others) in that it gives law enforcement agencies (LEAs) and others a blanket right of surveillance.

The Act (PDF) itself states that it is:

“An Act to make provision about the interception of communications, equipment interference and the acquisition and retention of communications data, bulk personal datasets and other information; to make provision about the treatment of material held as a result of such interception, equipment interference or acquisition or retention; to establish the Investigatory Powers Commissioner and other Judicial Commissioners and make provision about them and other oversight arrangements; to make further provision about investigatory powers and national security; to amend sections 3 and 5 of the Intelligence Services Act 1994; and for connected purposes.”

(Don’t you just love the “connected purposes” bit?)

The Open Rights Group says the Act:

“is one of the most extreme surveillance laws ever passed in a democracy. Its impact will be felt beyond the UK as other countries, including authoritarian regimes with poor human rights records, will use this law to justify their own intrusive surveillance regimes.”

Liberty, which believes the Act breaches the public’s rights under the Human Rights Act, is challenging the Act through the Courts. That organisation says:

“Liberty will seek to challenge the lawfulness of the following powers, which it believes breach the public’s rights:

– Bulk hacking – the Act lets police and agencies access, control and alter electronic devices like computers, phones and tablets on an industrial scale, regardless of whether their owners are suspected of involvement in crime – leaving them vulnerable to further attack by hackers.

– Bulk interception – the Act allows the state to read texts, online messages and emails and listen in on calls en masse, without requiring suspicion of criminal activity.

– Bulk acquisition of everybody’s communications data and internet history – the Act forces communications companies and service providers to hand over records of everybody’s emails, phone calls and texts and entire web browsing history to state agencies to store, data-mine and profile at its will.

This provides a goldmine of valuable personal information for criminal hackers and foreign spies.

– “Bulk personal datasets” – the Act lets agencies acquire and link vast databases held by the public or private sector. These contain details on religion, ethnic origin, sexuality, political leanings and health problems, potentially on the entire population – and are ripe for abuse and discrimination.”

ProtonMail, a mail provider designed and built by “scientists, engineers, and developers drawn together by a shared vision of protecting civil liberties online”, announced on Thursday 19 January that they will be providing access to their email service via a Tor onion site, accessible only over the Tor anonymising network. The ProtonMail blog entry announcing the new service says:

“As ProtonMail has evolved, the world has also been changing around us. Civil liberties have been increasingly restricted in all corners of the globe. Even Western democracies such as the US have not been immune to this trend, which is most starkly illustrated by the forced enlistment of US tech companies into the US surveillance apparatus. In fact, we have reached the point where it is simply not possible to run a privacy and security focused service in the US or in the UK.

At the same time, the stakes are also higher than ever before. As ProtonMail has grown, we have become increasingly aware of our role as a tool for freedom of speech, and in particular for investigative journalism. Last fall, we were invited to the 2nd Asian Investigative Journalism Conference and were able to get a firsthand look at the importance of tools like ProtonMail in the field.

Recently, more and more countries have begun to take active measures to surveil or restrict access to privacy services, cutting off access to these vital tools. We realize that censorship of ProtonMail in certain countries is not a matter of if, but a matter of when. That’s why we have created a Tor hidden service (also known as an onion site) for ProtonMail to provide an alternative access to ProtonMail that is more secure, private, and resistant to censorship.”

So, somewhat depressingly, the UK is now widely seen as a repressive state, willing to subject its citizens to a frighteningly totalitarian level of surveillance. Personally I am not prepared to put up with this without resistance.

Snowden hype notwithstanding, HMG does not have the resources to directly monitor all electronic communications traffic within the UK or to/from the UK, so it effectively outsources that task to “communications providers” (telcos for telephony and ISPs for internet traffic). Indeed, the IP act is intended, in part, to force UK ISPs to retain internet connection records (ICRs) when required to do so by the Home Secretary. In reality, this means that all the major ISPs, who already have relationships with HMG of various kinds, will be expected to log all their customers’ internet connectivity and to retain such logs for so long as is deemed necessary under the Act. The Act then gives various parts of HMG the right to request those logs for investigatory purposes.

Given that most of us now routinely use the internet for a vast range of activity, not limited just to browsing websites, but actually transacting in the real world, this is akin to requiring that every single library records the book requests of its users, every single media outlet (newsagents, bookshops, record shops etc.) records every purchase in a form traceable back to the purchaser, and every single professional service provider (solicitors, lawyers, doctors, dentists, architects, plumbers, builders etc.) records all activity by name and address of visitor. All this on top of the already existing capability of HMG to track and record every single person, social media site or organisation we contact by email or other form of messaging.

Can you imagine how you would feel if on every occasion you left your home a Police Officer (or in fact officials from any one of 48 separate agencies, including such oddities as the Food Standards Agency, the NHS Business Services Authority or the Gambling Commission) had the right, without a warrant or justifiable cause, to stop you and search you so that (s)he could read every piece of documentation you were carrying? How do you feel about submitting to a fishing trip through your handbag, briefcase, wallet or pockets?

I have no problem whatsoever with targeted surveillance, but forgive me if I find the blanket unwarranted surveillance of the whole populace, on the off-chance it might be useful, completely unacceptable. What happened to the right to privacy and the presumption of innocence in the eyes of the law? The data collected by ISPs and telcos under the IP act gives a treasure trove of information on UK citizens that the former East German Stasi could only have dreamed about.

Now regardless of whether or not you trust HMG to use this information wisely, and only for the reasons laid out under the Act, and only in the strict circumstances laid out in the Act, and only with the effective scrutiny of “independent” oversight, how confident are you that any future administration would be similarly wise and circumspect? What is to stop a future, let us suppose, less enlightened or liberal administration, misusing that data? What happens if in future some act which is currently perfectly legal and permissible, if of somewhat dubious taste, morality and good sense (such as, say, reading the Daily Mail online) were to become illegal? What constraint would there be to prevent a retrospective search for past consumers of such dubious material in order to flag them as “persons of interest”?

And even if you are comfortable with all of that, how comfortable are you with the idea that organised crime could have access to all your personal details? Given the aggregation of data inherent in the requirement for bulk data collection by ISPs, those datasets become massive and juicy targets for data theft (by criminals as well as foreign nation states). And if you think that could not happen because ISPs and Telcos take really, really, really good care of their customers’ data, then think about TalkTalk or Plusnet or Three or Yahoo.

And they are just a few of the recent ones that we /know/ about.

So long as I use a UK landline or mobile provider for telephony, there is little I can do about the aggregation of metadata about my contacts (and if you think metadata aggregation doesn’t matter, take a look at this EFF note). I can, of course, and do, keep a couple of (cash) pre-paid SIM only mobile ‘phones handy – after all, you never know when you may need one (such as, perhaps, in future when they become “difficult” to purchase). And the very fact that I say that probably flags me as suspicious in some people’s minds. (As an aside, ask yourself what comes to mind when you think about someone using a cash-paid, anonymous, second-hand mobile ‘phone. See? I must be guilty of something. Notice how pernicious suspicion becomes? Tricky, isn’t it?) Nor can I do much about protecting my email (unless I use GPG, but that is problematic and in any case does not hide the all-important metadata in the to/from/date/subject headers). Given that, I have long treated email just as if it were correspondence by postcard, though somewhat less private. For some long time I used to routinely GPG sign all my email. I have stopped doing that because the signatures meant, of course, that I had no deniability. Nowadays I only sign (and/or encrypt) when I want my correspondents to be sure I am who I say I am (or they want that reassurance).

But that does not mean I think I should just roll over and give up. There is plenty I can do to protect both myself and my immediate family from unnecessary, intrusive, unwarranted and unwanted snooping. For over a year now I have been using my own XMPP server in place of text messaging. I have had my own email server for well over a decade, and so long as I am conversing there with others on one of my domains served by that system, then that email is pretty private too (protected in transit by TLS using my own X509 certificates). My web browsing has also long been protected by Tor. But all that still leaves trails I don’t like leaving. I might, for example, not want my ISP to even know that I am using Tor, and in the case of my browsing activity it becomes problematic to protect others in my household or to cover all the multiple devices we now have which are network connected (I’ve actually lost count and would have to sit down and list them carefully to be sure I had everything covered).

What to do? The obvious solution is to wrap all my network activity in a VPN tunnel through my ISP’s routers before I hit the wider internet. That way my ISP can’t log anything beyond the fact that I am using a VPN. But which VPN to use? And should I go for a commercial service or roll my own? Bear in mind that not all VPNs are created equal, nor are they all necessarily really private or secure. The “P” in VPN refers to the ability to interconnect two separate (probably RFC 1918) private networks across a public untrusted network. It does not actually imply anything about the end user’s privacy. And depending upon the provider chosen and the protocols used, end user privacy may be largely illusory. In the worst case scenario, depending upon the jurisdiction in which you live and your personal threat model, a badly chosen VPN provider may actually reduce privacy by drawing attention to the fact that you value that privacy. (As an aside, using Tor can also have much the same effect. Indeed, there is plenty of anecdotal evidence to suggest that Tor usage lights you up like a Christmas tree in the eyes of the main global passive adversaries (GPAs).)

Back in 2015, a team of researchers from the Sapienza University of Rome and Queen Mary University of London published a paper (PDF) entitled “A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients”. That paper described the researchers’ findings from a survey of 14 of the better known commercial VPN providers. The team chose the providers in much the same way you or I might do so – they searched on-line for “best VPN” or “anonymous VPN” and chose the providers which came highest or most frequently in the search results. The paper is worth reading. It describes how a poor choice of provider could lead to significant traffic leakage, typically through IPV6 or DNS. The table below is taken from their paper.

[Image: table of results taken from the paper]

The paper describes some countermeasures which may mitigate some of the problems. In my case I disable IPV6 at the router and apply firewall rules at both the desktop and VPS end of the tunnel to deny IPV6. My local DNS resolver files point to the OpenVPN endpoint (where I run a DNS resolver stub) for resolution and both that server and my local DNS resolvers (dnsmasq) point only to opennic DNS servers. It may help.
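For what it is worth, a minimal sketch of that sort of countermeasure looks something like the following. The addresses and paths are illustrative only (10.8.0.1 is just the usual OpenVPN default for the server end of the tunnel), not my actual configuration, and you would want equivalent ip6tables rules at the VPS end too.

# drop all IPv6 at this end of the tunnel so nothing can leak around
# the (IPv4 only) OpenVPN tunnel
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT DROP

# point the local dnsmasq only at the resolver stub on the OpenVPN
# endpoint, ignoring whatever /etc/resolv.conf says
cat >> /etc/dnsmasq.conf <<'EOF'
no-resolv
server=10.8.0.1
EOF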

There are reports that usage of commercial VPN providers has gone up since the passage of the IP act. Many commercial VPN providers will be using the passage of the act as a potential booster for their services. And there are plenty of VPN providers about – just do what the Sapienza and Queen Mary researchers did and search for “VPN Provider” or “VPN services” to get lots of different lists, or take a look at lists provided by such sites as PrivacyTools or BestVPN. One useful point about the better commercial providers is that they usually have substantial infrastructure in place offering VPN exit points in various geographic locations. This can be particularly useful if you want to appear to be based in a particular country. Our own dear old BBC for example will block access to some services if you are not UK based (or if you are UK based and try to access services designed for overseas users). This can be problematic for UK citizens travelling overseas who wish to view UK services. A VPN with a UK exit gets around that problem. VPN users can also use local exits when they wish to access similarly (stupidly) protected services in foreign locales (the idiots in the media companies who are insistent on DRM in all its manifest forms are becoming more than just tiresome).

Some of the commercial services look better than others to me, but they all have one simple flaw as far as I am concerned. I don’t control the service. And no matter what the provider may say about “complete anonymity” (difficult if you want to pay by credit card) or “no logs”, the reality is that either there will be logs or the provider may be forced to divulge information by law. And don’t forget the problem of traffic leakage through IPV6 or DNS noted above. One further problem for me in using a commercial VPN provider rather than my own endpoint(s) is that I cannot then predict my apparent source IP address. This matters to me because my firewall rules limit ssh access to my various servers by source IP address. If I don’t know the IP address I am going to pop out on, then I’m going to have to relax that rule. I choose not to. I have simply amended my iptables rules to permit access from all my VPN endpoints.
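For illustration, the sort of iptables rule I mean looks something like this (the addresses below are from the documentation ranges and are obviously not my real endpoints):

# allow ssh only from known VPN endpoint addresses, drop the rest
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.20 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP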

The goldenfrog site has an interesting take on VPN anonymity. (Note that Goldenfrog market their own VPN service called “VyprVPN” so they are not entirely disinterested observers, but the post is still worth reading.) If you are simply concerned with protecting your privacy whilst browsing the net, and you are not concerned about anonymity, then there may be a case for you to consider using a commercial provider – just don’t pick a UK company because they will be subject to lawful intercept requests under the IP act. Personally I’d shy away from US based companies too (a view that is shared by PrivacyTools.io so it’s not just me). I would also only pick a provider which supports OpenVPN (or possibly SoftEther) in preference to less secure protocols such as PPTP or L2TP. (For a comparison of the options, see this BestVPN blog post.)

If you wish to use a commercial VPN provider, then I would strongly recommend that you pay for it – and check the contractual arrangements carefully to ensure that they match your requirements. I suggest this for the same reasons I recommend that you pay for an email service. You get a contract. In my view, using a free VPN service might be worse than using no VPN. Think carefully about the business model for free provision of services on the ‘net. Google is a good example of the sort of free service provider which I find problematic. Using a commercial, paid for, VPN service has the distinct advantage that the provider has a vested interest in keeping his clients’ details, and activity, private. After all, his business depends upon that. Trust is fragile and easily lost. If your business is predicated on trustworthiness then I would argue that you will (or should) work hard to maintain that trust. PrivacyTools has a good set of recommendations for VPN providers.

But what if, like me, you are still unsure about using a commercial VPN? Should you use your own setup (as I do)? Here are some things to think about.

Using a commercial VPN

 

For: Probably easier than setting up OpenVPN on a self-managed VPS for most people. The service provider will usually offer configuration files aimed at all the most popular operating systems. In many cases you will get a “point and click” application interface which will allow you to select the country you wish to pop out in.
Against: “Easier” does not mean “safer”. For example, the VPN provider may provide multiple users with the same private key wrapped up in its configuration files. Or the provider may not actually use OpenVPN. The provider may not offer support for YOUR chosen OS, or YOUR router. Beware in particular of “binary blob” installation of VPN software or configuration files (this applies particularly to Windows users). Unless you are technically competent (which you may not be if you are relying on this sort of installation) then you have no idea what is in that binary installation.

For: You get a contract (if you pay!).
Against: That contract may not be as strong as you might wish, or it might specifically exclude some things you might wish to see covered. Check the AUP before you select your provider. You get what you pay for.

For: Management and maintenance of the service (e.g. software patching) is handled by the provider.
Against: You rely on the provider to maintain a secure, up to date, fully patched service. Again, you get what you pay for.

For: The provider (should) take your security and privacy seriously. Their business depends on it.
Against: The provider may hold logs, or be forced to log activity if local LE require that. They may also make simple mistakes which leak evidence of your activity (is their DNS secure?). The VPN service is also a large, attractive, juicy target for hostile activity by organised crime and/or Global Passive Adversaries such as GCHQ and NSA. Consider your threat model and act accordingly.

For: Your network activity is “lost” in the noise of activity of others.
Against: But your legal and legitimate activity could provide “cover” for criminal activity of others. If this results in LEA seizure (or other surveillance) of the VPN endpoint then your activity is swept up in the investigation. Are you prepared for the possible consequences of that?

For: You should get “unlimited” bandwidth (if you pay for it).
Against: But you may have to trade that off for reduced access speed, particularly if you are in contention for network usage with a large number of other users.

For: You (may) be able to set up the account completely anonymously using bitcoin.
Against: Using a VPN provider cannot guarantee you are anonymous. All it can do is enhance your privacy. Do not rely on a VPN to hide illegal activity. (And don’t rely on Tor for that either!)

For: You may be able to select from a wide range of exit locations depending upon need.
Against: Most VPN providers are terrible.

 

Using your own VPN

 

For: You get full control over the protocol you use, the DNS servers you use, the ciphers you choose and the location(s) you pop up in.
Against: You have to know what you are doing and you have to be comfortable in configuring the VPN software. Moreover, you need to be sure that you can actually secure the server on which you install the VPN server software as well as the client end. There is no point in having a “secure” tunnel if the end server leaks like a sieve or is subject to surveillance by the server provider – you have just shifted surveillance from the UK ISP to someone else.

For: It can be cheaper than using a commercial service.
Against: It may not be. If you want to be able to pop out in different countries you will have to pay for multiple VPSs in multiple datacentres. You will also be responsible for maintaining those servers.

For: You can be confident that your network activity is actually private because you can enforce your own no logging policy.
Against: No you can’t be sure. The VPS provider may log all activity. Check the privacy policy carefully. And be aware that the provider of a 3 euro a month VPS is very likely to dump you in the lap of any LEA who comes knocking on the door should you be stupid enough to use the VPN for illegal activity (or even any activity which breaches their AUP). Also bear in mind that you have no plausible deniability through hiding in a lot of others’ traffic if you are the only user of the VPN – which you paid for with your credit card.

 

I’ve used OpenVPN quite a lot in the past. I like it, it has a good record for privacy and security, it is relatively easy to set up, and it is well supported on a range of different devices. I have an OpenVPN endpoint on a server on the outer screened subnet which forms part of my home network so that I can connect privately to systems when I am out and about and wish my source IP to appear to be that at my home address. This can be useful when I am stuck in such places as airport lounges, internet cafes, foreign (or even domestic) hotels etc. So when the IP Act was still but a gleam in the eyes of some of our more manic lords and masters, I set up one or two more OpenVPN servers on various VPSs I have dotted about the world. In testing, I’ve found that using a standard OpenVPN setup (using UDP as the transport) has only a negligible impact on my network usage – certainly much less than using Tor.

Apart from the privacy offered by OpenVPN, particularly when properly configured to use forward secrecy as provided by TLS (see gr3t for some tips on improving security in your configuration), we can also make the tunnel difficult to block. We don’t (yet) see many blanket attempts to block VPN usage in the UK, but in some other parts of the world, notably China or reportedly the UAE for example, such activity can be common. By default OpenVPN uses UDP as the transport protocol and the server listens on port 1194. This well known port and/or protocol combination could easily be blocked at the network level. Indeed, some hotels, internet cafes and airport lounges routinely (and annoyingly) block all traffic to ports other than 80 and 443. If, however, we reconfigure OpenVPN to use TCP as the transport and listen on port 443, then its traffic becomes indistinguishable from HTTPS which makes blocking it much more difficult. There is a downside to this though. The overhead of running TCP over TCP can degrade your network experience. That said however, in my view a slightly slower connection is infinitely preferable to no connection or an unprotected connection.
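For illustration only, the change amounts to something like the following in the server configuration, with the matching lines in the client file (don’t take these as a complete configuration, and remember that nothing else can already be listening on port 443 on that address):

# server side, e.g. /etc/openvpn/server.conf
port 443
proto tcp

# and the corresponding client lines
remote 12.34.56.78 443
proto tcp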

In my testing, even using Tor over the OpenVPN tunnel (so that my Tor entry point appears to the Tor network to be the OpenVPN endpoint) didn’t degrade my network usage too much. This sort of Tor usage is made easier by the fact that I run my Tor client (either Tails, or Whonix) from within a virtual server instance running on one of my desktops. Thus if the desktop is connected to an OpenVPN tunnel then the Tor client is forced to use that tunnel to connect to Tor and thence the outside world.

However, this setup has a few disadvantages, not least the fact that I might forget to fire up the OpenVPN tunnel on my desktop before starting to use Tor. But the biggest problem I face in running a tunnel from my desktop is that it only protects activity /from/ that desktop. Any network connections from any of my mobile devices, my laptops, my various servers, or other network-connected devices (as I said, I have lost count) or, most importantly, my family’s devices, are perforce unprotected unless I can set up OpenVPN clients on them. In some cases this may be possible (my wife’s laptop for example) but it certainly isn’t ideal and in many cases (think my kid’s ‘phones for example) it is going to be completely impractical. So the obvious solution is to move the VPN tunnel entry point to my domestic router. That way, /all/ traffic to the net will be forced over the tunnel.

When thinking about this, I initially considered using a raspberry pi as the router, but my own experience of the pi’s networking capability left me wondering whether it would cope with my intended use case. The problem with the pi is that it only has one ethernet port and its broadcom chip only supports USB 2.0 connection. Internally the pi converts ethernet to USB. Since the chip is connected to four external USB ports, and I would need to add an external USB to ethernet converter as well as a USB wifi dongle in order to get the kind of connectivity I want (which includes streaming video), I fear that I might overwhelm the pi – certainly I suspect the device would become a bottleneck. However, I have /not/ tested this (yet) so I have no empirical evidence either way.

My network is already segmented in that I have a domestic ADSL router connected to my ISP and a separate, internal ethernet/WiFi only router connecting to that external router. It looks (something) like this:

 

[Network diagram: ISP -> external ADSL router -> screened subnet -> internal router -> internal network]

 

Since all the devices I care most about are inbound of the internal router (and wired rather than wifi where I really care) I can treat the network between the two devices as a sacrificial screened subnet. I consider that subnet to be almost as hostile as the outside world. I could therefore add the pi to the external screened net and thus create another separate internal network which is wifi only. That wouldn’t help with my wired devices (which tend to be the ones I really worry about) but it would give me a good test network which I could use as “guest only” access to the outside world. I have commented in the past about the etiquette of allowing guests access to my network. I currently force such access over my external router so that the guests don’t get to see my internal systems. However, that means that in future they won’t get the protection offered by my VPN. That doesn’t strike me as fair so I might yet set up a pi as described (or in fact add another router, they are cheap enough).

Having discounted the pi as a possibility, another obvious solution would be to re-purpose an old linux box (I have plenty), but that would consume way more power than I need to waste and looks to be overkill, so the obvious solution is to stick with the purpose-built router option. Now OpenWrt (or its fork LEDE) and the more controversial DD-WRT both offer the possibility of custom-built routers with OpenVPN client capability built in. The OpenWrt wiki has a good description of how to set up OpenVPN. The DD-WRT wiki entry is somewhat less good, but then OpenWrt/LEDE would probably be a better choice in my view anyway. I’ve used OpenWrt in the past (on an Asus WL-500g) but found it a bit flaky. Possibly that is a reflection of the router I used (fairly old, bought cheap off ebay) and I should probably try again with a more modern device.

But right now it is possible to buy new, capable SOHO routers with OpenVPN capability off the shelf. A quick search for “openvpn routers” will give you devices by Asus, Linksys, Netgear, Cisco or some really interesting little devices by GL Innovations. The Gli devices actually come with OpenWRT baked in and both the GL-MT300N and the slightly better specced GL-AR300M look to be particularly useful.

I leave the choice of router to you, but you should be aware that many SOHO routers have lamentably poor security out of the box and even worse security update histories. You also need to bear in mind that VPN capability is resource intensive, so you should choose the device with the fastest CPU and most RAM you can afford. I personally chose an Asus device as my VPN router (and yes, it is patched to the latest level….) simply because they are being actively audited at the moment and seem to be taking security a little more seriously than some of their competitors. I may yet experiment with one of the GL devices though.

Note here that I do /not/ use the OpenVPN router as the external router connected directly to my ISP; my new router replaced my old “inside net” router. This means that whilst all the connections I really care about are tunnelled over the OpenVPN route to my endpoint (which may be in one of several European datacentres depending upon how I feel), I can still retain a connection to the outside world which is /not/ tunnelled. There are a couple of reasons for this. Firstly, some devices I use actually sometimes need a UK IP presence (think streaming video from catch-up TV or BBC news for example). Secondly, I also wish to retain a separate screened sub-net to house my internal OpenVPN server (to allow me to appear to be using my home network should I so choose when I’m out and about). And of course I may occasionally just like to use an unprotected connection simply to give my ISP some “noise” for his logs….

So, having chosen the router, we now need to configure it to use OpenVPN in client mode. My router can also be configured as a server, so that it would allow incoming tunnelled connections from the outside to my network, but I don’t want that, and nor probably do you. In my case such inbound connections would in any event fail because my external router is so configured as to only allow inbound connections to a webserver and my (separate) OpenVPN server on the screened subnet. It does not permit any other inbound connections, nor does my internal router accept connections from either the outside world or the screened subnet. My internal screened OpenVPN server is configured to route traffic back out to the outside world because it is intended only for such usage.

My new internal router expects its OpenVPN configuration file to follow a specific format. I found this to be poorly documented (but that is not unusual). Here’s how mine looks (well, not exactly, for obvious reasons: in particular the (empty) keys are not real, but the format is correct).

 

# config file for router to VPN endpoint 1
# MBM 09/12/16

client
dev tun
proto udp
remote 12.34.56.78 1194
resolv-retry infinite
nobind
user nobody

# Asus router can't cope with group change so:
# group nogroup

persist-key
persist-tun
mute-replay-warnings

<ca>
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
</ca>

<cert>
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
</cert>

<key>
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
</key>

<tls-auth>
-----BEGIN OpenVPN Static key V1-----
-----END OpenVPN Static key V1-----
</tls-auth>

key-direction 1
auth SHA512
remote-cert-tls server
cipher AES-256-CBC
comp-lzo

# end configuration
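For completeness, the VPS end of this is just a stock OpenVPN server. A minimal sketch of what that side might look like (not my actual configuration, and with the usual key and certificate files assumed to already be in place) is:

# minimal OpenVPN server sketch to match the client file above
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
tls-auth ta.key 0
server 10.8.0.0 255.255.255.0
# send all client traffic, and DNS, over the tunnel
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 10.8.0.1"
keepalive 10 120
auth SHA512
cipher AES-256-CBC
comp-lzo
user nobody
group nogroup
persist-key
persist-tun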

If you are using a commercial VPN service rather than your own OpenVPN endpoint, then your provider should give you configuration files much like those above. As I mentioned earlier, beware of “binary blob” non-text configurations.

If your router is anything like mine, you will need to upload the configuration file using the administrative web interface and then activate it. My router allows several different configurations to be stored so that I can vary my VPN endpoints depending on where I wish to pop up on the net. Of course this means that I have to pay for several different VPSs to run OpenVPN on, but at about 3 euros a month for a suitable server, that is not a problem. I choose providers who:

Whilst this may appear at first sight to be problematic, there are in fact a large number of such providers dotted around Europe. Be aware, however, that many small providers are simply resellers of services provided by other, larger, companies. This can mean that whilst you appear to be using ISP “X” in, say, Bulgaria, you are actually using servers owned and managed by a major German company or at least are on networks so owned. Be careful and do your homework before signing up to a service. I have found the lowendtalk site very useful for getting leads and for researching providers. The lowendbox website is also a good starting point for finding cheap deals when you want to test your setup.

Now go take back your privacy.

Notes

Some of the sites I found useful when considering my options are listed below.

Check your IP address and the DNS servers you are using at check2ip.com

Also check whether you are leaking DNS requests outside the tunnel at ipleak.net.

You can also check for DNS leakage at dnsleaktest.
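If you prefer the command line, you can get much the same information with something like the following (these are well-known public lookup services; treat the hostnames as examples rather than endorsements):

# your apparent public address as seen from outside the tunnel
curl -4 https://icanhazip.com
# the resolver actually answering your DNS queries - this should be
# your tunnel endpoint or an opennic server, not your ISP's resolver
dig +short whoami.akamai.net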

PrivacyTools.io is a very useful resource – and not just for VPN comparisons

cryptostorm.is/ and Mullvad.net look to be two of the better paid-for commercial services.

TheBestVPN site offers a VPN Comparison and some reviews of 20 providers.

A very thorough comparison of 180 different commercial VPN providers is given by “that one privacy guy”. The rest of his (or her) site is also well worth exploring.

by Mick at May 12, 2017 08:35 PM

May 05, 2017

Daniel Silverstone (Kinnison)

Yarn architecture discussion

Recently Rob and I visited Soile and Lars. We had a lovely time wandering around Helsinki with them, and I also spent a good chunk of time with Lars working on some design and planning for the Yarn test specification and tooling. You see, I wrote a Rust implementation of Yarn called rsyarn "for fun" and in doing so I noted a bunch of missing bits in the understanding Lars and I shared about how Yarn should work. Lars and I filled, and re-filled, a whiteboard with discussion about what the 'Yarn specification' should be, about various language extensions and changes, and also about what functionality a normative implementation of Yarn should have.

This article is meant to be a write-up of all of that discussion, but before I start on that, I should probably summarise what Yarn is.


Yarn is a mechanism for specifying tests in a form which is more like documentation than code. Yarn follows the concept of BDD story based design/testing and has a very Cucumberish scenario language in which to write tests. Yarn takes, as input, Markdown documents which contain code blocks with Yarn tests in them; and it then runs those tests and reports on the scenario failures/successes.

As an example of a poorly written but still fairly effective Yarn suite, you could look at Gitano's tests or perhaps at Obnam's tests (rendered as HTML). Yarn is not trying to replace unit testing, nor other forms of testing, but rather seeks to be one of a suite of test tools used to help validate software and to verify integrations. Lars writes Yarns which test his server setups for example.

As an example, let’s look at what a simple test might be for the behaviour of the /bin/true tool:

SCENARIO true should exit with code zero

WHEN /bin/true is run with no arguments
THEN the exit code is 0
 AND stdout is empty
 AND stderr is empty

Anyone ought to be able to understand exactly what that test is doing, even though there's no obvious code to run. Yarn statements are meant to be easily grokked by both developers and managers. This should be so that managers can understand the tests which verify that requirements are being met, without needing to grok python, shell, C, or whatever else is needed to implement the test where the Yarns meet the metal.

Obviously, there needs to be a way to join the dots, and Yarn calls those things IMPLEMENTS, for example:

IMPLEMENTS WHEN (\S+) is run with no arguments
set +e
"${MATCH_1}" > "${DATADIR}/stdout" 2> "${DATADIR}/stderr"
echo $? > "${DATADIR}/exitcode"

As you can see from the example, Yarn IMPLEMENTS can use regular expressions to capture parts of their invocation, allowing the test implementer to handle many different scenario statements with one implementation block. For the rest of the implementation, whatever you assume about things will probably be okay for now.
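For concreteness, the THEN and AND statements in the scenario above could plausibly be implemented along the same lines, reusing the files written by the WHEN block (this is my own sketch rather than anything lifted from the Gitano or Obnam suites; as I understand it an AND statement inherits the keyword of the statement before it, so THEN implementations cover them):

IMPLEMENTS THEN the exit code is (\d+)
test "$(cat "${DATADIR}/exitcode")" -eq "${MATCH_1}"

IMPLEMENTS THEN (stdout|stderr) is empty
test ! -s "${DATADIR}/${MATCH_1}"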


Given all of the above, we (Lars and I) decided that it would make a lot of sense if there was a set of Yarn scenarios which could validate a Yarn implementation. Such a document could also form the basis of a Yarn specification and also a manual for writing reasonable Yarn scenarios. As such, we wrote up a three-column approach to what we'd need in that test suite.

Firstly we considered what the core features of the Yarn language are:

We considered unusual (or corner) cases and which of them needed defining in the short to medium term:

All of this comes down to how to interpret input to a Yarn implementation. In addition there were a number of things we felt any "normative" Yarn implementation would have to handle or provide in order to be considered useful. It's worth noting that we don't specify anything about an implementation being a command line tool though...

There's bound to be more, but right now with the above, we believe we have two roughly conformant Yarn implementations. Lars' Python based implementation which lives in cmdtest (and which I shall refer to as pyyarn for now) and my Rust based one (rsyarn).


One thing which rsyarn supports, but pyyarn does not, is running multiple scenarios in parallel. However when I wrote that support into rsyarn I noticed that there were plenty of issues with running stuff in parallel. (A problem I'm sure any of you who know about threads will appreciate).

One particular issue was that scenarios often need to share resources which are not easily sandboxed into the ${DATADIR} provided by Yarn. For example databases or access to limited online services. Lars and I had a good chat about that, and decided that a reasonable language extension could be:

USING database foo

with its counterpart

RESOURCE database (\S+)
LABEL database-$1
GIVEN a database called $1
FINALLY database $1 is torn down

The USING statement should be reasonably clear in its pairing to a RESOURCE statement. The LABEL statement I'll get to in a moment (though it's only relevant in a RESOURCE block, and the rest of the statements are essentially substituted into the calling scenario at the point of the USING).
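In other words (and this is just my reading of the proposal), a scenario containing the USING statement above would behave roughly as though it had said:

GIVEN a database called foo
FINALLY database foo is torn down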

This is nowhere near ready to consider adding to the specification though. Both Lars and I are uncomfortable with the $1 syntax though we can't think of anything nicer right now; and the USING/RESOURCE/LABEL vocabulary isn't set in stone either.

The idea of the LABEL is that we'd also require that a normative Yarn implementation be capable of specifying resource limits by name. E.g. if a RESOURCE used a LABEL foo then the caller of a Yarn scenario suite could specify that there were 5 foos available. The Yarn implementation would then schedule a maximum of 5 scenarios which are using that label to happen simultaneously. At bare minimum it'd gate new users, but at best it would intelligently schedule them.

In addition, since this introduces the concept of parallelism into Yarn proper, we also wanted to add a maximum parallelism setting to the Yarn implementation requirements; and to specify that any resource label which was not explicitly set had a usage limit of 1.


Once we'd discussed the parallelism, we decided that once we had a nice syntax for expanding these sets of statements anyway, we may as well have a syntax for specifying scenario language expansions which could be used to provide something akin to macros for Yarn scenarios. What we came up with as a starter-for-ten was:

CALLING write foo

paired with

EXPANDING write (\S+)
GIVEN bar
WHEN $1 is written to
THEN success was had by all

Again, the CALLING/EXPANDING keywords are not fixed yet, nor is the $1 type syntax, though whatever is used here should match the other places where we might want it.


Finally we discussed multi-line inputs in Yarn. We currently have a syntax akin to:

GIVEN foo
... bar
... baz

which is directly equivalent to:

GIVEN foo bar baz

and this is achieved by collapsing the multiple lines and using the whitespace normalisation functionality of Yarn to replace all whitespace sequences with single space characters. However this means that, for example, injecting chunks of YAML into a Yarn scenario is a pain, as would be including any amount of another whitespace-sensitive input language.

After a lot of to-ing and fro-ing, we decided that the right thing to do would be to redefine the ... Yarn statement to be whitespace preserving and to then pass that whitespace through to be matched by the IMPLEMENTS or whatever. In order for that to work, the regexp matching would have to be defined to treat the input as a single line, allowing . to match \n etc.

Of course, this would mean that the old functionality wouldn't be possible, so we considered allowing a \ at the end of a line to provide the current kind of behaviour, rewriting the above example as:

GIVEN foo \
bar \
baz

It's not as nice, but since we couldn't find any real uses of ... in any of our Yarn suites where having the whitespace preserved would be an issue, we decided it was worth the pain.


None of the above is, as of yet, set in stone. This blog posting is about me recording the information so that it can be referred to; and also to hopefully spark a little bit of discussion about Yarn. We'd welcome emails to our usual addresses, being poked on Twitter, or on IRC in the common spots we can be found. If you're honestly unsure of how to get hold of us, just comment on this blog post and I'll find your message eventually.

Hopefully soon we can start writing that Yarn suite which can be used to validate the behaviour of pyyarn and rsyarn and from there we can implement our new proposals for extending Yarn to be even more useful.

by Daniel Silverstone at May 05, 2017 03:45 PM

April 30, 2017

Chris Lamb

Free software activities in April 2017

Here is my monthly update covering what I have been doing in the free software world (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:


I also made the following changes to diffoscope, our recursive and content-aware diff utility used to locate and diagnose reproducibility issues:



Debian


Debian LTS


This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 882-1 for the tryton-server general application platform to fix a path suffix injection attack.
  • Issued DLA 883-1 for curl preventing a buffer read overrun vulnerability.
  • Issued DLA 884-1 for collectd (a statistics collection daemon) to close a potential infinite loop vulnerability.
  • Issued DLA 885-1 for the python-django web development framework patching two open redirect & XSS attack issues.
  • Issued DLA 890-1 for ming, a library to create Flash files, closing multiple heap-based buffer overflows.
  • Issued DLA 892-1 and DLA 891-1 for the libnl3/libnl Netlink protocol libraries, fixing integer overflow issues which could have allowed arbitrary code execution.

Uploads

  • redis (4:4.0-rc3-1) — New upstream RC release.
  • adminer:
    • 4.3.0-2 — Fix debian/watch file.
    • 4.3.1-1 — New upstream release.
  • bfs:
    • 1.0-1 — Initial release.
    • 1.0-2 — Drop fstype tests as they rely on /etc/mtab being available. (#861471)
  • python-django:
    • 1:1.10.7-1 — New upstream security release.
    • 1:1.11-1 — New upstream stable release to experimental.

I sponsored the following uploads:

I also performed the following QA uploads:

  • gtkglext (1.2.0-7) — Correct installation location of gdkglext-config.h after "Multi-Archification" in 1.2.0-5. (#860007)

Finally, I made the following non-maintainer uploads (NMUs):

  • python-formencode (1.3.0-2) — Don't ship files in /usr/lib/python{2.7,3}/dist-packages/docs. (#860146)
  • django-assets (0.12-2) — Patch pytest plugin to check whether we are running in a Django context, otherwise we can break unrelated testsuites. (#859916)


FTP Team


As a Debian FTP assistant I ACCEPTed 155 packages: aiohttp-cors, bear, colorize, erlang-p1-xmpp, fenrir, firejail, fizmo-console, flask-ldapconn, flask-socketio, fontmanager.app, fonts-blankenburg, fortune-zh, fw4spl, fzy, gajim-antispam, gdal, getdns, gfal2, gmime, golang-github-go-macaron-captcha, golang-github-go-macaron-i18n, golang-github-gogits-chardet, golang-github-gopherjs-gopherjs, golang-github-jroimartin-gocui, golang-github-lunny-nodb, golang-github-markbates-goth, golang-github-neowaylabs-wabbit, golang-github-pkg-xattr, golang-github-siddontang-goredis, golang-github-unknwon-cae, golang-github-unknwon-i18n, golang-github-unknwon-paginater, grpc, grr-client-templates, gst-omx, hddemux, highwayhash, icedove, indexed-gzip, jawn, khal, kytos-utils, libbloom, libdrilbo, libhtml-gumbo-perl, libmonospaceif, libpsortb, libundead, llvm-toolchain-4.0, minetest-mod-homedecor, mini-buildd, mrboom, mumps, nnn, node-anymatch, node-asn1.js, node-assert-plus, node-binary-extensions, node-bn.js, node-boom, node-brfs, node-browser-resolve, node-browserify-des, node-browserify-zlib, node-cipher-base, node-console-browserify, node-constants-browserify, node-delegates, node-diffie-hellman, node-errno, node-falafel, node-hash-base, node-hash-test-vectors, node-hash.js, node-hmac-drbg, node-https-browserify, node-jsbn, node-json-loader, node-json-schema, node-loader-runner, node-miller-rabin, node-minimalistic-crypto-utils, node-p-limit, node-prr, node-sha.js, node-sntp, node-static-module, node-tapable, node-tough-cookie, node-tunein, node-umd, open-infrastructure-storage-tools, opensvc, openvas, pgaudit, php-cassandra, protracker, pygame, pypng, python-ase, python-bip32utils, python-ltfatpy, python-pyqrcode, python-rpaths, python-statistics, python-xarray, qtcharts-opensource-src, r-cran-cellranger, r-cran-lexrankr, r-cran-pwt9, r-cran-rematch, r-cran-shinyjs, r-cran-snowballc, ruby-ddplugin, ruby-google-protobuf, ruby-rack-proxy, ruby-rails-assets-underscore, rustc, sbt, sbt-launcher-interface, sbt-serialization, sbt-template-resolver, scopt, seqsero, shim-signed, sniproxy, sortedcollections, starjava-array, starjava-connect, starjava-datanode, starjava-fits, starjava-registry, starjava-table, starjava-task, starjava-topcat, starjava-ttools, starjava-util, starjava-vo, starjava-votable, switcheroo-control, systemd, tilix, tslib, tt-rss-notifier-chrome, u-boot, unittest++, vc, vim-ledger, vis, wesnoth-1.13, wolfssl, wuzz, xandikos, xtensor-python & xwallpaper.

I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against getdns, gfal2, grpc, mrboom, mumps, opensvc, python-ase, sniproxy, starjava-topcat, starjava-ttools, unittest++, wolfssl, xandikos & xtensor-python.

April 30, 2017 04:35 PM

April 16, 2017

Chris Lamb

Elected Debian Project Leader

I'd like to thank the entire Debian community for choosing me to represent them as the next Debian Project Leader.

I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.

You can read my platform here.


April 16, 2017 12:52 PM

March 02, 2017

Jonathan McDowell

Rational thoughts on the GitHub ToS change

I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.

First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.

The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.

Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.

D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.

D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.

Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.

D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.

This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.

D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.

D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.

D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular this right is revocable so in the event they do something you don’t like you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub’s whole model is providing attribution for changesets and tracking such changes over time, so it’s hard to understand exactly where the service falls down on ensuring the provenance of content is clear.

There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.

March 02, 2017 06:13 PM

March 01, 2017

Brett Parker (iDunno)

Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

So, more because I was intrigued than anything else, I've got a Pi 3 from Mythic Beasts. They're supplied with IPv6-only connectivity, and the file storage is NFS over a private v4 network. The proxy will happily redirect http or https requests to the Pi, but (without turning on the Proxy Protocol) this results in your logs showing the remote addresses of the proxy servers, which is not entirely useful.

I've cheated a bit, because turning on the Proxy Protocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!). To do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).

So, first things first, we get our RPi and make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyway, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. For apache2 I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't enabled the ssl module at this point so, really, the change for 4443 didn't kick in yet. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, we now deploy haproxy in front of it. The basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy-protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat, and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from Debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed you'll probably want to tweak it a bit; I have some magic extra files that I use, and I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

www.srwpi.hostedpi.com srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:4443 (matching the SSL Listen line in ports.conf above), then enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2
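For reference, after those edits the interesting bits of default-ssl.conf end up looking roughly like this (just a sketch; the rest of the stock Debian vhost stays as it was):

<IfModule mod_ssl.c>
       <VirtualHost [::1]:4443>
               CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

               SSLEngine on
               SSLCertificateFile    /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem
               SSLCertificateKeyFile /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem
       </VirtualHost>
</IfModule>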

Now it's time to add some bits to haproxy.cfg; usefully this is only a tiny, tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 06:35 PM

Ooooooh! Shiny!

Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 03:12 PM

February 07, 2017

Jonathan McDowell

GnuK on the Maple Mini

Last weekend, as a result of my addiction to buying random microcontrollers to play with, I received some Maple Minis. I bought the Baite clone direct from AliExpress - so just under £3 each including delivery. Not bad for something that’s USB capable, is based on an ARM and has plenty of IO pins.

I’m not entirely sure what my plan is for the devices, but as a first step I thought I’d look at getting GnuK up and running on it. Only to discover that chopstx already has support for the Maple Mini and it was just a matter of doing a ./configure --vidpid=234b:0000 --target=MAPLE_MINI --enable-factory-reset ; make. I’d hoped to install via the DFU bootloader already on the Mini but ended up making it unhappy so used SWD by following the same steps with OpenOCD as for the FST-01/BusPirate. (SWCLK is D21 and SWDIO is D22 on the Mini). Reset after flashing and the device is detected just fine:

usb 1-1.1: new full-speed USB device number 73 using xhci_hcd
usb 1-1.1: New USB device found, idVendor=234b, idProduct=0000
usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-1.1: Product: Gnuk Token
usb 1-1.1: Manufacturer: Free Software Initiative of Japan
usb 1-1.1: SerialNumber: FSIJ-1.2.3-87155426
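For reference, the whole build-and-flash dance boils down to roughly this (a sketch only: the interface config here is for an ST-Link rather than the BusPirate steps linked above, and the exact paths are indicative rather than checked):

# From the src/ directory of a Gnuk checkout (directory layout assumed)
./configure --vidpid=234b:0000 --target=MAPLE_MINI --enable-factory-reset
make

# Flash over SWD; swap the interface cfg for whatever adapter you actually have
openocd -f interface/stlink-v2.cfg -f target/stm32f1x.cfg \
        -c "program build/gnuk.elf verify reset exit"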

And GPG is happy:

$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.3-87155426:0
Application ID ...: D276000124010200FFFE871554260000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 87155426
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

While GnuK isn’t the fastest OpenPGP smart card implementation this certainly seems to be one of the cheapest ways to get it up and running. (Plus the fact that chopstx already runs on the Mini provides me with a useful basis for other experimentation.)

February 07, 2017 06:34 PM

January 22, 2017

Steve Engledow (stilvoid)

Angst

I had planned to spend this evening playing games; something I really enjoy doing but rarely set aside any time for. However, while we were eating dinner, I put some music on and it got me in the mood for playing some guitar. Over the course of dinner and playing with my son afterwards, that developed into wanting to write and record some music. I used to write electronic nonsense sometimes but this evening, I fancied trying my hand at some metal.

The first 90 minutes was - as almost every time I get the rare combination of an urge to do something musical and time to do it in - spent trying to remember how my setup worked, which bits of software I needed to install, and how to get the right combination of inputs and outputs I want. I eventually got it sussed and decided I'd better write it down for my own future reference.

Hardware

  1. Plug the USB audio interface from the V-Amp3 into the laptop.
  2. Plug external audio sources into the audio interface's input. (e.g. the V-Amp or a synth).
  3. Plug some headphones into the headphone socket of the audio interface.
  4. Switch on the audio interface's monitoring mode ;) (this kept me going for a little while; it's a small switch)

Software

  1. The following packages need to be installed at a minimum (there's an install one-liner just after this list):

    • qjackctl
    • qsynth
    • soundfont-fluidsynth
    • vkeybd
    • ardour
    • hydrogen
  2. Use pavucontrol or similar to disable the normal audio system and just use the USB audio interface.

  3. Qjackctl needs the following snippets in its config for when jack comes up and goes down, respectively:

    • pacmd suspend true

      This halts pulseaudio so that jack can take over

    • pacmd suspend false

      This starts pulseaudio back up again

  4. Use the connection tool in Jack to hook hydrogen's and qsynth's outputs to ardour's input. Use the ALSA tab to connect vkeybd to qsynth.

  5. When starting Ardour and Hydrogen, make sure they're both configured to use Jack for MIDI. Switch Ardour's clock from Internal to JACK.
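If you're on a Debian-ish system, step 1 boils down to something like this (adjust the package names for your distro if they differ):

sudo apt-get install qjackctl qsynth soundfont-fluidsynth vkeybd ardour hydrogen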

The big picture

For posterity, here's this evening's output.

by Steve Engledow (steve@engledow.me) at January 22, 2017 02:06 AM

November 23, 2016

Steve Engledow (stilvoid)

Win or lose?

I never paid any attention in art classes. On reflection, I think we had an awful teacher who more or less ignored those of us with no latent talent or interest. I grew up mildly jealous of people I knew who could draw and always wished I was able.

Over the past few years, I've heard several people say that artistic ability is 10% talent and 90% practice and I've considered giving it a go at some point. Recently, we bought some pencils and a pad for my son and this evening, with a glass of wine at hand and some 70s rock on the stereo, I decided to take the plunge and see what horrors I could submit the unwitting page to.

Here's the first thing I've drawn since school:

It's supposed to be my wife

It was supposed to be my wife. If you know her, you'll know I failed ;)

I focussed too much on the individual features and not enough on the overall shape. The eyes and hair aren't bad (at least they look something like hers), but the mouth and nose are too large and disproportionate - though recognisable.

I decided to try drawing what was in front of me: a ghost-shaped candle holder:

A photo of the candle holder

That's a photo by the way, not my drawing ;)

Here's the drawing. I killed the perspective somewhat but at least it's recognisable!

My drawing of it

After I'd drawn the ghost, I decided to have another go at my wife while she wasn't paying attention. This one looks more like her but the eyes look as though she's been in a fight and the hair is a tad more Edward Scissorhands than I'd intended.

Overall, I got a better result than I'd expected from my first three attempts at sketching in 20 years. This might turn into a series.

More than willing to receive criticism and advice from people who know what they're doing with a pencil :)

by Steve Engledow (steve@engledow.me) at November 23, 2016 01:02 AM

October 24, 2016

Daniel Silverstone (Kinnison)

Gitano - Approaching Release - Deprecated commands

As mentioned previously I am working toward getting Gitano into Stretch. Last time we spoke about lace, on which a colleague and friend of mine (Richard Maw) did a large pile of work. This time I'm going to discuss deprecation approaches and building more capability out of fewer features.

First, a little background -- Gitano is written in Lua which is a deliberately small language whose authors spend more time thinking about what they can remove from the language spec than they do what they could add in. I first came to Lua in the 3.2 days, a little before 4.0 came out. (The authors provide a lovely timeline in case you're interested.) With each of the releases of Lua which came after 3.2, I was struck with how the authors looked to take a number of features which the language had, and collapse them into more generic, more powerful, smaller, fewer features.

This approach to design stuck with me over the subsequent decade, and when I began Gitano I tried to have the smallest number of core features/behaviours, from which could grow the power and complexity I desired. Gitano is, at its core, a set of files in a single format (clod) stored in a consistent manner (Git) which mediate access to a resource (Git repositories). Some of those files result in emergent properties such as the concept of the 'owner' of a repository (though that can simply be considered the value of the project.owner property for the repository). Indeed the concept of the owner of a repository is a fiction generated by the ACL system with a very small amount of collusion from the core of Gitano. Yet until recently Gitano had a first class command set-owner which would alter that one configuration value.

[gitano]  set-description ---- Set the repo's short description (Takes a repo)
[gitano]         set-head ---- Set the repo's HEAD symbolic reference (Takes a repo)
[gitano]        set-owner ---- Sets the owner of a repository (Takes a repo)

Those of you with Gitano installations may see the above if you ask it for help. Yet you'll also likely see:

[gitano]           config ---- View and change configuration for a repository (Takes a repo)

The config command gives you access to the repository configuration file (which, yes, you could access over git instead, but the config command can be delegated in a more fine-grained fashion without having to write hooks). Given the config command has all the functionality of the three specific set-* commands shown above, it was time to remove the specific commands.

Migrating

If you had automation which used the set-description, set-head, or set-owner commands then you will want to switch to the config command before you migrate your server to the current or any future version of Gitano.

In brief, where you had:

ssh git@gitserver set-FOO repo something

You now need:

ssh git@gitserver config repo set project.FOO something

It looks a little more wordy but it is consistent with the other features that are keyed from the project configuration, such as:

ssh git@gitserver config repo set cgitrc.section Fooble Section Name

And, of course, you can see what configuration is present with:

ssh git@gitserver config repo show

Or look at a specific value with:

ssh git@gitserver config repo show specific.key

As always, you can get more detailed (if somewhat cryptic) help with:

ssh git@gitserver help config

Next time I'll try and touch on the new PGP/GPG integration support.

by Daniel Silverstone at October 24, 2016 02:24 AM

October 18, 2016

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

by mjr at October 18, 2016 04:28 AM

May 30, 2016

Wayne Stallwood (DrJeep)

UPS for Octopi or Octoprint

So it only took one mid-print power cut to realise I need a UPS for my 3D printer.

It's even worse for a machine like mine with an E3D all-metal head, as it requires active cooling to prevent damage to the head mount or a right mess of molten filament inside the heatbreak.

See below for instructions on setting up an APC UPS so that it can send a command to octopi to abort the print and start cooling the head before the batteries in the UPS are exhausted.

I used an APC Back-UPS Pro 550, which seems to be about the minimum spec I can get away with. On my printer this gives me approximately 5 minutes of print time without power, or 40 minutes with the printer powered but idle. Other UPSes would work, but APC is the only type tested with these instructions.

Test this thoroughly and make sure you have enough runtime to cool the head before the batteries are exhausted; the only way to do this properly is to set up a test print and pull the power.

Once you have installed the power leads to and from the UPS and got the printer powered through it (not forgetting that the RPi or whatever you have running OctoPrint also needs power... mine is powered via the printer PSU), you need to install apcupsd. It's in the default repo for Raspbian, so just install it with apt:

sudo apt-get install apcupsd

Now we need to tweak apcupsd's configuration a bit

Edit the apcupsd configuration as follows, you can find it at /etc/apcupsd/apcupsd.conf, just use your favourite editor.

Find and change the following lines

UPSCABLE smart

UPSTYPE usb

DEVICE (this should be blank)

BATTERYLEVEL 50

MINUTES 5

You might need to tweak BATTERYLEVEL and MINUTES for your printer and UPS. These are the percentage of battery left and the minutes of runtime remaining at which the shutdown will trigger, whichever one is reached first.

Remember this is minutes as calculated whilst the printer is still running. Once the print is stopped the runtime will be longer, as the heaters will be off, so setting 5 minutes here would in my case give me 20 minutes of runtime for the hot-end to cool once the print has aborted.

Plug the USB cable from the UPS into a spare port on the Rpi

Now activate the service by editing /etc/default/apcupsd and changing the following line

ISCONFIGURED=yes

Now start the service; it will start by itself on the next boot:

sudo service apcupsd start

If all is well, typing apcaccess at the prompt should get you some stats from the UPS: battery level etc.

If that's all good then apcupsd is configured; now for the script that aborts your print.

First go into the OctoPrint settings from the web interface, make sure API access is turned on, and record the API key carefully.

Back on the RPi, go to the home directory:

cd ~

Now download my custom shutdown script with wget

wget http://www.digimatic.co.uk/media/doshutdown
sudo cp doshutdown /etc/apcupsd
cd /etc/apcupsd

Set the permissions so the script can run

chmod 755 doshutdown

Don't be tempted to rename the file; leave it with this name.

Now edit the script and change the API_KEY variable at the top to the API key you got from your copy of OctoPrint earlier.

That should be it. The script does three things when the power fails and the battery goes below one of the trigger points:

Prints a warning on the printer's LCD screen

Records the current printer status and print file position to a file in /home/pi, so that maybe you can work out how to slice the remainder of the model and save the print

Aborts the print
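The abort itself is just a call to OctoPrint's REST API; stripped right down it's something along these lines (a sketch, not the full doshutdown script, and it assumes OctoPrint is reachable on localhost as on a stock OctoPi install):

#!/bin/sh
# Cancel the running print via OctoPrint's REST API
API_KEY="put-your-api-key-here"
curl -s -X POST \
     -H "X-Api-Key: $API_KEY" \
     -H "Content-Type: application/json" \
     -d '{"command": "cancel"}' \
     http://127.0.0.1/api/job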

This hasn't had a massive amount of testing and there are a few bugs. If you have a really big layer going on when the power goes, you might not have enough power to make it to the end, as OctoPrint only aborts at specific points in the print. The same applies if you are in the first stages and are heating the bed: OctoPrint will wait until the bed is up to temperature before running the next command (the abort).

The sleep at the end of the script stops the RPi from shutting down. We need to wait here and make sure the printer has taken the abort command before killing the Pi, and that's an unknown amount of time, so I leave it running by sleeping indefinitely.

If I get time I will make a proper OctoPrint plugin for all this.

May 30, 2016 08:13 PM

April 02, 2016

Wayne Stallwood (DrJeep)

Simple USB2 Host Switch

Initially created for the BigBox 3D printer, to allow use of both the internal Raspberry Pi running OctoPrint and the rear-mounted USB port for diagnostic access. The Rumba has only one USB port and can only be attached to one of these at a time.

However this circuit will work in any other scenario where you want to be able to switch between USB Hosts.

Plug a host PC or other host device into Port X1 and the device you want to control into Port X3, and everything should work as normal.

Plug an additional powered host PC or other host device into Port X2, and the host plugged into Port X1 should be disconnected in preference to this device, which should now be connected to the device plugged into Port X3.

Please note that in many cases, particularly with devices that are bus-powered like memory sticks, the device will not function if there is no powered host PC plugged into Port X1.

April 02, 2016 07:38 PM

June 11, 2015

MJ Ray

Mick Morgan: here’s why pay twice?

http://baldric.net/2015/06/05/why-pay-twice/ asks why the government hires civilians to monitor social media instead of just giving GCHQ the keywords. Us cripples aren't allowed to comment there (physical ability test) so I reply here:

It’s pretty obvious that they have probably done both, isn’t it?

This way, they’re verifying each other. Politicians probably trust neither civilians or spies completely and that makes it worth paying twice for this.

Unlike lots of things that they seem to want not to pay for at all…

by mjr at June 11, 2015 03:49 AM

March 09, 2015

Ben Francis

Pinned Apps – An App Model for the Web

(re-posted from a page I created on the Mozilla wiki on 17th December 2014)

Problem Statement

The per-OS app store model has resulted in a market where a small number of OS companies have a large amount of control, limiting choice for users and app developers. In order to get things done on mobile devices users are restricted to using apps from a single app store which have to be downloaded and installed on a compatible device in order to be useful.

Design Concept

Concept Overview

The idea of pinned apps is to turn the apps model on its head by making apps something you discover simply by searching and browsing the web. Web apps do not have to be installed in order to be useful, “pinning” is an optional step where the user can choose to split an app off from the rest of the web to persist it on their device and use it separately from the browser.

Pinned_apps_overview

”If you think of the current app store experience as consumers going to a grocery store to buy packaged goods off a shelf, the web is more like a hunter-gatherer exploring a forest and discovering new tools and supplies along their journey.”

App Discovery

A Web App Manifest linked from a web page says “I am part of a web app you can use separately from the browser”. Users can discover web apps simply by searching or browsing the web, and use them instantly without needing to install them first.

Pinned_apps_discovery

”App discovery could be less like shopping, and more like discovering a new piece of inventory while exploring a new level in a computer game.”

App Pinning

If the user finds a web app useful they can choose to split it off from the rest of the web to persist it on their device and use it separately from the browser. Pinned apps can provide a more app-like experience for that part of the web with no browser chrome and get their own icon on the homescreen.

Pinned_apps_pinning

”For the user pinning apps becomes like collecting pin badges for all their favourite apps, rather than cluttering their device with apps from an app store that they tried once but turned out not to be useful.”

Deep Linking

Once a pinned app is registered as managing its own part of the web (defined by URL scope), any time the user navigates to a URL within that scope, it will open in the app. This allows deep linking to a particular page inside an app and seamlessly linking from one app to another.

Pinned_apps_linking

”The browser is like a catch-all app for pages which don’t belong to a particular pinned app.”

Going Offline

Pinning an app could download its contents to the device to make it work offline, by registering a Service Worker for the app’s URL scope.

Pinned_apps_offline

”Pinned apps take pinned tabs to the next level by actually persisting an app on the device. An app pin is like an anchor point to tether a collection of web pages to a device.”

Multiple Pages

A web app is a collection of web pages dedicated to a particular task. You should be able to have multiple pages of the app open at the same time. Each app could be represented in the task manager as a collection of sheets, pinned together by the app.

Pinned_app_pages

”Exploding apps out into multiple sheets could really differentiate the Firefox OS user experience from all other mobile app platforms which are limited to one window per app.”

Travel Guide

Even in a world without app stores there would still be a need for a curated collection of content. The Marketplace could become less of a grocery store, and more of a crowdsourced travel guide for the web.

Pinned_apps_guide

”If a user discovers an app which isn’t yet included in the guide, they could be given the opportunity to submit it. The guide could be curated by the community with descriptions, ratings and tags.”

3 Questions

Pinnged_apps_pinned

What value (the importance, worth or usefulness of something) does your idea deliver?

The pinned apps concept makes web apps instantly useful by making “installation” optional. It frees users from being tied to a single app store and gives them more choice and control. It makes apps searchable and discoverable like the rest of the web and gives developers the freedom of where to host their apps and how to monetise them. It allows Mozilla to grow a catalogue of apps so large and diverse that no walled garden can compete, by leveraging its user base to discover the apps and its community to curate them.

What technological advantage will your idea deliver and why is this important?

Pinned apps would be implemented with emerging web standards like Web App Manifests and Service Workers which add new layers of functionality to the web to make it a compelling platform for mobile apps. Not just for Firefox OS, but for any user agent which implements the standards.

Why would someone invest time or pay money for this idea?

Users would benefit from a unique new web experience whilst also freeing themselves from vendor lock-in. App developers can reduce their development costs by creating one searchable and discoverable web app for multiple platforms. For Mozilla, pinned apps could leverage the unique properties of the web to differentiate Firefox OS in a way that is difficult for incumbents to follow.

UI Mockups

App Search

Pinned_apps_search

Pin App

Pin_app

Pin Page

Pin_page

Multiple Pages

Multiple_pages

App Directory

App_directory

Implementation

Web App Manifest

A manifest is linked from a web page with a link relation:

  <link rel="manifest" href="/manifest.json">

A manifest can specify an app name, icon, display mode and orientation:

 {
   "name": "GMail",
   "icons": {...},
   "display": "standalone",
   "orientation": "portrait",
   ...
 }

There is a proposal for a manifest to be able to specify an app scope:

 {
   ...
   "scope": "/"
   ...
 }

Service Worker

There is also a proposal to be able to reference a Service Worker from within the manifest:

 {
   ...
   "service_worker": {
     "src": "app.js",
     "scope": "/"
   }
   ...
 }

A Service Worker has an install method which can populate a cache with a web app’s resources when it is registered:

 this.addEventListener('install', function(event) {
  event.waitUntil(
    caches.create('v1').then(function(cache) {
     return cache.add(
        '/index.html',
        '/style.css',
        '/script.js',
        '/favicon.ico'
      );
    }, function(error) {
        console.error('error populating cache ' + error);
    })
  );
 });

So that the app can then respond to requests for resources when offline:

 this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).catch(function() {
      return event.default();
    })
  );
 });

by tola at March 09, 2015 03:54 PM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run up to the “Mozlandia” work week in Portland, and in reflection of the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along ;)

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it's been a few years since I've posted, because it's been so much hard work, and we've been pushing really hard on some projects which I just can't talk about – annoyingly. Anyway, back on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for banking or payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII having much higher regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, things like SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh... if you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


July 10, 2014 02:09 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto-type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages: they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging, on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web-based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

freenode

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

webchat

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well first off there’s the Nickname, this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short and so long as nobody else is already using it you should be fine; if it doesn’t work try another. Channels is the awkward one, how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. topic for the channel) and depending on how busy it is anything from nothing to a fast scrolling screen of text.

phph

If you’ve miss typed there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, this is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.
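Once you’re feeling a bit braver, a few other commands are worth knowing about; these work in most clients, including the Freenode webchat, though the exact behaviour can vary a little between clients:

/join #channelname - join another channel
/nick newnickname - change your nickname
/me waves - send an action message
/part - leave the current channel without disconnecting
/msg somenick hello - send a private message to somenick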

by Paul Tansom at June 12, 2014 04:27 PM

May 06, 2014

Richard Lewis

Refocusing Ph.D

Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I failed to make that clear to myself. It was only at the writing up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, nevermind that there was work to be done in examining music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and also allows me to work with the actual Purcell Plus project materials using the toolkit.

May 06, 2014 11:16 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

{-# LANGUAGE InstanceSigs #-}
-- InstanceSigs is needed for the type signatures given inside the instance declarations below

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components, MusicalSymbols are semantic and Glyphs are
--syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.
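As one trivial example of the kind of implementation these types invite, paginate falls straight out of the Opening definition using maybeToList from Data.Maybe (just a sketch, of course):

paginate :: Opening -> [Page]
paginate (Opening (p, mp)) = p : maybeToList mp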

March 27, 2014 11:13 PM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002 and had managed to make 6 or 7 aborted attempts at reading it to completion; life would suddenly get busy and just take over, which meant that I put the book down and didn't pick it up again until things were less hectic some time later and I started again.

Anyhow, I finally beat the book a few nights ago. My comprehension of it was pretty low anyhow, but at least it is done. Just shows I need to read lots more, given how little went in.





February 06, 2014 10:40 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising, and as a result I've gained loads of weight over the winter and, it turns out, also become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become. This is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500 but Amazon put the price up from £149.99 to £199.99 the day I was going to buy it.

I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured that even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, so I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit, which is still my longer term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and new set of map data was available for download. So I installed it, thinking I wasn't going to get any new features from it as Mio had released some new models, but it turns out that the new firmware actually enables a single feature (amongst other things; they also tidied up the UI and sorted a few other bugs along with some other features) that makes the device massively more useful, as it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me because, although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you would have to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in, it turns out that buying a Mio, for which the reviews and forums were full of doom and gloom, means I can wait even longer before considering replacement with a Garmin.


February 01, 2014 02:11 PM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised; a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums, in no particular order, for me were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones, of Nine Stones Close fame, pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen, hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig-wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.
Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind: Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils' work stored on a network drive so they always have access whichever machine they sit at, this isn't exactly helpful. It also isn't ideal that they can explore the C: drive in spite of profile restrictions (although it isn't the end of the world as there is little they can do from Scratch).

save-orig

After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

ini-orig

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

ini-new

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

save-new1

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

save-new2

You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven't tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]
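Putting those bits together, a scratch.ini for a network environment might end up looking something like this (the drive letters and proxy details here are only placeholders, not a tested config):

Home=U:
VisibleDrives=U:,S:
ProxyServer=proxy.example.local
ProxyPort=8080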

by Paul Tansom at December 01, 2013 07:00 PM

February 22, 2013

Joe Button

Sampler plugin for the baremetal LV2 host

I threw together a simpler sampler plugin for kicks. Like the other plugins it sounds fairly underwhelming. Next challenge will probably be to try plugging in some real LV2 plugins.

February 22, 2013 11:22 PM

February 21, 2013

Joe Button

Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of This one here, messed with the code a bit and voila (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:

When I say "LV2 synth plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point but it will be a while before you can directly plug LV2s into this and expect them to just work.

February 21, 2013 04:05 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue



I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn't found by the printers dialog in system settings. I thought I'd done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane bcast host lmhosts wins. Having host and wins, neither of which I'm using, first in the order cocks things up somewhat. Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain't no daemon by that name available!
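For reference, the name resolve order tweak mentioned above is just this line in the [global] section of /etc/samba/smb.conf:

[global]
   name resolve order = bcast host lmhosts wins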

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (for Windows people, a daemon is pretty much a service) being Fedora-specific, so it should have no place in a Debian/Ubuntu-based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to fall back on the more traditional approach and use the older printer dialogue, which is accessed by running system-config-printer at the command line. That works just fine, so why ship a new (over a year old now) printer config dialogue that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: just open http://localhost:631/ in your favourite browser. It should be there as long as CUPS is installed, which it is in LinuxMint by default.
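
For reference, here are the smb.conf tweak and the two workarounds from this post in one place (the Samba line is the name resolve order mentioned above; adjust it to suit your own network):

# /etc/samba/smb.conf, in the [global] section
name resolve order = bcast host lmhosts wins

# then, instead of the System Settings applet, either run the old dialogue:
system-config-printer

# or point a browser at the CUPS web interface:
xdg-open http://localhost:631/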

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects GNOME 3, so as long as it’s not affecting Unity that will be okay, will it, Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare for an album to come along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time; it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks, weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio and advertising, and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists, so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick-and-mortar store to hand over my cash. I can now not only buy physical releases on CD or vinyl online and have them delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp, or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers, or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to: because I don’t experience it in the same way. My life has changed; I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying, but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this. My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this, recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM

June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun: a collection of comments found in code, from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM