I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating + drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the practicalities of DebConf this year.
This year the conference format was a bit different; previous years have had a week-long DebCamp before the week of the conference itself. This year went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.
Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.
For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!
Back in January I changed jobs. This took me longer to decide to do than it should have. My US visa (an L-1B) was tied to the old job, and not transferable, so leaving the old job also meant leaving the US. That was hard to do; I'd had a mostly fun 3 and a half years in the SF Bay Area.
The new job had an office in Belfast, and HQ in the Bay Area. I went to work in Belfast, and got sent out to the US to meet coworkers and generally get up to speed. During that visit the company applied for an H-1B visa for me. This would have let me return to the US in October 2014 and start working in the US office; up until that point I'd have continued to work from Belfast. Unfortunately there were 172,500 applications for 85,000 available visas and mine was not selected for processing.
I'm disappointed by this. I've enjoyed my time in the US. I had a green card application in process, but after nearly 2 years it still hadn't completed the initial hurdle of the labor certification stage (a combination of a number of factors, human, organizational and governmental). However the effort of returning to live here seems too great for the benefits gained. I can work for a US company with a non-US office and return on an L-1B after a year. And once again have to leave should I grow out of the job, or the job change in some way that doesn't suit me, or the company hit problems and have to lay me off. Or I can try again for an H-1B next year, aiming for an October 2015 return, and hope that this time my application gets selected for processing.
Neither really appeals. Both involve putting things on hold in the hope that the longer term pans out the way I'd like. And to be honest I'm bored of that. I've loved living in America, but I ended up spending at least 6 months longer in the job I left in January than I'd have done if I'd been able to change employer freely without having to change continent. So it seems the time has come to accept that America and I must part ways, sad as that is. Which is why I'm currently sitting in SFO waiting for a flight back to Belfast and for the first time in 5 years not having any idea when I might be back in the US.
Owing to various factors, I'm finding it difficult to recall the things that have happened and in what order over the last few days so, for my own purposes, I'm going to note them down here.
Edit: Those were not notes. I'm a waffler.
tl;dr: We got tired, the airline lost one of our bags, we did stuff, the airline found the bag.
Woke up around 9, considered the fact that we had until around 5pm to tidy the house, tackle the Everest of dishes, wash all clothes, pack, and then leave for our holiday.
Farted around a fair bit and eventually resigned ourselves to coming back to a less-than-perfectly-tidy house. I scaled Mount Crockery at least.
Around 18:30 we eventually left for Stansted. We made good time and arrived plenty early enough for our 23:35 flight to Istanbul. On check-in, we were told the flight was delayed and was expected to depart between 01:00 and 01:30. Just what we needed with our already over-tired 2 year old.
We decided we would try to take it easy; we had a pint and I walked around the airport with the little man until he had calmed down a bit.
Eventually, the plane was ready for us to board at 01:15; we did so.
The flight passed easily enough. We were served a hot meal as soon as we hit cruising altitude and then we all slept through until descent. The landing was smooth and early morning Istanbul seemed warm and inviting.
Until we found ourselves still waiting for our luggage ninety minutes later, that is.
2 hours later, we learned that one of our bags had been lost. After some half-hearted arguing (we were just too tired), we filled in a form and left the arrivals hall with our remaining luggage. Unfortunately, the one that was missing contained most of our clothes and, frustratingly, toys and clothes for my sister-in-law's newborn.
Brother-in-law was waiting patiently outside for us. I guess he'd been there a while because he looked very relieved to see us. We made our way to the car hire place to find that, because we were so much later than we'd told them (by this time we were 3 hours later than we had booked the car for), they had decided we'd cancelled and had given our car to someone else. After some more arguing (half-hearted again), they found us another car of "equivalent size" and told us to wait round the front.
The car was a Ford Fiesta. I'm not one of those blokey types that know about cars. But I can say with certainty that I will never buy a Ford Fiesta and hope never to have to drive one again. It was tiny and weird. If we'd had our missing bag, I don't think we could have fit everything in the car. mumble mumble small mercies or summat
With the help of b-i-l, we found our way to his house - driving on the "wrong" side of the road in a "wrong"-hand-drive car after a long and stressful night with not much sleep was fun - and greeted s-i-l and her new baby and then had breakfast.
Then we slept. Then we went to the park. Then we slept.
The oddity of travelling at night then sleeping in the day but still being tired enough to sleep again at night was a new experience for me and I still feel quite confused, but I think I've managed to convince myself that everything above under "Tuesday" is correct.
On Wednesday, we decided on the strength of internet reviews to visit Polonezköy. Don't bother, it's rubbish. We pressed on then to the "nearby" beach. It turned out to be a 45 minute drive and a storm broke out along the way. When we arrived at the little seaside town (I don't remember its name) there was nowhere to park. Being already in a grump, we decided to head home and call the day a complete loss. Half way home, we decided we would visit Kartal instead; a town near s-i-l's.
Kartal was nice :)
Shopping in Kadıköy, ferry to Beşiktaş, more shopping, ferry back, home. In all, a nice day. Rounded off by some quality time with a beer on the balcony. It is way too hot indoors, even at night.
Just after midnight, the airline called us to say that they had found our missing luggage and would be sending it to us tomorrow.
Ladar Levison and Stephen Wyatt presented the upcoming Dark Internet Mail Environment (DIME) at Defcon 22 this week. According to El Reg, Levison, who shut down Lavabit, his previous mail service, rather than comply with FBI demands that he divulge the private SSL certificates used to encrypt traffic on that service, said:
“I’m not upset that I got railroaded and I had to shut down my business … I’m upset because we need a mil-spec cryptographic mail system for the entire planet just to be able to talk to our friends and family without any kind of fear of government surveillance”.
As soon as the Boot to Gecko (B2G) project was announced in July 2011 I knew it was something I wanted to contribute to. I’d already been working on the idea of a browser-based OS for a while, but it seemed Mozilla had the people, the technology and the influence to build something truly disruptive.
At the time Mozilla weren’t actively recruiting people to work on B2G, the team still only consisted of the four co-founders and the project was little more than an empty GitHub repository. But I got in touch the day after the announcement and after conversations with Chris, Andreas and Mike over Skype and a brief visit to Silicon Valley, I somehow managed to convince them to take me on (initially as a contractor) so I could work on the project full time.
A Web Browser Built from Web Technologies
On my first day Chris Jones told me “The next, highest-priority project is a very basic web browser, just a URL bar and back button basically.”
Chris and his bitesize browser, Taipei, December 2011
The team was creating a prototype smartphone user interface codenamed “Gaia”, built entirely with web technologies. Partly to prove it could be done, but partly to find the holes in the web platform that made it difficult and fill those holes with new Web APIs. I was asked to work on the first prototypes of a browser app, a camera app and a gallery app to help find some of those holes.
You might wonder why a browser-based OS needs a browser app at all, but the thinking for this prototype was that if other smartphone platforms had a browser app, then B2G would need one too.
In the beginning, there was an <iframe>
It all started with a humble iframe, a text input for the URL bar and a go button, in fact you can see the first commit here. When you clicked the go button, it set the src attribute of the iframe to the contents of the text input, which caused the iframe to load the web page at that URL.
First commit, November 2011
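As a rough illustration of how little there was to it, a hypothetical reconstruction (not the actual commit; the element wiring is my own) might look like this:

// Create a URL text input, a go button and an iframe, and wire them together
// so that clicking "Go" navigates the iframe.
const urlBar = document.createElement('input');
const goButton = document.createElement('button');
goButton.textContent = 'Go';
const frame = document.createElement('iframe');
document.body.append(urlBar, goButton, frame);

goButton.addEventListener('click', () => {
  // Setting src causes the iframe to load the page at that URL.
  frame.src = urlBar.value;
});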
Another problem we came across quite quickly was that many web authors will go to great lengths to prevent their web site being loaded inside an iframe in order to prevent phishing attacks. A web server can send an X-Frame-Options HTTP response header instructing a user agent to simply not render the content, and there are also a variety of techniques for “framebusting” where a web site will actively try to break out of an iframe and load itself in the parent frame instead.
It was quickly obvious that we weren’t going to get very far building a web browser using web technologies without evolving the web technologies themselves.
The Browser API
I met Justin Lebar at the first B2G work week in Taipei in December 2011. He was tasked with modifying Gecko to make the browser app on Boot to Gecko possible. To me Gecko was (and largely still is) a giant black box of magic spells which take the code I write and turn it into dancing images on the screen. I needed a wizard who had a grasp on some of these spells, including a particularly strong spell called Docshell which only the most practised of wizards dare peer into.
Justin at the first B2G Work Week in Taipei, December 2011
When I told Justin what I needed he made the kinds of sounds a mechanic makes when you take your car in for what you think is a simple problem but which turns out to cost the price of a new car. Justin had a better idea than I did as to what was needed, but I don’t think either of us realised the full scale of the task at hand.
With the addition of a simple boolean "mozbrowser" attribute to the HTML iframe element in Gecko, the Browser API was born. I tried adding features to the browser app and every time I found something that wasn't possible with current web technologies, I went back to Justin to get him to cast a new magic spell.
There were easier approaches we could have taken to build the browser app. We could have added a mechanism to allow the browser to inject scripts into the iframe and communicate freely with the content inside, but we wanted to provide a safe API which anyone could use to build their own browser app and this approach would be too risky. So instead we built an explicit privileged API into the DOM to create a new class of iframe which could one day become a new standard HTML tag.
Keeping the Web Contained
The first thing we did was to try to trick web pages loaded inside an iframe into thinking they were not in fact inside an iframe. At first we had a crude solution which just ignored X-Frame-Options headers for iframes in whitelisted domains that had the mozbrowser attribute. That’s when we discovered that some web sites are quite clever at busting out of iframes. In the end we had to take other measures like making sure window.top pointed at the iframe rather than its parent so a web site couldn’t detect that it had a parent, and eventually also run every browser tab in its own system process to completely isolate them from each other.
Once we had the animal that is the web contained, we needed to poke a few air holes to let it breathe. There’s some information we need to let out of the iframe in the form of events: when the location, title or icon of a web page changes (locationchange, titlechange and iconchange); when a page starts and finishes loading (loadstart, loadend) and when the security characteristics of the currently loaded page changes (securitychange). This all allows us to keep the address bar and title bar up to date and show a progress indicator.
The browser app needs to be able to navigate the iframe by telling it to goBack(), goForward(), stop() and reload(). We also need to be able to explicitly ask for information like characteristics of the session history (getCanGoBack(), getCanGoForward()) to determine which navigation buttons to display.
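Pulled together, driving a mozbrowser iframe from the app’s side looked roughly like the sketch below. This is an illustration rather than the exact shipped API: I’ve assumed the event names carry a mozbrowser prefix, treated the event detail as a simple value, and left out the asynchronous request objects returned by calls like getCanGoBack().

// Create a mozbrowser iframe for a single tab.
const tab = document.createElement('iframe');
tab.setAttribute('mozbrowser', 'true');
document.body.appendChild(tab);

// Stand-ins for the browser chrome that the events keep up to date.
const addressBar = document.createElement('input');
const titleBar = document.createElement('span');
document.body.append(addressBar, titleBar);

// Events let information escape the iframe.
tab.addEventListener('mozbrowserlocationchange', (e: Event) => {
  addressBar.value = (e as CustomEvent).detail;       // new location
});
tab.addEventListener('mozbrowsertitlechange', (e: Event) => {
  titleBar.textContent = (e as CustomEvent).detail;   // new title
});
tab.addEventListener('mozbrowserloadstart', () => titleBar.classList.add('loading'));
tab.addEventListener('mozbrowserloadend', () => titleBar.classList.remove('loading'));

// Methods (added by the Browser API) let the chrome drive the session history.
const browserTab = tab as HTMLIFrameElement & { goBack(): void; goForward(): void };
const backButton = document.createElement('button');
backButton.addEventListener('click', () => browserTab.goBack());
document.body.append(backButton);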
With these basics in place it was possible to build a simple functional browser app.
The Gaia project’s first UX designer was Josh Carpenter. At an intensive work week in Paris the week before Mobile World Congress in February 2012, Josh created UI mockups for all the basic features of a smartphone, including a simple browser, and we built a prototype to those designs.
Josh and me plotting over a beer in Paris.
The prototype browser app could navigate web content, keep it contained and display basic information about the content being viewed. This would be the version demonstrated at MWC in Barcelona that year.
Simple browser demo for Mobile World Congress, February 2012
Building a Team
At a work week in Qualcomm’s offices in San Diego in May 2012 I was able to give a demo of a slightly more advanced basic browser web app running inside Firefox on the desktop. But it was still very basic. We needed a team to start building something good enough that we could ship it on real devices.
“Browser Inception”, San Diego May 2012
San Diego was also where I first met Dale Harvey, a brave Scotsman who came on board to help with Gaia. His first port of call was to help out with the browser app.
Dale Getting on Board in San Diego, May 2012
One of the first things Dale worked on was creating multiple tabs in the browser and even adding a screenshotting spell to the Browser API to show thumbnails of browser tabs (I told you he was brave).
By this time we had also started to borrow Larissa Co, a brilliant designer from the Firefox team, to work on the interaction design and Patryk Adamczyk, formerly of RIM, to work on the visual design for the browser on B2G. That was when it started to look more like a Firefox browser.
Early UI Mockup, July 2012
Things that Pop Up
Web pages like to make things pop up. For a start they like to alert(), prompt() or confirm() things with you. Sometimes they like to open() a new browser window (and close() them again), open a link in a _blank window, ask you for a password, ask for your permission to do something, ask you to select an option from a menu, open a context menu or confirm re-sending the contents of a form.
An alert(), version 1.0
All of this required new events in the Browser API, which meant more spells for Justin to cast.
Scroll, Pan and Zoom
Moving around web pages on mobile devices works a little differently from on the desktop. Rather than scroll bars or a scroll wheel on a mouse, it uses touch input and a system called Asynchronous Pan and Zoom to allow the user to pan around a web page by dragging it and scrolling it using "kinetic scrolling", which feels like it has some physics to it.
One of the trickier interactions to get right was that we wanted the address bar to hide as you scrolled down the page in order to make more room for content, then show again when you scroll back to the top of the page.
This required adding asyncscroll events which tapped directly into the Asynchronous Pan and Zoom code so that the browser knew not only when the user directly manipulated the page, but how much it scrolled based on physics, asynchronously from the user’s interaction.
One of the most loved features of Firefox is the “Awesomebar”, a combined address bar, search bar (and on mobile, title bar) which lets you quickly get to the content you’re looking for. You type a few characters and immediately start to see matching web pages from your browsing history, ranked by a “frecency” algorithm.
On the desktop and on Android all of this data is stored in the "Places" database as part of privileged "chrome" code. In order to implement this feature in B2G we would need to use the local storage capabilities of the web, and for that we chose IndexedDB. We built a Places database in IndexedDB which would store all of the "places" a user visits on the web, including their URL, title and icon, and store all the times the user visited that page. It would also be used to store the user's bookmarks and rank top sites by "frecency".
Awesomebar, version 1.0
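A very rough sketch of what such a store might look like (my own illustration of the approach rather than the actual Gaia schema; the store and field names are assumptions):

// Open an IndexedDB database with a "places" object store keyed by URL.
const openRequest = indexedDB.open('places-example', 1);

openRequest.onupgradeneeded = () => {
  const db = openRequest.result;
  const store = db.createObjectStore('places', { keyPath: 'url' });
  store.createIndex('frecency', 'frecency');   // used to rank top sites
};

openRequest.onsuccess = () => {
  const db = openRequest.result;
  const tx = db.transaction('places', 'readwrite');
  // Record a visit: title, icon and visit timestamps live alongside the URL.
  tx.objectStore('places').put({
    url: 'https://example.com/',
    title: 'Example',
    iconUrl: 'https://example.com/favicon.ico',
    visits: [Date.now()],
    frecency: 1,
  });
};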
As you browse around the web Gecko also stores a bunch of data about the places you’ve been. That can be cookies, offline pages, localStorage, IndexedDB databases and all sorts of other bits of data. Firefox browsers provide a way for you to clear all of this data, so methods needed to be added to the Browser API to allow this data to be cleared from the browser settings in B2G.
Browser settings, version 1.0
Sometimes web pages crash the browser. In B2G every web app and every browser tab runs in its own system process so that should the worst happen, it will only cause that one window/tab to crash. In fact, due to the memory constraints of the low-end smartphones B2G would initially target, sometimes the system will intentionally kill a background app or browser tab in order to conserve memory. The browser app needs to be informed when this happens and needs to be able to recover seamlessly so that in most cases the user doesn’t even realise a process was killed. Events were added to the Browser API for this purpose.
Crashed tab, version 1.0
Talking to Other Apps
Common use cases of a mobile browser are for the user to want to share a URL using another app like a social networking tool, or for another app to want to view a URL using the browser.
B2G implemented Web Activities for this purpose, to add a capability to the web for apps to interact with each other, but in an app-agnostic way. So for example the user can click on a share button in the browser app and B2G will fire a “share URL” Web Activity which can then be handled by any installed app which has registered to handle that type of Web Activity.
Share Web Activity, version 1.2
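For illustration, firing a share activity from the browser app looked something along these lines (a sketch of the Web Activities idea; the exact constructor and data fields should be treated as assumptions rather than a definitive API reference):

// Fire a "share" Web Activity; any installed app that has registered itself
// as a handler for this kind of activity can be chosen to deal with it.
const activity = new (window as any).MozActivity({
  name: 'share',
  data: {
    type: 'url',
    url: 'https://example.com/',
  },
});

activity.onsuccess = () => console.log('URL shared');
activity.onerror = () => console.log('No handler found, or the share failed');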
Despite the fact that B2G and Gaia are built on the web, it is a requirement that all of the built-in Gaia apps should be able to function offline, when an Internet connection is unavailable or patchy, so that the user can still make phone calls, take photos and listen to music etc. At first we started to use AppCache for this purpose, which was the web's first attempt at making web apps work offline. Unfortunately we soon ran into many of the common problems and limitations of that technology and found it didn't fulfill all of our requirements.
In order to ship version 1.0 of B2G on time, we were forced to implement "packaged apps" to fulfill all of the offline and security requirements for built-in Gaia apps. Packaged apps solved our problems but they are not truly web apps because they don't have a real URL on the Internet, and attempts to standardise them didn't get much traction. Packaged apps were intended very much as a temporary solution and we are working hard at adding new capabilities like ServiceWorkers, standardised hosted packages and manifests to the web so that eventually proprietary packaged apps won't be necessary for a full offline experience.
Offline, version 1.4
Spit and Polish
Finally we applied a good deal of spit and polish to the browser app UI to make it clean and fluid to use, making full use of hardware-accelerated CSS animations, and a sprinkling of Firefoxy interaction and visual design to make the youngest member of the Firefox browser family feel consistent with its brothers and sisters on other platforms.
At an epic work week in Berlin in January 2013 hosted by Deutsche Telekom the whole B2G team, including engineers from multiple competing mobile networks and device manufacturers, got together with the common cause of shipping B2G 1.0, in time to demo at Mobile World Congress in Barcelona in February. The team sprinted towards this goal by fixing an incredible 200 bugs in one week.
Version 1.0 Team, Berlin Work Week, January 2013
In the last few minutes of the week Andreas Gal excitedly declared “Zarro Gaia Boogs”, signifying version 1.0 of Gaia was complete, with the rest of B2G to shortly follow over the weekend. Within around 18 months a dedicated team spanning multiple organisations had come together working entirely in the open to turn an empty GitHub repository into a fully functioning mobile operating system which would later ship on real devices as Firefox OS 1.0.1.
Zarro Gaia Boogs, January 2013
Browser app v1.0
So having attended Mobile World Congress 2012 with a prototype and a promise to deliver commercial devices into the market, we were able to return in 2013 having delivered on that promise, fully launching the "Firefox OS" brand with multiple devices on multiple mobile networks in a launch that really stole the show at the biggest mobile conference in the world. Firefox OS had arrived.
Mobile World Congress, Barcelona, February 2013
Firefox OS 1.1 quickly followed and by the time we started working on version 1.2 the project had grown significantly. We re-organised into autonomous agile teams focused on product areas, the browser app being one. That meant we now had a dedicated team with designers, engineers, a test engineer, a product manager and a project manager.
The browser team, London work week, July 2013
Firefox OS moved to a rapid release “train model” of development like Firefox, where a new version is delivered every 12 weeks. We quickly added new features and worked on improving performance to get the best out of the low end hardware we were shipping on in emerging markets.
Browser app v1.4
Version 1.0 of Firefox OS was very much about proving that we could build what already exists on other smartphones, but entirely using open web technologies. That included a browser app.
Once we’d proved that was possible and put real devices on shelves in the market it was time to figure out what would differentiate Firefox OS as a product going forward. We wanted to build something that doesn’t just imitate what’s already been done, but which plays to the unique strengths of the web to build something that’s true to Mozilla’s DNA, is the best way to experience the web, and is the platform that HTML5 deserves.
Below is a mockup I created right back towards the start of the project at the end of 2011, before we even had a UX team. I mentioned earlier that the Awesomebar is a core part of the Firefox experience in Firefox browsers. My proposal back then was to build a system-wide Awesomebar which could search the whole device, including your apps and their contents, and be accessible from anywhere in the OS.
Very early mockup of a system-wide Awesomebar, December 2011
At the time, this was considered a little too radical for version 1.0 and our focus really needed to be on innovating in the web technology needed to build a mobile OS, not necessarily the UX. We would instead take a more conservative approach to the user interface design and build a browser app a lot like the one we’d built for Android.
In practice that meant that we in fact built two browsers in Firefox OS. One was the browser app which managed the world of “web sites” and the other was the window manager in the system app which managed the world of “web apps”.
In reality on the web there isn’t so much of a distinction between web apps and web sites – each exists on a long continuum of user experience with a very blurry boundary in the middle.
Gordon’s Rocketbar Prototype, March 2013
Gordon and I started to meet weekly to discuss the concept he had by then codenamed “Rocketbar”, but it was a bit of a side project with a few interested people.
In April 2013 the UX team had a summit in London where they got together to discuss future directions for the user experience of Firefox OS. I was lucky enough to be invited along to not only observe but participate in this process, Josh being keen to maintain a close collaboration between Design and Engineering.
We brainstormed around what was unique about the experience of the web and how we might create a unique user experience which played to those strengths. A big focus was on “flow”, the way that we can meander through the web by following hyperlinks. The web isn’t a world of monolithic apps with clear boundaries between them, it is an experience of surfing from one web site to another, flowing through content.
Brainstorming session, London, April 2013
In the coming weeks the UX team would create some early designs for a concept (eventually codenamed “Haida”) which would blur the lines between web apps and web sites and create a unique user experience which flows like the web does. This would eventually include not only the “Rocketbar”, which would be accessible across the whole OS and seamlessly adapt to different types of web content, but also “sheets”, which would split single page web apps into multiple pages which you could swipe through with intuitive edge gestures. It would also eventually include a content model based around live apps which you can surf to, use, and then bookmark if you choose to, rather than monolithic apps which you have to install from a central app store before you can use them.
In June 2013 a small group of designers and engineers met in Paris to develop a throwaway prototype of Haida, to rapidly iterate on some of the more radical concepts and put them through user testing.
Haida Prototyping, Paris, June 2013
Josh and Gordon working in a highly co-ordinated fashion, Paris, June 2013
Wizards at work, Paris, June 2013
2.x and the Future
Fast forward to the present and the browser team has been merged into the “Systems Front End” team. The results of the Haida prototyping and user testing are slowly starting to make their way into the main Firefox OS product. It won’t happen all at once, but it will happen in small pieces as we iterate and learn.
In version 2.0 of Firefox OS the homescreen search feature from 1.x will be replaced with a new search experience developed in conjunction with a new homescreen, implemented by Kevin Grandon, which will lay the foundations for “Rocketbar”. In version 2.1 our intention is to completely merge the browser app into the system app so that browser tabs become “sheets” alongside apps in the task manager and the “Rocketbar” is accessible from anywhere in the OS. The Rocketbar will adapt to different types of web content and shrink down into the status bar when not in use. Edge gestures will allow you to swipe between web apps and browser windows and eventually apps will be able to spawn multiple sheets.
UI Mockups of Rocketbar in expanded and collapsed state, July 2014
Version 1.x of Firefox OS was built with web technologies but still has quite a similar user experience to other mobile platforms when it comes to installing and using apps, and browsing the web. Going forward I think you can expect to see the DNA of the web come through into the user interface with a unified experience which breaks down the barriers between web apps and web sites, allowing you to freely flow between the two.
Firefox OS is an open source project developed completely in the open. If you’re interested in contributing to Gaia, take a look at the “Developing Gaia” page on MDN. If you’re interested in creating your own HTML5 app to run on Firefox OS take a look at the “App Center”.
I've been told I don't blog about my projects often enough, so here's a feature update on my Chrome Extension that adds various improvements to the Strava.com fitness tracker.
All the features are optional and can be individually enabled in the options panel.
This adds aggregate data (fastest, slowest, average, etc.) when segments are repeated within an activity. It's particularly useful for laps or—like this Everesting attempt—hill repeats:
Changes the default leaderboard away from "Overall" when viewing a segment effort. The most rewarding training often comes from comparing your own past performances rather than those of others, so viewing your own results by default can make more sense.
You can select any of Men, Women, I'm Following or My Results:
Hide feed entries
Hides various entry types in the activity feed that can get annoying. You currently have the option of hiding challenges, route creations, goals created or club memberships:
Calculates a Variability Index (VI) from the weighted average power and the average power, an indication of how "smooth" a ride was. Requires a power meter. A VI of 1.0 would mean a ride was paced "perfectly", with very few surges of effort.
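The calculation itself is just the ratio of two numbers Strava already reports; a minimal sketch (with made-up sample values) looks like this:

// Variability Index: weighted (normalised) power divided by plain average power.
// 1.0 means perfectly even pacing; higher values mean a surgier ride.
function variabilityIndex(weightedAvgPower: number, avgPower: number): number {
  return weightedAvgPower / avgPower;
}

// Example: 250 W weighted average against 230 W plain average gives a VI of ~1.09.
console.log(variabilityIndex(250, 230).toFixed(2));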
Automatically loads more dashboard entries when reaching the bottom, saving a tedious click on the "More" button:
Estimates a run's Training Stress Score (TSS) from its Grade Adjusted Pace distribution, a measure of that workout's duration and intensity relative to the athlete's capability, providing an insight into correct recovery time and overall training load over time.
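As a rough illustration of the idea (a simplified sketch, not the extension's actual algorithm), TSS scales with duration and with the square of intensity, where intensity here would be derived from Grade Adjusted Pace relative to the athlete's threshold pace:

// Simplified TSS estimate: one hour at exactly threshold intensity scores 100 points.
// intensityFactor is assumed to come from Grade Adjusted Pace vs threshold pace.
function estimateTss(durationHours: number, intensityFactor: number): number {
  return durationHours * intensityFactor * intensityFactor * 100;
}

// Example: a 90 minute run at ~85% of threshold intensity scores roughly 108 TSS.
console.log(Math.round(estimateTss(1.5, 0.85)));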
Hide "find friends"
Hides social networking buttons, including prompts to invite or find further friends:
"Enter" posts comment
Immediately posts comment when pressing Enter/Return key in the edit box rather than adding a newline:
Changes the default sport for the "Side by Side comparison" module to running when viewing athlete profiles:
Shows running cadence by default in the elevation profile:
Running heart rate
Shows running heart rate by default in elevation profile:
Selects "Show Estimated FTP" by default on the Power Curve page:
Standard Google map
Prefer the "Standard" Google map over the "Terrain" view:
The basic operation is that it will first check whether /usr/sbin/daemon is already running and, if it is not, execute /usr/sbin/daemon -c /etc/daemon.cfg -p /var/run/daemon.pid. This process then has the responsibility to daemonise itself and write the resulting process ID to /var/run/daemon.pid.
start-stop-daemon then waits until /var/run/daemon.pid has been created as the test of whether the service has actually started, raising an error if that doesn't happen.
(In practice, the locations of all these files are parameterised to prevent DRY violations.)
By idempotence we are mostly concerned with repeated calls to /etc/init.d/daemon start not starting multiple versions of our daemon.
This might not seem a particularly big issue at first, but the increased adoption of stateless configuration management tools such as Ansible (which should be completely free to call start to ensure a started state) means that one should be particularly careful of this apparent corner case.
In its usual operation, start-stop-daemon ensures only one instance of the daemon is running with the --exec parameter: if the specified pidfile exists and the PID it refers to is an "instance" of that executable, then it is assumed that the daemon is already running and another copy is not started. This is handled in the pid_is_exec method (source) - the /proc/$PID/exe symlink is resolved and checked against the value of --exec.
However, one case where this doesn't work is interpreted scripts. Let's look at what happens if /usr/sbin/daemon is such a script, i.e. a file that starts with an interpreter shebang line rather than being a native executable.
The problem this introduces is that /proc/$PID/exe now points to the interpreter binary instead, often with an essentially non-deterministic version suffix.
When this process is examined using the --exec mechanism outlined above it will not be accepted as an instance of /usr/sbin/daemon, and therefore another instance of that daemon will be incorrectly started.
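Conceptually, the difference between the two checks is something like the following sketch (TypeScript on Node purely as an illustration of the logic; start-stop-daemon itself implements this in C):

import { readlinkSync } from 'fs';

// The --exec style check: does /proc/$PID/exe resolve to the expected binary?
// For an interpreted script this resolves to the interpreter, so it fails.
function pidIsExec(pid: number, exec: string): boolean {
  try {
    return readlinkSync(`/proc/${pid}/exe`) === exec;
  } catch {
    return false;   // the process has gone away, or /proc isn't readable
  }
}

// The --startas style check: is *some* process with this PID alive at all?
function pidIsRunning(pid: number): boolean {
  try {
    process.kill(pid, 0);   // signal 0 probes for existence without killing
    return true;
  } catch {
    return false;
  }
}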
The solution is to use the --startas parameter instead. This omits the /proc/$PID/exe check and merely tests whether a PID with that number is running.
Whilst it is therefore less reliable (in that the PID found in the pidfile could actually be an entirely different process altogether) it's probably an acceptable trade-off against the case of running multiple instances of that daemon.
This danger can be ameliorated by using some of start-stop-daemon's other matching tests, such as --user or even --name.
The first is slick and easy to use, but fiddly to set up correctly and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible and requiring complete reinitialisation.
The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.
The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.
So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?
So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyway, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.
The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII has much higher regulation on who can (and auditing of who does) access it.
Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh. If you keep *that* in then you would keep your SSL certs in…
So the answer becomes having lots of repos. One repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.
So, I have a SEN account (it's part of the PSN), I have 2 videos with SEN, I have a broken PS3 so I can no longer deactivate video content (you can only do that from the console itself, yes, really)... and the response from SEN has been abysmal, specifically:
As we take the security of SEN accounts very seriously, we are unable to provide support on this matter by e-mail as we will need you to answer some security questions before we can investigate this further. We need you to phone us in order to verify your account details because we're not allowed to verify details via e-mail.
I mean, seriously, they're going to verify my details over the phone better than over e-mail how exactly? All the contact details are tied to my e-mail account, I have logged in to their control panel and renamed the broken PS3 to "Broken PS3", I have given them the serial number of the PS3, and yet they insist that I need to call them, because apparently they're fucking stupid. I'm damned glad that I only ever got 2 videos from SEN, both of which I own on DVD now anyways, this kind of idiotic tie in to a system is badly wrong.
So, you phone the number... and now you get stuck with hold music for ever... oh, yeah, great customer service here guys. I mean, seriously, WTF.
OK - 10 minutes on the phone, and still being told "One of our advisors will be with you shortly". I get the feeling that I'll just be writing off the 2 videos that I no longer have access to.
I'm damned glad that I didn't decide to buy more content from that - at least you can reset the games entitlement once every six months without jumping through all these hoops (you have to reactivate each console that you still want to use, but hey).
While Co-operatives Fortnight is mostly a celebration of how well cooperatives are doing in the UK, this year is tinged with sadness for me because it sees Downham Food Coop stop trading.
This Friday and Saturday will be their last market stall, 9 till 1 on the Town Square, aka Clock or Pump Square.
As you can see, the downturn has hit the market hard and I guess being the last stall left outside the market square (see picture: it used to have neighbouring stalls!) was just too much. The coop cites shortage of volunteers and trading downturn as reasons for closure.
But if you’re near Downham today or tomorrow morning, please take advantage of this last chance to buy some great products in West Norfolk!
After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but is largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto type post, so here goes…
What is irc?
First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.
Twitter allows you to broadcast messages, they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.
It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.
There are two things you need to get going with irc: a client and somewhere to connect to. Let’s put that into a more familiar context.
The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, or IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and login you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or if you’ve got close enough to the configuration of your email to see the details, your mail server address.
Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.
There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.
That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well first off there’s the Nickname, this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short and so long as nobody else is already using it you should be fine; if it doesn’t work try another. Channels is the awkward one, how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. topic for the channel) and depending on how busy it is anything from nothing to a fast scrolling screen of text.
If you’ve miss typed there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).
For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave, which is /quit:
Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.
Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I failed to make that clear to myself. It was only at the writing up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, never mind that there was work to be done in examining the music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and also allows me to work with the actual Purcell Plus project materials using the toolkit.
The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:
module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components, MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
scan :: Document -> [Image]
scan = undefined
split :: Image -> Opening
paginate :: Opening -> [Page]
omr :: Page -> [System]
segment :: System -> [Staff]
tokenize :: Staff -> [Glyph]
recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
part :: [Glyph] -> Maybe Part
part gs = if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs
alignable :: Part -> Part -> Bool
piece :: [Part] -> Maybe Piece
I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.
But what if you want to gain hands-on practical experience of AEM, PhoneGap, and mobile app development? If you want to roll up your sleeves and build a mobile app yourself, then we’ve got an awesome lab planned for you. In “Building mobile apps with PhoneGap Enterprise”, you’ll have the opportunity to create, build, and update a mobile application with Adobe Experience Manager. You’ll see how easy it is to deliver applications across multiple platforms. You’ll also learn how you can easily monitor app engagement through integration with Adobe Analytics and Adobe Mobile Services.
If you want to know how you can deliver more effective apps, leverage your investment in AEM, and bridge the gap between marketers and developers, then you need to attend this lab at Adobe Summit. Join us for this extended deep dive into the integration between AEM and PhoneGap. No previous experience is necessary – you don’t need to know how to code, and you don’t need to know any of the Adobe solutions, as we’ll explain it all as we go along. Some of you will also be able to leave the lab with the mobile app you wrote, so that you can show your friends and colleagues how you’re ready to continuously drive mobile app engagement and ROI, reduce your app time to market, and deliver a unified experience across channels and brands.
Are you ready to master the mobile app challenge?
All hyperbole aside, I think this is going to be a pretty interesting technology space to watch:
Being able to build a mobile app from within a CMS is incredibly powerful for certain classes of mobile app. Imagine people having the ability to build mobile apps with an easy drag-and-drop UI that requires no coding. Imagine being able to add workflows (editorial review, approvals etc) to your mobile app development.
No matter how we might wish we didn’t have these app content silos, you can’t argue against the utility of content-based mobile apps until the mobile web matures sufficiently so that everyone can build offline mobile websites with ease. Added together with over-the-air content updates, it’s really nice to be able to have access to important content even when the network is not available.
Analytics and mobile metrics are providing really useful ways to understand how people are using websites and apps. Having all the SDKs embedded in your app automatically with no extra coding required means that everyone can take advantage of these metrics. Hopefully this will lead to a corresponding leap in app quality and usability.
By using Apache Cordova, we’re ensuring that these mobile app silos are at least built using open standards and open technologies (HTML, CSS, JS, with temporary native shims). So when mobile web runtimes are mature enough, it will be trivial to switch from a front-end app to a front-end website without retooling the entire back-end content management workflow.
I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licencing they have, they are running Outlook 2003 on Windows 7 64-bit machines.
After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook.
Typed emails often get cut off mid-sentence when you click Send! So only part of the email gets sent!
What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.
Removing the IE11 update would probably fix it but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.
I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002, and had made 6 or 7 aborted attempts at reading it to completion; each time life had suddenly got busy and just taken over, which meant that I put the book down and didn't pick it up again until things were less hectic some time later, when I started again.
Anyhow, I finally beat the book a few nights ago. My comprehension of it was pretty low, but at least it is done. Just shows I need to read lots more given how little went in.
These are notes from a tech support call with my parents last night, saved here for the next time stuff breaks.
If you’re running Mac OS X Snow Leopard (and possibly other versions), you may find you can’t log in. Symptoms are:
You click on your username and enter your password
The login screen is replaced by a blue screen for a short time
You are returned to the login screen.
After searching the interwebs I found Fixing a Mac OSX Leopard Login Loop Caused by Launch Services. It seems the problem is caused by corrupted cache files (which could be caused by the computer shutting down abruptly, or may just be “one of those things” that happens from time to time). This gave me enough information to come up with these “easy” steps to resolve it:
1. Log in to the Mac as a different user*
2. Press cmd-space to open Spotlight, type “Terminal”, and click on the Terminal application.
3. Work out the broken user’s username by typing: ls /Users and look for the appropriate broken account name e.g. franksmith or janedoe.
4. Find out the user ID of the user from the previous step by typing: id -u janedoe which will print a number something like 501
5. Delete the user’s broken cache files. In the following command, be sure to substitute the correct username (in place of janedoe) and the correct user ID after the 023 (in place of the 501): su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023501.*' (be very careful with this, you don’t want to delete the wrong things).
If you’re super-confident in figuring out backticks you could of course skip step 4 and instead of step 5 do: su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023`id -u janedoe`.*'
6. Test by logging in to the troublesome user account.
Note that if you had any apps configured to launch at login, you may need to re-add these.
* This makes me think it’s good practice when setting up a Mac to always set up an extra user account, just in case stuff breaks.
I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising, and as a result I've gained loads of weight over the winter and, it turns out, become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158
Anyhow, a nice thing about this ride is that I can record it on Strava and get all this data about how unfit I have become. This is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500, but Amazon put the price up from £149.99 to £199.99 on the day I was going to buy it.
I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, and I could just buy a Garmin 510 once they'd sorted out its firmware bugs and the price had come down a bit, which is still my longer-term intention.
So, a couple of weeks ago I plugged the Mio into a Windows VM while I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and a new set of map data available for download. I installed it thinking I wasn't going to get any new features from it, as Mio had released some new models, but it turns out the new firmware (amongst other things: they also tidied up the UI and fixed a few other bugs) enables a single feature that makes the device massively more useful: it now also creates files in .fit format, which can be uploaded directly to Strava.
This is massively useful for me. Although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you previously had to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.
All in, it turns out that buying the Mio, despite the reviews and forums being full of doom and gloom, means I can wait even longer before considering replacing it with a Garmin.
So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:
brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
brettp@laptop:~$ host -t ns fasthosts.net.uk 184.108.40.206
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 220.127.116.11
;; connection timed out; no servers could be reached
So, that's Fasthosts' core nameservers not responding; good start! They also provide livedns.co.uk, so let's have a look at that:
brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 18.104.22.168
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 22.214.171.124
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 126.96.36.199
;; connection timed out; no servers could be reached
So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!
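For future reference, a handy way of checking what the parent zones think the delegation should be, without relying on the broken nameservers themselves, is to walk the delegation down from the root with dig:
brettp@laptop:~$ dig +trace NS livedns.co.uk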
It's New Year's Day 2014 and I'm reflecting on the music of the past year.
Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised; a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.
The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train's English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones, of Nine Stones Close fame, pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen; hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.
The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.
Gig-wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very much at home on the stage. As a raconteur between songs he is every bit as entertaining as when he is singing the songs themselves.
The March Marillion Convention in Port Zélande, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be. Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind; Viva Progressive Rock!
I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:
With the default install of Scratch (on Windows), projects are saved to the C: drive. For a network environment, with pupils’ work stored on a network drive so they always have access whichever machine they sit at, this isn’t exactly helpful. It also isn’t ideal that they can explore the C: drive in spite of profile restrictions (although it isn’t the end of the world, as there is little they can do from Scratch).
After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:
Initially it looks like this:
Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):
To get this:
They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.
The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.
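To make this concrete, here’s a rough sketch of what the two added lines might look like; the drive letters are purely illustrative (assuming pupils’ home directories are mapped to H: and a shared resources drive to S:), so adjust them to suit your own network:
Home=H:\
VisibleDrives=H:,S: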
You can do the same with a Mac (for the home drive); just use the appropriate directory format (i.e. no drive letter, and forward slashes instead of backslashes).
There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.
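So, for example (the path here is purely illustrative, and untested on my part), something like this ought to drop each pupil straight into their own folder on a shared drive:
Home=S:\CodeClub\*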
Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in VisibleDrives. One I haven’t tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:
ProxyServer=[server name or IP address]
I’m trying to build a Raspberry Pi powered robot based on the DRDs from Farscape, so I thought I’d blog my progress.
DRDs or “Diagnostic Repair Drones” are robots from the cult science fiction series Farscape. They carry out various functions aboard a leviathan (a species of living biomechanoid spaceship), including repairing and maintaining the ship. They’re ovoid in shape and they have two moving eye stalks and all sorts of tools like a robotic claw and a plasma welder.
Here’s some video footage from the series to give you an idea of what these little guys get up to:
The original DRDs were designed and built by the extremely talented folks at the Jim Henson Creature Shop in London (yes Jim Henson as in the Muppets!). They built lots of different variations of the robot over the years to be used in shooting different scenes for the show, but to my knowledge they’ve never released any designs.
I assumed I was going to have to painstakingly design a 3D computer model of one based on frame grabs from my DVDs of the series. I then planned to track down someone with a CNC router and a vacuum forming machine and persuade them to let me use them. Either that or find someone with an industrial sized 3D printer!
Luckily I came across a special effects company in the US who sells a kit to build a model of a DRD. The model is made from hollow cast fiberglass and resin and comes with ribbed plastic for the eye stalks, eye pieces with clear lenses, two parts of a claw and some colourful wires to make it look the part.
The kit isn’t perfect. The size, shape and proportions aren’t quite right and the finish is a bit rough but it’s good enough for my purposes. The part I’m really interested in is the robotics so I’m grateful that someone has already done the work for me on the basic shell.
The web site provides video tutorials on how to build the model and then how to put LEDs in the eyes and mount a remote-controlled car underneath to make it move about in a somewhat crude fashion.
We can be a bit more sophisticated than that.
The Raspberry Pi is a credit-card sized computer developed in the UK by the not-for-profit Raspberry Pi Foundation to promote the teaching of programming in schools. It’s a single-board computer with a 700MHz ARM processor and 512MB RAM, boots off an SD card and costs only around £30.
This is my Raspberry Pi:
The Gertboard is an expansion board which attaches to the Raspberry Pi via its GPIO pins and helps when experimenting with interfacing the Pi with the outside world. It comes with an Arduino compatible AVR microcontroller, analogue to digital converters, digital to analogue converters, a motor controller, push buttons, LEDs and much more.
Booting the Pi
The Raspberry Pi can boot Linux from an SD card and the most popular distribution is Raspbian which is a Debian-derivative. You can download an image and flash it to an SD card, or even buy an SD card with it already loaded.
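If you go the download route, the flashing step on a Linux box is just a dd; here’s a rough sketch where the image filename and /dev/sdX are placeholders (double-check the device name with lsblk first, as writing to the wrong device will destroy whatever is on it):
$ sudo dd if=raspbian.img of=/dev/sdX bs=4M
$ sync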
To boot the Raspberry Pi all you need to do is insert your Raspbian SD card, plug it into a TV via either the HDMI port or the composite video port and power it up by plugging it into a Micro USB phone charger.
Here’s my Raspberry Pi booted and plugged into an old CRT TV:
Logging In Remotely
It’s cool that I can plug the Raspberry Pi into a TV, but I don’t want to be squinting at an old portable TV or sitting in the lounge next to my big flatscreen TV all the time I’m programming the robot, so I want to be able to log in remotely. Also, my plan is to build a web interface to control the robot over WiFi, so it’s going to need to connect to a network at some point.
First I plugged a USB keyboard into the Raspberry Pi and an ethernet cable to connect it to my network. The SSH daemon is already started by default, but I wanted to set a static IP address so that I always knew what to log in to.
I logged into the Raspberry Pi locally (the default username is pi and the password is raspberry) and edited the network configuration using the vi text editor.
$ sudo vi /etc/network/interfaces
I provided the following configuration to assign a static IP address of 192.168.1.42 on my local network:
iface lo inet loopback
iface eth0 inet static
    address 192.168.1.42
    netmask 255.255.255.0
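You’ll probably also want a default gateway (and a DNS server) in there so the Pi can reach the Internet for the ping test below; the netmask above and the router address here are assumptions based on a typical 192.168.1.x home network, so adjust them to match yours:
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
(The dns-nameservers line only takes effect if the resolvconf package is installed; otherwise just put the nameserver in /etc/resolv.conf by hand.)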
Then restart the network interface with:
$ sudo ifdown -a
$ sudo ifup -a
Then I check that I’m connected to the network and the Internet by pinging Google.
$ ping google.com
I see that I’m successfully connected, so I can now log into the Raspberry Pi remotely using its new static IP.
From my desktop Linux box I type:
$ ssh pi@192.168.1.42
type in the password “raspberry”, and voilà! I’m logged in.
I hope you weren’t expecting to see a finished robot! There’s a very long way to go yet.
If you desperately wanted to see a finished robot, here’s a picture of the last one I worked on, a line following robot we built at university powered by a PIC microcontroller.
Next I want to start playing around with the Gertboard and make LEDs blink on and off from Python.
I wired up a MIDI In port along the lines of This one here, messed with the code a bit and voila (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:
When I say "LV2 synth plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point but it will be a while before you can directly plug LV2s into this and expect them to just work.
I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. The problem is that it isn’t found by the printers dialog in System Settings. I thought I’d done all the normal things to get Samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane “bcast host lmhosts wins”; having host and wins (neither of which I’m using) first in the order cocks things up somewhat. Every time I tried to search for the printer in the System Settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” Much scratching of the head there, then, because as far as I can tell there ain’t no daemon by that name available!
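For reference, that setting lives in the [global] section of /etc/samba/smb.conf and looks something like this (the order shown is just the one I settled on):
[global]
    name resolve order = bcast host lmhosts wins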
It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora-specific, and it should have no place in a Debian/Ubuntu-based distribution. Bugs of this nature really should be ironed out sooner.
Anyway, the simple fix is to take the more traditional approach and use the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.
The CUPS web interface also works, apparently: http://localhost:631/ in your favourite browser, which should be there as long as CUPS is installed (which it is in LinuxMint by default).
So come on Minty people get your bug squashing boots on and stamp on this one please.
Bug #871985 only affects Gnome 3, so as long as it’s not affecting Unity that will be okay, won’t it, Canonical!
Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding it very rare that an album comes along that affects me in the way music I heard 10 years ago did. That is not to say that I have not heard any music I like in that time; it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.
Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.
Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical musical releases on CD or Vinyl online and have it delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.
Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.
The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to: I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying, but for some reason it doesn’t seem to work that way.
I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.
It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.
Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!
The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.
Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.
A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.
I’ve been using this for a while and had it recorded on a private wiki. I was just tidying up my hosting account and thought I’d get rid of the wiki and store any useful info from it on my blog.
Clone full subversion history into git repository (warning, may take a long time depending on how many commits you have in your Subversion repository).
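Something along these lines does the job; the repository URL and target directory below are placeholders, and --stdlayout assumes the usual trunk/branches/tags structure (spell the paths out with --trunk/--branches/--tags if your repository is laid out differently):
$ git svn clone --stdlayout --prefix=svn/ https://svn.example.org/myproject myproject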