I've been meaning to write something for weeks but somehow I never quite seem to get around to it.
Never say never. That old chestnut. Two in the bush gathers no broth.
It was my birthday recently and my wife and parents clubbed together and bought me something I'd often talked about before (if somewhat whimsically): a telescope.
After an evening of setting it up indoors, figuring out how all the bits fit together, and performing some initial calibration of the sighting scope, the skies proceeded to be full of cloud for several nights afterwards.
The first night of clear skies happened to tie in very nicely with what was apparently the perfect night for viewing Jupiter. I got myself warmly attired, put some wellies on, and went out into the garden with my telescope (and a bottle of scotch). After some more time getting used to the equipment and better calibrating the sighting scope now that I was looking at celestial bodies rather than neighbour's aerials, I finally got sight of Jupiter! At first, I was just finding my way with a low magnification but once I'd got the hang of it, I stuck my best lens in and was utterly blown away by what I saw.
It's not that I'm anywhere near tinfoil-hatted scepticism, but actually seeing an object in the sky - one that to the naked eye is a mere white dot - brought so much closer with equipment that I could naively understand felt like properly confirming to myself that what I'd been taught was true.
I could see the colours and some shapes on the planet itself and after some steadying and patience, I realised I could also see all of the moons :)
Back in June 2008 I noted Craig Wright had posted to bugtraq reporting a “remote exploitation of an information disclosure vulnerability in Oral B’s SmartGuide management system”. I found it faintly amusing that a security researcher should have been looking for vulnerabilities in a toothbrush.
I should have known better.
A report in Wednesday’s on-line Guardian points to the release of a new smart toothbrush from Oral B. Apparently that toothbrush will link via Bluetooth to an app on either an iPhone or Android and report back to your dentist. Apparently Oral B “sees the connected toothbrush, launched as part of Mobile World Congress’s Connected City exhibition, as the next evolution of the smart bathroom.” Wayne Randall, global vice president of Oral Care at Procter and Gamble, reportedly said:
“It provides the highest degree of user interaction to track your oral care habits to help improve your oral health, and we believe it will have significant impact on the future of personal oral care, providing data-based solutions for oral health, and making the relationship between dental professionals and patients a more collaborative one.”
I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licensing they have, they are running Outlook 2003 on Windows 7 64-bit machines.
After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook.
Typed emails often get cut off mid-sentence when you click Send, so only part of the email gets sent!
What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.
Removing the IE11 update would probably fix it but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.
In cycling, a ride's Training Stress Score is a function of that ride's duration, average power and the intensity of the ride relative to the rider's capability. This Slowtwitch article provides a good overview of how intensity and TSS are calculated on a bike.
However, having TSS values for other sports allows a multisport athlete to take into consideration the physiological cost of activities in different sports. This is achieved by ensuring that, say, 50 TSS on the bike "counts" the same as a 50 TSS run.
This can be used simply to determine the length, intensity and scheduling of an athlete's next workout (to ensure adequate recovery) regardless of the combination of sports, or to identify the athlete's long-term tolerance to—and targets for—training load using metrics such as Chronic Training Load.
To make this possible when using Strava, I wrote a Chrome extension that estimates the TSS score of a run from its Grade Adjusted Pace distribution:
The "TSS (estimated)" value is calculated by the extension.
At the tail end of last year I mentioned a couple of tools I had used in my testing of SSL/TLS certificates used for trivia itself and my mail server. However, that post concentrated on the server side certificates and ignored the security, or otherwise, offered by the browser’s configuration. It is important to know the client side capability because without proper support there for the more secure ciphers it is pointless the server offering them in the handshake – the client-server interaction will simply negotiate downwards until both sides reach agreement on a capability. That capability may be sub-optimal.
A recent post to the Tor stackexchange site posed a question about the client side security offered by the TorBrowser Bundle v. 3.5 (which uses Firefox). The questioner had used the “howsmyssl” site to check the cipher suites which would be used by Firefox in a TLS/SSL exchange and been disturbed to discover that it reportedly offered an insecure cipher (SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA). Sam Whited responded with a pointer to his blog post about improving FF’s use of TLS.
From that post, it appears that TLS 1.1 and 1.2 were off by default in FF versions prior to 27. Hence they would have been off in the TorBrowser Bundle as well.
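For anyone stuck on one of those older versions, the usual workaround was to raise the TLS limits by hand in about:config. To the best of my recollection the relevant preferences (where the values run from 1 for TLS 1.0 up to 3 for TLS 1.2) are:

security.tls.version.min = 1
security.tls.version.max = 3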
Recently I've been thinking about getting a new laptop. I have this rule that a laptop should last me at least 3 years (ideally more) and my old laptop was bought in September 2010. So for the past few months I've been trying to work out if there's something suitable on the market that is a good replacement (last time I didn't manage to find something that ticked all the boxes, but did pretty well for the price I paid).
To start with I decided to track my laptops over time - largely because one of my concerns was the size of a replacement, given my significant leaning towards subnotebooks. In the end the reason I decided to upgrade was for some extra CPU grunt; my old machine had a tendency to get pretty hot under any sort of load.
Amstrad PPC 640D: NEC V30 8MHz, 9" 640x200 non-backlit green LCD, 2 x 3.5" FDD
Compaq Aero 4/33c: 7.8" 640x480 CSTN LCD
Compaq Evo N200: 10.4" 1024x768 TFT
Toshiba Portege R200: Pentium M 753 1.2GHz, 12.1" 1024x768 TFT
Asus EEE 901: Atom N270 1.6GHz, 8.9" 1024x600 TFT, 4GB + 16GB SSD
Acer Aspire 1830T: Core-i5 470UM 1.33GHz, 11.6" 1366x768 TFT, $699.99 (~ £480)
The EEE didn't actually replace the Toshiba, but I mention it for completeness. It was actually the only machine I moved to the US with, but after about a month of it as my primary machine I realized it wasn't an option for day to day use - though it was fantastic as a machine to throw in an overnight bag, especially when coupled with a 3G dongle.
I wasn't keen on significantly increasing the size of my laptop. There are a number of decent 13" Ultrabook options out there, and I looked at a few of them, but nothing grabbed me as being worth the increase in size. Also I wanted something better than the Acer - one of the major problems was finding something smaller than 13" that had 8G RAM, let alone more. There's a significant trend towards everything soldered in for the smaller/slimmer notebooks, which makes some sense but means that the base spec had better be right.
Much to my surprise the Microsoft Surface Pro 2 looked like an option. It comes with an i5-4300U processor (at least since around Christmas 2013), and the 256/512G SSD models have 8G RAM. Screen resolution is an attractive true HD (1920x1080) and the 10" display means it's smaller than the Aspire. Unfortunately the keyboard lets it down. It's fine given a flat surface, but not great if you want to support the whole thing on your lap. Which is something I tend to do with my laptop, whether that's on the sofa, or in bed, or on a bus/train.
Another option was the Sony Vaio Pro 11. This is a pretty sweet laptop (I managed to get to play with one at a Sony store in the US). Super slim and light. 8GB RAM. True HD screen. However I have bad memories of the build quality of the older Vaios, and the fact that there were /no/ user replaceable parts put me off - it's a safe bet that a laptop battery is going to need replacing in a 3 year lifespan.
What I managed to find, and purchase, was a Dell Latitude E7240. I admit that the Dell brand made me wary - while I've not had any issue with their desktops I didn't associate their laptops with being particularly high quality. Mind you, I could say the same for Acer and I've been very pleased with the Aspire (if they'd had a more up to date model I'd have bought it). I bought the E7240 with the Core-i5 4300U (so the same as the Surface Pro 2) and True HD touch screen. It has a replaceable battery, expandable RAM (up to 16G) and the storage is an mSATA SSD. It also came with a built in 3G card. At 12.5" it's a little bigger than my old machine, but I decided that was a reasonable idea given the higher resolution. I'm typing this article on it now, having finally completed the setup and migration of the data from the old laptop to it this evening. More details once I've been using it for a little bit I think.
I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002. Since then I had made 6 or 7 aborted attempts at reading it to completion: life would suddenly get busy and take over, so I'd put the book down and not pick it up again until things were less hectic some time later, at which point I'd start again.
Anyhow, I finally beat the book a few nights ago. My comprehension of it was pretty low, but at least it is done. Just shows I need to read lots more given how little went in.
These are notes from a tech support call with my parents last night, saved here for the next time stuff breaks.
If you’re running Mac OS X Snow Leopard (and possibly other versions), you may find you can’t log in. Symptoms are:
You click on your username and enter your password
The login screen is replaced by a blue screen for a short time
You are returned to the login screen.
After searching the interwebs I found Fixing a Mac OSX Leopard Login Loop Caused by Launch Services. It seems the problem is caused by corrupted cache files (which could be caused by the computer shutting down abruptly, or may just be “one of those things” that happens from time to time). This gave me enough information to come up with these “easy” steps to resolve it:
1. Log in to the Mac as a different user*
2. Press cmd-space to open Spotlight, type “Terminal”, and click on the Terminal application.
3. Work out the broken user’s username by typing: ls /Users and look for the appropriate broken account name e.g. franksmith or janedoe.
4. Find out the user ID of the user from the previous step by typing: id -u janedoe which will print a number something like 501
5. Delete the user’s broken cache files. In the following command, be sure to substitute the correct username (in place of janedoe) and the correct user ID after the 023 (in place of the 501): su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023501.*' (be very careful with this, you don’t want to delete the wrong things).
If you’re super-confident in figuring out backticks you could of course skip step 4 and instead of step 5 do: su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023`id -u janedoe`.*'
6. Test by logging in to the troublesome user account.
Note that if you had any apps configured to launch at login, you may need to re-add these.
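Putting steps 3 to 5 together for the hypothetical janedoe account with user ID 501 (substitute your own username and number), the Terminal session looks like this:

$ ls /Users
$ id -u janedoe
501
$ su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023501.*'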
* This makes me think it’s good practice when setting up a Mac to always set up an extra user account, just in case stuff breaks.
I finally made it back out onto the bike today for the first time since September last year. I spent some time ill in October and November, which meant I had to stop exercising; as a result I've gained loads of weight over the winter and, it turns out, also become very unfit, as can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158
Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become. That's because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500, but Amazon put the price up from £149.99 to £199.99 on the day I was going to buy it.
I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, and I could just buy a Garmin 510 once they'd sorted out its firmware bugs and the price had come down a bit - which is still my longer term intention.
So a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and checked for new firmware. I was rather surprised to see a new firmware update and a new set of map data available for download. I installed it thinking I wasn't going to get any new features from it, as Mio had released some new models, but it turns out the new firmware actually enables one feature (amongst other things - they also tidied up the UI and sorted a few other bugs) that makes the device massively more useful: it now also creates files in .fit format, which can be uploaded directly to Strava.
This is massively useful for me. The Mio always worked in Linux, as the device is essentially just a USB mass storage device, but you had to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.
All in, it turns out that buying the Mio - despite reviews and forums full of doom and gloom - means I can wait even longer before considering replacing it with a Garmin.
I’ve been using our local Lidl recently, because their policy of regularly baking throughout the day means I can pick up fresh croissants and pains au chocolat whenever I go, whereas the local Tesco, Sainsburys, and Waitrose have usually run out by mid-morning. Are the so-called discount supermarkets really cheaper than the mainstream supermarkets? Here’s the result of one unscientific survey.
This morning I checked my till receipt against Tesco online.
There are some items that cost the same regardless of which supermarket (fabric softener, fresh orange juice). There are some items that don’t have direct equivalents across stores, so price comparisons aren’t possible. And there are some items where the price is not significantly different (fresh milk, toilet paper).
On today’s basket of comparable items, Lidl was £10.62 cheaper (costing £18.46 instead of £29.08).
There are some real eye-openers. Eggs are 1.5x more expensive at Tesco. Fresh vegetables were often almost twice the price at Tesco. And what about my fresh croissants and pains au chocolat? £0.29 and £0.39 at Lidl, vs £0.80 each at Tesco. Over twice the price — on today’s shop, buying just these alone saved me £4.70. And they were fresh from the oven, still warm when I got them home.
I was back at my parents' over Christmas, like usual. Before I got back my Dad had mentioned they'd been having ADSL stability issues. Previously I'd noticed some issues with keeping a connection up for more than a couple of days, but it had got so bad he was noticing problems during the day. The eventual resolution isn't going to surprise anyone who's dealt with these things before, but I went through a number of steps to try and improve things.
Firstly, I arranged for a new router to be delivered before I got back. My old Netgear DG834G was still in use and while it didn't seem to have been the problem I'd been meaning to get something with 802.11n instead of the 802.11g it supports for a while. I ended up with a TP-Link TD-W8980, which has dual band wifi, ADSL2+, GigE switch and looked to have some basic OpenWRT support in case I want to play with that in the future. Switching over was relatively simple and as part of that procedure I also switched the ADSL microfilter in use (I've seen these fail before with no apparent cause).
Once the new router was up I looked at trying to get some line statistics from it. Unfortunately although it supports SNMP I found it didn't provide the ADSL MIB, meaning I ended up doing some web scraping to get the upstream/downstream sync rates/SNR/attenuation details. Examination of these over the first day indicated an excessive amount of noise on the line. The ISP offer the ability in their web interface to change the target SNR for the line. I increased this from 6dB to 9dB in the hope of some extra stability. This resulted in a 2Mb/s drop in the sync speed for the line, but as this brought it down to 18Mb/s I wasn't too worried about that.
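The scraping needn't be anything sophisticated. Here's a minimal Python sketch of the approach - the status URL and the labels in the regular expression are hypothetical placeholders rather than the actual TD-W8980 page layout, which also sits behind a login:

import re
import urllib.request

# Hypothetical status page; treat this purely as a sketch of the approach.
page = urllib.request.urlopen('http://192.168.1.1/adsl_status.html').read().decode('utf-8', 'replace')

def grab(label):
    # pull the first number that follows a label such as "SNR Margin"
    match = re.search(label + r'\D*([\d.]+)', page)
    return float(match.group(1)) if match else None

for label in ('Downstream Rate', 'Upstream Rate', 'SNR Margin', 'Line Attenuation'):
    print(label, grab(label))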
Watching the stats for a further few days indicated that there were still regular periods of excessive noise, so I removed the faceplate from the NTE5 master BT socket, removing all extensions from the line. This resulted in regaining the 2Mb/s that had been lost from increasing the SNR target, and watching the line for a few days confirmed that it had significantly decreased the noise levels. It turned out that the old external ringer that was already present on the line when my parents moved in was still connected, although it had stopped working some time previously. There was also an unused and much spliced extension in place. Removing both of these and replacing the NTE5 faceplate led to a line that was still stable. At the time of writing the connection has been up since before the new year, significantly longer than it had managed for some time.
As I said at the start I doubt this comes as a surprise to anyone who's dealt with this sort of line issue before. It wasn't particularly surprising to me (other than the level of the noise present), but I went through each of the steps to try and be sure that I had isolated the root cause and could be sure things were actually better. It turned out that doing the screen scraping and graphing the results was a good way to verify this. Observe:
The blue/red lines indicate the SNR for the upstream and downstream links - the initial lower area is when this was set to a 6dB target, then later a 9dB target. Green are the forward error correction errors divided by 100 (to make everything fit better on the same graph). These are correctable, but still indicate issues. Yellow are CRC errors, indicating something that actually caused a problem. They can be clearly seen to correlate with the FEC errors, which makes sense. Notice the huge difference removing the extensions makes to both of these numbers. Also notice just how clear graphing the data makes things - it was easy to show my parents the graph and indicate how things had been improved and should thus be better.
Please excuse the intrusion to your usual software and co-op news items but vine seems broken and as part of my community and democratic interests, I’d like to share this short clip quoting Norfolk’s Deputy Police Commissioner Jenny McKibben about why Commissioner Stephen Bett believes it’s important to get views from the west of the county about next year’s police budget:
Personally, with a King’s Lynn + West Norfolk Bike Users Group hat on, I’d like it if people supported a 2% (£4/year average) tax increase to reduce the police’s funding cut (the grant from gov.uk is being cut by 4%) so that we’re less likely to have future cuts to traffic policing. The consultation details and response form are on the PCC website.
So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:
brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
brettp@laptop:~$ host -t ns fasthosts.net.uk 126.96.36.199
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 188.8.131.52
;; connection timed out; no servers could be reached
So, that's Fasthosts' core nameservers not responding, good start! They also provide livedns.co.uk, so let's have a look at that:
brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 184.108.40.206
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 220.127.116.11
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 18.104.22.168
;; connection timed out; no servers could be reached
So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!
It's New Year's Day 2014 and I'm reflecting on the music of the past year.
Album-wise there were several okay-ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised - a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.
The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen - hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.
The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.
Gig-wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as when he is singing the songs themselves.
The March Marillion Convention in Port Zélande, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be. Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger - or should that be Proggie? Never mind. Viva Progressive Rock!
There have been some dark days for UK coops recently – the Crystal Methodist and all that – and I have not been able to talk about it much because of the amount of work that I want to do before the end of the year.
The pull version fails at a fairly random point after a fairly undefined period of time. The push version works every time. This is most confusing and odd...
Dear lazyweb, please give me some new ideas as to what's going on, it's driving me nuts!
A different daemon wasn't limiting its killing habits in the case that a certain process wasn't running, and was killing the ssh process on the new server almost at random. I found the bug in the code and am now testing with that.
Thanks for all the suggestions though, much appreciated.
I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I've made to the default Scratch install to make things easier. So here goes:
With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils' work stored on a network drive so they always have access whichever machine they sit at, this isn't exactly helpful. It also isn't ideal that they can explore the C: drive in spite of profile restrictions (although it isn't the end of the world as there is little they can do from Scratch).
After a bit of time with Google I found the answer, and since it didn't immediately leap out at me when I was searching I thought I'd post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:
Initially it looks like this:
Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):
To get this:
They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.
The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.
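For example, if pupils' work lives on an H: drive, the two added lines would look something like this (adjust the drive letters to suit your own setup):

Home=H:\
VisibleDrives=H: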
You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).
There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.
Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven't tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:
ProxyServer=[server name or IP address]
There have been a couple of false starts in publishing the Christmas special Code Club project, Christmas Capers, this year. Since I am planning to use it at my last Code Club of this term, which is on Tuesday (much to the disappointment of my 'Codeclubbers'), I have been keen to get it tested. Unfortunately, although the course notes were circulated, the resources haven't quite made it yet, so I decided to see what I could do.
First thing I noted, having gone through my past emails, was that it was used last year as well (unfortunately I don't seem to have a copy). The link on the original Code Club blog is no longer working sadly, however there was hope that resources would be out there somewhere. After a bit of searching I found a copy on the Scratch website that someone had uploaded, so I grabbed the resources from that and tested it so I was sure everything was there. I had a slight issue with the Jingle_Bells.mp3 file not being a supported format, but this seems to be down to something missing on my netbook as all is fine under both Windows 7 and Ubuntu Linux on my main machine.
So, for anyone looking for the resources, they are here in a full package including a copy of the project and course notes.
Keep up the good work fellow Code Club volunteers, and if anyone would like to pop along and encourage my Code Club recruits to blog a bit more, we are here. As a school governor with an interest in literacy as well as computing I'm trying to make it a bit cross-curricular.
Oh, and if there is anyone in the Portsmouth and surrounding area interested in meeting up, I'm hoping to get my act together and do something in the new year. Do get in touch.
My formal role on the Transforming Musicology project is Project Manager. This involves ensuring that goals are reached, objectives are met, and deliverables are, well, delivered. Two things seem to be key to these ends: maintaining a good and current overview of both high and low level project activity; and maintaining good communication across the whole project team.
Early on in the project, one of the co-investigators started an IkiWiki for us to use for various project management activities. Since then it's been my responsibility to develop this resource. Given that I've asserted that awareness of activity and communication are crucial for project management, how have I used the wiki to enable those?
We're using an IRC channel for part of our communication needs, although not all project team members are fully conversant in IRC and its idiosyncrasies. So I thought it would be useful to keep a log on the wiki. I already have a log file for the channel which is generated by dircproxy so I started looking for ways to get an HTML version of this onto the wiki. Nothing was immediately apparent. Stuff exists for generating HTML from channel logs, so I thought about scripting something to dump some HTML to somewhere accessible from the wiki which, in turn, led me to thinking about automating it the proper IkiWiki way: with a plugin. And irclog was born.
It provides a directive, [[!irclog ]], which pulls a channel log from a given location, uses Parse::IRCLog to parse the log for \say and \me events, and renders those events as HTML to be included in place of the [[!irclog ]] in the page. The implementation involved adding dircproxy-specific parsing to Parse::IRCLog. (It would be nice to get that merged into Parse::IRCLog itself, but for now it's bundled with the plugin.) It also involved thinking up strategies for allowing the host on which the wiki is compiled to get at the channel log at compile time. I did this by allowing the location parameter to the [[!irclog ]] directive to be a string parsable by the core URI module and then implementing (well, not quite) handlers for a number of URI schemes. In fact, I've only really tested the scheme I'm actually using, ssh. In my case, the wiki compiling host holds an SSH key with a public part authorised on the dircproxy host to retrieve the log file. I then have cron on the wiki compiling host rebuild the wiki periodically to cause the log to be updated. (There might be a less sledge hammer-like solution to the updating problem: perhaps --rendering the page and moving the result to the $DEST_DIR?)
To make the plugin a bit more of an IkiWiki citizen, it allows inclusion of wikilinks by providing a text substitution feature. You can specify a keywords argument to the [[!irclog ]] directive which should contain a string formatted a bit like a Perl hash (e.g. richard=>[[richard]]) and which indicates that occurrences of the 'key' should be replaced by the 'value'. The replacement text could be a wikilink, thus allowing your IRC log to integrate with the rest of your wiki. The obvious usage (and the one I've implemented) is a mapping from nicks to project team members' user pages.
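So a directive on a wiki page ends up looking something like this, where the log location and the nick mappings are invented for illustration:

[[!irclog location="ssh://logs.example.org/home/ircproxy/logs/transforming-musicology.log" keywords="richard=>[[richard]], tim=>[[tim]]"]]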
A future post may document how I'm using IkiWiki for task management...
A great deal of time has passed since I last wrote a blog post. During that time my partner and I have had a baby (who's now 20 months old) and bought a house, I've started a new job, finished my Ph.D, finished the previously mentioned new job, and started another new job.
The first new job was working for an open source consultancy firm called credativ which is based in Rugby but which, at the time I started, had recently opened a London office. Broadly, they consult on open source software for business. In practice most of the work is using OpenERP, an open source enterprise resource planning (ERP) system written in Python. I was very critical of OpenERP when I started, but I guess this was partly because my unfamiliarity with it led to me often feeling like a n00b programmer again and this was quite frustrating. By the time I finished at credativ I'd learned to understand how to deal with this quite large software system and I now have a better understanding of its real deficiencies: code quality in the core system is generally quite poor, although it has a decent test suite and is consequently functionally fairly sound, the code is scrappy and often quite poorly designed; the documentation is lacking and not very organised; its authors, I find, don't have a sense of what developers who are new to the framework actually need to know. I also found that, during the course of my employment, it took a long time to gain experience of the system from a user's perspective (because I had to spend time doing development work with it); I think earlier user experience would have helped me to understand it sooner. Apart from those things, it seems like a fairly good ERP. Although one other thing I learned working with it (and with business clients in general) is the importance of domain knowledge: OpenERP is about business applications (accounting, customer relations, sales, manufacture) and, it turns out, I don't know anything about any of these things. That makes trying to understand software designed to solve those problems doubly hard. (In all my previous programming experience, I've been working in domains that are much more familiar.)
As well as OpenERP, I've also learned quite a lot about the IT services industry and about having a proper job in general. Really, this was the first proper job I've ever had; I've earned money for years, but always in slightly off-the-beaten-track ways. I've found that team working skills (that great CV cliché) are actually not one of my strong points; I had to learn to ask for help with things, and to share responsibilities with my colleagues. I've learned a lot about customers. It's a very different environment where a lot of your work is reactive; I've previously been used to long projects where the direction is largely self-determined. A lot of the work was making small changes requested by customers. In such cases it's so important to push them to articulate as clearly as possible what they are actually trying to achieve; too often customers will describe a requirement at the wrong level of detail, that is, they'll describe a technical level change. What's much better is if you can get them to describe the business process they are trying to implement so you can be sure the technical change they want is appropriate or specify something better. I've learned quite a bit about managing my time and being productive. We undertook a lot of fixed-price work, where we were required to estimate the cost of the work beforehand. This involves really knowing how long things take which is quite a skill. We also needed to be able to account for all our working time in order to manage costs and stick within budgets for projects. So I learned some more org-mode tricks for managing effort estimates and for keeping more detailed time logs.
My new new job is working back at Goldsmiths (where I did my Ph.D) again, with mostly the same colleagues. We're working on an AHRC-funded project called Transforming Musicology. We have partners at Queen Mary, the Centre for e-Research at Oxford, Oxford Music Faculty, and the Lancaster Institute for Contemporary Arts. The broad aim of the project can be understood as the practical follow-on from my Ph.D: how does the current culture of pervasive networked computing affect what it means to study music and how music gets studied? We're looking for evidence of people using computers to do things which we would understand as musicology, even though they may not. We're also looking at how computers can be integrated into the traditional discipline. And we're working on extending some existing tools for music and sound analysis, and developing frameworks for making music resources available on the Semantic Web. My role is as project manager. I started work at the beginning of October so we've done four days so far. It's mainly been setting up infrastructure (website, wiki, mailing list) and trying to get a good high-level picture of how the two years should progress.
I've also moved my blog from livejournal to here which I manage using Ikiwiki. Livejournal is great; I just liked the idea of publishing my blog using Ikiwiki, writing it in Emacs, and managing it using git. Let's see if I stick to it...
I’m trying to build a Raspberry Pi powered robot based on the DRDs from Farscape, and I thought I’d blog my progress.
DRDs or “Diagnostic Repair Drones” are robots from the cult science fiction series Farscape. They carry out various functions aboard a leviathan (a species of living biomechanoid spaceship) including repairing and maintaining the ship. They’re ovoid in shape and they have two moving eye stalks and all sorts of tools like a robotic claw and a plasma welder.
Here’s some video footage from the series to give you an idea of what these little guys get up to:
The original DRDs were designed and built by the extremely talented folks at the Jim Henson Creature Shop in London (yes Jim Henson as in the Muppets!). They built lots of different variations of the robot over the years to be used in shooting different scenes for the show, but to my knowledge they’ve never released any designs.
I assumed I was going to have to painstakingly design a 3D computer model of one based on frame grabs from my DVDs of the series. I then planned to track down someone with a CNC router and a vacuum forming machine and persuade them to let me use them. Either that or find someone with an industrial sized 3D printer!
Luckily I came across a special effects company in the US who sells a kit to build a model of a DRD. The model is made from hollow cast fiberglass and resin and comes with ribbed plastic for the eye stalks, eye pieces with clear lenses, two parts of a claw and some colourful wires to make it look the part.
The kit isn’t perfect. The size, shape and proportions aren’t quite right and the finish is a bit rough but it’s good enough for my purposes. The part I’m really interested in is the robotics so I’m grateful that someone has already done the work for me on the basic shell.
The web site provides video tutorials on how to build the model and then how to put LEDs in the eyes and mount an remote controlled car underneath to make it move about in a bit of a crude fashion.
We can be a bit more sophisticated than that.
The Raspberry Pi is a credit-card sized computer developed in the UK by the not-for-profit Raspberry Pi Foundation to promote the teaching of programming in schools. It’s a single-board computer with a 700MHz ARM processor and 512MB RAM, boots off an SD card and costs only around £30.
This is my Raspberry Pi:
The Gertboard is an expansion board which attaches to the Raspberry Pi via its GPIO pins and helps when experimenting with interfacing the Pi with the outside world. It comes with an Arduino compatible AVR microcontroller, analogue to digital converters, digital to analogue converters, a motor controller, push buttons, LEDs and much more.
Booting the Pi
The Raspberry Pi can boot Linux from an SD card and the most popular distribution is Raspbian which is a Debian-derivative. You can download an image and flash it to an SD card, or even buy an SD card with it already loaded.
To boot the Raspberry Pi all you need to do is insert your Raspbian SD card, plug it into a TV via either the HDMI port or the composite video port and power it up by plugging it into a Micro USB phone charger.
Here’s my Raspberry Pi booted and plugged into an old CRT TV:
Logging In Remotely
It’s cool that I can plug the Raspberry Pi into a TV, but I don’t want to be squinting at an old portable TV or sitting in the lounge next to my big flatscreen TV all the time I’m programming the robot, so I want to be able to log in remotely. Also, my plan is to build a web interface to control the robot over WiFi, so it’s going to need to connect to a network at some point.
First I plugged a USB keyboard into the Raspberry Pi and an ethernet cable to connect it to my network. The SSH daemon is already started by default, but I wanted to set a static IP address so that I always knew what to log in to.
I logged into the Raspberry Pi locally (the default username is pi and the password is raspberry) and edited the network configuration using the vi text editor.
$ sudo vi /etc/network/interfaces
I provided the following configuration to assign a static IP address of 192.168.1.42 on my local network:
iface lo inet loopback
iface eth0 inet static
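The static stanza also needs the address details filled in; for the 192.168.1.42 case it looks something like the following, where the netmask and gateway are typical home-network defaults assumed for illustration:

    address 192.168.1.42
    netmask 255.255.255.0
    gateway 192.168.1.1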
Then restart the network interface with:
$ sudo ifdown -a
$ sudo ifup -a
Then check that I’m connected to the network, and the Internet by pinging Google.
$ ping google.com
I see that I’m successfully connected, so I can now log into the Raspberry Pi remotely using its new static IP.
From my desktop Linux box I type:
$ ssh pi@192.168.1.42
type in the password “raspberry”, and voilà! I’m logged in.
I hope you weren’t expecting to see a finished robot! There’s a very long way to go yet.
If you desperately wanted to see a finished robot, here’s a picture of the last one I worked on, a line following robot we built at university powered by a PIC microcontroller.
Next I want to start playing around with the Gertboard and make LEDs blink on and off from Python.
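Something along these lines is what I have in mind, using the RPi.GPIO Python library - the pin number here is just an assumption, and on the Gertboard the LEDs also need the appropriate jumpers fitted:

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)      # use Broadcom GPIO numbering
GPIO.setup(25, GPIO.OUT)    # assume an LED is connected to GPIO 25

for _ in range(10):         # blink ten times
    GPIO.output(25, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(25, GPIO.LOW)
    time.sleep(0.5)

GPIO.cleanup()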
I wired up a MIDI In port along the lines of This one here, messed with the code a bit and voila (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:
When I say "LV2 synth plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point but it will be a while before you can directly plug LV2s into this and expect them to just work.
I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is it isn’t found by the printers dialog in system settings. I thought I’d done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane “bcast host lmhosts wins”. Having host and wins - neither of which I’m using - first in the order cocks things up somewhat. Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!
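For reference, that corresponds to a line like this in the [global] section of /etc/samba/smb.conf:

name resolve order = bcast host lmhosts wins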
It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (for Windows people, daemon pretty much = service) being Fedora specific, so it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.
Anyway, the simple fix is to fall back to the older, more traditional printer dialog, which is accessed by running system-config-printer at the command line. That works just fine - so why ship a new (over a year old) printer config dialog that is inherently broken, I ask myself.
The CUPS web interface also works, apparently: http://localhost:631/ in your favourite browser. That should be there as long as CUPS is installed, which it is in LinuxMint by default.
So come on Minty people get your bug squashing boots on and stamp on this one please.
Bug #871985 only affects Gnome3, so as long as it's not affecting Unity that will be okay, Canonical, will it!
Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time, it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.
Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist” I would spend some time browsing the racks weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.
Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical musical releases on CD or Vinyl online and have it delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.
Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.
The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to – because I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying but for some reason it doesn’t seem to work that way.
I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.
It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.
Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!
The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.
Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.
A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.
Through Krellian I will be able to continue to lead the Webian project, and I will also be taking up a new contract with the Mozilla Corporation to work with them on Boot to Gecko (B2G).
Like me and the other members of the Webian community, Mozilla believes that the open web can displace proprietary, single-vendor stacks for application development. The B2G project will include prototype APIs for exposing device and OS capabilities to web content, a privilege model to safely expose these new capabilities, a complete "low-level substrate" for Android-compatible devices and a collection of web apps to prioritise and prove the power of the platform.
Benefits to Webian
The potential benefits for Webian are enormous. Webian Shell was already hitting limitations of what is currently possible with Mozilla Chromeless and this new work on the core Mozilla platform promises to make many more of Webian's goals possible. While B2G initially focuses on the mobile space, Webian can focus on nettop and netbook form factors and perhaps eventually the two projects could even converge.
Sponsorship from Krellian will provide the ongoing resources necessary for running the Webian project and ensure that it remains free and open source.
I'm excited about this new chapter in Webian's story and believe more strongly than ever in the future of the open web.
I’ve been using this for a while and had it recorded on a private wiki. I was just tidying up my hosting account and thought I’d get rid of the wiki and store any useful info from it on my blog.
Clone full subversion history into git repository (warning, may take a long time depending on how many commits you have in your Subversion repository).
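The command itself is along these lines, assuming the repository uses the standard trunk/branches/tags layout (the URL and directory name here are placeholders):

$ git svn clone --stdlayout http://svn.example.com/myproject myproject-git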
One of the big buzz phrases at the moment seems to be continuous integration and deployment. If you’re developing and wanting to deploy ‘as the features are ready’, and you have a cloud, you have two main options, each with its own pros and cons:
New Image per Milestone
Most cloud systems work by you taking an ‘image’ of a pre-set-up machine, then booting new instances of that image. Each time you get to a milestone, you set up a new image, and then point your auto scaling system at it rather than the old one - but you also have to shut down all the instances of the old image and bring them back up from the new one.
Pro’s: The machines come up in the new state quickly. Con’s: For each deployment, you have to do quite a bit more work making the new image. Each deployment requires shutting down all the old images and bringing up new replacements.
SCM Pull on Boot
Make one image, and give it access to your SCM (e.g. git / svn etc). Build in a boot process that brings up the service but also fetches the most recent copy of the ‘live’ branch.
Pro’s: You save a lot of time in deployments – deployments are triggered by people committing to the live branch, rather then by system administrators performing the deployments. Because they are running SCM, updating all the currently running images is as simple as just running the fetch procedure again. Con’s: You do need to maintain two branches: a live and a dev branch, and merge (some SCM’s might not like this). Also, your SCM hosting has to be able to cope with when you get loads (i.e. when new computers get added). Your machines come up a little slower as they have to do the fetch before they are usable.
I opted for the second route: we use Git, so we can clone quickly to the right branch. We’ve also added in git hooks that make sure any setup procedures (copying the right settings file in) are done when the computer comes up. Combining this with a fabric script to update all the currently running boxes is a dream.
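For illustration, the fabric script needs little more than a task along these lines - the host names, deployment path and branch name here are placeholders rather than our actual setup:

from fabric.api import env, run

env.hosts = ['web1.example.com', 'web2.example.com']

def deploy():
    # re-run the fetch procedure on each currently running box
    run('cd /srv/app && git pull origin live')

Running fab deploy then updates every box in the list.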