Planet ALUG

April 16, 2014

Mick Morgan

nsa operation orchestra

In February of this year, Poul-Henning Kamp (a.k.a. “PHK”) gave what now looks to be a peculiarly prescient presentation as the closing keynote to 2014's FOSDEM.

In the presentation (PDF), PHK posits an NSA operation called ORCHESTRA which is designed to undermine internet security through a series of “disinformation”, “misinformation” and “misdirection” sub-operations. ORCHESTRA is intended to be cheap, non-technical, completely deniable, but effective. One of the opening slides gives ORCHESTRA’s “operation at a glance” overview as:

* Objective:
- Reduce cost of COMINT collection
* Scope:
- All above board
- No special authorizations
* Means:
- Eliminate/reduce/prevent encryption
- Enable access
- Frustrate players

PHK delivers the presentation as if he were a mid-ranking NSA staffer intending to brief NATO in Brussels. But “being American, he ends up [at FOSDEM] instead”. The truly scary part of this presentation is that it could all be completely true.

What makes the presentation so timely is his commentary on openssl. Watch it and weep.

by Mick at April 16, 2014 09:30 PM

more heartbleed

For any readers uncertain of exactly how the heartbleed vulnerability in openssl might be exploitable, Sean Cassidy over at existential type has a good explanation.

And if you find that difficult to follow, Randall Munroe over at xkcd covers it quite nicely.

heartbleed_explanation

My thanks, and appreciation as always, to a great artist.

Of course, Randall foresaw this problem back in 2008 when he published his take on the debian openssl fiasco.

by Mick at April 16, 2014 11:04 AM

April 15, 2014

Steve Engledow (stilvoid)

Netcat

I had occasion recently to need an entry in my ssh config such that connections to a certain host would be proxied through another connection. Several sources suggested the following snippet:

Host myserver.net
    ProxyCommand nc -x <proxy host>:<proxy port> %h %p

In my situation, I wanted the connection to be proxied through an ssh tunnel that I already had set up in another part of the config. So my entry looked like:

Host myserver.net
    ProxyCommand nc -x localhost:5123 %h %p
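(For context, the tunnel on localhost:5123 came from elsewhere in the config - typically a SOCKS proxy opened with a DynamicForward entry, something along these lines, where jumpbox.example.com is a made-up name for illustration:)

Host jumpbox.example.com
    DynamicForward 5123

The same thing can be had from the command line with ssh -D 5123 jumpbox.example.com.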

Try as I might however, I just could not get it to work, always receiving the following message:

Error: Couldn't resolve host "localhost:5123"

After some head scratching, checking and double-checking that I had set up the proxy tunnel correctly, I finally figured out that it was because I had GNU netcat installed rather than BSD netcat. Apparently, most people on the internet use BSD netcat :)

Worse, -x is a valid option in both netcats but does completely different things depending on which you use; hence the less-than-specific-but-technically-correct error message.
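To make that concrete (going by the respective man pages), the exact same command line is parsed very differently by the two:

# BSD netcat: -x takes a proxy address, so this connects to myserver.net:22
# through the SOCKS proxy listening on localhost:5123 (SOCKS 5 by default)
nc -x localhost:5123 myserver.net 22

# GNU netcat: -x means --hexdump and takes no argument, so the same command
# line treats "localhost:5123" as the host to connect to - hence the
# "Couldn't resolve host" error above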

After that revelation, I thought it was worth capturing the commonalities and differences between the options taken by the two netcats.


Common options

BSD netcat only

GNU netcat only


I uninstalled GNU netcat and installed BSD netcat btw ;)

by Steve Engledow (steve@offend.me.uk) at April 15, 2014 12:30 AM

April 14, 2014

Chris Lamb

Race report: Cambridge Duathlon 2014

(This is my first race of the 2014 season.)


I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.

I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training in the off-season, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.

Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before.

Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.


Run 1 (7.5km)

A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.

Despite letting everyone go up the road, my first km was still too fast so I dialed down the effort, settled into a "10k" pace and began overtaking other runners. The Fen winds and drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.

Time
33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)

Bike (40km)

Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors. I was thus not able to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.

This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".

I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.

Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.

Time
1:10:45 (T2: 00:58)

Run 2 (7.5km)

After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.

The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.

I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the mental component required for digging deep will require some coaxing to return.

However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have really felt completely drained at the end of the day, if only from a neo-Calvinist perspective.

Time
32:46 (4:22/km) / Last year: 38:10 (5:05/km)

Overall

Total time
2:18:19

A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is the gratification of a slow-burning fire rather than the jubilation of a fireworks display.

However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).

The race identified the following areas to work on:

Although not strictly race-related, I also need to find techniques to ensure transporting a bike on public transport is less stressful. (Full results & full 2014 race schedule)

April 14, 2014 12:59 PM

April 11, 2014

Chris Lamb

2014 race schedule

«Swim 2.4 miles! Bike 112 miles! Run 26.2 miles! Brag for the rest of your life...»


In 2013, my training efforts were based around a "70.3"-distance race. In my second year in triathlon I will be targeting my first Ironman-distance event.

After some deliberation I decided on the Ironman event in Klagenfurt, Austria (pictured) not only because the location lends a certain tone to the occasion but because the course is suited to my relative strengths within the three disciplines.

I've made the following conscious changes to my race scheduling and selection this year:

Readers may observe that despite my primary race finishing with a marathon-distance run, I am not racing a standalone marathon in preparation. This is common practice, justified by the run-specific training leading up to a marathon and the recovery period afterwards compromising training overall.

For similar reasons, I have also chosen not to race a "70.3" distance event in 2014. Whether to do so is a more contentious issue than whether to run a marathon, but it resolved itself once I could not find an event that was suitably scheduled and I could convince myself that most of the benefits could be achieved through other means.


April 13th

Cambridge Duathlon (link)

Run: 7.5km, bike: 40km, run: 7.5km

May 11th

St Neots Olympic Tri (link)

Swim: 1,500m, bike: 40km, run: 10km

May 17th

ECCA 50-mile cycling time trial (link)

50 miles. Course: E2/50C

June 1st

Icknield RC 100-mile cycling time trial (link)

100 miles. Course: F1/100

June 15th

Cambridge Triathlon (link)

Swim: 1,500m, bike: 40km, run: 10km

June 29th

Ironman Austria (link)

Swim: 3.8km, bike: 180km, run: 42.2km

April 11, 2014 10:58 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

{-# LANGUAGE InstanceSigs #-}
-- InstanceSigs (GHC 7.6 or later) is needed for the method type signatures
-- given inside the instance declarations below

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components, MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.

March 27, 2014 11:13 PM

March 26, 2014

Richard Lewis

Ph.D Progress

I submitted my Ph.D thesis at the end of September 2013 in time for what was believed to be the AHRC deadline. It was a rather slim submission at around 44,000 words and rejoiced under the title of Understanding Information Technology Adoption in Musicology. Here's the abstract:

Since the mid 1990s, innovations and technologies have emerged which, to varying extents, allow content-based search of music corpora. These technologies and their applications are known commonly as music information retrieval (MIR). While there are a variety of stakeholders in such technologies, the academic discipline of musicology has always played an important motivating and directional role in the development of these technologies. However, despite this involvement of a small representation of the discipline in MIR, the technologies have so far failed to make any significant impact on mainstream musicology. The present thesis, carried out under a project aiming to examine just such an impact, attempts to address the question of why this has been the case by examining the histories of musicology and MIR to find their common roots and by studying musicologists themselves to gauge their level of technological sophistication. We find that some significant changes need to be made in both music information retrieval and musicology before the benefits of technology can really make themselves felt in music scholarship.

(Incidentally, the whole thing was written using org-mode, including some graphs that get automatically generated each time the text is compiled. Unfortunately I did have to cheat a little bit and typed in LaTeX \cite commands rather than using proper org-mode links for the references.)

So the thing was then examined in January 2014 by an information science and user studies expert and a musicologist. As far as it went, the defence was actually not too bad, but after defending the defensible it eventually became clear that significant portions of the thesis were just not up to scratch; not, in fact, defensible. They weren't prepared to pass it and have asked that I revise and then re-submit it.

Two things seem necessary to address: 1) why did this happen? And 2) what do I do next?

I started work on this Ph.D with only quite a vague notion of what it was going to be about. The Purcell Plus project left open the possibility of the Ph.D student doing some e-Science-enabled musicological study. But I think I'd come out of undergraduate and masters study with a view of academic research that was very much text-based; the process of research---according to the me of ca. 2008---was to read lots of things and synthesise them, and the more obscure and dense the stuff read the better. The process is one of noticing generalisations amongst all these sources that haven't been remarked on before and remarking on them, preferably with a good balance of academic rigour and barefaced rhetoric. And I brought this pre-conception into a computing department. My first year was intended to be a training year, but I was actually already highly computer literate with considerable programming experience and quite a bit of knowledge of at least symbolic work in computational musicology. Consequently, I didn't fully engage with learning new stuff during that first year and instead embarked on a project of attempting to be rhetorical. It wasn't until later on that I really started to understand that those around me had a completely different idea as to how research can be carried out. While I was busy reading, most of my colleagues were doing experiments; they were actually finding out new stuff (or at least were attempting to) and had the potential to make an original contribution to knowledge. At this point I started to look for research methods that could be applicable to my subject matter and eventually hit upon a couple of actually quite standard social science methods. So I think that's the first thing that went wrong: I failed to take on board soon enough the new research culture that I had (or, I suppose, should have) entered.

I think I've always been someone who thrives on the acknowledgement of things I've done; I always looked forward to receiving my marks at school and as an undergraduate; and I liked finding opportunities to do clever jobs for people, especially little software development projects where there's someone to say, "that's great! Thanks for doing that." I think I quickly found that doctoral research didn't offer me this at all. My experience was very much a solitary one where no one was really aware of what I was working on. Consequently two things happened: first, I tended not to pursue things very far through lack of motivation; and second (and this was the really dangerous one), I kept finding other things to do that did give me that feedback. I always found ways to justify these---let's face it---procrastination activities; mainly that they were all Goldsmiths work, including quite a lot of undergraduate and masters level teaching (the latter actually including designing a course from scratch), some Computing Department admin work, and some development projects. Doing these kinds of activities is actually generally considered very good for doctoral students, but they're normally deliberately constrained to ensure that the student has plenty of research time still. Through my own choice, I let them take over far too much of my research time.

The final causal point to mention is the one that any experienced academic will immediately point to: supervision. I failed to take advantage of the possibilities of supervision. As my supervisor was both involved in the project of which the Ph.D was part and also worked in the same office as me, we never had the right kind of relationship to foster good progress and working habits from me. I spoke to my supervisor every day and so I didn't really push for formal supervision often enough. I can now see that it would have been better to have someone with whom I had a less familiar relationship and who had less of an interest in my work and who, as a result, would just operate and enforce the procedures of doctoral project progress. It's also possible that a more formal supervision relationship would have addressed the points above: I may have been forced to solidify ideas and to identify proper methods much sooner; I may have had more of the feedback that I needed; and I may have been more strongly discouraged from engaging in so much extra-research activity.

The purpose of all this is not to apportion blame (I have a strong sense of being responsible for everything that I do), but to state publicly something that I've been finding very hard to say: I failed my Ph.D. And (and this is the important bit) to make sure that I get on with what I need to do to pass it.

I'm going to blog this work as it goes along. So if I stop blogging, please send me harassing emails telling me to get the f*** on with it!

March 26, 2014 11:37 PM

Steve Engledow (stilvoid)

Judon't

tl;dr: broke my collar bone, ouch.

Since my last post, I've had a second Judo session which was just as enjoyable as the first except for one thing. Towards the end of the session we went into randori (free practice) - basically one-on-one sparring. I'm pleased to say that I won my first bout but in the second I went up against a guy who'd been learning for only 6 months or so. After some grappling, he threw me quite hard and with little control and I landed similarly badly - owch.

The first thing I realised was that I'd slammed my head pretty hard against the mat and that I was feeling a little groggy. After a little while, it became apparent that I'd also injured my shoulder. The session was over though and I winced through helping to put the mats away.

By the time I got in my car to drive home, my shoulder was a fair bit more painful and I made an appointment to see the doctor. When I saw her, she said there's a slight chance I'd broken my collar bone but it didn't seem very bad so I could just go home and see how it was in a couple of days.

A couple of days later I went back to the surgery. I saw a different doctor who said that she didn't think it was broken but she'd refer me for an X-Ray. The radiologist soon pointed out a nice clean break at the tip of my collar bone! I sat in A&E forever until I eventually saw a doctor who referred me to the orthopaedic clinic the following Monday.

Finally, the orthopaedic clinic's doctor told me I'd self-managed pretty well and that I should be ok to just get on with things but definitely no falling over, lifting anything heavy, and definitely no judo for up to six weeks.

Apparently, I've been very lucky as it's easy to dislodge the broken bone when the break is where mine is. Apparently, if I fall over or anything similar, I'm likely to end up in surgery :S


I've had to tell this story so many times now that I thought I might as well just write it down. For some reason, people seem to want to know all of the details when I mention that I've injured myself. Sadists.

Not Morrison's

On an unrelated subject, I've come to realise that I've developed an unhelpful manner when dealing with doors. I'm not the most graceful of people in general but it strikes me that I have a particularly awkward way of approaching doors. When I walk up to them, I hold a hand out to grasp the handle and push or pull (somehow I usually even manage to get that bit wrong regardless of signage) without slowing down at all which means that I have to swing the door quite fast to get it out of my way by the time my body catches up. Then, as the door's opening at a hefty pace, I have to grab the edge of the door and stop it before it slams into the wall. Because I'm still moving forward, this usually means that I'm partially closing the door as I move away from it.

In all, I feel awkward when passing through doors, and anybody directly behind me is liable to receive a door in the face if I'm not aware of them :S

by Steve Engledow (steve@offend.me.uk) at March 26, 2014 12:44 AM

March 23, 2014

Andrew Savory

Mastering the mobile app challenge at Adobe Summit

I’m presenting a 2 hour Building mobile apps with PhoneGap Enterprise lab at Adobe Summit in Salt Lake City on Tuesday, with my awesome colleague John Fait. Here’s a sneak preview of the blurb which will be appearing over on the Adobe Digital Marketing blog tomorrow. I’m posting it here as it may be interesting to the wider Apache Cordova community to see what Adobe are doing with a commercial version of the project…

~

Mobile apps are the next great challenge for marketing experts. Bruce Lefebvre sets the scene perfectly in So, You Want to Build an App. In his mobile app development and content management with AEM session at Adobe Summit he’ll show you how Adobe Marketing Cloud solutions are providing amazing capabilities for delivering mobile apps. It’s a must-see session to learn about AEM and PhoneGap.

But what if you want to gain hands-on practical experience of AEM, PhoneGap, and mobile app development? If you want to roll up your sleeves and build a mobile app yourself, then we’ve got an awesome lab planned for you. In “Building mobile apps with PhoneGap Enterprise”, you’ll have the opportunity to create, build, and update a mobile application with Adobe Experience Manager. You’ll see how easy it is to deliver applications across multiple platforms. You’ll also learn how you can easily monitor app engagement through integration with Adobe Analytics and Adobe Mobile Services.

If you want to know how you can deliver more effective apps, leverage your investment in AEM, and bridge the gap between marketers and developers, then you need to attend this lab at Adobe Summit. Join us for this extended deep dive into the integration between AEM and PhoneGap. No previous experience is necessary – you don’t need to know how to code, and you don’t need to know any of the Adobe solutions, as we’ll explain it all as we go along. Some of you will also be able to leave the lab with the mobile app you wrote, so that you can show your friends and colleagues how you’re ready to continuously drive mobile app engagement and ROI, reduce your app time to market, and deliver a unified experience across channels and brands.

Are you ready to master the mobile app challenge?

~

All hyperbole aside, I think this is going to be a pretty interesting technology space to watch:

Exciting times.

by Andrew at March 23, 2014 05:00 PM

February 22, 2014

Wayne Stallwood (DrJeep)

Outlook 2003, Cutting off Emails

I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licensing they have, they are running Outlook 2003 on Windows 7 64-bit machines.

After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook: typed emails often get cut off mid-sentence when you click Send, so only part of the email gets sent!

What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.

Removing the IE11 update would probably fix it but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.

Navigate to:-
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.

February 22, 2014 08:43 AM

February 16, 2014

Wayne Stallwood (DrJeep)

Dancing Ferrofluid first test

First attempt. This is using a coil scavenged from an old hard drive. The real project I am working on isn't really about driving it with audio, but I just wanted to see how it worked out.

Fed with half-wave rectified audio. The coil impedance measured at approximately 6 ohms, which was convenient as it's not that far from a loudspeaker coil.

Running it with a 60W amp meant that I only had about 30 seconds before the coil started to overheat. Quite fun though; I might try a bigger coil or an array of more small coils.

February 16, 2014 06:03 PM

February 11, 2014

Jonathan McDowell

Choosing a new laptop

Recently I've been thinking about getting a new laptop. I have this rule that a laptop should last me at least 3 years (ideally more) and my old laptop was bought in September 2010. So for the past few months I've been trying to work out if there's something suitable on the market that is a good replacement (last time I didn't manage to find something that ticked all the boxes, but did pretty well for the price I paid).

To start with I decided to track my laptops over time - largely because one of my concerns was the size of a replacement, given my significant leaning towards subnotebooks. In the end the reason I decided to upgrade was for some extra CPU grunt; my old machine had a tendency to get pretty hot under any sort of load.

Date | Model | CPU | Screen | RAM | Storage | W (mm) | H (mm) | D (mm) | Weight | Cost
1991 | Amstrad PPC 640D | NEC V30 8MHz | 9" 640x200 non-backlit green LCD | 640k | 2 x 3.5" FDD | 450 | 230 | 100 | 10kg | ???
August 1997 | Compaq Aero 4/33c | 486sx33 | 7.8" 640x480 CSTN LCD | 4MB | 80MB | 260 | 190 | 43 | 1.6kg | ???
July 2002 | Compaq Evo N200 | P-III 700MHz | 10.4" 1024x768 TFT | 192MB | 20GB | 251 | 198 | 20 | 1.1kg | £939.99
August 2005 | Toshiba Portege R200 | Pentium M 753 1.2GHz | 12.1" 1024x768 TFT | 1280MB | 60GB | 286 | 229 | 20 | 1.29kg | £1313.58
September 2008 | Asus EEE 901 | Atom N270 1.6GHz | 8.9" 1024x600 TFT | 2GB | 4GB + 16GB SSD | 248 | 175 | 23 | 1.1kg | £299.99
September 2010 | Acer Aspire 1830T | Core-i5 470UM 1.33GHz | 11.6" 1366x768 TFT | 8GB | 500GB | 284 | 203 | 28 | 1.4kg | $699.99 (~ £480)

The EEE didn't actually replace the Toshiba, but I mention it for completeness. It was actually the only machine I moved to the US with, but after about a month of it as my primary machine I realized it wasn't an option for day to day use - though it was fantastic as a machine to throw in an overnight bag, especially when coupled with a 3G dongle.

I wasn't keen on significantly increasing the size of my laptop. There are a number of decent 13" Ultrabook options out there, and I looked at a few of them, but nothing grabbed me as being worth the increase in size. Also I wanted something better than the Acer - one of the major problems was finding something smaller than 13" that had 8G RAM, let alone more. There's a significant trend towards everything soldered in for the smaller/slimmer notebooks, which makes some sense but means that the base spec had better be right.

Much to my surprise the Microsoft Surface Pro 2 looked like an option. It comes with an i5-4300U processor (at least since around Christmas 2013), and the 256/512G SSD models have 8G RAM. Screen resolution is an attractive true HD (1920x1080) and the 10" display means it's smaller than the Aspire. Unfortunately the keyboard lets it down. It's fine given a flat surface, but not great if you want to support the whole thing on your lap. Which is something I tend to do with my laptop, whether that's on the sofa, or in bed, or on a bus/train.

Another option was the Sony Vaio Pro 11. This is a pretty sweet laptop (I managed to get to play with one at a Sony store in the US). Super slim and light. 8GB RAM. True HD screen. However I have bad memories of the build quality of the older Vaios, and the fact that there were /no/ user replaceable parts put me off - it's a safe bet that a laptop battery is going to need replacing in a 3 year lifespan.

What I managed to find, and purchase, was a Dell Latitude E7240. I admit that the Dell brand made me wary - while I've not had any issue with their desktops I didn't associate their laptops with being particularly high quality. Mind you, I could say the same for Acer and I've been very pleased with the Aspire (if they'd had a more up to date model I'd have bought it). I bought the E7240 with the Core-i5 4300U (so the same as the Surface Pro 2) and True HD touch screen. It has a replaceable battery, expandable RAM (up to 16G) and the storage is an mSATA SSD. It also came with a built in 3G card. At 12.5" it's a little bigger than my old machine, but I decided that was a reasonable idea given the higher resolution. I'm typing this article on it now, having finally completed the setup and migration of the data from the old laptop to it this evening. More details once I've been using it for a little bit, I think.

February 11, 2014 10:31 PM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002. Since then I had made 6 or 7 aborted attempts at reading it to completion: each time, life suddenly got busy and just took over, which meant that I put the book down and didn't pick it up again until things were less hectic some time later and I started again.

Anyhow, I finally beat the book a few nights ago. My comprehension of it was pretty low, but at least it is done. Just shows I need to read lots more, given how little went in.


February 06, 2014 10:40 PM

Andrew Savory

Login problems on Mac OS X Snow Leopard

These are notes from a tech support call with my parents last night, saved here for the next time stuff breaks.

If you’re running Mac OS X Snow Leopard (and possibly other versions), you may find you can’t log in. Symptoms are:

After searching the interwebs I found Fixing a Mac OSX Leopard Login Loop Caused by Launch Services. It seems the problem is caused by corrupted cache files (which could be caused by the computer shutting down abruptly, or may just be “one of those things” that happens from time to time). This gave me enough information to come up with these “easy” steps to resolve it:

  1. Log in to the Mac as a different user*
  2. Press cmd-space to open Spotlight, type “Terminal”, and click on the Terminal application.
  3. Work out the broken user’s username by typing: ls /Users and look for the appropriate broken account name e.g. franksmith or janedoe.
  4. Find out the user ID of the user from the previous step by typing: id -u janedoe which will print a number something like 501
  5. Delete the user’s broken cache files. In the following command, be sure to substitute the correct username (in place of janedoe) and the correct user ID after the 023 (in place of the 501): su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023501.*' (be very careful with this, you don’t want to delete the wrong things).
  6. Test by logging in to the troublesome user account.
Note that if you had any apps configured to launch at login, you may need to re-add these.
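For reference, steps 3 to 5 boil down to the following at the Terminal (with janedoe and 501 standing in for the real account name and user ID, as above):

ls /Users
id -u janedoe
su -l janedoe -c 'rm /Library/Caches/com.apple.LaunchServices-023501.*'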

* This makes me think it’s good practice when setting up a Mac to always set up an extra user account, just in case stuff breaks.

by Andrew at February 06, 2014 12:05 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising. As a result I've gained loads of weight over the winter and, it turns out, become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become. This is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500 but Amazon put the price up from £149.99 the day I was going to buy it to £199.99.

I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, and I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit - which is still my longer term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and new set of map data was available for download. I installed it thinking I wasn't going to get any new features from it, as Mio had released some new models, but it turns out that the new firmware actually enables a feature (amongst other things - they also tidied up the UI and sorted a few other bugs) that makes the device massively more useful: it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me. Although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you previously had to do an intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in all, it turns out that buying the Mio - despite the reviews and forums being full of doom and gloom - means I can wait even longer before considering replacing it with a Garmin.


February 01, 2014 02:11 PM

January 18, 2014

Jonathan McDowell

Fixing my parents' ADSL

I was back at my parents' over Christmas, like usual. Before I got back my Dad had mentioned they'd been having ADSL stability issues. Previously I'd noticed some issues with keeping a connection up for more than a couple of days, but it had got so bad he was noticing problems during the day. The eventual resolution isn't going to surprise anyone who's dealt with these things before, but I went through a number of steps to try and improve things.

Firstly, I arranged for a new router to be delivered before I got back. My old Netgear DG834G was still in use and while it didn't seem to have been the problem I'd been meaning to get something with 802.11n instead of the 802.11g it supports for a while. I ended up with a TP-Link TD-W8980, which has dual band wifi, ADSL2+, GigE switch and looked to have some basic OpenWRT support in case I want to play with that in the future. Switching over was relatively simple and as part of that procedure I also switched the ADSL microfilter in use (I've seen these fail before with no apparent cause).

Once the new router was up I looked at trying to get some line statistics from it. Unfortunately although it supports SNMP I found it didn't provide the ADSL MIB, meaning I ended up doing some web scraping to get the upstream/downstream sync rates/SNR/attenuation details. Examination of these over the first day indicated an excessive amount of noise on the line. The ISP offer the ability in their web interface to change the target SNR for the line. I increased this from 6db to 9db in the hope of some extra stability. This resulted in a 2Mb/s drop in the sync speed for the line, but as this brought it down to 18Mb/s I wasn't too worried about that.
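To give a flavour of the scraping involved, the general shape was something like the following - a sketch rather than the actual script, with the URL, login details and field names invented, since the TD-W8980's real status pages will differ:

# Poll the router's ADSL status page and append the sync/SNR/attenuation
# lines to a log for later graphing (all details below are placeholders)
date >> ~/adsl-stats.log
curl -s -u admin:password http://192.168.1.1/status/adsl.html \
    | grep -E 'Sync|SNR|Attenuation' >> ~/adsl-stats.log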

Watching the stats for a further few days indicated that there were still regular periods of excessive noise, so I removed the faceplate from the NTE5 master BT socket, removing all extensions from the line. This resulted in regaining the 2Mb/s that had been lost from increasing the SNR target, and after watching the line for a few days confirmed that it had significantly decreased the noise levels. It turned out that the old external ringer that was already present on the line when my parents moved in was still connected, although it had stopped working some time previously. Also there was an unused and much spliced extension in place. Removing both of these and replacing the NTE5 faceplate led to a line that was still stable. At the time of writing the connection has been up since before the new year, significantly longer than it had managed for some time.

As I said at the start I doubt this comes as a surprise to anyone who's dealt with this sort of line issue before. It wasn't particularly surprising to me (other than the level of the noise present), but I went through each of the steps to try and be sure that I had isolated the root cause and could be sure things were actually better. It turned out that doing the screen scraping and graphing the results was a good way to verify this. Observe:

adsl-noise.png

The blue/red lines indicate the SNR for the upstream and downstream links - the initial lower area is when this was set to a 6db target, then later is a 9db target. Green are the forward error correction errors divided by 100 (to make everything fit better on the same graph). These are correctable, but still indicate issues. Yellow are CRC errors, indicating something that actually caused a problem. They can be clearly seen to correlate with the FEC errors, which makes sense. Notice the huge difference removing the extensions makes to both of these numbers. Also notice just how clear graphing the data makes things - it was easy to show my parents the graph and indicate how things had been improved and should thus be better.

January 18, 2014 06:22 AM

January 09, 2014

MJ Ray

Request for West Norfolk to Complete the PCC Consultation

Please excuse the intrusion to your usual software and co-op news items but vine seems broken and as part of my community and democratic interests, I’d like to share this short clip quoting Norfolk’s Deputy Police Commissioner Jenny McKibben about why Commissioner Stephen Bett believes it’s important to get views from the west of the county about next year’s police budget:

Personally, with a King’s Lynn + West Norfolk Bike Users Group hat on, I’d like it if people supported a 2% (£4/year average) tax increase to reduce the police’s funding cut (the grant from gov.uk is being cut by 4%) so that we’re less likely to have future cuts to traffic policing. The consultation details and response form are on the PCC website.

by mjr at January 09, 2014 01:19 PM

January 04, 2014

Brett Parker (iDunno)

Wow, I do believe Fasthosts have outdone themselves...

So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:

brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
   Name Server: NS1.FASTHOSTS.NET.UK
   Name Server: NS2.FASTHOSTS.NET.UK
Name Server: NS1.FASTHOSTS.NET.UK
Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
    Name servers:
        ns1.fasthosts.net.uk      213.171.192.252
        ns2.fasthosts.net.uk      213.171.193.248
brettp@laptop:~$  host -t ns fasthosts.net.uk 213.171.192.252
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 213.171.193.248
;; connection timed out; no servers could be reached
brettp@laptop:~$

So, that's Fasthosts' core nameservers not responding, good start! They also provide livedns.co.uk, so let's have a look at that:

brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
    Name servers:
        ns1.livedns.co.uk         213.171.192.250
        ns2.livedns.co.uk         213.171.193.250
        ns3.livedns.co.uk         213.171.192.254
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.193.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.254
;; connection timed out; no servers could be reached

So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!

by Brett Parker (iDunno@sommitrealweird.co.uk) at January 04, 2014 10:24 AM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of past year.

Album wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised - a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones, of Nine Stones Close fame, pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen, hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.
Gig of the year goes again to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger - or should that be Proggie? Never mind: Viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 09, 2013

MJ Ray

About Co-ops & Governance

There have been some dark days for UK coops recently – the crystal Methodist and all that – and I have not been able to talk about it much because of the amount of work that I want to do before the end of the year.

Happily good colleagues have been writing about it and here’s another good article from Kate Whittle that links to Ed Mayo and Ian Snaith who are the other two that I’d suggest.  http://www.cooperantics.coop/2013/12/09/co-ops-governance/

I should be back in a few days to summarise the event I attended last week.

by mjr at December 09, 2013 01:53 PM

December 03, 2013

Brett Parker (iDunno)

dd over ssh oddness

So, using the command:

root@new# ssh root@old dd if=/dev/vg/somedisk | dd of=/dev/vg/somedisk

appears to fail, getting a SIGTERM at some point for no discernable reason... however, using

root@old# dd if=/dev/vg/somedisk | ssh root@new dd of=/dev/vg/somedisk

works fine.

The pull version fails at a fairly random point after a fairly undefined period of time. The push version works every time. This is most confusing and odd...

Dear lazyweb, please give me some new ideas as to what's going on, it's driving me nuts!

Update: solved...

A different daemon wasn't limiting its killing habits in the case that a certain process wasn't running, and was killing the ssh process on the new server almost at random. I found the bug in the code and am now testing with that.

Thanks for all the suggestions though, much appreciated.

by Brett Parker (iDunno@sommitrealweird.co.uk) at December 03, 2013 10:59 AM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I've made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils' work stored on a network drive so they always have access whichever machine they sit at, this isn't exactly helpful. It also isn't ideal that they can explore the C: drive in spite of profile restrictions (although it isn't the end of the world as there is little they can do from Scratch).

save-orig

After a bit of time with Google I found the answer, and since it didn't immediately leap out at me when I was searching I thought I'd post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine; just edit the scratch.ini file. This is, as would be expected, located in:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

ini-orig

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

ini-new

They do exactly what is says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

save-new1

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

save-new2

You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example if you use a * in the directory path it is replaced by the name of the currently logged on user.

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven't tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]

by Paul Tansom at December 01, 2013 07:00 PM

Code Club Christmas Capers

There have been a couple of false starts in publishing the Christmas special Code Club project, Christmas Capers, this year. Since I am planning to use it at my last Code Club of this term, which is on Tuesday (much to the disappointment of my 'Codeclubbers'), I have been keen to get it tested. Unfortunately, although the course notes were circulated, the resources haven't quite made it yet, so I decided to see what I could do.

First thing I noted, having gone through my past emails, was that it was used last year as well (unfortunately I don't seem to have a copy). The link on the original Code Club blog is no longer working sadly, however there was hope that resources would be out there somewhere. After a bit of searching I found a copy on the Scratch website that someone had uploaded, so I grabbed the resources from that and tested it so I was sure everything was there. I had a slight issue with the Jingle_Bells.mp3 file not being a supported format, but this seems to be down to something missing on my netbook as all is fine under both Windows 7 and Ubuntu Linux on my main machine.

So, for anyone looking for the resources, they are here in a full package including a copy of the project and course notes.

Keep up the good work fellow Code Club volunteers, and if anyone would like to pop along and encourage my Code Club recruits to blog a bit more, we are here. As a school governor with an interest in literacy as well as computing I'm trying to make it a bit cross curricular ;-)

Oh, and if there is anyone in the Portsmouth and surrounding area interested in meeting up, I'm hoping to get my act together and do something in the new year. Do get in touch.

by Paul Tansom at December 01, 2013 05:48 PM

April 07, 2013

Ben Francis

Introducing DRD Pi

I’m trying to build a Raspberry Pi powered robot based on the DRDs from Farscape, and I thought I’d blog my progress.

DRD

DRDs or “Diagnostic Repair Drones” are robots from the cult science fiction series Farscape. They carry out various functions aboard a leviathan (a species of living biomechanoid spaceship) including repairing and maintaining the ship. They’re ovoid in shape and they have two moving eye stalks and all sorts of tools like a robotic claw and a plasma welder.

DRD

Here’s some video footage from the series to give you an idea of what these little guys get up to:

DRD Kit

The original DRDs were designed and built by the extremely talented folks at the Jim Henson Creature Shop in London (yes Jim Henson as in the Muppets!). They built lots of different variations of the robot over the years to be used in shooting different scenes for the show, but to my knowledge they’ve never released any designs.

I assumed I was going to have to painstakingly design a 3D computer model of one based on frame grabs from my DVDs of the series. I then planned to track down someone with a CNC router and a vacuum forming machine and persuade them to let me use them. Either that or find someone with an industrial sized 3D printer!

Luckily I came across a special effects company in the US who sells a kit to build a model of a DRD. The model is made from hollow cast fiberglass and resin and comes with ribbed plastic for the eye stalks, eye pieces with clear lenses, two parts of a claw and some colourful wires to make it look the part.

drd_kit

The kit isn’t perfect. The size, shape and proportions aren’t quite right and the finish is a bit rough but it’s good enough for my purposes. The part I’m really interested in is the robotics so I’m grateful that someone has already done the work for me on the basic shell.

The web site provides video tutorials on how to build the model and then how to put LEDs in the eyes and mount a remote controlled car underneath to make it move about in a bit of a crude fashion.

We can be a bit more sophisticated than that.

Raspberry Pi

The Raspberry Pi is a credit-card sized computer developed in the UK by the not-for-profit Raspberry Pi Foundation to promote the teaching of programming in schools. It’s a single-board computer with a 700MHz ARM processor and 512MB RAM, boots off an SD card and costs only around £30.

This is my Raspberry Pi:

raspberry_pi

Gertboard

The Gertboard is an expansion board which attaches to the Raspberry Pi via its GPIO pins and helps when experimenting with interfacing the Pi with the outside world. It comes with an Arduino compatible AVR microcontroller, analogue to digital converters, digital to analogue converters, a motor controller, push buttons, LEDs and much more.

gertboard

Booting the Pi

The Raspberry Pi can boot Linux from an SD card and the most popular distribution is Raspbian which is a Debian-derivative. You can download an image and flash it to an SD card, or even buy an SD card with it already loaded.

To boot the Raspberry Pi all you need to do is insert your Raspbian SD card, plug it into a TV via either the HDMI port or the composite video port and power it up by plugging it into a Micro USB phone charger.

Here’s my Raspberry Pi booted and plugged into an old CRT TV:

raspberry_pi_and_tv

Logging In Remotely

It’s cool that I can plug the Raspberry Pi into a TV, but I don’t want to be squinting at an old portable TV or sitting in the lounge next to my big flatscreen TV all the time I’m programming the robot, so I want to be able to log in remotely. Also, my plan is to build a web interface to control the robot over WiFi, so it’s going to need to connect to a network at some point.

First I plugged a USB keyboard into the Raspberry Pi and an ethernet cable to connect it to my network. The SSH daemon is already started by default, but I wanted to set a static IP address so that I always knew what to log in to.

I logged into the Raspberry Pi locally (the default username is pi and the password is raspberry) and edited the network configuration using the vi text editor.

$ sudo vi /etc/network/interfaces

I provided the following configuration to assign a static IP address of 192.168.1.42 on my local network:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 192.168.1.42
netmask 255.255.255.0
gateway 192.168.1.1

Then restart the network interface with:

$ sudo ifdown -a
$ sudo ifup -a

Then check that I’m connected to the network and the Internet by pinging Google.

$ ping google.com

I see that I’m successfully connected, so I can now log into the Raspberry Pi remotely using its new static IP.

From my desktop Linux box I type:

$ ssh pi@192.168.1.42

type in the password “raspberry”, and voilà! I’m logged in.

ssh

What’s Next

I hope you weren’t expecting to see a finished robot! There’s a very long way to go yet.

If you desperately wanted to see a finished robot, here’s a picture of the last one I worked on, a line following robot we built at university powered by a PIC microcontroller.

BEAST

Next I want to start playing around with the Gertboard and making LEDs blink on and off from Python.
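
To give a rough idea of what that first experiment might look like, here's a minimal sketch using the RPi.GPIO Python library. The choice of BCM pin 25, and the assumption that an LED is wired to it on the Gertboard, are just for illustration rather than anything from the Gertboard manual:

# Minimal sketch: blink an LED from Python using the RPi.GPIO library.
# The BCM pin number and the Gertboard wiring are assumptions for illustration.
import time
import RPi.GPIO as GPIO

LED_PIN = 25  # assumed GPIO pin with an LED attached

GPIO.setmode(GPIO.BCM)         # use Broadcom GPIO numbering
GPIO.setup(LED_PIN, GPIO.OUT)  # drive the pin as an output

try:
    for _ in range(10):
        GPIO.output(LED_PIN, GPIO.HIGH)  # LED on
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)   # LED off
        time.sleep(0.5)
finally:
    GPIO.cleanup()  # release the GPIO pins on exit

It needs to be run with sudo, since RPi.GPIO accesses the GPIO hardware directly.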

by tola at April 07, 2013 12:14 AM

February 22, 2013

Joe Button

Sampler plugin for the baremetal LV2 host

I threw together a simple sampler plugin for kicks. Like the other plugins it sounds fairly underwhelming. Next challenge will probably be to try plugging in some real LV2 plugins.

February 22, 2013 11:22 PM

February 21, 2013

Joe Button

Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of this one here, messed with the code a bit and voilà (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:

When I say "LV2 instrument plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point but it will be a while before you can directly plug LV2s into this and expect them to just work.

February 21, 2013 04:05 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn’t found by the printers dialog in System Settings. I thought I’d done all the normal things to get Samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane “bcast host lmhosts wins” (having host and wins, neither of which I’m using, first in the order cocks things up somewhat). Every time I tried to search for the printer in the System Settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora specific, so it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is the more traditional approach: use the older printer dialog, which is accessed by running system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: point your favourite browser at http://localhost:631/. It should be there as long as CUPS is installed, which it is in LinuxMint by default.

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects Gnome 3, so as long as it’s not affecting Unity that will be okay, won’t it Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare for an album to come along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time, it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is whether that is a reflection on the state of music, on how I experience music, or just on me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through it all and find artists and music that really speak to you. Luckily, there are websites available that catalogue releases by artists, so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical music releases on CD or vinyl online and have them delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp, or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would then sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to – because I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM

July 21, 2012

James Taylor

Now Your All Dreams Will Going To Become Reality with Your Own Home Business

Post removed as was spam through aggregator.

July 21, 2012 03:08 AM

June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

October 28, 2011

Ben Francis

Introducing Krellian, Webian's New Sponsor

Reposted from webian.org

Firstly, thanks for the incredible continued contributions from the Webian community and for all the work you've done on Webian Shell, which has now had more than 95,000 downloads!

Introducing Krellian

This week I left my job as Product Manager of Clinked at Rabbitsoft to start a software consultancy called Krellian.

Through Krellian I will be able to continue to lead the Webian project, and I will also be taking up a new contract with the Mozilla Corporation to work with them on Boot to Gecko (B2G).

Like me and the other members of the Webian community, Mozilla believes that the open web can displace proprietary, single-vendor stacks for application development. The B2G project will include prototype APIs for exposing device and OS capabilities to web content, a privilege model to safely expose these new capabilities, a complete "low-level substrate" for Android-compatible devices and a collection of web apps to prioritise and prove the power of the platform.

Benefits to Webian

The potential benefits for Webian are enormous. Webian Shell was already hitting limitations of what is currently possible with Mozilla Chromeless and this new work on the core Mozilla platform promises to make many more of Webian's goals possible. While B2G initially focuses on the mobile space, Webian can focus on nettop and netbook form factors and perhaps eventually the two projects could even converge.

Sponsorship from Krellian will provide the ongoing resources necessary for running the Webian project and ensure that it remains free and open source.

I'm excited about this new chapter in Webian's story and believe more strongly than ever in the future of the open web.

by tola at October 28, 2011 11:34 PM

October 15, 2011

David Reynolds

Git Workflow

I’ve been using this for a while and had it recorded on a private wiki. I was just tidying up my hosting account and thought I’d get rid of the wiki and store any useful info from it on my blog.

Clone full subversion history into git repository (warning, may take a long time depending on how many commits you have in your Subversion repository).

$ git-svn clone -s http://example.com/my_subversion_repo local_dir

-s signifies trunk/ branches/ tags/ exist in the svn repo (standard repository setup)

Create branch for local changes and check it out

$ git checkout -b XXXX-description # where XXXX is a ticket number

Make my changes in the branch… Make my commits in the branch…

Change back to master branch

$ git checkout master

Merge branch as one commit to master

$ git merge --squash XXXX-description

Commit changes to master branch:

$ git commit -a

Push changes back to svn:

$ git svn dcommit

Resync the local branch to master:

$ git checkout XXXX-description
$ git rebase master

October 15, 2011 02:12 PM

March 20, 2011

James Taylor

Continual Integration Development and the Cloud

One of the big buzz phrases at the moment seems to be Continual Integration Development. If you’re developing and wanting to deploy ‘as the features are ready’, and you have a cloud, you have two main options, both of which have their pros and cons:

New Image per Milestone

Most cloud systems work by you taking an ‘image’ of a pre-set-up machine, then booting new instances of that image. Each time you get to a milestone, you build a new image and then set up your auto-scaling system to launch instances of it rather than the old one, but you have to shut down all the old instances and bring them back up as new ones.

Pros: The machines come up in the new state quickly.
Cons: For each deployment you have to do quite a bit more work making the new image, and each deployment requires shutting down all the old instances and bringing up new replacements.

SCM Pull on Boot

Make one image and give it access to your SCM (e.g. git, svn etc.). Build in a boot process that brings up the service but also fetches the most recent copy of the ‘live’ branch.

Pros: You save a lot of time in deployments – deployments are triggered by people committing to the live branch, rather than by system administrators performing them. Because the machines are running the SCM, updating all the currently running instances is as simple as running the fetch procedure again.
Cons: You do need to maintain two branches, a live and a dev branch, and merge between them (some SCMs might not like this). Also, your SCM hosting has to be able to cope with the load when lots of new machines get added. Your machines also come up a little slower as they have to do the fetch before they are usable.

I opted for the second route: we use Git, so we can clone quickly to the right branch. We’ve also added git hooks that make sure any setup procedures (copying the right settings file in) are done when the computer comes up. Combining this with a Fabric script to update all the currently running boxes is a dream.
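
As a very rough illustration of that boot-time fetch (the repository path, branch name and launch command below are placeholders rather than our real setup), the update step can be as small as:

# Sketch of a boot-time update step: pull the latest 'live' branch, then start
# the service. REPO_DIR, BRANCH and the run command are illustrative placeholders.
import subprocess

REPO_DIR = "/srv/app"   # placeholder: wherever the image keeps its checkout
BRANCH = "live"         # placeholder: the branch that gets deployed

def update_checkout():
    # Bring the local checkout into line with the remote live branch.
    subprocess.check_call(["git", "fetch", "origin"], cwd=REPO_DIR)
    subprocess.check_call(["git", "checkout", BRANCH], cwd=REPO_DIR)
    subprocess.check_call(["git", "reset", "--hard", "origin/" + BRANCH], cwd=REPO_DIR)

def start_service():
    # Placeholder for whatever actually launches the application.
    subprocess.check_call(["/srv/app/run.sh"])

if __name__ == "__main__":
    update_checkout()
    start_service()

Hook something like that into the image's init scripts and every freshly booted instance comes up on the latest live code without an administrator having to touch it.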


March 20, 2011 11:20 AM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM