Planet ALUG

September 25, 2017

Chris Lamb

Lintian: We are all Perl developers now

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers.

I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check.

Anyway, I recently uploaded version 2.5.53, about two months after the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy, as well as the addition of checks to encourage the migration to Python 3.
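
For example, running the new release against a source package that ships a Python 2 module with no Python 3 counterpart should produce something like this (the package name and exact output are illustrative, not from a real run):

$ lintian python-foo_1.2-1.dsc
W: python-foo source: python-foo-but-no-python3-foo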

Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:

lintian (2.5.53) unstable; urgency=medium

  The "we are all Perl developers now" release.

  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra

  * checks/apache2.pm:
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing
      periods (".") in dependency names.  (Closes: #873701)
  * checks/binaries.pm:
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the
      description of the binary-file-built-without-LFS-support tag.
      (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream
      source tarball signatures as they will never match by definition.
      (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature
      from "E:" to "W:" as many common tools do not make including the
      signatures easy enough right now.  (Closes: #870722, #870069)
    + [CL] Expand the explanation of the
      orig-tarball-missing-upstream-signature tag to include the location
      of where dpkg-source will look. Thanks to Theodore Ts'o for the
      suggestion.
  * checks/copyright-file.pm:
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO
        standard numbers and matching specific and general street
        addresses.  (Closes: #869788)
      - Match all violating years in a line, not just the first (eg.
        "2000-2107").
      - Ignore meta copyright statements such as "Original Author". Thanks
        to Thorsten Alteholz for the bug report.  (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder
      tag from "important" (ie. "E:") to "wishlist" (ie. "I:").
      Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead
      of "substring" in mentions-deprecated-usr-lib-perl5-directory's
      description.  (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders.
      (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always
      deliberate. Thanks to Adrian Bunk for the report.  (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser
      Selector".  (Closes: #874381)
  * checks/debhelper.pm:
    + [CL] Prevent a false positive of
      missing-build-dependency-for-dh_-command that can be exposed by
      following the advice for the recently added
      useless-autoreconf-build-depends tag.  (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks
      for the "Automatically generated by debmake" template.
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces
      tag. Thanks to Taylor Kline for the report and patch.
      (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for
      auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not
      just python2-foo.  (Closes: #870272)
    + [RG] No longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser.
      (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather
      than "oldlibs/extra".  The related tag has been renamed accordingly.
  * checks/filename-length.pm:
    + [NT] Skip the check on auto-generated binary packages (such as
      dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks
      to Carnë Draug.  (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in
      init.d-script-needs-depends-on-lsb-base.  (Closes: #847144)
  * checks/java.pm:
    + [CL] Additionally consider .cljc files as code to avoid false-
      positive codeless-jar warnings.  (Closes: #870649)
    + [CL] Drop problematic missing-classpath check.  (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry
      for "Link" and "Directory" .desktop files.  (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from the "scripts" check into a new
      check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages
      to assist in Python 2.x deprecation.  (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only.
      (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the
      Python 2 and Python 3 versions of Sphinx.  (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x.
      (Closes: #870822)
  * checks/scripts.pm:
    + [CL] Correct false positives in
      unconditional-use-of-dpkg-statoverride by detecting "if !" as a
      valid shell prefix.  (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in
      maintainer scripts.  (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a
      dependency after its split from debianutils.  (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that
      nodejs provides /usr/bin/node.  (Closes: #873096)
    + [BR] Add a statistics tag giving the interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field
      to debian/control as it is added when needed by dpkg-source(1)
      since dpkg 1.17.1.  (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header
      in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite.
      (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite.
      (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that
      "autopkgtest" is not the only valid value; the referenced URL
      is out-of-date (filed as #876008).  (Closes: #876003)

  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime,
      libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of
      openjpeg2.  (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] Map node- packages to the "javascript" section.
    + [CL] Apply patch from Guillem Jover to add more section mappings.
      (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd.  (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for
      "bacula-director".  (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter
      filename requirements.  (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions.
      (Closes: #875509)

  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from
      debian/tests/control existing.

  * commands/info.pm:
    + [CL] Add a --list-tags option to print all tags Lintian knows about.
      Thanks to Rajendra Gokhale for the suggestion.  (Closes: #779675)
  * commands/lintian.pm:
    + [CL] Apply patch from Maia Everett to avoid British spelling when
      using en_US locale.  (Closes: #868897)

  * lib/Lintian/Check.pm:
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops
      for @packages.debian.org addresses.  (Closes: #871575)
  * lib/Lintian/Collect/Binary.pm:
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/Data.pm:
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion
      order, dropping dependency on libtie-ixhash-perl.

  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29
      outputting symbols in a different format on ppc64el.
      (Closes: #869750)

  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via
      the Google Font API. Thanks to Ian Jackson for the report.
      (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via
      the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make
      development/testing workflow less error-prone.

  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the
      abbreviated object name. However, as this can be user-dependent,
      pass --abbrev=0 to ensure it does not vary between systems.  This
      also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use deb.debian.org as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now
      namespaced by distribution (eg. "main").

 -- Chris Lamb <lamby@debian.org>  Wed, 20 Sep 2017 09:25:06 +0100

September 25, 2017 04:26 PM

September 15, 2017

Chris Lamb

Which packages on my system are reproducible?

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[…]

$ reproducible-check
[…]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion) <https://tests.reproducible-builds.org/debian/subversion>
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla) <https://tests.reproducible-builds.org/debian/taglib>
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk) <https://tests.reproducible-builds.org/debian/tcltk-defaults>
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6) <https://tests.reproducible-builds.org/debian/tk8.6>
W: valgrind (1:3.13.0-1) is unreproducible <https://tests.reproducible-builds.org/debian/valgrind>
W: wavpack (5.1.0-2) is unreproducible (libwavpack1) <https://tests.reproducible-builds.org/debian/wavpack>
W: x265 (2.5-2) is unreproducible (libx265-130) <https://tests.reproducible-builds.org/debian/x265>
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0) <https://tests.reproducible-builds.org/debian/xen>
W: xmlstarlet (1.6.1-2) is unreproducible <https://tests.reproducible-builds.org/debian/xmlstarlet>
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core) <https://tests.reproducible-builds.org/debian/xorg-server>
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework.
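
Under the hood this amounts to fetching the framework's JSON status export and comparing it against the packages installed locally. A rough sketch of the core query (the reproducible.json endpoint is the one the framework publishes; the field names used here are assumptions for illustration):

$ curl -s https://tests.reproducible-builds.org/debian/reproducible.json \
    | jq -r '.[] | select(.status != "reproducible") | "\(.package) (\(.version))"'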



The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas <leamas.alec@gmail.com>
   lirc (U)

Alessandro Ghedini <ghedo@debian.org>
   valgrind

Alessio Treglia <alessio@debian.org>
   fluidsynth (U)
   libsoxr (U)
[…]


reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017.

September 15, 2017 08:29 AM

September 02, 2017

Daniel Silverstone (Kinnison)

F/LOSS activity, August 2017

Shockingly enough, my focus started out on Gitano once more. We managed a 1.1 release of Gitano during the Debian conference's "camp", which occurs in the week before the conference. This was a joint effort of myself, Richard Maw, and Richard Ipsum. I have to take my hat off to Richard Maw, because without his dedication to features, 1.1 would lack the ruleset support for basic readers/writers which Richard Ipsum proposed, and frankly it would be a weaker release without it.

Because of the debconf situation, we didn't have a Gitano developer day which, while sad, didn't slow us down much...

Not Gitano

Of course, not everything I did in August was Gitano related. In fact once I had completed the 1.1 release and uploaded everything to Debian I decided that I was going to take a break from Gitano until the next developer day. (In fact there's even some patch series still unread on the mailing list which I will get to when I start the developer day.)

I have long been interested in STM32 microcontrollers, using them in a variety of projects including the Entropy Key, which some of you may remember. Jorge Aparicio was working on Cortex-M3 support (among other microcontrollers) in Rust, and he then extended that to include a realtime framework called RTFM; from there I got interested in what I might be able to do with Rust on STM32. I noticed that there weren't any pure Rust implementations of the USB device stack, which would be necessary in order to make a device, programmed in Rust, appear on a USB port for a computer to control/use. This piqued my interest.

As many of my readers are aware, I am very bad at doing things without some external motivation. As such, I then immediately offered to give a talk at a conference which should be happening in November, just so that I'd be forced to get on with learning and implementing the stack. I have been chronicling my work on this blog, and you're encouraged to go back and read those posts if you have similar interests. I'm sure that as my work progresses, I'll be doing more and more of that and less of Gitano, for at least the next two months.

To bring that into context as F/LOSS work, I did end up submitting some patches to Jorge's STM32F103xx repository to support a couple more clock configuration entries so that USB and ADCs can be set up cleanly. So at least there's that.

by Daniel Silverstone at September 02, 2017 10:00 AM

August 31, 2017

Jonathan McDowell

C, floating point, and help!

Floating point is a pain. I know this. But I recently took over the sigrok packages in Debian and, as part of updating to the latest libsigrok4 library, enabled the post-compilation tests. Which promptly failed on i386. Some narrowing down of the problem leads to the following test case (which fails on both gcc-6 under Debian/Stretch and gcc-7 on Debian/Testing):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
        printf("%" PRIu64 "\n", (uint64_t)((1.034567) * (uint64_t)(1000000ULL)));
}

We expect to see 1034567 printed out. On x86_64 we do:

$ arch
x86_64
$ gcc -Wall t.c -o t ; ./t
1034567

If we compile for 32-bit the result is also as expected:

$ gcc -Wall -m32 t.c -o t ; ./t
1034567

Where things get interesting is when we enable --std=c99:

$ gcc -Wall --std=c99 t.c -o t ; ./t
1034567
$ gcc -Wall -m32 --std=c99 t.c -o t ; ./t
1034566

What? It turns out all the cXX standards result in the last digit incorrectly being 6, while the gnuXX standards (gnu11 is apparently the default) result in the correct trailing 7. Is there some postfix I can add to the value to prevent the floating point truncation taking place? Or do I just have to accept this? It works fine on armel, so it’s not a simple 32/64 bit issue.
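
For what it's worth, one way to sidestep the truncation in this particular expression is to round rather than truncate: a cast truncates toward zero, so a product that lands a hair under 1034567 loses the last digit, whereas C99's llround() rounds to nearest. A sketch of that workaround (compile with -lm):

#include <inttypes.h>
#include <math.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
        /* llround() rounds to the nearest integer rather than truncating
           toward zero, so 1034566.999... still comes out as 1034567 */
        printf("%" PRIu64 "\n", (uint64_t)llround(1.034567 * 1000000.0));
}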

August 31, 2017 04:58 PM

August 30, 2017

Daniel Silverstone (Kinnison)

STM32 USB and Rust - Packet Memory Area

In this, our next exciting installment of STM32 and Rust for USB device drivers, we're going to look at what the STM32 calls the 'packet memory area'. If you've been reading along with the course, including reading up on the datasheet content, then you'll be aware that as well as the STM32's normal SRAM, there's a 512 byte SRAM dedicated to the USB peripheral. This SRAM is called the 'packet memory area' and is shared between the main bus and the USB peripheral core. Its purpose is, simply, to store packets in transit: both those IN to the host (stored, queued for transmission) and those OUT from the host (stored, queued for the application to extract and consume).

It's time to actually put hand to keyboard on some Rust code, and the PMA is the perfect starting point, since it involves two basic structures. Packets are the obvious first structure, and they are contiguous sets of bytes which for the purpose of our work we shall assume are one to sixty-four bytes long. The second is what the STM32 datasheet refers to as the BTABLE or Buffer Descriptor Table. Let's consider the BTABLE first.

The Buffer Descriptor Table

The BTABLE is arranged in quads of 16bit words. For "normal" endpoints this is a pair of descriptors, each consisting of two words, one for transmission, and one for reception. The STM32 also has a concept of double buffered endpoints, but we're not going to consider those in our proof-of-concept work. The STM32 allows for up to eight endpoints (EP0 through EP7) in internal register naming, though they support endpoints numbered from zero to fifteen in the sense of the endpoint address numbering. As such there're eight descriptors each four 16bit words long (eight bytes) making for a buffer descriptor table which is 64 bytes in size at most.

Buffer Descriptor Table

    Byte offset in PMA   Field name      Description
    (EPn * 8) + 0        USB_ADDRn_TX    The address (inside the PMA) of the TX buffer for EPn
    (EPn * 8) + 2        USB_COUNTn_TX   The number of bytes present in the TX buffer for EPn
    (EPn * 8) + 4        USB_ADDRn_RX    The address (inside the PMA) of the RX buffer for EPn
    (EPn * 8) + 6        USB_COUNTn_RX   The number of bytes of space available for the RX buffer
                                         for EPn (and, once a packet has been received, the number
                                         of bytes actually received)

The TX entries are trivial to comprehend. To transmit a packet, part of the process involves writing the packet into the PMA, putting the address into the appropriate USB_ADDRn_TX entry, and the length into the corresponding USB_COUNTn_TX entry, before marking the endpoint as ready to transmit.

To receive a packet though is slightly more complex. The application must allocate some space in the PMA, setting the address into the USB_ADDRn_RX entry of the BTABLE before filling out the top half of the USB_COUNTn_RX entry. For ease of bit sizing, the STM32 only supports space allocations of two to sixty-two bytes in steps of two bytes at a time, or thirty-two to five-hundred-twelve bytes in steps of thirty-two bytes at a time. Once the packet is received, the USB peripheral will fill out the lower bits of the USB_COUNTn_RX entry with the actual number of bytes filled out in the buffer.

Packets themselves

Since packets are, typically, a maximum of 64 bytes long (for USB 2.0) and are simply sequences of bytes with no useful structure to them (as far as the USB peripheral itself is concerned) the PMA simply requires that they be present and contiguous in PMA memory space. Addresses of packets are relative to the base of the PMA and are byte-addressed, however they cannot start on an odd byte, so essentially they are 16bit addressed. Since the BTABLE can be anywhere within the PMA, as can the packets, the application will have to do some memory management (either statically, or dynamically) to manage the packets in the PMA.

Accessing the PMA

The PMA is accessed in 16bit word sections. It's not possible to access single bytes of the PMA, nor is it conveniently structured as far as the CPU is concerned. Instead the PMA's 16bit words are spread on 32bit word boundaries as far as the CPU knows. This is done for convenience and simplicity of hardware, but it means that we need to ensure our library code knows how to deal with this.

First up, to convert an address in the PMA into something which the CPU can use we need to know where in the CPU's address space the PMA is. Fortunately this is fixed at 0x4000_6000. Secondly we need to know what address in the PMA we wish to access, so we can determine which 16bit word that is, and thus what the address is as far as the CPU is concerned. If we assume we only ever want to access 16bit entries, we can just multiply the PMA offset by two before adding it to the PMA base address. So, to access the 16bit word at byte-offset 8 in the PMA, we'd look for the 16bit word at 0x4000_6000 + (0x08 * 2) => 0x4000_6010.
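
That mapping is small enough to be worth capturing in a helper; here's an illustrative sketch (the names are mine, not from any real crate):

    // Base of the PMA in the CPU's address space.
    const PMA_BASE: usize = 0x4000_6000;

    // Convert a byte offset within the PMA into the CPU address of the
    // 16bit word holding it: each 16bit PMA word occupies a 32bit slot
    // from the CPU's point of view, hence the doubling.
    fn pma_word_address(offset: usize) -> usize {
        PMA_BASE + (offset * 2)
    }

    // pma_word_address(0x08) == 0x4000_6010, matching the example above.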

Bundling the PMA into something we can use

I said we'd do some Rust, and so we shall…

    // Thanks to the work by Jorge Aparicio, we have a convenient wrapper
    // for peripherals which means we can declare a PMA peripheral:
    pub const PMA: Peripheral<PMA> = unsafe { Peripheral::new(0x4000_6000) };

    // The PMA struct type which the peripheral will return a ref to
    pub struct PMA {
        pma_area: PMA_Area,
    }

    // And the way we turn that ref into something we can put a useful impl on
    impl Deref for PMA {
        type Target = PMA_Area;
        fn deref(&self) -> &PMA_Area {
            &self.pma_area
        }
    }

    // This is the actual representation of the peripheral, we use the C repr
    // in order to ensure it ends up packed nicely together
    #[repr(C)]
    pub struct PMA_Area {
        // The PMA consists of 256 u16 words separated by u16 gaps, so let's
        // represent that as 512 u16 words of which we'll only use every other one.
        words: [VolatileCell<u16>; 512],
    }

That block of code gives us three important things. Firstly a peripheral object which we will be able to (later) manage nicely as part of the set of peripherals which RTFM will look after for us. Secondly we get a convenient packed array of u16s which will be considered volatile (the compiler won't optimise around the ordering of writes etc). Finally we get a struct on which we can hang an implementation to give our PMA more complex functionality.

A useful first pair of functions would be to simply let us get and put u16s in and out of that word array, since we're only using every other word…

    impl PMA_Area {
        pub fn get_u16(&self, offset: usize) -> u16 {
            assert!((offset & 0x01) == 0);
            self.words[offset].get()
        }

        pub fn set_u16(&self, offset: usize, val: u16) {
            assert!((offset & 0x01) == 0);
            self.words[offset].set(val);
        }
    }

These two functions take an offset in the PMA and return the u16 word at that offset. They only work on u16 boundaries and as such they assert that the bottom bit of the offset is unset. In a release build, that will go away, but during debugging this might be essential. Since we're only using 16bit boundaries, this means that the first word in the PMA will be at offset zero, and the second at offset two, then four, then six, etc. Since we allocated our words array to expect to use every other entry, this automatically converts into the addresses we desire.

If we pop (and please don't worry about the unsafe{} stuff for now):

    unsafe { (&*usb::pma::PMA.get()).set_u16(4, 64); }

into our main function somewhere, and then build and objdump our test binary we can see the following set of instructions added:

 80001e4:   f246 0008   movw    r0, #24584  ; 0x6008
 80001e8:   2140        movs    r1, #64 ; 0x40
 80001ea:   f2c4 0000   movt    r0, #16384  ; 0x4000
 80001ee:   8001        strh    r1, [r0, #0]

This boils down to a u16 write of 0x0040 (64) to the address 0x4000_6008, which is the third 32 bit word in the CPU's view of the PMA memory space (where offset 4 is the third 16bit word), which is exactly what we'd expect to see.

We can, from here, build up some functions for manipulating a BTABLE, though the most useful ones for us to take a look at are the RX counter functions:

    pub fn get_rxcount(&self, ep: usize) -> u16 {
        self.get_u16(BTABLE + (ep * 8) + 6) & 0x3ff
    }

    pub fn set_rxcount(&self, ep: usize, val: u16) {
        assert!(val <= 1024);
        let rval: u16 = {
            if val > 62 {
                assert!((val & 0x1f) == 0);
                (((val >> 5) - 1) << 10) | 0x8000
            } else {
                assert!((val & 1) == 0);
                (val >> 1) << 10
            }
        };
        self.set_u16(BTABLE + (ep * 8) + 6, rval)
    }

The getter is fairly clean and clear, we need the BTABLE base in the PMA, add the address of the USB_COUNTn_RX entry to that, retrieve the u16 and then mask off the bottom ten bits since that's the size of the relevant field.

The setter is a little more complex, since it has to deal with the two possible encodings. This isn't pretty, and we might be able to write some better peripheral structs in the future, but for now: if the length we're setting is 62 or less, and is divisible by two, then we put a zero in the top bit and the number of 2-byte lumps at bits 14:10; if it's 64 or more, we mask off the bottom bits to check it's divisible by 32, put the count (minus one) of those 32-byte blocks there instead, and set the top bit to mark it as such.

Fortunately, when we set constants, Rust's compiler manages to optimise all this very quickly. For a BTABLE at the bottom of the PMA, and an initialisation statement of:

    unsafe { (&*usb::pma::PMA.get()).set_rxcount(1, 64); }

then we end up with the simple instruction sequence:

80001e4:    f246 001c   movw    r0, #24604  ; 0x601c
80001e8:    f44f 4104   mov.w   r1, #33792  ; 0x8400
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]

We can decompose that into a C like *((u16*)0x4000_601c) = 0x8400 and from there we can see that it's writing to the u16 at 0x1c bytes into the CPU's view of the PMA, which is 14 bytes into the PMA itself. Since we know we set the BTABLE at the start of the PMA, it's 14 bytes into the BTABLE, which is firmly in the EP1 entries; specifically it's USB_COUNT1_RX, which is what we were hoping for. To confirm this, check out page 651 of the datasheet.

The value set was 0x8400, which we can decompose into 0x8000 and 0x0400. The first is the top bit and tells us that BL_SIZE is one, and thus the blocks are 32 bytes long. Next, the 0x0400: shifting it right ten places gives the value 1 for the NUM_BLOCK field, and with BL_SIZE set that allocates (1 + 1) * 32 = 64 bytes, exactly the size we asked to set for the RX buffer. It has done just what we hoped it would, and the compiler managed to optimise it into a single 16 bit store of a constant value to a constant location. Nice and efficient.

Finally, let's look at what happens if we want to write a packet into the PMA. For now, let's assume packets come as slices of u16s because that'll make our life a little simpler:

    pub fn write_buffer(&self, base: usize, buf: &[u16]) {
        for (ofs, v) in buf.iter().enumerate() {
            self.set_u16(base + (ofs * 2), *v);
        }
    }

Yes, even though we're deep in no_std territory, we can still get an iterator over the slice, and enumerate it, getting a nice iterator of (index, value) though in this case, the value is a ref to the content of the slice, so we end up with *v to deref it. I am sure I could get that automatically happening but for now it's there.

Amazingly, despite using iterators, enumerators, high level for loops, function calls, etc, if we pop:

    unsafe { (&*usb::pma::PMA.get()).write_buffer(0, &[0x1000, 0x2000, 0x3000]); }

into our main function and compile it, we end up with the instruction sequence:

80001e4:    f246 0000   movw    r0, #24576  ; 0x6000
80001e8:    f44f 5180   mov.w   r1, #4096   ; 0x1000
80001ec:    f2c4 0000   movt    r0, #16384  ; 0x4000
80001f0:    8001        strh    r1, [r0, #0]
80001f2:    f44f 5100   mov.w   r1, #8192   ; 0x2000
80001f6:    8081        strh    r1, [r0, #4]
80001f8:    f44f 5140   mov.w   r1, #12288  ; 0x3000
80001fc:    8101        strh    r1, [r0, #8]

which, as you can see, ends up being three sequential halfword stores directly to the right locations in the CPU's view of the PMA. You have to love seriously aggressive compile-time optimisation :-)

Hopefully, by next time, we'll have layered some more pleasant routines on our PMA code, and begun a foray into the setup necessary before we can begin handling interrupts and start turning up on a USB port.

by Daniel Silverstone at August 30, 2017 04:15 PM

August 27, 2017

Jonathan McDowell

On my way home from OMGWTFBBQ

I started writing this while sitting in Stansted on my way home from the annual UK Debian BBQ. I’m finally home now, after a great weekend catching up with folk. It’s a good social event for a bunch of Debian folk, and I’m very grateful that Steve and Jo continue to make it happen. These days there are also a number of generous companies chipping in towards the cost of food and drink, so thanks also to Codethink and QvarnLabs AB for the food, Collabora and Mythic Beasts for the beer and Chris for the coffee. And Rob for chasing us all for contributions to cover the rest.

I was trying to remember when the first one of these I attended was; trawling through mail logs there was a Cambridge meetup that ended up at Steve’s old place in April 2001, and we’ve consistently had the summer BBQ since 2004, but I’m not clear on what happened in between. Nonetheless it’s become a fixture in the calendar for those of us in the UK (and a number of people from further afield who regularly turn up). We’ve become a bit more sedate, but it’s good to always see a few new faces, drink some good beer (yay Milton), eat a lot and have some good conversations. This year also managed to get me a SheevaPlug so I could investigate #837989 - a bug with OpenOCD not being able to talk to the device. Turned out to be a channel configuration error in the move to new style FTDI support, so I’ve got that fixed locally and pushed the one line fix upstream as well.

August 27, 2017 06:42 PM

August 02, 2017

Mick Morgan

a letter to our dear home secretary

Dear Amber

So, “real people” don't care about privacy? All they really want is ease of use and a pretty GUI so that they can chat to all their friends on-line? Only “the enemy” (who is that exactly, anyway?) needs encryption? Excuse me for asking, but what have you been smoking? Does the Home Office know about that?

I’m a real person. And I care deeply about privacy. I care enough to fund both my own Tor node and various openVPN servers dotted around the world just to get past your ludicrous attempts at gratuitous surveillance of my (and my family’s) routine use of the ‘net. I care about the security and privacy of my transactions with various commercial enterprises, including my bank (which is why I expect them to use TLS on their website). I care about privacy when I correspond with my Doctor and other professionals. I care about privacy when I use an on-line search engine (which, incidentally, is not Google). I care about privacy because privacy matters. I have the right to freedom of thought and expression. I have the right to discuss those thoughts with others of my choice – when I choose and how I choose. You may not like that, but it’s a fact of life. That doesn’t make me “the enemy”. Get over it.

Love and Kisses

Mick

(Note to readers: Aral Balkan has deconstructed Rudd’s ramblings. I commend the article to you.)

by Mick at August 02, 2017 02:56 PM

July 10, 2017

Steve Engledow (stilvoid)

An evening of linux on the desktop

Last time, I wrote about trying a few desktop environments to see what's out there, keep things fresh, and keep me from complacency. Well, as with desktop environments, so with text editors. I decided briefly that I would try a few of the more recent code editors that are around these days. Lured in by their pleasing, modern visuals and their promises of a smooth, integrated experience, I'd been meaning to give these a go for a while. Needless to say, as a long-time vim user, I just found myself frustrated that I wasn't able to get things done as efficiently in any of those editors as I could in vim ;) I tried installing vim keybindings in Atom but it just wasn't the same, as only a very limited set of functionality was there. As for the integrated environment, when you have tmux running by default, everything's integrated anyway.

And, as with editors, so once again with desktop environments. I've decided to retract my previous hasty promise and no longer to bother with trying any other environments; i3 is more than fine :)

However, I did spend some time this evening making things a bit prettier so here are some delicious configs for posterity:

Configs

Xresources

I've switched back to xterm from urxvt because, er... dunno.

Anyway, I set some nice colours for terminals and some magic stuff that makes man pages all colourful :)

XTerm*faceName: xft:Hack:regular:size=12
*termName: xterm-256color

! Colourful man pages
*VT100.colorBDMode:     true
*VT100.colorBD:         cyan
*VT100.colorULMode:     true
*VT100.colorUL:         darkcyan
*VT100.colorITMode:     true
*VT100.colorIT:         yellow
*VT100.veryBoldColors:  518

! terminal colours
*foreground:#CCCCCC
*background:#2B2D2E

!black darkgray
*color0:    #2B2D2E
*color8:    #808080
!darkred red
*color1:    #FF0044
*color9:    #F92672
!darkgreen green
*color2:    #82B414
*color10:   #A6E22E
!darkyellow yellow
*color3:    #FD971F
*color11:   #E6DB74
!darkblue blue
*color4:    #266C98
*color12:   #7070F0
!darkmagenta magenta
*color5:    #AC0CB1
*color13:   #D63AE1
!darkcyan cyan
*color6:    #AE81FF
*color14:   #66D9EF
!gray white
*color7:    #CCCCCC
*color15:   #F8F8F2

Vimrc

Nothing exciting here except for discovering a few options I hadn't previously known about:

" Show a marker at the 80th column to encourage nice code
set colorcolumn=80
highlight ColorColumn ctermbg=darkblue

" Scroll the text when we're 3 lines from the top or bottom
set so=3

" Use browser-style incremental search
set incsearch

" Override the default background colour in xoria256 to match the terminal background
highlight Normal ctermbg=black

" I like this theme
colorscheme xoria256

i3

I made a few colour tweaks to my i3 config so I get colours that match my new Xresources. One day, I might see if it's easy enough to have them both read colour definitions from the same place so I don't have to define things twice.

The result

Here's what it looks like:

My new desktop

by Steve Engledow (steve@offend.me.uk) at July 10, 2017 10:14 PM

June 15, 2017

Steve Engledow (stilvoid)

The day of linux on the desktop

It's been a while since I last tried out a different desktop environment on my laptop and I've been using i3 for some time now so it's only fair to give other things a go ;)

To test these out, I ran another X display - keeping my original one running so I could switch back and forth to take notes - and started each environment with DISPLAY=:1 <the command to start the desktop>.

I'll start with just one today and perhaps review some others another time.

Deepin

In summary: bits of Gnome Shell, Chrome OS, and Mac OSX but not quite as polished as any of them.

The Deepin Desktop Environment (DDE - from the Deepin distribution) installed easily enough under Arch with a quick pacman -S deepin deepin-extra. It also started up easily with an unambiguous startdde.

Immediately on startup, DDE plays a slightly annoying chime presumably just to remind you of how far we've come since Windows 95. The initial view of the desktop looks similar to OSX or Chrome OS with file icons on the desktop and a launcher bar centred across the bottom of the screen.

The initial view

The first thing I tried was clicking on a button labelled "Multitasking view" only to be presented with a prompt telling me "Kindly reminder: This application can not run without window effect" and an OK button. So far, so enigmatic. So then I tried a trusty right-click on the desktop which brought up the expected context menu. In the menu was a "Display settings" option so I plumped for that, thinking that perhaps that was where I could enable the mystic "window effect". Clicking the "Display settings" button opened a dark-themed panel from the right-hand side, similar to the information panel you get in OSX. I searched through that panel for a good couple of minutes but could find no allusion to any "window effect".

The cryptic message and the settings panel

Unperturbed, I decided to press on and see what other features Deepin had to offer...

Moving the mouse around the desktop a bit, I discovered that Deepin has borrowed some ideas from Gnome shell as well as OSX and Chrome OS. Moving the mouse pointer into the top-left corner of the screen brings up an application list similar to Gnome's launcher. The bottom-right corner reveals the settings panel. The top-right does nothing and the bottom-left, wonder of wonders, brings up my old favourite, the "kindly reminder".

I poked around in the settings a bit more but didn't really see anything of interest so I fired up what looks to be the last part of Deepin left for me to explore: the file manager. It does the job and it's not very interesting, although I did discover that Deepin also has its own terminal emulator (unsurprisingly called deepin-terminal) which has a snazzy Matrix theme to it but is otherwise uninteresting.

Deepin-terminal

That's it, I'm bored. Next!

I tried Budgie and LXQT for a few minutes each at this point but they weren't immediately interesting enough to make me want to write about them just now :)

by Steve Engledow (steve@offend.me.uk) at June 15, 2017 02:41 AM

June 06, 2017

Mick Morgan

it is now

Back in January 2011, I posted a brief note about a site hosted at the domain “ismycomputeroff.com“. I have just had occasion to look again at that site and found that the domain is now definitely off. It is parked at Sedo and is up for sale at the ludicrous price of 599 euros.

Tell you what, you can have my “theinternetisoff.net” domain for the bargain price of half that – after all, it only cost me about a tenner.

by Mick at June 06, 2017 02:35 PM

March 01, 2017

Brett Parker (iDunno)

Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6-only connectivity and the file storage is NFS over a private v4 network. The proxy will happily redirect http or https requests to the Pi, but without turning on the Proxy Protocol this results in the proxy servers' addresses showing up as the remote address in your logs, which is not entirely useful.

I've cheated a bit, because turning on the Proxy Protocol for the hostedpi.com addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got https://pi3.sommitrealweird.co.uk/ mapped to this Pi).

So, first things first: we get our RPi and make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyway, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. With apache2 I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
       Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
       Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, now we deploy haproxy infront of it, the basic /etc/haproxy/haproxy.cfg config is:

global
       log /dev/log    local0
       log /dev/log    local1 notice
       chroot /var/lib/haproxy
       stats socket /run/haproxy/admin.sock mode 660 level admin
       stats timeout 30s
       user haproxy
       group haproxy
       daemon

       # Default SSL material locations
       ca-base /etc/ssl/certs
       crt-base /etc/ssl/private

       # Default ciphers to use on SSL-enabled listening sockets.
       # For more information, see ciphers(1SSL). This list is from:
       #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
       ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
       ssl-default-bind-options no-sslv3

defaults
       log     global
       mode    http
       option  httplog
       option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
       errorfile 400 /etc/haproxy/errors/400.http
       errorfile 403 /etc/haproxy/errors/403.http
       errorfile 408 /etc/haproxy/errors/408.http
       errorfile 500 /etc/haproxy/errors/500.http
       errorfile 502 /etc/haproxy/errors/502.http
       errorfile 503 /etc/haproxy/errors/503.http
       errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::80
        default_backend any_http

backend any_http
        server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy-protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1

CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.

So, you can now happily visit http://www.<your-pi-name>.hostedpi.com/, e.g. http://www.srwpi.hostedpi.com/.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!), once you've got dehydrated installed, you'll probably want to tweak it a bit, I have some magic extra files that I use, I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

/etc/dehydrated/conf.d/mail.sh:

CONTACT_EMAIL="my@email.address"

/etc/dehydrated/conf.d/domainconfig.sh:

DOMAINS_D="/etc/dehydrated/domains.d"

/etc/dehydrated/domains.d/srwpi.hostedpi.com:

HOOK="/etc/dehydrated/hooks/srwpi"

/etc/dehydrated/hooks/srwpi:

#!/bin/sh
action="$1"
domain="$2"

case $action in
  deploy_cert)
    privkey="$3"
    cert="$4"
    fullchain="$5"
    chain="$6"
    cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
    chmod 640 /etc/ssl/private/srwpi.pem
    ;;
  *)
    ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

www.srwpi.hostedpi.com srwpi.hostedpi.com

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem and /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem files, do the edit for the CustomLog as you did for the other default site, and change the VirtualHost to be [::1]:443.
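
The relevant lines in default-ssl.conf then end up looking something like this (paths as above):

SSLCertificateFile      /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/fullchain.pem
SSLCertificateKeyFile   /var/lib/dehydrated/certs/www.srwpi.hostedpi.com/privkey.pem

Then enable the site: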

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
        option httplog
        option forwardfor

        acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
        tcp-request connection expect-proxy layer4 if is_from_proxy

        bind :::443 ssl crt /etc/ssl/private/srwpi.pem

        default_backend any_https

backend any_https
        server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 06:35 PM

Ooooooh! Shiny!

Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!

by Brett Parker (iDunno@sommitrealweird.co.uk) at March 01, 2017 03:12 PM

October 18, 2016

MJ Ray

Rinse and repeat

Forgive me, reader, for I have sinned. It has been over a year since my last blog post. Life got busy. Paid work. Another round of challenges managing my chronic illness. Cycle campaigning. Fun bike rides. Friends. Family. Travels. Other social media to stroke. I’m still reading some of the planets where this blog post should appear and commenting on some, so I’ve not felt completely cut off, but I am surprised how many people don’t allow comments on their blogs any more (or make it too difficult for me with reCaptcha and the like).

The main motive for this post is to test some minor upgrades, though. Hi everyone. How’s it going with you? I’ll probably keep posting short updates in the future.

Go in peace to love and serve the web. 🙂

by mjr at October 18, 2016 04:28 AM

May 30, 2016

Wayne Stallwood (DrJeep)

UPS for Octopi or Octoprint

So it only took one mid-print power cut to realise I need a UPS for my 3D printer.

It's even worse for a machine like mine with an E3D all-metal head, as it requires active cooling to prevent damage to the head mount or a right mess of molten filament inside the heatbreak.

See below for instructions on setting up an APC UPS so that it can send a command to octopi to abort the print and start cooling the head before the batteries in the UPS are exhausted.

I used an APC Back-UPS Pro 550, which seems to be about the minimum spec I can get away with. On my printer this gives me approximately 5 minutes of print time without power, or 40 minutes with the printer powered but idle. Other UPSes would work, but APC is the only type tested with these instructions.

Test this thoroughly and make sure you have enough runtime to cool the head before the batteries are exhausted; the only way to do this properly is to set up a test print and pull the power.

Once you have installed the power leads to and from the UPS and got the printer powered through it (not forgetting that the RPi or whatever you have running OctoPrint also needs power... mine is powered via the printer PSU), you need to install apcupsd; it's in the default repo for Raspbian so just install it with apt.

sudo apt-get install apcupsd

Now we need to tweak apcupsd's configuration a bit

Edit the apcupsd configuration as follows; you can find it at /etc/apcupsd/apcupsd.conf, just use your favourite editor.

Find and change the following lines:

UPSCABLE smart
UPSTYPE usb
DEVICE (this should be blank)
BATTERYLEVEL 50
MINUTES 5

You might need to tweak BATTERYLEVEL and MINUTES for your printer and UPS. This is the percentage of power left before the shutdown will trigger, or the minutes of runtime remaining, whichever happens first.

Remember this is minutes as calculated whilst the printer is still running. Once the print is stopped the runtime will be longer as the heaters will be off, so setting 5 minutes here would in my case give me 20 minutes of runtime for the hot-end to cool once the print has aborted.

Plug the USB cable from the UPS into a spare port on the Rpi

Now activate the service by editing /etc/default/apcupsd and changing the following line

ISCONFIGURED=yes

Now start the service; it will start by itself on the next boot.

sudo service apcupsd start

If all is well, typing apcaccess at the prompt should get you some stats from the UPS: battery level etc.

If that's all good then apcupsd is configured. Now for the script that aborts your print.

First go into the OctoPrint settings from the web interface, make sure API access is turned on, and record the API key carefully.

Back on the rpi go to the home directory

cd ~

Now download my custom shutdown script with wget

wget http://www.digimatic.co.uk/media/doshutdown
sudo cp doshutdown /etc/apcupsd
cd /etc/apcupsd

Set the permissions so the script can run

chmod 755 doshutdown

Don't be tempted to rename the file, leave it as this name

Now edit the script and change the API_KEY variable at the top to the API key you got from your copy of OctoPrint earlier.

That should be it. The script does three things when the power fails and the battery goes below one of the trigger points:

Prints a warning on the printer's LCD screen

Records the current printer status and print file position to a file in /home/pi, so that maybe you can work out how to slice the remainder of the model and save the print

Aborts the print
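
For the curious, the abort itself amounts to a single call to OctoPrint's REST API; a sketch of the equivalent command, assuming OctoPrint is reachable on localhost and using the API key recorded above:

curl -s -X POST http://localhost/api/job \
    -H "X-Api-Key: $API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"command": "cancel"}'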

This hasn't had a massive amount of testing and there are a few bugs. If you have a really big layer going on when the power goes, you might not have enough power to make it to the end, as OctoPrint only aborts at specific points in the print. The same applies if you are in the first stages and are heating the bed: OctoPrint will wait until the bed is up to temperature before running the next command (the abort).

The sleep at the end of the script stops the RPi from shutting down. We need to wait and make sure the printer has taken the abort command before killing the Pi, and since that's an unknown amount of time, I leave it running by sleeping indefinitely.

If I get time I will make a proper octoprint plugin for all this

May 30, 2016 08:13 PM

April 02, 2016

Wayne Stallwood (DrJeep)

Simple USB2 Host Switch

Initially created for the BigBox 3D printer to allow use of both the Internal Raspberry Pi running Octoprint and the rear mounted USB port for diagnostic access. The Rumba has only one USB port and can only be attached to one of these at a time.

However this circuit will work in any other scenario where you want to be able to switch between USB Hosts.

Plug a Host PC or other host device into port X1 and the device you want to control into Port X3, everything should work as normal.

Plug an additional powered host PC or other host device into port X2, and the host plugged into port X1 should be disconnected in preference to this device, which should now be connected to the device plugged into port X3.

Please note: in many cases, particularly with devices that are bus-powered like memory sticks, the device will not function if there is no powered host PC plugged into port X1.

April 02, 2016 07:38 PM

June 11, 2015

MJ Ray

Mick Morgan: here’s why pay twice?

http://baldric.net/2015/06/05/why-pay-twice/ asks why the government hires civilians to monitor social media instead of just giving GCHQ the keywords. Us cripples aren’t allowed to comment there (physical ability test) so I reply here:

It’s pretty obvious that they have probably done both, isn’t it?

This way, they’re verifying each other. Politicians probably trust neither civilians nor spies completely, and that makes it worth paying twice for this.

Unlike lots of things that they seem to want not to pay for at all…

by mjr at June 11, 2015 03:49 AM

March 09, 2015

Ben Francis

Pinned Apps – An App Model for the Web

(re-posted from a page I created on the Mozilla wiki on 17th December 2014)

Problem Statement

The per-OS app store model has resulted in a market where a small number of OS companies have a large amount of control, limiting choice for users and app developers. In order to get things done on mobile devices users are restricted to using apps from a single app store which have to be downloaded and installed on a compatible device in order to be useful.

Design Concept

Concept Overview

The idea of pinned apps is to turn the apps model on its head by making apps something you discover simply by searching and browsing the web. Web apps do not have to be installed in order to be useful, “pinning” is an optional step where the user can choose to split an app off from the rest of the web to persist it on their device and use it separately from the browser.

Pinned_apps_overview

”If you think of the current app store experience as consumers going to a grocery store to buy packaged goods off a shelf, the web is more like a hunter-gatherer exploring a forest and discovering new tools and supplies along their journey.”

App Discovery

A Web App Manifest linked from a web page says “I am part of a web app you can use separately from the browser”. Users can discover web apps simply by searching or browsing the web, and use them instantly without needing to install them first.

Pinned_apps_discovery

”App discovery could be less like shopping, and more like discovering a new piece of inventory while exploring a new level in a computer game.”

App Pinning

If the user finds a web app useful they can choose to split it off from the rest of the web to persist it on their device and use it separately from the browser. Pinned apps can provide a more app-like experience for that part of the web with no browser chrome and get their own icon on the homescreen.

[Image: Pinned apps pinning]

“For the user pinning apps becomes like collecting pin badges for all their favourite apps, rather than cluttering their device with apps from an app store that they tried once but turned out not to be useful.”

Deep Linking

Once a pinned app is registered as managing its own part of the web (defined by URL scope), any time the user navigates to a URL within that scope, it will open in the app. This allows deep linking to a particular page inside an app and seamlessly linking from one app to another.
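
Purely as an illustration (this helper and its names are mine, not part of the proposal), a user agent’s scope check could look something like this, assuming a scope is a path prefix on the app’s origin:

 // Hypothetical sketch: does a navigated URL fall inside a pinned app's scope?
 function isInScope(url, appOrigin, scopePath) {
   var u = new URL(url);
   return u.origin === appOrigin && u.pathname.indexOf(scopePath) === 0;
 }

 // isInScope('https://app.example.com/inbox/42', 'https://app.example.com', '/inbox')
 // returns true, so the URL would open in the pinned app rather than the browser.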

[Image: Pinned apps linking]

“The browser is like a catch-all app for pages which don’t belong to a particular pinned app.”

Going Offline

Pinning an app could download its contents to the device to make it work offline, by registering a Service Worker for the app’s URL scope.
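
The registration half of this already exists in the Service Worker API; a minimal sketch (the script name and scope here are illustrative) might be:

 // Register a Service Worker to control the app's URL scope.
 navigator.serviceWorker.register('/app.js', { scope: '/' }).then(
   function(registration) {
     console.log('registered with scope ' + registration.scope);
   },
   function(error) {
     console.error('registration failed ' + error);
   }
 );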

[Image: Pinned apps offline]

“Pinned apps take pinned tabs to the next level by actually persisting an app on the device. An app pin is like an anchor point to tether a collection of web pages to a device.”

Multiple Pages

A web app is a collection of web pages dedicated to a particular task. You should be able to have multiple pages of the app open at the same time. Each app could be represented in the task manager as a collection of sheets, pinned together by the app.

[Image: Pinned app pages]

“Exploding apps out into multiple sheets could really differentiate the Firefox OS user experience from all other mobile app platforms which are limited to one window per app.”

Travel Guide

Even in a world without app stores there would still be a need for a curated collection of content. The Marketplace could become less of a grocery store, and more of a crowdsourced travel guide for the web.

[Image: Pinned apps guide]

“If a user discovers an app which isn’t yet included in the guide, they could be given the opportunity to submit it. The guide could be curated by the community with descriptions, ratings and tags.”

3 Questions

[Image: Pinned apps pinned]

What value (the importance, worth or usefulness of something) does your idea deliver?

The pinned apps concept makes web apps instantly useful by making “installation” optional. It frees users from being tied to a single app store and gives them more choice and control. It makes apps searchable and discoverable like the rest of the web and gives developers the freedom of where to host their apps and how to monetise them. It allows Mozilla to grow a catalogue of apps so large and diverse that no walled garden can compete, by leveraging its user base to discover the apps and its community to curate them.

What technological advantage will your idea deliver and why is this important?

Pinned apps would be implemented with emerging web standards like Web App Manifests and Service Workers which add new layers of functionality to the web to make it a compelling platform for mobile apps. Not just for Firefox OS, but for any user agent which implements the standards.

Why would someone invest time or pay money for this idea?

Users would benefit from a unique new web experience whilst also freeing themselves from vendor lock-in. App developers can reduce their development costs by creating one searchable and discoverable web app for multiple platforms. For Mozilla, pinned apps could leverage the unique properties of the web to differentiate Firefox OS in a way that is difficult for incumbents to follow.

UI Mockups

App Search

[Image: Pinned apps search]

Pin App

[Image: Pin app]

Pin Page

[Image: Pin page]

Multiple Pages

[Image: Multiple pages]

App Directory

[Image: App directory]

Implementation

Web App Manifest

A manifest is linked from a web page with a link relation:

  <link rel="manifest" href="/manifest.json">

A manifest can specify an app name, icon, display mode and orientation:

 {
   "name": "GMail",
   "icons": {...},
   "display": "standalone",
   "orientation": "portrait",
   ...
 }

There is a proposal for a manifest to be able to specify an app scope:

 {
   ...
   "scope": "/"
   ...
 }

Service Worker

There is also a proposal to be able to reference a Service Worker from within the manifest:

 {
   ...
   "service_worker": {
     "src": "app.js",
     "scope": "/"
   },
   ...
 }

A Service Worker has an install event handler which can populate a cache with a web app’s resources when it is registered:

 this.addEventListener('install', function(event) {
   event.waitUntil(
     caches.create('v1').then(function(cache) {
       return cache.add(
         '/index.html',
         '/style.css',
         '/script.js',
         '/favicon.ico'
       );
     }, function(error) {
       console.error('error populating cache ' + error);
     })
   );
 });

So that the app can then respond to requests for resources when offline:

 this.addEventListener('fetch', function(event) {
   event.respondWith(
     caches.match(event.request).catch(function() {
       return event.default();
     })
   );
 });

by tola at March 09, 2015 03:54 PM

December 11, 2014

Ben Francis

The Times They Are A Changin’ (Open Web Remix)

In the run up to the “Mozlandia” work week in Portland, and in reflection of the last three years of the Firefox OS project, for a bit of fun I’ve reworked a Bob Dylan song to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along ;)

“Keep on rockin’ the free web” — Potch

by tola at December 11, 2014 11:26 AM

July 10, 2014

James Taylor

SSL / TLS

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


July 10, 2014 02:09 PM

Cloud Computing Deployments … Revisited.

So it’s been a few years since I’ve posted, because it’s been so much hard work and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyway, back on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII having much higher regulation on who can (and auditing of who does) access it.

Several other key things had to change: for instance, things like SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well… oh. If you keep *that* in, then you would keep your SSL certs in…

So the answer becomes having lots of repos. One repo per application (Django wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


July 10, 2014 02:09 PM

June 12, 2014

Paul Tansom

Beginning irc

After some discussion last night at PHP Hants about the fact that irc is a great facilitator of support / discussion, but largely ignored because there is rarely enough information for a new user to get going, I decided it may be worth putting together a howto type post, so here goes…

What is irc?

First of all, what on earth is it? I’m tempted to describe it as Twitter done right years before Twitter even existed, but I’m a geek and I’ve been using irc for years. It has a long heritage, but unlike the ubiquitous email it hasn’t made the transition into mainstream use. In terms of usage it has similarities to things like Twitter and Instant Messaging. Let’s take a quick look at this.

Twitter allows you to broadcast messages, they get published and anyone who is subscribed to your feed can read what you say. Everything is pretty instant, and if somebody is watching the screen at the right time they can respond straight away. Instant Messaging on the other hand, is more of a direct conversation with a single person, or sometimes a group of people, but it too is pretty instantaneous – assuming, of course, that there’s someone reading what you’ve said. Both of these technologies are pretty familiar to many. If you go to the appropriate website you are given the opportunity to sign up and either use a web based client or download one.

It is much the same for irc in terms of usage, although conversations are grouped into channels which generally focus on a particular topic rather than being generally broadcast (Twitter) or more specifically directed (Instant Messaging). The downside is that in most cases you don’t get a web page with clear instructions of how to sign up, download a client and find where the best place is to join the conversation.

Getting started

There are two things you need to get going with irc, a client and somewhere to connect to. Let’s put that into a more familiar context.

The client is what you use to connect with; this can be an application – so as an example Outlook or Thunderbird would be a mail client, and IE, Firefox, Chrome or Safari are examples of clients for web pages – or it can be a web page that does the same thing – so if you go to twitter.com and log in, you are using the web page as your Twitter client. Somewhere to connect to can be compared to a web address, or, if you’ve got close enough to the configuration of your email to see the details, your mail server address.

Let’s start with the ‘somewhere to connect to‘ bit. Freenode is one of the most popular irc servers, so let’s take a look. First we’ll see what we can find out from their website, http://freenode.net/.

[Image: freenode]

There’s a lot of very daunting information there for somebody new to irc, so ignore most of it and follow the Webchat link on the left.

[Image: webchat]

That’s all very well and good, but what do we put in there? I guess the screenshot above gives a clue, but if you actually visit the page the entry boxes will be blank. Well first off there’s the Nickname, this can be pretty much anything you like, no need to register it – stick to the basics of letters, numbers and some simple punctuation (if you want to), keep it short and so long as nobody else is already using it you should be fine; if it doesn’t work try another. Channels is the awkward one, how do you know what channels there are? If you’re lucky you’re looking into this because you’ve been told there’s a channel there and hopefully you’ve been given the channel name. For now let’s just use the PHP Hants channel, so that would be #phph in the Channels box. Now all you need to do is type in the captcha, ignore the tick boxes and click Connect and you are on the irc channel and ready to chat. Down the right you’ll see a list of who else is there, and in the main window there will be a bit of introductory information (e.g. topic for the channel) and depending on how busy it is anything from nothing to a fast scrolling screen of text.

[Image: phph]

If you’ve mistyped there’s a chance you’ll end up in a channel specially created for you because it didn’t exist; don’t worry, just quit and try again (I’ll explain that process shortly).

For now all you really need to worry about is typing in text and posting it; this is as simple as typing it into the entry box at the bottom of the page and pressing return. Be polite, be patient and you’ll be fine. There are plenty of commands that you can use to do things, but for now the only one you need to worry about is the one to leave. This is:

/quit

Type it in the entry box, press return and you’ve disconnected from the server. The next thing to look into is using a client program since this is far more flexible, but I’ll save that for another post.

by Paul Tansom at June 12, 2014 04:27 PM

May 06, 2014

Richard Lewis

Refocusing Ph.D

Actual progress on this Ph.D revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I failed to make that clear to myself. It was only at the writing up stage, when I was trying to put together a coherent argument, that I decided to try and make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, never mind that there was work to be done in examining the music information needs of music scholarship.

So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis), it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it), and it also allows me to work with the actual Purcell Plus project materials using the toolkit.

May 06, 2014 11:16 PM

March 27, 2014

Richard Lewis

Taking notes in Haskell

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post), I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types, and the operations over those types that comprise our workflow, is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

-- Type signatures inside the instance declarations below need this extension
{-# LANGUAGE InstanceSigs #-}

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components: MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs =
  if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense, especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.

March 27, 2014 11:13 PM

February 06, 2014

Adam Bower (quinophex)

I finally managed to beat my nemesis!

I purchased this book http://www.amazon.co.uk/dp/0738206679 (Linked, by Barabasi) on the 24th of December 2002, and had since made 6 or 7 aborted attempts at reading it to completion; each time life suddenly got busy and just took over, meaning I put the book down and didn't pick it up again until things were less hectic some time later, when I started again.

Anyhow, I finally beat the book a few nights ago; my comprehension of it was pretty low, but at least it is done. Just shows I need to read lots more, given how little went in.

February 06, 2014 10:40 PM

February 01, 2014

Adam Bower (quinophex)

Why buying a Mio Cyclo 305 HC cycling computer was actually a great idea.

I finally made it back out onto the bike today for the first time since September last year. I'd spent some time ill in October and November, which meant I had to stop exercising, and as a result I've gained loads of weight over the winter and, it turns out, become very unfit, which can be verified by looking at the Strava ride from today: http://www.strava.com/activities/110354158

Anyhow, a nice thing about this ride is that I can record it on Strava and get this data about how unfit I have become. This is because last year I bought a Mio Cyclo 305 HC cycle computer http://eu.mio.com/en_gb/mio-cyclo-305-hc.htm from Halfords, reduced to £144.50 (using a British Cycling discount). I was originally going to get a Garmin 500 but Amazon put the price up from £149.99 to £199.99 the day I was going to buy it.

I knew when I got the Mio that it had a few issues surrounding usability and features, but it was cheap enough at under £150 that I figured even if I didn't get on with it I'd at least have a cadence sensor and heart rate monitor, so I could just buy a Garmin 510 when they sorted out the firmware bugs with that and the price came down a bit, which is still my longer term intention.

So it turns out a couple of weeks ago I plugged my Mio into a Windows VM when I was testing USB support and carried out a check for new firmware. I was rather surprised to see a new firmware update and a new set of map data available for download. I installed it thinking I wasn't going to get any new features from it, as Mio had released some new models, but it turns out that the new firmware actually enables a feature (amongst other things: they also tidied up the UI, sorted a few other bugs and added some other features) that makes the device massively more useful, as it now also creates files in .fit format which can be uploaded directly to Strava.

This is massively useful for me because, although the Mio always worked in Linux (the device is essentially just a USB mass storage device), you previously had to do the intermediate step of using https://github.com/rhyas/GPXConverter to convert the files from the Mio-centric GPX format to something Strava would recognise. Now I can just browse to the folder and upload the file directly, which is very handy.

All in all, it turns out that buying the Mio, despite reviews and forums full of doom and gloom, means I can wait even longer before considering replacing it with a Garmin.

February 01, 2014 02:11 PM

January 01, 2014

John Woodard

A year in Prog!


It's New Year's Day 2014 and I'm reflecting on the music of the past year.

Album wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised, but a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant, not as good as their previous album but again a solid effort. Magenta as ever didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums in no particular order for me were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train with English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Also Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen, hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year this year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.


Gig wise, again Fish at the Junction in Cambridge was great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zealand, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year goes again to Marillion at Aylesbury Friars in November. I had waited thirty years and forty odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind, viva Progressive Rock!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 01, 2014 07:56 PM

December 01, 2013

Paul Tansom

Scratch in a network environment

I have been running a Code Club at my local Primary School for a while now, and thought it was about time I put details of a few tweaks I’ve made to the default Scratch install to make things easier. So here goes:

With the default install of Scratch (on Windows) projects are saved to the C: drive. For a network environment, with pupils’ work stored on a network drive so they always have access whichever machine they sit at, this isn’t exactly helpful. It also isn’t ideal that they can explore the C: drive in spite of profile restrictions (although it isn’t the end of the world, as there is little they can do from Scratch).

[Image: save-orig]

After a bit of time with Google I found the answer, and since it didn’t immediately leap out at me when I was searching I thought I’d post it here (perhaps my Google Fu was weak that day). It is actually quite simple, especially for the average Code Club volunteer I should imagine: just edit the scratch.ini file. This is, as would be expected, located at:

C:\Program Files\Scratch\Scratch.ini

Initially it looks like this:

[Image: ini-orig]

Pretty standard stuff, but unfortunately no comments to indicate what else you can do with it. As it happens you can add the following two lines (for example):

Home=U:
VisibleDrives=U:

To get this:

[Image: ini-new]

They do exactly what it says on the tin. If you click on the Home button in a file dialogue box then you only get the drive(s) specified. You can also put a full path in if you want to put the home directory further down the directory structure.

[Image: save-new1]

The VisibleDrives option restricts what you can see if you click on the Computer button in a file dialogue box. If you want to allow more visible drives then separate them with a comma.

[Image: save-new2]

You can do the same with a Mac (for the home drive), just use the appropriate directory format (i.e. no drive letter and the opposite direction slash).

There is more that you can do, so take a look at the Scratch documentation here. For example, if you use a * in the directory path it is replaced by the name of the currently logged on user.
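
For example, combining a wildcard in the home path with an extra visible drive (the paths here are made up):

Home=U:\*\scratch
VisibleDrives=U:,S: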

Depending on your network environment it may be handy for your Code Club to put the extra resources on a shared network drive and open up an extra drive in the VisibleDrives. One I haven’t tried yet is the proxy setting, which I hope will allow me to upload projects to the Scratch website. It goes something like:

ProxyServer=[server name or IP address]
ProxyPort=[port number]

by Paul Tansom at December 01, 2013 07:00 PM

February 22, 2013

Joe Button

Sampler plugin for the baremetal LV2 host

I threw together a simple sampler plugin for kicks. Like the other plugins it sounds fairly underwhelming. The next challenge will probably be to try plugging in some real LV2 plugins.

February 22, 2013 11:22 PM

February 21, 2013

Joe Button

Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of this one here, messed with the code a bit and voila (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:

When I say "LV2 instrument plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point, but it will be a while before you can directly plug LV2s into this and expect them to just work.

February 21, 2013 04:05 PM

January 16, 2013

John Woodard

LinuxMint 14 Add Printer Issue


I wanted to print from my LinuxMint 14 (Cinnamon) PC via a shared Windows printer on my network. Problem is, it isn’t found by the printers dialog in system settings. I thought I’d done all the normal things to get samba to play nice, like rearranging the name resolve order in /etc/samba/smb.conf to a more sane “bcast host lmhosts wins” (having host and wins, neither of which I’m using, first in the order cocks things up somewhat). Every time I tried to search for the printer in the system settings dialog it told me “FirewallD is not running. Network printer detection needs services mdns, ipp, ipp-client and samba-client enabled on firewall.” So much scratching of the head there then, because as far as I can tell there ain’t no daemon by that name available!
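
For reference, that setting lives in the [global] section of /etc/samba/smb.conf; the rearranged line described above looks like this:

[global]
   name resolve order = bcast host lmhosts wins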

It turns out, thanks to /pseudomorph, that this has been a bug since LinuxMint 12 (based on Ubuntu 11.10). It’s due to that particular daemon (Windows people: daemon pretty much = service) being Fedora specific, and it should have no place in a Debian/Ubuntu based distribution. Bugs of this nature really should be ironed out sooner.

Anyway, the simple fix is to use the more traditional approach: the older printer dialog, which is accessed by entering system-config-printer at the command line. That works just fine, so why the new (over a year old) printer config dialog that is inherently broken, I ask myself.

The CUPS web interface also works, apparently: visit http://localhost:631/ in your favourite browser. It should be there as long as CUPS is installed, which it is in LinuxMint by default.

So come on Minty people get your bug squashing boots on and stamp on this one please.

Update

Bug #871985 only affects Gnome 3, so as long as it’s not affecting Unity that will be okay, will it, Canonical!

by BigJohn (aka hexpek) (noreply@blogger.com) at January 16, 2013 12:39 AM

August 20, 2012

David Reynolds

On Music

Lately (well, I say lately; I think it’s been the same for a few years now) I have been finding that it is very rare that an album comes along that affects me in the way that music I heard 10 years ago seems to. That is not to say that I have not heard any music that I like in that time, it just doesn’t seem to mean as much as music that has been in my life for years. What I am trying to work out is if that is a reflection on the state of music, of how I experience music, or just me.

Buying

Buying music was always quite an experience. I would spend weeks, months and sometimes longer saving up to buy some new music. Whether I knew exactly what I wanted or just wanted “something else by this artist”, I would spend some time browsing the racks weighing up what was the best value for my money. In the days before the internet, if you wanted to research an artist’s back catalogue, you were generally out of luck unless you had access to books about the artists. This led to the thrill of finding a hidden gem in the racks that you didn’t know existed or had only heard rumours about. The anticipation of listening to the new music would build even more because I would have to wait until I had travelled home before I could listen to my new purchases.

Nowadays, with the dizzying amount of music constantly pumped into our ears through the internet, radio, advertising and the plethora of styles and genres, it is difficult to sift through and find artists and music that really speak to you. Luckily, there are websites available to catalogue releases by artists so you are able to do thorough research and even preview your music before you purchase it. Of course the distribution methods have changed massively too. No longer do I have to wait until I can make it to a brick and mortar store to hand over my cash. I can now not only buy physical musical releases on CD or Vinyl online and have it delivered to my door, I can also buy digital music through iTunes, Amazon or Bandcamp or even stream the music straight to my ears through services like Spotify or Rdio. Whilst these online sales avenues are great for artists to be able to sell directly to their fans, I feel that some of the magic has been removed from the purchasing of music for me.

Listening

Listening to the music used to be an even greater event than purchasing it. After having spent the time saving up for the purchase, then the time carefully choosing the music to buy and getting it home, I would sit myself down and listen to the music. I would immerse myself totally in the music and only listen to it (I might read the liner notes if I hadn’t exhausted them on the way home). It is difficult to imagine doing one thing for 45+ minutes without the constant interruptions from smartphones, tablet computers, games consoles and televisions these days. I can’t remember the last time I listened to music on good speakers or headphones (generally I listen on crappy computer speakers or to compressed audio on my iPhone through crappy headphones) without reading Twitter, replying to emails or reading copious amounts of information about the artists on Wikipedia. This all serves to distract from the actual enjoyment of just listening to the music.

Experience

The actual act of writing this blog post has called into sharp focus the main reason why music doesn’t seem to affect me nowadays as much as it used to: I don’t experience it in the same way. My life has changed, I have more responsibilities and less time to just listen, which makes the convenience and speed of buying digital music online much more appealing. You would think that this ‘instant music’ should be instantly satisfying but for some reason it doesn’t seem to work that way.

What changed?

I wonder if I am the only one experiencing this? My tastes in music have definitely changed a lot over the last few years, but I still find it hard to find music that I want to listen to again and again. I’m hoping I’m not alone in this; alternatively, I’m hoping someone might read this and recommend some awesome music to me and cure this weird musical apathy I appear to be suffering from.

August 20, 2012 03:33 PM

June 25, 2012

Elisabeth Fosbrooke-Brown (sfr)

Black redstarts

It's difficult to use the terrace for a couple of weeks, because the black redstart family is in their summer residence at the top of a column under the roof. The chicks grow very fast, and the parents have to feed them frequently; when anyone goes out on the terrace they stop the feeding process and click shrill warnings to the chicks to stay still. I worry that if we disturb them too often or for too long the chicks will starve.

Black redstarts are called rougequeue noir (black red-tail) in French, but here they are known as rossignol des murailles (nightingale of the outside walls). Pretty!

The camera needs replacing, so there are no photos of Musatelier's rossignols des murailles, but you can see what they look like on http://fr.wikipedia.org/wiki/Rougequeue_noir.

by sunflowerinrain (noreply@blogger.com) at June 25, 2012 08:02 AM

June 16, 2012

Elisabeth Fosbrooke-Brown (sfr)

Roundabout at Mirambeau

Roundabouts are taken seriously here in France. Not so much as traffic measures (though it has been known for people to be cautioned by the local gendarmes for not signalling when leaving a roundabout, and quite rightly too), but as places to ornament.

A couple of years ago the roundabout at the edge of Mirambeau had a make-over which included an ironwork arch and a carrelet (fishing hut on stilts). Now it has a miniature vineyard as well, and roses and other plants for which this area is known.

Need a passenger to take photo!

by sunflowerinrain (noreply@blogger.com) at June 16, 2012 12:06 PM

September 04, 2006

Ashley Howes

Some new photos

Take a look at some new photos my father and I have taken. We are experimenting with our new digital SLR with a variety of lenses.

by Ashley (noreply@blogger.com) at September 04, 2006 10:42 AM

August 30, 2006

Ashley Howes

A Collection of Comments

This is a bit of fun. A collection of comments found in code. This is from The Daily WTF.

by Ashley (noreply@blogger.com) at August 30, 2006 01:13 AM