Ubuntu on Mac

I recently rebuilt a Mac Mini to serve as the fourth screen in my workflow. I googled around and pieced together what I needed to do by cherry-picking from various guides, and everything was running well until I updated to a new kernel and rebooted. I spent the better part of two nights trying to get the machine to boot again.

Unfortunately, it happened just after I’d blacklisted a module to work around a USB bug that was occasionally causing one of my drives to go haywire, and it took a while before I figured out the problem wasn’t my change but the kernel itself. Macs use EFI for booting, which requires a cryptographically signed kernel. I was finally able to boot by following the first half of the instructions in this Ask Ubuntu answer: essentially, boot manually via GRUB and make sure to pick the signed kernel.

I noticed that I only had a signed image for an older version of the kernel. I dropped by #ubuntu-kernel and was pointed to the linux-signed-generic package. None of the guides I’d read had mentioned this package or its significance. Whenever the kernel images are updated, the signed versions are updated too, but you won’t get those images by default. The machine was trying to boot off an unsigned kernel, causing the boot sequence to freeze (with no indication as to why).

sudo apt-get update
sudo apt-get install linux-signed-generic
sudo reboot

A thorn with an easy fix: install the meta package, which will pull in the current signed image, and reboot.
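For reference, Ubuntu ships its signed kernels alongside the unsigned ones in /boot with an .efi.signed suffix. Here’s a tiny sketch of that naming convention (the helper function and the version string are hypothetical, purely for illustration):

```shell
# Hypothetical helper: given a kernel release (as printed by `uname -r`),
# build the signed image path that GRUB needs on an EFI Secure Boot machine.
signed_image() {
    echo "/boot/vmlinuz-$1.efi.signed"
}

signed_image "3.13.0-24-generic"
```

That prints /boot/vmlinuz-3.13.0-24-generic.efi.signed; if ls shows no such file for your running kernel, you’ve hit the freeze described above, and installing linux-signed-generic is the fix.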

[Crossposted from Adam Israel. If you'd like to comment, you can do so either here or there.]


American Serenade

Back in 2009, we were winding down our life in the U.S. We drastically culled things we didn’t want or need and put the rest into storage for the eventual move to Canada.

Andrea was home in August 2009; I was there on and off, until I was officially issued a visitor record in November 2010 and filed for permanent resident status. On that day, I entered the country with one large suitcase and a backpack containing some notebooks and my laptop. That’s it.

This weekend, my brother-in-law and I had an epic 30-hour adventure: driving a U-Haul cargo van from Ontario to Aurora, IL and back. Our choice of weekend was suspect from the start; most people thought it was crazy to do this on a weekend when terror alerts were escalated and traffic was sure to be horrible, but I had a plan.

We crossed the border Friday night around 9:30. It was John’s first time in the States, and he was greeted almost immediately by fireworks, like a dignitary being welcomed with festivities. After a minor detour that almost put us in Toledo, OH, we were back on schedule, driving across Michigan. Traffic was light, hindered only by various police-directed road closures. We got to my mom’s place around 4AM and were back out the door around 9:30.

I’d done some research on bringing our stuff back to Canada. By all accounts, a straightforward task. We unpacked almost everything in the storage unit, numbering the boxes and documenting their contents in a notebook. It was a lot of work on a sunny July day. I wore a tube sock as a bandana, to keep the sweat from my eyes, and we drank so much water that I had to run for refills at one point.

Andrea was pretty skeptical that we’d be able to fit everything into the cargo van, but it was far more expensive to rent and drive a larger truck that distance. Sure enough, we fit everything in, minus bed linens that didn’t fit any bed we own and a few other things that had long since been replaced.

It was about 4pm by the time we left the storage unit. I did what any good host would do and continued giving John a tour of the things Andrea and I were used to. A trip to the massive warehouse that is Woodman Food (part of the side-quest to find Vanilla Coke), and then a winding trek through downtown Aurora and the suburbs.

Our wanderings were accompanied by fireworks in every direction, from Aurora to the Indiana border. A good and proper showing for anyone’s first visit.

Another side-quest was to introduce him to as many different foods as possible, specifically from places we couldn’t find in Ontario. Considering the short time we had, we focused on smaller meals. Steak ‘n Shake at midnight, so busy that we had to wait for a table. IHOP, which despite the “International” in its name is far from it. Portillo’s for lunch; nothing beats a hot dog and cheese fries. Sonic for a drink and a tater tot snack.

So it comes to the accounting of things; the detritus of a life lived in privilege.

Most of the boxes we hauled were full of books. Reference books, textbooks, hardcovers and paperbacks, some of which I acquired in my early teens. Books by my instructors at Clarion 2010, and a few spoils from San Diego Comic-Con that same year.

All sorts of miscellanea: ham radios, tools, crocheted and quilted blankets. Cookware. Clothes. LEGO, Star Trek models, my childhood (and not-so-childhood) Transformers toys, and more nerdy/geeky knickknacks than anyone has a right to. Also, sixteen short boxes of comic books.

I even found a 20+ year old model of a Klingon Bird of Prey, which we put on the dashboard as a sort of mascot.

Clearly, we are adult-sized children with a love for the fantastic. Which saved our asses when we reached the Canadian border.

Apparently, I was supposed to have filed a B-4 form when I landed in Canada in late 2010, documenting what I was bringing with me and what I would bring over at some future date. No one had informed me of this, and as a result, we faced paying import duties on everything we’d brought, as if it were new.

We pulled in for secondary inspection around 3:30AM. Four officers approached, one coming to my open window. He reiterated the issue of not having filed a B-4, and the potentially heavy penalty. And then things got interesting. At the mention of the comic books, he asked which series we had. And then which series of fantasy novels. And which generation of Transformers toys, and whether they were US or Japanese issue.

This went on for ten minutes, an exploration of the genre cornucopia. And then the strangest thing happened. The officer who had initially met us at secondary inspection said, “Ok, that’s enough. You win,” and we were sent on our way. No duties. No fines. Just relief that this adventure was coming to a close.

We’re home now, with boxes filling our dining and living room. We’ll be spending the rest of the month unpacking and figuring out where to put all this stuff. A minor collection of the best of our childhood (and later) memorabilia. Forgotten memories of good and bad times. And another to-read pile to add to the existing one. Shit.

All kidding aside, this was one of the best experiences I’ve had. A road trip, with all that entails (including a minor brush with the police in Michigan because I forgot to use a turn signal on an otherwise empty street), with someone I genuinely consider a friend. A massive item removed from the TODO list, and the erasure of the constant “what if” stress of having most of our worldly belongings a country and three states away.



Announcing Benchmarking with Juju

Benchmarking and performance are interesting problems, especially in today’s growing cloud-based microservice scene. It used to be a question of “how does this hardware compare to that hardware,” but as computing and service-oriented architectures have grown, the question has evolved: how does my cloud and application stack handle this? It’s no longer enough to run the Phoronix Test Suite (PTS) on your web server and call it a day.

Measuring every microservice in your stack, from backend to frontend, is a complex task. We started thinking about how you would model a system to benchmark all of these services. It’s not just a matter of measuring the performance of one service, but also its interactions with other services. Now multiply that by every config option for every service, like PostgreSQL, which has hundreds of options that can affect performance.

Juju has been modeling service orchestration since 2010. It’s done a great job of taking complex scenarios that are now booming, such as containerization, service-oriented architectures, and hyperscale, and condensing those ideas down into composable, reusable pieces. Today we’re adding benchmarking: the ability not just to define the relationships between these services, but also how they should be measured in relation to each other.

As an example, monitoring the effect of adjusting the cache in nginx is a solved problem. What we’re going after is what happens when you adjust any service in your stack in relation to every other service. Turn every knob programmatically and measure it at any scale, on any cloud. Where exactly will you get the best performance: your application, the cache layer, or the backend database? Which configuration of that database stack is most performant? Which microservice benefits from faster disk I/O? These are the kinds of questions we want answered.

With Juju Actions, we can now encapsulate tasks to run against a single unit or service in a repeatable, reliable, and composable way. Benchmarking is a natural extension of Actions, allowing authors to encapsulate the best practices for measuring the performance of a service and serve those results — in a standard way — that any user or tool can digest.

We’re announcing charm-benchmark, a library written in Python that includes bash scripts so you can write benchmarks in any language. It uses action-set under the covers to create a simple schema that anyone can use and parse.

While we may know a few services intimately, we’re by no means the experts. We’ve created benchmarks for some of the popular services in the charm store, such as mongodb, cassandra, mysql, and siege, to provide a basic set of examples. Now we’re looking for community experts who are interested in benchmarking to help fill that knowledge gap. We’re excited about performance and how Juju can be used to model performance validation, but we need more expertise on how to stress a service or workload to measure that performance.

For example, here’s what a benchmark for siege would look like:

siege:
  description: Standard siege benchmark.
  params:
    concurrency:
      description: The number of simultaneous users to stress the web server with.
      type: integer
      default: 25
    time:
      description: The time to run the siege test for.
      type: string
      default: "1M"
    delay:
      description: |
        Delay each simulated user for a random number of seconds between
        one and DELAY seconds.
      type: integer
      default: 3


And the corresponding action script:

#!/bin/bash
set -eux

# Make sure charm-benchmark is installed
if ! hash benchmark-start 2>/dev/null; then
    apt-get install -y python-pip
    pip install -U charm-benchmark
fi

# Start the benchmark run
benchmark-start

runtime=`action-get time`
concurrency=`action-get concurrency`
delay=`action-get delay`
run=`date +%s`

mkdir -p /opt/siege/results/$run


# Run your benchmark
siege -R $CHARM_DIR/.siegerc -t ${runtime:-1M} -c ${concurrency:-25} -d ${delay:-3} -q --log=/opt/siege/results/$run/siege.log

# Grep/awk/parse the results

benchmark-data transactions $transactions hits desc
benchmark-data transaction_rate $hits "hits/sec" desc
benchmark-data transferred $transferred MB desc
benchmark-data response_time $response ms asc

# Set the composite, which is the single most important score
benchmark-composite transaction_rate $hits hits/sec desc

benchmark-finish || true
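The “Grep/awk/parse” step is elided in the script above. As a minimal sketch of what it might look like for the transactions count (the helper name and sample line are illustrative; the field layout mimics siege’s summary output):

```shell
# Extract the numeric value from a siege summary line such as
# "Transactions:            2934 hits" (sample data, not real results).
parse_transactions() {
    awk -F':' '/^Transactions/ {print $2}' | awk '{print $1}'
}

echo "Transactions:            2934 hits" | parse_transactions
```

In the real action you’d pipe siege’s log or output through something like this to populate $transactions before calling benchmark-data.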

We’ll be covering benchmarking in the next Juju Office Hours on July 9th at 16:00 EDT/20:00 UTC, and we’d love to help anyone who wants to get started. You can find me, Adam Israel (aisrael), Marco Ceppi (marcoceppi), and Tim Van Steenburgh (tvansteenburgh) in #juju on Freenode and on the Juju mailing list.



Making OS X, Go, and Brew play happy

Go and OS X

I’m doing a little hacking with juju actions before they land in a stable release, but I ran into some hurdles getting Go working with the brew-installed version. Trying to install Go packages failed with a bunch of ‘unrecognized import path’ errors. Here’s how I fixed it.


Even though you can install Go via brew, there’s more to be done to get it working. Go relies on two environment variables: GOPATH and GOROOT. GOROOT is the path where Go is installed, and GOPATH is the directory you’ve created for your code workspace (which I’ve defaulted to $HOME/go). We then need to tell our shell where to find these installed executables and to run them first [1].

cat << 'EOF' > ~/.bash_profile
# Go go gadget Go!
# (Quoted heredoc: these lines are evaluated at login, not when written.)
GOVERSION=$(brew list go | head -n 1 | cut -d '/' -f 6)
export GOPATH=$HOME/go
export GOROOT=$(brew --prefix)/Cellar/go/$GOVERSION/libexec
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
EOF

Now you can run something like this to get easier access to the docs:

$ go get
$ godoc gofmt

    Gofmt formats Go programs. It uses tabs (width = 8) for indentation and
    blanks for alignment.

    Without an explicit path, it processes the standard input. Given a file,
    it operates on that file; given a directory, it operates on all .go
    files in that directory, recursively. (Files starting with a period are
    ignored.) By default, gofmt prints the reformatted sources to standard
    output.


    gofmt [flags] [path ...]

    The flags are:

    -d
        Do not print reformatted sources to standard output.
        If a file's formatting is different than gofmt's, print diffs
        to standard output.
    -e
        Print all (including spurious) errors.
    -l
        Do not print reformatted sources to standard output.
        If a file's formatting is different from gofmt's, print its name
        to standard output.
    -r rule
        Apply the rewrite rule to the source before reformatting.
    -s
        Try to simplify code (after applying the rewrite rule, if any).
    -w
        Do not print reformatted sources to standard output.
        If a file's formatting is different from gofmt's, overwrite it
        with gofmt's version.

    Debugging support:

    -cpuprofile filename
        Write cpu profile to the specified file.

    The rewrite rule specified with the -r flag must be a string of the
    form:

    pattern -> replacement

    Both pattern and replacement must be valid Go expressions. In the
    pattern, single-character lowercase identifiers serve as wildcards
    matching arbitrary sub-expressions; those expressions will be
    substituted for the same identifiers in the replacement.

    When gofmt reads from standard input, it accepts either a full Go
    program or a program fragment. A program fragment must be a
    syntactically valid declaration list, statement list, or expression.
    When formatting such a fragment, gofmt preserves leading indentation as
    well as leading and trailing spaces, so that individual sections of a Go
    program can be formatted by piping them through gofmt.


    To check files for unnecessary parentheses:

    gofmt -r '(a) -> a' -l *.go

    To remove the parentheses:

    gofmt -r '(a) -> a' -w *.go

    To convert the package tree from explicit slice upper bounds to implicit
    ones:

    gofmt -r 'α[β:len(α)] -> α[β:]' -w $GOROOT/src/pkg

    The simplify command

    When invoked with -s gofmt will make the following source
    transformations where possible.

    An array, slice, or map composite literal of the form:
        []T{T{}, T{}}
    will be simplified to:
        []T{{}, {}}

    A slice expression of the form:
        s[a:len(s)]
    will be simplified to:
        s[a:]

    A range of the form:
        for x, _ = range v {...}
    will be simplified to:
        for x = range v {...}


   The implementation of -r is a bit slow.

Homebrew Gotchas

Homebrew installs the go formula with a bin/ directory, which symlinks to the go and gofmt binaries in libexec/. Other binaries, such as godoc, will be installed to libexec but are not symlinked to bin/. Adding go/$GOVERSION/libexec, instead of go/$GOVERSION/bin, to PATH makes sure we’re looking in the right place, and this setup will survive a version upgrade.
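The GOVERSION line in the profile above leans on that Cellar layout: it slices the version directory out of the first path brew reports. You can see the field math with a sample path (illustrative, mirroring brew’s /usr/local/Cellar/go/<version>/ layout):

```shell
# cut splits on '/': field 1 is the empty string before the leading
# slash, so the version directory lands in field 6.
echo "/usr/local/Cellar/go/1.2.2/bin/go" | cut -d '/' -f 6
```

which prints 1.2.2, the value that ends up in $GOVERSION.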

[1]: It would probably be better to create a script that toggles whether $GOPATH/bin is included in $PATH. I’m using this setup to run the latest cutting-edge version of juju, but I can see the need to switch back to the released version of juju without having to hack my ~/.bash_profile.



A brief introduction to Juju

I had some concerns about how I was going to integrate posts of a technical nature into my blog, which has been predominantly writing-oriented for several years. What I failed to take into account is that many of us who write science fiction are armchair technologists. We look at gadgets, scientific breakthroughs, and tech policy, and conjecture about what might come next.

What I talk about is less important than how I talk about it. It’ll be interesting, or not, but no self-rejection.


In one of my previous jobs, I ran a cluster of servers responsible for serving upwards of 1.5 billion ads a day. I had a half-dozen racks of hardware sitting in a data center in Chicago. Some of those servers were from the early days, while others were a few years newer.

When business was good, we’d buy more equipment (servers, racks, switches, power, and bandwidth) to handle the traffic. The new business justified the fixed and recurring costs of buying, leasing, and hosting the hardware, locked into 1-3 year contracts.

When business dropped off, and it inevitably did, we were still paying the bills for all of that extra hardware and the associated services.

There’s also an ebb and flow to internet traffic, an inevitable tidal force. We might serve twice as many ads after 9AM EST as we did at 3AM. So you beef up hardware to handle the daily peaks and eat the idle costs the rest of the time.

Almost everyone in the modern world today carries a cell phone. Maybe you buy the latest and greatest smartphone, at a subsidized price, and are locked into a contract, paying every month for the privilege, even for the services you never use. Or you buy your phone outright and pay as you go, only responsible for what you use.

This is where the cloud comes in. You can almost see the Jedi hand wavy motion being made when someone says, “it’s in the cloud”. What is this ethereal thing and where does it live?

The simplified version is that the cloud is a large cluster of computers sitting in a data center somewhere. It might be massive, power-hungry supercomputers. It could be a ton of off-the-shelf hardware strung together. All of that gear is pieced together with software to run virtual computers, which those companies then lease out to people like you and me.

There’s no question that the future of business computing involves the cloud. It’s super cost-effective. In many ways, though, it’s still in its infancy.

Here’s where I get to the point, and talk about Juju.

Back when I was managing that cluster of ad servers, we’d cobbled together a mix of shell scripts using ssh and puppet to automate the deployment and management of those dozens of computers. It worked, but was far from ideal, and only worked with our hardware.

Juju is a system that lets you automate the deployment of software, via bundled instructions called Charms, to servers across multiple Clouds, like EC2, Azure, HP, Digital Ocean, or even your own hardware.

Say your awesome website is suddenly getting linked to by the Neil Gaimans and John Scalzis of the world, and your site is being crushed under the load. Problem?

No problem. You tell juju you want two more servers, or five, or ten. A few minutes later they’re online, and so is your website. When the slashdot effect has worn off, you can remove those extra servers, paying only for the time you needed them.
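Concretely, that scale-up and scale-down is a couple of commands at the juju CLI (the wordpress service name here is just an illustration):

$ juju deploy wordpress
$ juju add-unit -n 5 wordpress
$ juju remove-unit wordpress/5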

Scalability and affordability, in a nutshell. And juju is there to help you manage that.

TL;DR: Juju is a cloud orchestration toolkit that aids in the deployment and management of services across a variety of cloud providers.



New job!

I am delighted — tickled, in fact — to report that as of last Monday I am employed by Canonical, the company behind Ubuntu Linux.

I’ve joined the Ecosystem Engineering team, part of Cloud Development and Operations, as a software engineer. More specifically, I’m working on Juju, the cloud orchestration tool chain. I’ll be writing charms and documentation, working on optimizations, and helping to make a cool product even cooler.




Q&A: Why is Scrivener using my old contact information?

For the past few years, I’ve had to manually update the contact information in the header of every Scrivener project I’ve created. It was defaulting to an old email and physical address, but somehow had the correct phone number.

Scrivener can pull your contact information from the OS X application Contacts, if you add the string “(Scrivener:UseMe)” to the notes of your contact card. As it turns out, I had done that already but my card has all of my email addresses (work and home) as well as my current and past physical addresses. In that case, Scrivener just uses the first phone, email, and physical address it finds.

The solution is simple, and doubly useful if you write under a pseudonym. Create a new contact card with the information you want in your manuscript’s cover page. Don’t forget to add “(Scrivener:UseMe)” to the notes section of your new contact, and remove it from the old.

The next time you create a project in Scrivener, it will use your new contact.



SFWA, Accessibility and Diversity

There have been many kerfuffles involving the Science Fiction & Fantasy Writers of America (SFWA). The latest one began when a former member started a petition over recent changes to the staff and policy of the organization’s flagship publication, the Bulletin. As a result of the back and forth between factions, one member (a vocal minority) suggested that the bar for membership should be raised. There’s a lot I could say about the current debate(s), but I want to specifically address the idea of accessibility and diversity.

Membership requirements, in general, are a good thing for an organization, but they should be recognized for what they are: exclusion. To what degree they exclude depends on the type of organization, its goals, philosophies, etc. Since SFWA bills itself as a “professional organization for authors of science fiction, fantasy, and related genres”, one would assume the requirements exist to limit membership to anyone with a professional interest in writing science fiction, fantasy, and related genres. Seems simple, but there’s always fine print.

The argument made by Brad R. Torgersen is that to be a more professional organization, SFWA needs to be more exclusionary, with the goal of eliminating “non-professional” writers, thereby raising enough money through dues that the organization could hire administrative staff and increase benefits to its members.

…impose an annual fiction writing income floor, below which members cannot fall without being placed on the inactive list, and therefore losing the ability to vote and/or participate in the org.

Anyone capable and willing to contributing $500 or even $1,000 U.S. dollars (or more) per year, is unlikely to be an amateur, or a pro-am.

I will say, flat out, this is a bad idea. It’s too exclusionary, and would decrease diversity. In fact, I would argue that SFWA should lower its membership requirements.

For active writers, there are two membership tiers: Active and Associate, both of which require prose sales at a minimum rate of $0.05/word. I would like to see a third tier, for writers who have not yet made a sale to a market able to pay those rates but have demonstrated a commitment to their craft, such as 3 sales at a semi-pro rate, or a cumulative revenue total. Give this tier some limited benefits, such as access to the forum and the bulletin, but not all of the benefits of the higher tiers. Perhaps offer it at a lower yearly rate to adjust for the different benefits.

Or, as has been pointed out to me on Twitter (thanks John and Tim), use the Romance Writers of America (RWA) as a model or inspiration for how to include “non-professional” writers.

By being less exclusionary, the organization will become more accessible to a diverse group of people across income levels, gender, orientation, social classes, etc. The organization would gain new, interesting, and previously under-represented voices in building a future.

Many writers toiling in the semi-pro ranks treat their work with the same professionalism, if not more, than those currently qualified by SFWA’s definitions to call themselves professional. The previous SFWA administration, under John Scalzi, and the new one, helmed by Steven Gould, have made great strides in improving the organization as a whole. It should be recognized just how much work it is to retrofit a monolithic steam engine with maglev. I expect the diversification will continue, but I would love to see a bigger change that allows for it.



Hark, an update!

Hark! Inconsistent blogger has returned with news!

I am pleased to announce that I’ve sold “Aye of the Hagfish” to Goldfish Grimm’s Spicy Fiction Sushi. It should appear online in early 2014. This will be my second appearance in the magazine (the first being “Control,” in their debut issue).

I’m down to one story in circulation, and no new short stories finished this year, but for good reason! I finished the first draft, first read-through, and have begun developmental edits on the novel tentatively titled (but almost guaranteed to be renamed) “Black Mirror”.

We’re settling in for a long winter here at casa de Israel-Redman. The cupboards are stocked with tea, coffee, and non-perishable foodstuffs. Candles are lit, the fireplace channel is giving us the proper ambiance, and we’re getting busy with the making of art and stuff. Come spring, we’ll come out of our self-imposed hibernation with some fun new things to show off.



111 Weeks

To be exact, it’s been 783 days since we filed for my Canadian Permanent Residence, and I am happy to announce that it is officially done. We have just walked out of the Immigration Centre in Windsor, Ontario, Social Insurance Number in hand.

I guess this makes me an expatriate; an American Citizen permanently living abroad, which is kind of cool. I’ve been thinking a lot about getting a tattoo to commemorate the experience. More on that later.

There’s been a lot of stress involved around this process, most notably the difficulty traveling back to the US. In a few weeks, when I have the official card in hand, I’ll be free to cross the border without fear of being turned away and having to restart the immigration process. That’s going to be a cathartic experience, finally going back to visit my family and friends.

Now that I’m all official, we can start thinking about normal, grown-up things like buying a house, and getting all of our stuff out of storage back in Illinois.
