I recently attended Snappy Sprint Heidelberg, the first Snappy sprint focused on upstream and cross-distribution collaboration.
Snappy is a technology with an interesting history: initially created to provide App Store-like semantics (atomicity, declarative security) for the Ubuntu Phone project, it has since expanded into a platform for desktop application deployment (e.g. VLC), as well as for server applications and the IoT space.
There were a number of productive discussions with people working on Snappy itself, as well as folks from Fedora, elementary OS, KDE, and elsewhere.
At the start of the week, Snappy was technically usable in several different distributions, but only shipped fully-featured (in the main distribution repositories, with confinement, etc.) in Ubuntu. Some great progress was made on AppArmor confinement in Arch Linux, and there is currently beta support for confinement via SELinux.
Providing a full-featured Snappy experience in Debian has its challenges, mostly relating to the lack of a default LSM. AppArmor is supported in Debian and there's a desire to make it the default in "buster", but Ubuntu carries a number of AppArmor patches that add functionality not yet present upstream. I'm not sure whether getting those patches merged is more viable than waiting for SELinux support in snapd, however.
I've agreed to co-maintain the snapd package in Debian, and am excited to see intentions to support building snaps on a variety of distribution bases. While I do not expect Snappy (or Flatpak, or AppImage) to replace distribution-maintained software in the foreseeable future — nor do I feel that would be a desirable outcome — I do think offering users the freedom to choose to use software via these systems in a safe manner is critical.
Luke W. Faraone
30 July 2016
28 March 2015
Key transition
I'm migrating PGP keys from 0xF9FDD506 to 0x0C14A470. If you signed my old key, I would appreciate you signing my new key as well. Feel free to ping me with questions.
Accordingly, I've published a transition statement signed by both keys.
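For those who'd like to do that, here's a minimal command-line sketch (it assumes your gpg is configured with a reachable keyserver; verify the fingerprint against the transition statement before signing):

gpg --recv-keys 0x0C14A470     # fetch the new key
gpg --fingerprint 0x0C14A470   # compare against the transition statement
gpg --sign-key 0x0C14A470      # certify it with your own key
gpg --send-keys 0x0C14A470     # publish your signature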
01 April 2014
Now at Dropbox!
Dropbox has acquired Zulip, the business instant messaging startup I've worked at since August 2012. It's been a great ride the past one and a half years — I've definitely loved working with this amazing team. We're incredibly excited about working with an awesome group of people on a problem with huge scale, at a company that's as passionate as we are about helping people work together efficiently.
Here's looking forward to the future at Dropbox!
02 January 2014
Unstandardized standards are the worst: sendmail
Implementing software to replace legacy systems is always a challenge, especially when you're dealing with a system with as much legacy as sendmail, which was first introduced as delivermail in 1979.
Each UNIX vendor, it seems, rewrote or heavily customized sendmail. This has led to sometimes conflicting implementations.
Case in point: -t
Normally, you invoke sendmail(8) with a series of arguments indicating the subject of a message, the recipients, etc. When invoked this way, the command expects a message on standard input, waits for EOF, and then sends your message along.
However, sometimes you don't want to have to fiddle with command-line parameters; you've already written a perfectly fine message with headers.
-t is generally passed to sendmail when you want to build a message envelope from an already-formatted message, with headers, etc. For example, if you had a file foo.txt with a body like this:

From: Luke Faraone
To: John Smith
Subject: Hello, world!

Hi there.
you could send the message with a simple invocation of cat foo.txt | sendmail -t. The system would take care of ensuring a Message-id was appended if appropriate, and queue the message to be sent. However, it is when you do slightly more complex invocations of sendmail that things get ambiguous.
It turns out that implementations differ on what exactly it means when you use -t in combination with destination addresses named on the command line. exim4's documentation describes the situation in greater detail:
extract_addresses_remove_arguments | Use: main | Type: boolean | Default: true
According to some Sendmail documentation (Sun, IRIX, HP-UX), if any addresses are present on the command line when the -t option is used to build an envelope from a message’s To:, Cc: and Bcc: headers, the command line addresses are removed from the recipients list. This is also how Smail behaves. However, other Sendmail documentation (the O’Reilly book) states that command line addresses are added to those obtained from the header lines. When extract_addresses_remove_arguments is true (the default), Exim subtracts argument headers. If it is set false, Exim adds rather than removes argument addresses.
Thus, there's basically no mechanism for a program to know which behaviour to expect. God forbid two programs are installed on a system that expect different behaviours!
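To make the ambiguity concrete, here's a sketch (with hypothetical addresses) of a single invocation that reaches different recipients depending on which sendmail you have:

printf 'To: alice@example.com\nSubject: Hello\n\nHi there.\n' > msg.txt
sendmail -t bob@example.com < msg.txt
# Sun/IRIX/HP-UX (and Smail) semantics: bob@example.com is subtracted
# from the header recipients, so only alice@example.com gets the message.
# O'Reilly-book semantics: bob@example.com is added, so the message
# goes to both alice and bob.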
It appears that Ruby's default behaviour is the opposite of what exim4 (Debian's default mail transfer agent) expects. This has resulted in numerous bug reports. Some replies suggest changing exim4's defaults, while others advocate overriding ActionMailer and friends to use sendmail -i instead, without -t.
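A sketch of the first option (the option name comes from the exim4 documentation quoted above; where it goes depends on your configuration layout, e.g. Debian's /etc/exim4/exim4.conf.template for the unsplit layout):

# Make exim4 add, rather than subtract, addresses given on the
# command line alongside -t:
extract_addresses_remove_arguments = false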
That said, it's not really clear who's wrong here; at no point does there appear to have been a definitive specification for sendmail, and as such we can hope for defined behaviour by common custom at best, and a sea of incompatibility bugs at worst. Amusingly, POSIX has nothing to say on the subject of sendmail at all; it requires that a mailx command exist, but says that its sending mode may be implementation-specific.
As Matthew Garrett writes, there's not enough gin in the world.
10 November 2013
Why I use my bank's mobile site on my desktop
(or, cutting out bloat by using a platform where bloat won't fly)
Let me start off by saying I'm generally a huge fan of my bank, USAA. Their offerings are free of hidden fees, their phone support excellent, and the perks they provide are competitive. They don't have the best savings interest rates, but you can always find a better deal online to park money not actively in your checking account.
However, USAA's website is a behemoth. My account page took about 8 seconds to fully load, downloading 1.4MiB of content.
It is frequently buggy; whenever I log in via Google Chrome on Ubuntu 12.04 I land on a page with a URL beginning with "https://www.usaa.com/inet/gas_bank/AccountBannerAjax" and a bunch of GET parameters like "currentaccountkey" and "accnumber" with values like "encrypted12a1f4dd1[…]". The server returns a 200 OK and promises a Content-Length of 20, but then actually returns zero bytes. After navigating to the homepage and clicking a button, I end up logged in, but I wonder what percentage of their userbase is experiencing this problem.
[Image: The "My Accounts" page you're redirected to after logging in.]
For some strange reason, I get a lot of checks. It appears that nobody else informed the banking system that it's 2013, and the easiest mechanism for people to send money without paying fees is still on paper. To its credit, USAA made remote deposit of checks available to all customers in 2006, when it was mostly an offering limited to businesses. However, it seems like they haven't updated their web workflow since then.
Using it on the web still requires a signed Java applet (itself discouraged by CMU's CERT) that performs the incredibly complex task of… letting you select a file from your computer and upload it to their servers. At least, that's what I think it does, because any time I chose "Run", my browser complained a few minutes later that the tab had stopped responding. Regardless, almost anything the applet could currently be doing is achievable with HTML5, plus a third-party service if they want to crop images locally.
USAA's mobile app for Android has a whole host of problems of its own; I haven't been able to log into it for two weeks, and when I chatted with someone today I was told they were "doing some maintenance this weekend", so I should try again in a few hours once that's finished.
[Image: Spinning after logging in on Android]
I googled around a bit for some way to perhaps make the applet work in Ubuntu (which admittedly is not a supported platform), and came upon a Facebook thread where a rep suggested using the mobile web site.
I loaded it in my browser, and was amazed at how well it functioned. Though obviously designed for higher-end devices (it didn't even load in one WAP emulator I tried), the mobile web interface was a breath of fresh air. It scaled well to a full-screen device (see below), loaded quickly, and gave me all the information I would have wanted out of the normal web interface.
[Image: A breath of fresh air]
Most notably: remember the whole "upload a check" workflow that required a buggy Java applet on the main website? Here we get bog-standard HTML form fields, no additional magic. There goes any theory about the Java client doing some magic validation or prep of the image; all they're getting is the images and my session cookie.
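In other words, the whole deposit flow reduces to an ordinary multipart form POST. A curl sketch (the endpoint and field names here are made up for illustration; only the session cookie and the images are real inputs):

# Hypothetical endpoint and field names; the point is that the upload
# is plain multipart/form-data plus a session cookie, no applet magic.
curl -b cookies.txt \
     -F "front=@check-front.jpg" \
     -F "back=@check-back.jpg" \
     "https://m.usaa.com/deposit"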
I'm still shocked at whoever thought a My Yahoo!-style homepage was the best layout for a bank, but props to the web developers who managed to make a mobile interface that was both pretty and allowed me to work around broken functionality in their implementations on every other platform I had access to.
But why was the mobile web interface the least bloated? Easy. On the desktop, you generally have a nice pipe, or if not, the user knows it and won't be too upset if your site is just as slow as other similarly situated sites. With a native mobile app, the user has downloaded all the code already, so the only latency should be the API requests against the server, right?
On the mobile web, users have come to expect relatively speedy mobile-optimised sites, and there's less screen real estate to do fancy things that get in the way of content. For many sites, that's a huge improvement. Of course, it would be really nice if more banks supported open protocols for interactions (USAA has a read-only, limited-duration OFX feed), but I would settle for a better web interface.
So tl;dr: USAA, please make www.usaa.com redirect to m.usaa.com, kthxbai.
20 July 2013
Joining the Debian FTPTeam
I'm pleased to say that I have joined the Debian FTPTeam as of the Friday before last. See Joerg Jaspert's announcement on debian-devel-announce.
The FTPTeam is responsible for maintaining the Debian software archive, and ensures that new software in Debian is high-quality and compliant with our policies.
As an "ftpassistant", I (along with Paul, Scott, Gergely, and others) will be helping to process the NEW queue, which is currently at a whopping 297 packages. Here's hoping we'll be able to get that number down over the coming weeks!
03 April 2013
Teaching free/open source to high school students
A few weeks ago I taught a class on Open Source: Contributing to free culture (catalog entry) for Spark, a one-day program put on by the student-run MIT Educational Studies Program. I was fortunate to have two helpful co-teachers, Tyler Hallada and Jacob Hurwitz, who assisted with the lesson plan and the in class lecture.
We ended up teaching 3 sessions of the 1hr 50min class that Saturday, with about 10 students in each session.
I was pretty impressed by the quality of the students; a number of them had used GNU/Linux before, but even those who hadn't were able to gain something from the experience. The class was broken up into three segments:
- Lecture on a brief history of open source and the free software movement
- Small research project on an open source project
- Lab where students could work through OpenHatch's training missions
The point was to mix up what could otherwise be a very boring lecture.
I think we might have missed the mark on the last bit, as I get the feeling that we didn't end up giving the students good actionables. While the quality of OpenHatch is high and the organization's campus outreach programs are amazing, skills practice only goes so far without clear direction to apply said skills. I'll be following up with the class participants to see how they're progressing on their own open source contributor journey, and will post updates if I have any.
While this was not an OpenHatch event, if this sort of thing interests you, OpenHatch runs a series of events like this one and has a mailing list for discussing planning and sharing best practices. Subscribe and say hi!
The presentation is enclosed below, and of course is licensed under Creative Commons Attribution-ShareAlike 3.0. [PDF]