<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Posts tagged 'technology' — Typing with mittens on]]></title><description><![CDATA[Rachel Evans writes about tech, Denmark, and probably other stuff]]></description><link>https://rachelevans.org/blog/tag/technology/</link><image><url>https://rachelevans.org/blog/assets/favicon.png</url><title>Posts tagged &apos;technology&apos; — Typing with mittens on</title><link>https://rachelevans.org/blog/tag/technology/</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 18 Feb 2026 09:07:06 GMT</lastBuildDate><atom:link href="https://rachelevans.org/blog/tag/technology/rss/" rel="self" type="application/rss+xml"/><pubDate>Wed, 18 Feb 2026 09:07:06 GMT</pubDate><copyright><![CDATA[Copyright 2026 Rachel Evans]]></copyright><language><![CDATA[en-gb]]></language><managingEditor><![CDATA[Rachel Evans]]></managingEditor><webMaster><![CDATA[Rachel Evans]]></webMaster><ttl>180</ttl><item><title><![CDATA[rspec and exceptions]]></title><description><![CDATA[A surprising way in which an exception doesn't always cause a test to fail. Or indeed have run at all...]]></description><link>https://rachelevans.org/blog/rspec-and-exceptions/</link><guid isPermaLink="false">5c998fbe11b34b000133e705</guid><category><![CDATA[technology]]></category><category><![CDATA[Ruby]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Tue, 12 Sep 2017 08:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Today I learnt that these two tests are, not only non-functionally different, but functionally different too:</p><pre><code>Test A:

    it &quot;should run without raising an exception&quot; do
      some_code_to_test
    end

    Test B:

    it &quot;should run without raising an exception&quot; do
      expect {
        some_code_to_test
      }.not_to raise_error
    end</code></pre><p>Like JUnit, rspec reacts to tests which raise exceptions by failing the test,  right? So these two tests should (functionally) behave identically?</p><p>Wrong!</p><p>Well, <em>sometimes</em> wrong.</p><p>The difference (or rather, at least one of the differences — perhaps there are more) lies in <code>SystemExit</code>. If <code>some_code_to_test</code> raises this, typically by calling <code>Kernel#exit</code>,  then the test runner stops, treats this test as successful, doesn’t  show the output of this test, and silently skips all subsequent tests:</p><pre><code>it &quot;runs test 1&quot; do
    end
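
    # (assume some_code_to_test is defined something like this,
    # i.e. it just calls Kernel#exit)
    def some_code_to_test
      exit
    end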

    it &quot;should run without raising an exception (2)&quot; do
      some_code_to_test
    end

    it &quot;runs test 3&quot; do
    end

    # and then run it:

    rachel@shinypig test$ bundle exec rspec --format doc spec/my_spec.rb

    Kernel
      runs test 1

    Finished in 0.00089 seconds (files took 0.08703 seconds to load)
    2 examples, 0 failures</code></pre><p>which reports that it ran 2 examples, but only shows one of them, and doesn’t even mention the one that it skipped completely.</p><p>On the other hand, if we wrap the code being tested using “expect … not_to raise_error”:</p><pre><code>rachel@shinypig test$ bundle exec rspec --format doc spec/my_spec.rb

    Kernel
      runs test 1
      should run without raising an exception (2) (FAILED - 1)
      runs test 3

    Failures:

    1) Kernel should run without raising an exception (2)
         Failure/Error: expect { some_code_to_test }.not_to raise_error

           expected no Exception, got #&amp;lt;SystemExit: exit&amp;gt; with backtrace:
             # ./spec/my_spec.rb:4:in `exit&#x27;
             # ./spec/my_spec.rb:4:in `some_code_to_test&#x27;
             # ./spec/my_spec.rb:11:in `block (3 levels) in &amp;lt;top (required)&amp;gt;&#x27;
             # ./spec/my_spec.rb:11:in `block (2 levels) in &amp;lt;top (required)&amp;gt;&#x27;

    Finished in 0.01099 seconds (files took 0.07536 seconds to load)
    3 examples, 1 failure

    Failed examples:

    rspec ./spec/my_spec.rb:10 # Kernel should run without raising an exception (2)</code></pre><p>So now it runs all three tests, showing all three results, including one failure.</p><p>To  be honest I’m surprised that the rspec test runner doesn’t deal with this natively: if a test raises an error, it should always be a failure (to my mind anyway), including <code>SystemExit</code>. Maybe there’s a bug / pull request for rspec somewhere discussing this, where the idea was rejected. I might go digging…</p>]]></content:encoded></item><item><title><![CDATA[Firefox add-ons, 15 years on]]></title><description><![CDATA[After 15 years of using Firefox, and with the approaching release of the extension-breaking Firefox 57, I reflect on which extensions I use, which I can do without, and how things are moving on.]]></description><link>https://rachelevans.org/blog/firefox-add-ons-15-years-on/</link><guid isPermaLink="false">56e0fefd748c</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Sun, 03 Sep 2017 11:29:41 GMT</pubDate><content:encoded><![CDATA[<div><p>This is going to be a bit of a ramble. If you&#x27;re interested: great!</p><p>It&#x27;s just a bunch of stuff that might be of interest to you if you use Firefox, and it&#x27;s too much to tweet in a giant thread. So here we are.</p><h2>The long haul</h2><p>I&#x27;m pretty sure that I&#x27;ve been using Firefox <a href="https://en.wikipedia.org/wiki/History_of_Firefox" rel="noopener ugc nofollow">since it was first released, back in 2002</a> — since it was hailed as the brave new lean and super-fast thing that came after Netscape Navigator, and then Mozilla, and then along came Firefox, to counteract Mozilla&#x27;s perceived bloat. 
When it comes to my main development machine (which used to be Linux, now Mac OS X), I&#x27;ve used Firefox continuously and almost exclusively since then, with just occasional forays into Chrome or Safari or Opera, and then back to Firefox.</p><p>So over the years I&#x27;ve seen a few changes, got used to how things work, developed my way of using the browser, which add-ons and settings I like, and so forth.</p><p>And it&#x27;s easy to get into a habit with that, and never really step back and look at what you&#x27;re doing … until something <em>makes</em> you do so. Like a data-loss hardware failure, or a major shift in the browser itself.</p><h2>The changing world</h2><p>Enter “<a href="https://developer.mozilla.org/en-US/Add-ons/WebExtensions" rel="noopener ugc nofollow">Web Extensions</a>”, a <a href="https://blog.mozilla.org/addons/2017/06/14/webextensions-firefox-55/" rel="noopener ugc nofollow">change in the way that Firefox extensions work</a>. This landed with Firefox version 55, and the big challenge is that from version 57 (scheduled for November 2017) onwards, older extensions — ones that aren&#x27;t written using the Web Extension mechanism — will be disabled. No longer work. Kaput, dead, gone, no more.</p><p>Exciting, huh? 😬</p><p>You can tell which extensions (of the ones you have installed) are going to suffer this fate, because from Firefox 55, non-WebExt add-ons are <a href="https://support.mozilla.org/en-US/questions/1171829" rel="noopener ugc nofollow">marked as “Legacy”</a> in about:addons. <a href="https://www.ghacks.net/2017/04/30/firefox-nightly-marks-legacy-add-ons/" rel="noopener ugc nofollow">More about that</a>.</p><p>So, this seems like a good opportunity for me to take a look at what add-ons I&#x27;m using, whether I really need each one, and if so, how I might be able to continue to get the functionality I need in a post-FF-57 world.</p><h2>Multi-process</h2><p>But wait: there&#x27;s one more thing. 
Last night I was reading about <a href="https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox" rel="noopener ugc nofollow">Multi-Process Firefox</a>, how it&#x27;s good for performance and stability and security. And yes, this sounds like a very good thing indeed. But here&#x27;s the rub: for now, Firefox (or is it each FF window?) can operate either in single-process or multi-process mode. Single-process bad, multi-process good. And you can tell which is which by looking at about:support: under “Application Basics”, look for the “Multiprocess Windows” bit. Mine said: 0/1 (Disabled by add-ons). 🙁</p><p>There doesn&#x27;t appear to be a way of seeing <em>which</em> add-ons are doing the disabling, so as far as I could tell, it was a matter of working through the add-ons I had, disabling and enabling things until I found a minimal set of things to disable that got me the magic words, “Multiprocess Windows 1/1 (Enabled by default)”. \ø/</p><h2>Add-ons</h2><p>So which add-ons was I using before, and how did they fare? Do I still even need them?</p><p>An incomplete list, roughly categorised:</p><p>For security and privacy: NoScript, RequestPolicy Continued, CookieCuller, LastPass.</p><p>For development: Web Developer, RestClient, Firebug, Markdown Viewer.</p><p>General: Multifox, TabMixPlus, Stylish, Popup ALT Attribute, GreaseMonkey, Flash Video Downloader.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/noscript/" rel="noopener ugc nofollow">NoScript</a></h3><p>Essential, IMO. Thankfully despite being a “legacy” add-on, it still works, and doesn&#x27;t cause FF to go to single-process mode. I&#x27;m not sure but I think the FF developers may have whitelisted this specific add-on.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/requestpolicy-continued/" rel="noopener ugc nofollow">RequestPolicy Continued</a></h3><p>I used this one to block third-party requests as a way of improving performance and privacy (e.g. 
prevent a site from loading ads from a third-party). Trouble is, an awful lot of the modern web <em>does</em> use third-party requests (CDNs, separate domains for “static” or “media” resources, etc). I&#x27;d set this add-on to block by default, which meant that it defaulted to “safe”, i.e. “massively inconvenient”.</p><p>I haven&#x27;t really found an up-to-date alternative for this add-on yet, but given that it was very inconvenient for questionable benefit anyway, I&#x27;ve just gone without this “functionality” for now. Maybe <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/" rel="noopener ugc nofollow">Privacy Badger</a>?</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/cookieculler/" rel="noopener ugc nofollow">CookieCuller</a></h3><p>Deletes non-whitelisted cookies on browser startup. I&#x27;ve switched to <a href="https://addons.mozilla.org/en-US/firefox/addon/cookie-autodelete/" rel="noopener ugc nofollow">Cookie Auto-Delete</a> instead. It seems to work well: enable “Active Mode” in the preferences (as far as I can tell the add-on is completely dormant without this). In fact it&#x27;s probably better than CookieCuller anyway, since it deletes cookies when tabs are closed, rather than when the browser is restarted; and since I quite rarely restart my browser, CookieCuller didn&#x27;t often get to do its job, whereas hopefully Cookie Auto-Delete will.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/lastpass-password-manager/" rel="noopener ugc nofollow">LastPass</a></h3><p>Legacy, but still works. Good.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/web-developer/" rel="noopener ugc nofollow">Web Developer</a></h3><p>Turns out I never actually used any of the functionality of this add-on. 
Zap, gone.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/restclient/" rel="noopener ugc nofollow">RestClient</a></h3><p>I&#x27;m trying out <a href="https://addons.mozilla.org/en-US/firefox/addon/rested/" rel="noopener ugc nofollow">RESTED</a>. As long as it can do GET/POST/PUT/DELETE, with custom headers (mainly Content-Type), and can handle X509 client certificate authentication, we&#x27;ll get along just fine.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/firebug/" rel="noopener ugc nofollow">Firebug</a></h3><p>Still works, even though it&#x27;s legacy, and marked as “not compatible with your version of Firefox”. Hmm.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/markdown-viewer/" rel="noopener ugc nofollow">Markdown Viewer</a></h3><p>I used this to preview local markdown files before I pushed them to github. I haven&#x27;t found a replacement for this yet 🙁 . But, I very rarely need this functionality, and it&#x27;s not exactly a show-stopper for me not to have this, so whatever. Disabled.</p><h3>Multifox</h3><p>I did briefly use this add-on, but then disabled it again. And now it seems to have been pulled from addons.mozilla.org. Basically, the functionality it aimed to provide was to allow each tab to have its own space of cookies, authentication, etc. — so for example you could open three different tabs and log into three different accounts on the same web site.</p><p>Turns out, this is a lot simpler these days: more on this later.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/tab-mix-plus/" rel="noopener ugc nofollow">TabMixPlus</a></h3><p>Every now and then I experiment with “improved” tab managers. I never quite get on with any of them, I find. In the past I have occasionally used TabMixPlus; right now, I&#x27;m trying out <a href="https://addons.mozilla.org/en-US/firefox/addon/tree-tabs/" rel="noopener ugc nofollow">TreeTabs</a>. 
meh.</p><h3><a href="https://addons.mozilla.org/en-GB/firefox/addon/stylish/" rel="noopener ugc nofollow">Stylish</a></h3><p>Write your own custom CSS for various sites, or install custom CSS that others have written and shared via <a href="https://userstyles.org/" rel="noopener ugc nofollow">userstyles.org</a>. Except these days that site seems to be a lot more geared towards “skinning” (“Facebook, but with Lionel Messi in the background”. I kid you not), and less towards what I&#x27;m after, which is UI tweaks, overriding things for improved accessibility, ad suppression, and so forth.</p><p>I&#x27;ve installed <a href="https://addons.mozilla.org/en-US/firefox/addon/custom-style-script/" rel="noopener ugc nofollow">Custom Style Script</a> but haven&#x27;t started using it yet. Instead, I&#x27;m experimenting with doing without this functionality. I&#x27;ll be interested to see how much I miss it.</p><h3><a href="https://addons.mozilla.org/en-US/firefox/addon/popup-alt-attribute/" rel="noopener ugc nofollow">Popup ALT Attribute</a></h3><p>Good for accessibility awareness. Still works.</p><h3><a href="https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/" rel="noopener ugc nofollow">GreaseMonkey</a></h3><p><em>Almost</em> essential. Still works for now, even though it&#x27;s “legacy”. Will have to keep a close eye on this one.</p><h3><a href="https://addons.mozilla.org/en-US/firefox/addon/flash-video-downloader/" rel="noopener ugc nofollow">Flash Video Downloader</a></h3><p>Sorry, YouTube.</p><h2>userContext</h2><p>But back to MultiFox. The use case, remember, was to be able to log into the same site more than once in the same browser (well, more than twice, in fact: you could sort of do two already: one in a main window, one in a “Private” window. But only two). But MultiFox used to be somewhat clunky and unreliable, and it always seemed daft that FF didn&#x27;t natively support something like this.</p><p>Thankfully, it seems that it now does. 
But it&#x27;s not enabled by default. Also, it&#x27;s <a href="https://support.mozilla.org/en-US/kb/containers-experiment" rel="noopener ugc nofollow">experimental</a>.</p><p>Enter <a href="https://wiki.mozilla.org/Security/Contextual_Identity_Project/Containers" rel="noopener ugc nofollow">Containers</a> aka contextual identities aka userContext (as I understand it). As far as I can tell it&#x27;s built into Firefox, but hidden. But you can enable it: go to “about:config” and enable both <em>privacy.userContext.enabled</em> and <em>privacy.userContext.ui.enabled</em>.</p><p>This gives you a new preference section (Preferences &gt; Privacy &gt; Container Tabs), and a new menu entry (File &gt; New Container Tab). But for a bit more slickness — new containers on the fly, what I really want — I&#x27;m trying out <a href="https://addons.mozilla.org/en-US/firefox/addon/containers-on-the-go/" rel="noopener ugc nofollow">Containers On The Go</a>, which seems to work well so far.</p><h2>et voilà</h2><p>So there we have it: check about:support for “Multiprocess windows” (multi-process is good); if it says “disabled by add-ons”, disable add-ons until you work out which ones are getting in the way. Enable <em>privacy.userContext.ui.enabled </em>then play with containers. Wheeee!</p></div>]]></content:encoded></item><item><title><![CDATA[Not helpful, “aws s3 sync”]]></title><description><![CDATA[The aws s3 sync "--metadata-directive" option: what does it do? Does it work? AWS themselves aren't clear on the matter...]]></description><link>https://rachelevans.org/blog/not-helpful-aws-s3-sync/</link><guid isPermaLink="false">b8c55e82c0e8</guid><category><![CDATA[technology]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Mon, 24 Apr 2017 08:54:24 GMT</pubDate><content:encoded><![CDATA[<p>I have a requirement to change the Content-Type / Cache-Control headers of a load of objects in S3. 
At the API level, there&#x27;s no way of modifying the metadata of an existing object — rather, you create a new object with the desired metadata. Of course, if this new object is in the same bucket and has the same key as the old object, it&#x27;ll effectively overwrite it. You don&#x27;t have to re-upload your data if you don&#x27;t want to — you can copy the data from the old object to the new one.</p><p>Instead of using the API directly, various tools already exist which encapsulate this behaviour. For example, the aws command line offers “aws s3 sync”. So I&#x27;m wondering if “aws s3 sync” might be the tool for the job.</p><p>But then we come to this gem in the help text:</p><blockquote>--metadata-directive (string) Specifies whether the metadata is copied from the source object or replaced with metadata provided when copying S3 objects. Note that if the object is copied over in parts, the source object&#x27;s metadata will not be copied over, no matter the value for --metadata-directive, and instead the desired metadata values must be specified as parameters on the command line. Valid values are COPY and REPLACE. If this parameter is not specified, COPY will be used by default. If REPLACE is used, the copied object will only have the meta- data values that were specified by the CLI command. Note that if you are using any of the following parameters: --content-type, content-lan- guage, --content-encoding, --content-disposition, --cache-control, or --expires, you will need to specify --metadata-directive REPLACE for non-multipart copies if you want the copied objects to have the speci- fied metadata values.</blockquote><p>Apart from being horrible to read, there&#x27;s a big problem with this. Note the phrases “Note that if the object is copied over in parts” and “for non-multipart copies”: the behaviour varies depending on whether or not multipart copies are in use.</p><p>So, <em>are</em> multipart copies in use?</p><p>Well, we&#x27;re not told. 
The S3 maximum size for non-multipart uploads is 5GB, so we know that for objects over 5GB, multipart uploads <em>must</em> be used, because that&#x27;s the only option. But for smaller objects?</p><p>¯\_(ツ)_/¯</p><p>So the help text explaining <code>--metadata-directive</code> tells us that the behaviour of this option can vary, depending on an implementation detail which is not revealed to us.</p><p>Here&#x27;s my attempt to reword that help text to be (a) clearer, and (b) more honest:</p><pre>--metadata-directive (string)

Valid values are COPY (which is the default), and REPLACE. Specifies
whether the metadata is copied from the source object (&quot;COPY&quot;), or
replaced with metadata provided on the command line (&quot;REPLACE&quot;) when
copying S3 objects.

Note that &quot;COPY&quot; does not work if multipart uploads are used, which is
definitely the case for objects larger than 5GB, and might be the case
for smaller objects too — good luck!</pre><p>Not helpful.</p>]]></content:encoded></item><item><title><![CDATA[The AWS S3 Inventory Service: don't end the destination prefix with “/”]]></title><description><![CDATA[If you end the destination prefix with "/", then you'll end up with an unusable manifest.]]></description><link>https://rachelevans.org/blog/the-aws-s3-inventory-service-dont-end-the-destination-prefix-with-slash/</link><guid isPermaLink="false">8630dfb13c88</guid><category><![CDATA[technology]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Fri, 21 Apr 2017 10:00:16 GMT</pubDate><content:encoded><![CDATA[<div><p>This started out as a longer blog post, but then a lot of it boiled down to “read the fine documentation, Rachel”. So here&#x27;s the short version.</p><p>Launched in December 2016, S3&#x27;s Inventory Service is an alternative to using the ListObjects / ListObjectsV2 APIs for enumerating the objects in a bucket. You put an inventory configuration to your bucket (broadly speaking: which bit of S3 to list, where to put the results, and how often to do it), then sit back and wait for S3 itself to do all the hard work, so you don&#x27;t have to. 
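</p><p>By way of illustration, here&#x27;s a sketch of what such an inventory configuration might look like, in the JSON form accepted by the aws CLI&#x27;s “s3api put-bucket-inventory-configuration” (the destination bucket name here is made up):</p><pre>{
  &quot;Id&quot;: &quot;rachel-test-inventory&quot;,
  &quot;IsEnabled&quot;: true,
  &quot;IncludedObjectVersions&quot;: &quot;Current&quot;,
  &quot;Schedule&quot;: { &quot;Frequency&quot;: &quot;Weekly&quot; },
  &quot;Destination&quot;: {
    &quot;S3BucketDestination&quot;: {
      &quot;Bucket&quot;: &quot;arn:aws:s3:::my-inventory-bucket&quot;,
      &quot;Prefix&quot;: &quot;s3-inventories&quot;,
      &quot;Format&quot;: &quot;CSV&quot;
    }
  }
}</pre><p>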
Great!</p><p>The <a href="http://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html#storage-inventory-location" rel="noopener ugc nofollow" target="_blank">documentation states where the inventory output goes</a>:</p><pre><span><em>destination-prefix</em>/<em>source-bucket</em>/<em>config-ID</em>/<em>YYYY-MM-DDTHH-MMZ</em>/manifest.json<br/><em>destination-prefix</em>/<em>source-bucket</em>/<em>config-ID</em>/<em>YYYY-MM-DDTHH-MMZ</em>/manifest.checksum</span></pre><p>And for the sake of brevity, let&#x27;s cut to the chase: if you end your prefix with a “/” (either accidentally, or because like me you think you&#x27;re being smart whereas in fact you simply haven&#x27;t read the docs — good going, Rach), then due to a bug in the S3 Inventory service, your inventory will not be usable.</p><p>Specifically, I ended up with objects in S3 with keys like this:</p><pre>s3-inventories//media/rachel-test-inventory/data/6eabc318-5ee0-41d9-b32b-a12b40a6f271.csv.gz
s3-inventories//media/rachel-test-inventory/data/b7dff5ea-c83d-4879-bc2a-0d0ced298356.csv.gz</pre><p>whereas the manifest I got contained this (line breaks added for clarity):</p><pre>{
  &quot;files&quot;: [
    {
      &quot;key&quot;: &quot;s3-inventories/media/rachel-test-inventory/
                data/6eabc318-5ee0-41d9-b32b-a12b40a6f271.csv.gz&quot;,
      &quot;size&quot;: 16486333,
      &quot;MD5checksum&quot;: &quot;3c94f6eed1fc3c2d057c098f355afffc&quot;
    },
    {
      &quot;key&quot;: &quot;s3-inventories/media/rachel-test-inventory/
                data/b7dff5ea-c83d-4879-bc2a-0d0ced298356.csv.gz&quot;,
      &quot;size&quot;: 20147436,
      &quot;MD5checksum&quot;: &quot;f0b39e0d85f0f5fb11bc5be73ecc26cf&quot;
    }
  ]
}</pre><p>The problem being that those double-slashes in the keys have become single slashes. On a Linux-ish filesystem, this would make no difference; on S3, it makes all the difference. The keys given in the manifest simply do not exist.</p><p>tl;dr: There&#x27;s a bug in the S3 Inventory service which means that manifests are broken if the destination prefix ends with “/”. Solution: don&#x27;t end your destination prefixes with “/”.</p></div>]]></content:encoded></item><item><title><![CDATA[SQL report fun]]></title><description><![CDATA[This is just a little anecdote. Don't expect it to be deep or meaningful. I don't know, maybe there's something in here about feedback loops or focussing development effort in the right place, whatever.]]></description><link>https://rachelevans.org/blog/sql-report-fun/</link><guid isPermaLink="false">f2be42fbfad</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Wed, 09 Dec 2015 23:55:54 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2015/12/tractor-feed-printer.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2015/12/tractor-feed-printer.jpg" alt="SQL report fun"><p>This is just a little anecdote. Don&#x27;t expect it to be deep or meaningful. I don&#x27;t know, maybe there&#x27;s something in here about feedback loops or focussing development effort in the right place, whatever.</p><p>Way back when, when I was just about fresh out of university and in my first programming job (1996), I got a bit of a reputation at being quite good at making our database and its applications go fast, including SQL query optimisation. This was using Ingres databases, hosted on VMS.</p><p>One day I was asked to investigate this particular stock report (I even remember the report id, “AGR48”, though I confess I have no idea now what the report was actually for). 
The report was too slow, I was told.</p><p>So I dug.</p><p>The report consisted essentially of a sequence of 8 giant SQL statements, each of which was quite slow. Lots of creation of temporary tables and the like. I think I rewrote the report and did make it faster, but that&#x27;s not the point.</p><p>The point is: the report took about 4 hours; but due to a bug in the eighth and final SQL statement, the report output was always empty.</p><p>This report was effectively a four-hour no-op.</p><p>Unsurprisingly, the users didn&#x27;t use that report.</p>]]></content:encoded></item><item><title><![CDATA[Save money and be tidy with s3-upload-cleaner]]></title><description><![CDATA[Amazon Web Services (AWS) S3 is a popular, highly-scalable object storage service. It's used by a lot of big companies, including the one I work for. But it's very easy gradually to accumulate billable "invisible" storage.]]></description><link>https://rachelevans.org/blog/save-money-and-be-tidy-with-s3-upload-cleaner/</link><guid isPermaLink="false">7043b8b5332e</guid><category><![CDATA[technology]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Tue, 01 Dec 2015 12:56:02 GMT</pubDate><content:encoded><![CDATA[<p>Amazon Web Services (AWS) S3 is a popular, highly-scalable object storage service. It&#x27;s used by a lot of big companies, <a href="http://www.computerweekly.com/news/2240219866/Case-study-How-the-BBC-uses-the-cloud-to-process-media-for-iPlayer" rel="noopener ugc nofollow">including the one I work for</a>.</p><p>Getting data — especially large files — into S3 uses a mechanism called Multipart Uploads. For example, to upload a multi-gigabyte file to S3, you might make a sequence of calls like so:</p><ol><li>CreateMultipartUpload</li><li>UploadPart (1 .. n times)</li><li>CompleteMultipartUpload</li></ol><p>On the “complete” call, S3 assembles your parts together to form a single object, which then appears in the bucket. 
Or, you can call “AbortMultipartUpload” to abandon it, and throw away the parts.</p><p>So what&#x27;s the catch?</p><p>The catch is that it&#x27;s very easy to forget to ever call either CompleteMultipartUpload or AbortMultipartUpload. And if you neither complete nor abort the upload, then any parts you have uploaded just sit around in S3, waiting. Forever. It&#x27;s relatively hard to <em>see</em> those parts, mind — they don&#x27;t show up in the regular bucket listing. But they are there, and they are costing you money.</p><p>So what&#x27;s the solution?</p><p>Enter <code>s3-upload-cleaner</code>. Simply put, it scans your buckets looking for stale (that is, started a long time ago) incomplete multipart uploads — the premise being, if you haven&#x27;t completed an upload after, say, a week, then you never will — and aborts them. Thus, periodically running s3-upload-cleaner keeps your account&#x27;s multipart uploads under control, and helps keep your bill down.</p><p>(I&#x27;m a little surprised that this isn&#x27;t a native feature of S3, and to be honest, I expect that one day, it will be.)</p><p>Here it is running for a single bucket, and finding nothing to clean:</p><pre>$ sudo apt-get install nodejs npm
$ npm install s3-upload-cleaner aws-sdk
$ export AWS_ACCESS_KEY_ID=…
$ export AWS_SECRET_ACCESS_KEY=…
$ nodejs ./node_modules/s3-upload-cleaner/example/minimal.js
Running cleaner
Clean bucket my-bucket-name
Bucket my-bucket-name is in location eu-west-1
Bucket my-bucket-name is in region eu-west-1
Running cleaner for bucket my-bucket-name
$</pre><p>The code comes with a minimal bootstrap script, though you are encouraged to use your own if you wish.</p><p>To call out a few of its features:</p><ul><li>it&#x27;s multi-region aware (it will attempt to process all of your buckets, no matter what region they are in);</li><li>it can be configured to process only some buckets, or only some regions, or only some keys;</li><li>the threshold for what counts as “stale” is configurable — the minimal bootstrap script uses 1 week as the cutoff age;</li><li>when a stale upload is found, it emits logging data in json form;</li><li>it can be run in “dry run” mode, where all the scanning and logging is performed, but the abort itself is not.</li></ul><p>Finally, here&#x27;s an example of one of its log entries:</p><pre>[
  {
    &quot;event_name&quot;: &quot;s3uploadcleaner.clean&quot;,
    &quot;event_timestamp&quot;: &quot;1448495889.529&quot;,
    &quot;bucket_name&quot;: &quot;my-bucket-name&quot;,
    &quot;upload_key&quot;: &quot;bigfile.mpg&quot;,
    &quot;upload_initiated&quot;: &quot;1447888220000&quot;,
    &quot;upload_storage_class&quot;: &quot;STANDARD&quot;,
    &quot;upload_initiator_id&quot;: &quot;arn:aws:iam::123456789012:user/SomeUser&quot;,
    &quot;upload_initiator_display&quot;: &quot;SomeUser&quot;,
    &quot;part_count&quot;: &quot;135&quot;,
    &quot;total_size&quot;: &quot;2831189760&quot;,
    &quot;dry_run&quot;: &quot;true&quot;
  }
]</pre><p>s3-upload-cleaner typically only takes a few seconds to run, and doesn&#x27;t need to be run very often, so this makes it perfect to run via a scheduled AWS Lambda function.</p><p>You can find the <a href="https://github.com/rvedotrc/node-s3-upload-cleaner" rel="noopener ugc nofollow">code on github</a> and the <a href="https://www.npmjs.com/package/s3-upload-cleaner" rel="noopener ugc nofollow">package on npm</a>.</p>]]></content:encoded></item><item><title><![CDATA[Cisco and their IPv6 DNS]]></title><description><![CDATA[cisco.com is now resolvable over IPv6. But something is still amiss..]]></description><link>https://rachelevans.org/blog/cisco-and-their-ipv6-dns/</link><guid isPermaLink="false">c2ec0668d672</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Sat, 14 Mar 2015 12:12:22 GMT</pubDate><content:encoded><![CDATA[<p>Following on from “<a href="https://rachelevans.org/blog/on-dns-and-ipv6/">On DNS and IPv6</a>” I happened to check up on cisco.com&#x27;s setup again.</p><blockquote><p>$ dig @$( dig +short ns com. | head -n1 ) aaaa cisco.com.<br/>; &lt;&lt;&gt;&gt; DiG 9.8.3-P1 &lt;&lt;&gt;&gt; @k.gtld-servers.net. aaaa cisco.com.<br/>;; AUTHORITY SECTION:<br/>cisco.com. 172800 IN NS ns1.cisco.com.<br/>cisco.com. 172800 IN NS ns2.cisco.com.<br/>cisco.com. 172800 IN NS ns3.cisco.com.</p><p>;; ADDITIONAL SECTION:<br/>ns1.cisco.com. 172800 IN A 72.163.5.201<br/>ns2.cisco.com. 172800 IN A 64.102.255.44<br/>ns3.cisco.com. 172800 IN A 173.37.146.41<br/>ns3.cisco.com. 172800 IN AAAA 2001:420:1101:6::a<br/>ns3.cisco.com. 172800 IN AAAA 2001:420:1201:7::a<br/>ns3.cisco.com. 172800 IN AAAA 2001:420:2041:5000::a</p></blockquote><p>Since last time, Cisco have added a third nameserver, “ns3.cisco.com”, and this server has three IPv6 addresses (and an IPv4 address). So cisco.com is now resolvable to IPv6-only clients, via this nameserver. Hooray!</p><p>But hang on. Those IPv6 addresses look familiar. 
Two of them are the same addresses as we saw back in January, in the earlier article.</p><h2>Identity ambiguity</h2><p>For the IPv4 setup, everything matches as you&#x27;d expect: the glue records name the three nameservers, and each nameserver has an IPv4 address, and if you do a reverse lookup on those addresses (PTR), you get the names again:</p><blockquote><p>$ dig +short -x 72.163.5.201<br/>ns1.cisco.com.<br/>$ dig +short -x 64.102.255.44<br/>ns2.cisco.com.<br/>$ dig +short -x 173.37.146.41<br/>ns3.cisco.com.</p></blockquote><p>However, for the IPv6 setup, ns1 and ns2 don&#x27;t have an IPv6 address (according to the glue records), but according to the nameservers they <em>do</em>, and all six nameservers (three IPv4, three IPv6) agree that ns1 is 2001:420:1101:6::a, ns2 is 2001:420:2041:5000::a, and ns3 is 2001:420:1201:7::a.</p><p>Reverse lookups of the three IPv6 addresses show that they map to ns1, ns2, and ns3 too:</p><blockquote><p>$ for a in $( dig @$( dig +short ns com. | head -n1 ) ns cisco.com. | egrep -w &#x27;A|AAAA&#x27; | awk &#x27;{print $5}&#x27; ) ; do echo $( dig +short -x $a ) $a ; done<br/>ns1.cisco.com. 72.163.5.201<br/>ns2.cisco.com. 64.102.255.44<br/>ns3.cisco.com. 173.37.146.41<br/>ns1.cisco.com. 2001:420:1101:6::a<br/>ns3.cisco.com. 2001:420:1201:7::a<br/>ns2.cisco.com. 2001:420:2041:5000::a</p></blockquote><p>So, basically, Cisco have accidentally named all three nameservers <em>in the glue records</em> as ns3.</p><h2>So what?</h2><p>Is that a problem? Well, a tiny one. It does mean there are differing opinions out there on the net as to what the addresses of the three servers are, which could cause confusion, especially when debugging problems.
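The disagreement between the parent&#x27;s glue and the zone&#x27;s own records can be sketched as a small check. This is a rough illustration in Python, not a real DNS tool: the function name is mine, and the data is transcribed from the dig output quoted above.

```python
# Sketch: spot nameservers whose AAAA glue (as served by the parent
# zone) disagrees with the child zone's own AAAA records.
# Data transcribed from the dig responses quoted in this post.

def find_glue_mismatches(glue, zone):
    """Return {name: (glue_addrs, zone_addrs)} where the two disagree."""
    mismatches = {}
    for name in sorted(set(glue) | set(zone)):
        g = set(glue.get(name, []))
        z = set(zone.get(name, []))
        if g and g != z:
            mismatches[name] = (sorted(g), sorted(z))
    return mismatches

# AAAA glue according to the .com servers: all three addresses on ns3.
glue = {
    "ns3.cisco.com": [
        "2001:420:1101:6::a",
        "2001:420:1201:7::a",
        "2001:420:2041:5000::a",
    ],
}

# AAAA records according to the cisco.com zone itself.
zone = {
    "ns1.cisco.com": ["2001:420:1101:6::a"],
    "ns2.cisco.com": ["2001:420:2041:5000::a"],
    "ns3.cisco.com": ["2001:420:1201:7::a"],
}

print(find_glue_mismatches(glue, zone))
# Only ns3 is flagged: the parent says it has three addresses,
# while the zone says it has one.
```

A check along these lines, run against both the parent's referral and the zone itself, would have surfaced the mis-labelled glue records immediately.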
In theory, ns1 and ns2 could appear to have an IPv6 address one moment, and not the next; and ns3 could sometimes have three addresses, and sometimes only one.</p><p>The worst case is that Cisco <em>might</em> find that most DNS lookups (over IPv6) for cisco.com end up going to 2001:420:1201:7::a (ns3).</p><p>(A resolver <em>might</em> find that the only nameserver is “ns3”, find “all three” IPv6 addresses of ns3, then ask the cisco nameservers the same question (“what is the IPv6 address of ns3?”) and then get only one result; thus, there&#x27;s only one nameserver, and it has only one IPv6 address).</p><p>It&#x27;ll probably work. It <em>might</em> compromise resiliency.</p><p>Of course, the fix is easy: just fix the glue records to read “ns1, ns2, ns3” instead of “ns3” three times.</p><p>Lesson: remember to take care when changing DNS settings, everyone :-)</p>]]></content:encoded></item><item><title><![CDATA[Diversity at QCon London 2015]]></title><description><![CDATA[Diversity in the technology sector remains a challenge, with much work to do. How did QCon London 2015 measure up?]]></description><link>https://rachelevans.org/blog/diversity-at-qcon-london-2015/</link><guid isPermaLink="false">bc9eec2ad59f</guid><category><![CDATA[Denmark]]></category><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Sun, 08 Mar 2015 11:20:45 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2015/03/qcon-audience-empty-stage.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2015/03/qcon-audience-empty-stage.jpg" alt="Diversity at QCon London 2015"><p>Last week I attended <a href="http://qconlondon.com/" rel="noopener ugc nofollow">QCon London</a>, a “Conference for Professional Software Developers” run by <a href="http://www.infoq.com/" rel="noopener ugc nofollow">InfoQ</a>.
Three days of keynotes, presentations, facilitated discussion, and general open mingling with other delegates and speakers.</p><p>It&#x27;s the first time I&#x27;ve been to this conference. I went because last year my colleague Stephen went, and I could tell from his experience there, and from the online videos of the talks published later, that this was worth going to. So I signed up for this year at the earliest opportunity.</p><p>It&#x27;s well known that the <a href="https://www.google.co.uk/search?q=technology+diversity&amp;tbm=nws" rel="noopener ugc nofollow">technology sector has a diversity problem</a>. Go to almost any technology event in Europe / North America, and you&#x27;ll see overwhelmingly <a rel="noopener" href="https://rachelevans.org/blog/amazon-web-services-fails-at-diversity/">white male faces</a>.</p><p>Today is <a href="http://www.internationalwomensday.com/" rel="noopener ugc nofollow">International Women&#x27;s Day</a> 2015: so what have I learnt of the diversity aspect of QCon, and about the inclusion of women in particular?</p><p>Do I expect I&#x27;ll go again next year?</p><h2>The speakers</h2><p>For those used to technical conferences, you might be interested in taking a quick look at the diversity on show amongst the <a href="http://qconlondon.com/speakers" rel="noopener ugc nofollow">speakers at QCon London</a> this week.</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/03/qcon-speakers.jpg" class="kg-image" alt="Screenshot of part of QCon&#x27;s web site, showing 15 speakers, with headshots, names, and 1-line summary.
The faces show a variety of skin colours and gender expressions."/><figcaption>A randomly-chosen subset of the speakers from the 2015 QCon speaker list</figcaption></figure><p>When I checked and did a rough count, there were 123 speakers listed, and quite a range of faces.</p><p>Or, so it might appear at first glance.</p><p>I very roughly counted males and females, white and non-white. By my count, this year&#x27;s QCon speaker line-up included:</p><ul><li>3 (2.4%) non-white women;</li><li>5 (4.1%) non-white men;</li><li>24 (19.5%) white women;</li><li>91 (74.0%) white men.</li></ul><p>Only 74% white men? For the technology sector, that&#x27;s actually pretty good! So, that&#x27;s like, 26% of “doing diversity”!</p><p>Make no mistake: it&#x27;s a lot better than most conferences, by which I mean that it&#x27;s closer to being more representative of the population as a whole.</p><p>But that&#x27;s still <strong>78% male</strong>, and the world simply isn&#x27;t like that. (In case it needs saying: it is, of course, 50% female, 50% male, give or take). And this lineup, while being more diverse than we&#x27;ve come to expect, is still <strong>93.5% white,</strong> compared to 87% in the UK population as a whole.</p><p>Which doesn&#x27;t sound so bad, but does mean that if you&#x27;re attending QCon, and you&#x27;re non-white, and hoping to see non-white speakers, then instead of the 16-ish speakers you should expect to see, you in fact see only 8; only half of what it should be.</p><p><strong>78% male. 93.5% white.</strong></p><p>In this sector, however, 22% female counts as so far above the norm that it won&#x27;t have happened by accident, which implies that the organisers took deliberate steps to include more women, which implies that they care about diversity — which is great. 
But at the same time we all need to recognise that it&#x27;s still not enough, and we should demand more.</p><h2>The audience</h2><p>The organisers don&#x27;t collect the diversity profile of the audience, so that information is harder to assess quantitatively.</p><p>I tried two different (but similar) ways of measuring the audience diversity, and I admit, both methods are highly unscientific. Firstly, as I sat waiting for the keynote, I just looked around at the people sat nearest to me. Secondly, I took a photo of a larger area of the audience, and counted faces later.</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/03/qcon-audience-dots.png" class="kg-image" alt="Anonymised representation of the audience. White dots and red dots on a black background."/><figcaption>(Partial) QCon keynote audience. Red dot = white male, white dot = anyone else.</figcaption></figure><p>Doing a quick sample of the 50 people sat around me while we waited for Friday&#x27;s keynote, it looks like about 16% female, 84% male. Which again, is a <em>long</em> way away from 50/50, and it&#x27;s notably even more skewed than the speaker line-up.</p><p>As for white / non-white: again, based on the people sat around me: roughly 88% white, 12% non-white. So actually, on this highly unscientific sample, this one&#x27;s pretty much on-target, matching the figures for the UK population as a whole.</p><p>The count of faces in the photo came to 82 white male, 5 non-white male, 2 white female, 3 non-white female. That is, 95% male, 91% white.</p><h2>On inclusion</h2><p>I noticed before the conference started that <a href="http://qconlondon.com/code-conduct" rel="noopener ugc nofollow">QCon publishes a code of conduct</a>, and it&#x27;s nice and clear, concise, and good to see. 
And with one exception, I neither witnessed nor heard of any harassment or anything else that would violate the code of conduct.</p><p>We&#x27;ll come to the exception later.</p><p>People with special mobility requirements were pretty well catered for (but it was hard to work out how to get to the 6th floor). Several people noted how the speakers would be using “she” or “her” more, to refer to actors in their stories (such as: “Your CTO says &lt;x&gt; and she knows what she&#x27;s talking about…”), instead of just thoughtlessly going with male as the default. Speakers&#x27; slides would include little stick figures to represent people — and those stick figures were often women.</p><p>All in all, a huge step (compared to tech industry norms) in the right direction. Sadly, the free tee-shirts being given away by the sponsors were obviously not made for women.</p><p>And, then… the exception.</p><h2>On-stage transphobia and its effects</h2><p>Just before the keynote on the last day, one of the track hosts, John T Davies, made transphobic remarks whilst on stage. It only took a few seconds, but in that moment, much of the good work that QCon had done was very quickly undone: I no longer felt completely welcome, or safe, or included. I felt threatened. At risk.</p><div class="tweet"><div class="tweet-author"><div class="tweet-author-words"><div class="tweet-author-name">Steve Marshall</div></div></div><div class="tweet-text">Incredibly disappointing to hear a track host (@jtdavies) be transphobic on stage at #qconlondon. 
+@qconlondon</div><div class="tweet-footer">Mar 6, 2015</div></div><p>During the next presentation, I saw the official twitter account <a href="https://twitter.com/qconlondon/status/573795750481641472" rel="noopener ugc nofollow">post an apology</a>.</p><div class="tweet"><div class="tweet-author"><div class="tweet-author-words"><div class="tweet-author-name">QCon London</div><div class="tweet-author-handle">qconlondon</div></div></div><div class="tweet-text">We officially apologize that our code of conduct was violated this morning on stage. #qconlondon #qcon</div><div class="tweet-footer"><a href="https://twitter.com/qconlondon/status/573795750481641472" rel="noopener ugc nofollow">11:42 AM · Mar 6, 2015</a></div></div><p>In the mid-morning break, I went to speak to the organisers on a different matter, but ended up talking to them about the on-stage comment. I spoke to Silke D&#x27;Alessandro, and we were joined by Floyd and Roxanne, founders of InfoQ, and Nitin Bharti; and it was very reassuring to see their obvious concern over the incident, and to see their commitment to diversity and inclusion. They said that they&#x27;d spoken to Mr Davies about the incident, and that ahead of his own talk this afternoon, he&#x27;d make an on-stage apology.</p><p>This conversation was great to have, but at the same time, it&#x27;s not why I came to QCon: I was here for the conference. So I felt frustrated that, because of what was said on stage, I felt unable to participate in this part of the conference, because of this unwanted distraction.</p><p>I joined the next session late (having missed the first half); had lunch, ate alone, my head still full of these distractions. I went for a walk outside, just to forget this, to be an anonymous tourist for half an hour.</p><p>Back at the conference for the afternoon, and the next two talks went well. 
For the third afternoon slot, I popped along to see Mr Davies give his on-stage apology (before I would then quickly switch rooms to go to the talk I actually wanted to hear). Unfortunately, for whatever reason, his apology came across as diminishing and insincere.</p><div class="tweet"><div class="tweet-author"><div class="tweet-author-words"><div class="tweet-author-name">Nov 19th, actually</div></div></div><div class="tweet-text">Very disappointing non-apology from @jtdavies at #qconlondon: &quot;a little joke can offend people&quot;. Frankly you might as well not &gt; #qconlondon</div><div class="tweet-footer">Mar 6, 2015</div></div><div class="tweet"><div class="tweet-author"><div class="tweet-author-words"><div class="tweet-author-name">Nov 19th, actually</div></div></div><div class="tweet-text">have apologised at all. Listen, understand *why* you need apologise, then be sincere. I think @jtdavies just failed on all 3 #qconlondon</div><div class="tweet-footer">Mar 6, 2015</div></div><p>I went to talk to the organisers again, to point out that the apology was not good enough. In fact, the apology itself was harmful. And all credit to them: once again, Floyd, Roxanne and Silke said all the right things, completely understood the problem, and said that they&#x27;d be taking further action.</p><p>I thanked them, and left. But by this point I was furious. I went to hide, to vent, to calm down. As a result of just a few seconds of offensive content on stage this morning, I was missing a significant proportion of the conference.</p><p>I managed to catch the last talk of the day, and then, there was just one more item on the schedule: “Meet the speakers”, where the conference hosts and speakers are encouraged to mingle with the other delegates, and chat, share ideas, be creative. And it occurs to me: one of those other speakers is Mr Davies. Am I ready for that?</p><p>I seriously consider just going home. After all, there are no more talks.
I could just slip away: many other people are doing so. I could too.</p><p>But I shouldn&#x27;t have to. I came for the conference. This is why I&#x27;m here.</p><h2>Meeting the speakers — and more</h2><p>So I went. I grabbed a beer. I mingled.</p><p>Unlike the other “networking opportunities” — the coffee breaks, lunch, and so forth — where the mingling seems quite random, here it seemed far from random.</p><p>A lovely lady named Vanessa came up to me, to talk about this morning&#x27;s incident. We compared notes about this conference, and past conferences, and how women are treated in the industry.</p><p>Then Roy Rapoport (of Netflix; this morning&#x27;s keynote speaker) found me, and again, we&#x27;re talking about the transphobia, comparing notes. About how he heard the comments, just before he was due to go on stage, so he has to choose: make reference to it on stage, or act as if nothing happened? (He chose the latter, and I don&#x27;t blame him). About how he felt offended by the remarks too.</p><p>Then Floyd Marinescu (InfoQ) again, this time asking if I&#x27;m prepared to meet with Mr Davies. I know it could well be constructive to do so; and I know it&#x27;s also fine if I say no. Even as I weigh up the decision, right there and then, I can feel the emotion, the anger, the frustration, the fear welling up in me: I&#x27;m still far too emotional about it, and so I decline.</p><p>Then I do some proper mingling: back to what the conference is <em>meant</em> to be about. At last.</p><p>As things are thinning out, I met with someone — who I shan&#x27;t name — involved with organising the conference, who offered an opinion on Mr Davies&#x27; prospects of being invited back again. Enough said.</p><p>Then, finally, just as I&#x27;m leaving, another lady (whose name I didn&#x27;t get) comes up to me, and again, it&#x27;s to ask about the incident this morning.
So we chat, I do my best to explain what happened, and about some of the following chain of events.</p><p>And then, the conference is over. Finally, it&#x27;s time to go home.</p><h2>Fallout</h2><p>InfoQ&#x27;s commitment to diversity at QCon is clear, and is to be congratulated. I wish that more conferences and events were like this.</p><p>But, transphobia was on stage. On show.</p><p><strong>Make no mistake: incidents like this are <em>exactly</em> why diversity struggles in STEM fields. It </strong><a href="http://azdailysun.com/business/national-and-international/why-are-women-leaving-the-tech-industry-in-droves/article_82c3cfa2-bdee-5cff-bba1-29ff5abdbaf0.html" rel="noopener ugc nofollow"><strong>drives people away</strong></a><strong>.</strong></p><p>What frustrates me so much about this is the uneven effect that such incidents have: I&#x27;m assuming that of the 1400 or so people attending, approximately 1399 of them didn&#x27;t then spend a significant proportion of the day <em>not</em> focusing on the conference because of this.</p><p><a href="http://whatever.scalzi.com/2012/05/15/straight-white-male-the-lowest-difficulty-setting-there-is/" rel="noopener ugc nofollow">Straight White Male is the lowest difficulty setting there is</a>.</p><p><strong>The closer you are to being a cis straight white male — the more the existing biases of the technology sector pander to <em>you </em>— then the less likely you are to find something that offends you, the more likely you can just get on with your job.</strong></p><p>So the transphobia, on the whole, won&#x27;t have upset non-trans people, and they&#x27;d have been able to get on with their day as normal. 
In fact, they might not have even noticed that there was a problem.</p><p>In contrast, as a direct result of the incident on stage, I missed a huge chunk of two talks, was greatly distracted from the others, missed several of the breaks and other opportunities to network, and even the “meet the speakers” mostly consisted of people wanting to talk to <em>me</em> about the transphobia — not about technology, which is nominally what we&#x27;re all there for.</p><h2>What next?</h2><p>The diversity at QCon, compared to the industry as a whole, was good, so it&#x27;s clear that InfoQ are trying. At the same time, I believe they can try harder: 78% male is still too high, and they are in a position to change that.</p><p>I&#x27;ll admit, I was disappointed to see so few people speak up about the transphobia. <strong>If you see it, call it out. Speak up. Take action.</strong> Even if the offence isn&#x27;t directly <em>at</em> you, it <em>affects</em> you, because it negatively affects diversity, and we all know that diversity — that is, having the make-up of the people in the industry reflect the make-up of the population as a whole — is a good thing.</p><p>Will I go back to QCon in future? Probably. The conference was well-run, with good content, and InfoQ&#x27;s commitment to diversity is clear to see, not only from the speaker line-up, but also from their reaction to the unfortunate event of Friday morning.</p><p>But in future, can we get women&#x27;s tee-shirts too? That&#x27;d be just great :-)</p><hr/><p>Update, Monday 9th March: On Friday, Floyd sent me Mr Davies&#x27; written apology, asking for my thoughts. 
I&#x27;m sad to say that this written apology was deeply troubling in its tone — &quot;completely unacceptable&quot; is I think also accurate.</p><hr/><p><i>March 8th is <a href="http://www.internationalwomensday.com/" rel="noopener ugc nofollow">International Women&#x27;s Day</a>, celebrating the achievements of <a href="http://www.advocate.com/politics/transgender/2014/03/07/googles-international-womens-day-doodle-includes-trans-women" rel="noopener ugc nofollow">all women</a> and calling for greater equality. Together we can <a href="https://twitter.com/search?q=%23makeithappen&amp;src=typd" rel="noopener ugc nofollow">make it happen</a>.</i></p><hr/><p class="imageCredit">Source for the speaker list: the QCon web site</p>]]></content:encoded></item><item><title><![CDATA[On DNS and IPv6]]></title><description><![CDATA[In 2012, we had ”World IPv6 Launch Day.” What was it all about?]]></description><link>https://rachelevans.org/blog/on-dns-and-ipv6/</link><guid isPermaLink="false">9d0638091e67</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Wed, 14 Jan 2015 20:23:32 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2015/01/internet-connectivity.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2015/01/internet-connectivity.jpg" alt="On DNS and IPv6"><p>Recently I&#x27;ve been repaying some <a href="http://en.wikipedia.org/wiki/Technical_debt" rel="noopener ugc nofollow" target="_blank">tech debt</a> that I had built up at home. I&#x27;ve been sorting out old files, arranging automated off-site backups, committing uncommitted files to git (including finally getting myself some private git hosting), simplifying config, upgrading Debian, upgrading my Internet connection, and adding IPv6 support to my home network. It&#x27;s been a fun few weeks, actually.</p><p>The last one is interesting: adding IPv6. 
What does “add IPv6” really mean, and why should we do it?</p><h2>Say “AAAA”</h2><p>Assuming you&#x27;ve already got a working IPv4 setup, then adding IPv6, at its simplest level, is:</p><ul><li>ensure each host or service has an IPv6 address;</li><li>ensure those IPv6 addresses are discoverable via <a href="http://en.wikipedia.org/wiki/Domain_Name_System" rel="noopener ugc nofollow" target="_blank">DNS</a>, by adding “AAAA” records.</li></ul><p>Back in June 2011, there was “World IPv6 Day”: some big names (Google, Facebook, and quite a few others) temporarily added IPv6 addresses to their systems, and added “AAAA” records to DNS: just for a day, to see what happened. In essence, it was successful: some of their visitor traffic was carried over IPv6, and nothing broke. Then, in June 2012, there was “World IPv6 Launch Day”: basically the same again, but with more participants (e.g. Wikipedia), and this time the configuration wasn&#x27;t removed afterwards: it was left in for good.</p><p>Because I&#x27;ve recently been adding IPv6 to my little part of the Internet, I&#x27;ve been repeating the exercise myself. Once I&#x27;d got myself an IPv6 allocation, and my home network was handing out IPv6 addresses, I could “ping6 google.com”, for example. I&#x27;d solved the first part: adding addresses. Now for the second: discoverability in DNS.</p><h2>Servers and Nameservers</h2><p>First, the easy part: each of my servers has an “A” (IPv4) record in my DNS records, so that the server can be found without having to remember IPv4 addresses. So all I had to do was add corresponding “AAAA” (IPv6) records. One quick visit to my provider&#x27;s DNS control panel later, and that&#x27;s that job done: I can now resolve my server name to an IPv6 address, and thus connect to it. So, for example, “ssh -6 myserver.rachelevans.org.uk.” now works.</p><p>This is where things start to go downhill a bit.</p><p>What did we just do here?
I allowed my server to be addressed via IPv6, and I added its IPv6 address to DNS.</p><p>Does that mean that, if I had an IPv6 Internet connection and <em>didn&#x27;t</em> have an IPv4 connection, I could connect to my server? <strong>No, it doesn&#x27;t.</strong> At this point, I&#x27;m <strong>still dependent on IPv4.</strong></p><p>To understand why, we need to understand a little more about how DNS works.</p><h2>Zones and Referrals</h2><p>To resolve a computer name to an IP address, we use DNS. It&#x27;s a distributed system, whereby lookups start at the “root” zone, and each zone can delegate authority to sub-zones. For example, “.com” is a sub-zone of the root, and “example.com” is a sub-zone of “.com”. Each zone has a set of nameservers.</p><p>Fundamentally, the problem is this: for DNS lookups over IPv6 to be successful, each zone has to provide at least one nameserver with an IPv6 address, and those IPv6 addresses have to be in DNS — otherwise that zone will be inaccessible to the IPv6 Internet.</p><p>(Exactly the same thing is true of IPv4, by the way: if none of a zone&#x27;s nameservers have IPv4 addresses, that zone won&#x27;t be accessible to the IPv4 Internet. However, so far everyone seems to get it right for IPv4, so it&#x27;s less interesting).</p><p>So for our web site to be reachable by clients purely via IPv6, what do we have to do?</p><p>Say our web site is www.example.com, and our DNS zone is <em>example.com</em>. So we have to:</p><ul><li>give our server an IPv6 address;</li><li>add an entry mapping its name to its IPv6 address as an “AAAA” record, in our DNS zone;</li><li>ensure our nameservers are reachable via IPv6.</li></ul><p>How exactly do we do the last step? Simple: we repeat the whole process again.
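That recursion can be sketched as a toy model. All the zone data and function names below are hypothetical, for illustration only: the idea is that a zone is usable over IPv6 only if its parent is, and at least one of its nameservers both has an AAAA record and has a name that is itself discoverable over IPv6 (either via glue in the parent, or by resolving the zone its name lives in).

```python
# Toy model of the recursive IPv6 dependency described above.
# parents: zone -> parent zone; nameservers: zone -> [(ns_name, has_aaaa)];
# home_zone: ns_name -> zone that the nameserver's *name* lives in.
# All names and data here are hypothetical, for illustration only.

def v6_resolvable(zone, parents, nameservers, home_zone, seen=()):
    """True if an IPv6-only client could resolve names in `zone`."""
    if zone == ".":
        return True  # assume the root servers are reachable over IPv6
    if zone in seen:
        return False  # circular dependency with no glue to break it
    seen = seen + (zone,)
    # We need the parent zone first, to obtain the referral.
    if not v6_resolvable(parents[zone], parents, nameservers, home_zone, seen):
        return False
    # Then at least one nameserver must have an AAAA record, and its own
    # name must be discoverable: in-bailiwick names are covered by glue.
    return any(
        has_aaaa
        and (home_zone[ns] == zone  # glue record in the parent zone
             or v6_resolvable(home_zone[ns], parents, nameservers,
                              home_zone, seen))
        for ns, has_aaaa in nameservers[zone]
    )

parents = {"com": ".", "co": ".", "example.com": "com", "big.hosting.co": "co"}
nameservers = {
    "com": [("a.gtld-servers.net", True)],
    "co": [("ns.registry.co", True)],               # in-bailiwick, glued
    "big.hosting.co": [("ns0.big.hosting.co", True)],  # in-bailiwick, glued
    "example.com": [("ns1.big.hosting.co", True),
                    ("ns2.big.hosting.co", True)],
}
home_zone = {
    "a.gtld-servers.net": ".",  # modelled as root-hosted, for simplicity
    "ns.registry.co": "co",
    "ns0.big.hosting.co": "big.hosting.co",
    "ns1.big.hosting.co": "big.hosting.co",
    "ns2.big.hosting.co": "big.hosting.co",
}

print(v6_resolvable("example.com", parents, nameservers, home_zone))  # True

# If the hosting company's own nameserver loses its AAAA record,
# example.com silently becomes unreachable to IPv6-only clients,
# even though nothing in the example.com zone itself changed.
nameservers["big.hosting.co"] = [("ns0.big.hosting.co", False)]
print(v6_resolvable("example.com", parents, nameservers, home_zone))  # False
```

The second result is the whole point: the break happens in a zone you don&#x27;t control.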
Say for example the nameservers of the <em>example.com</em> zone are <em>ns1.big.hosting.co</em> and <em>ns2.big.hosting.co</em> — we repeat the process for those two servers (or, more typically, we have to talk to the <em>big.hosting.co</em> in question to check that they have already done this).</p><p>And then, in turn, <em>big.hosting.co</em> itself has nameservers, and the whole process repeats.</p><h2>An example</h2><p>Back to what I was doing over Christmas: I was adding IPv6 support to my network, and then adding the IPv6 “AAAA” records to DNS. But, there&#x27;s no point doing that unless the nameservers are also reachable via IPv6, which got me to thinking about this whole area.</p><p>So I wrote a tool I called “<a href="https://github.com/rvedotrc/dns-checker" rel="noopener ugc nofollow" target="_blank">dns-dependency-walker</a>”: it analyses the DNS hosting of a given domain, and finds its dependencies, and analyses <em>them</em>, and so forth. It then draws a directed acyclic graph illustrating the relationships between the various zones and nameservers involved.</p><p>For example, here&#x27;s the picture for <em>rachelevans.org.uk</em>:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-rachelevans.png" class="kg-image" alt=""/></figure><p>(Nameservers or zones that are not reachable by IPv6 are shaded in grey. See my note to the side of the diagram for a detailed explanation.)</p><p>My domain is <em>rachelevans.org.uk</em>, so I depend upon the parent zones, <em>org.uk</em>, <em>uk</em>, and the root. The nameservers of my zone are in the domain <em>buddyns.com</em>, so I&#x27;m dependent upon that zone too. In turn, <em>buddyns.com</em> is dependent upon the <em>.com</em> zone.
And so forth.</p><p>The moral of the story is: the success of your IPv6 support could depend on more than you might expect.</p><h2>Let&#x27;s do this!</h2><p>So back to <a href="http://www.worldipv6launch.org/" rel="noopener ugc nofollow" target="_blank">World IPv6 Launch Day</a>. Their web site lists the companies that participated, adding support for IPv6.</p><p>Let&#x27;s pick a few of the big names and see how they&#x27;re doing.</p><h3>Akamai</h3><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-akamai.png" class="kg-image" alt=""/></figure><p>Not all of akamai.com&#x27;s nameservers support IPv6, but overall, it&#x27;s reachable. So far so good.</p><h3>Microsoft</h3><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-microsoft.png" class="kg-image" alt=""/></figure><p>Slightly better for microsoft.com: <em>all</em> of that zone&#x27;s nameservers support IPv6. It doesn&#x27;t matter that some of the zones that they depend on have some IPv4-only nameservers, because they also have IPv6-capable nameservers. So again, the result is: it&#x27;s reachable.</p><h3>Facebook</h3><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-facebook.png" class="kg-image" alt=""/></figure><p>Here&#x27;s where it starts to go wrong. Although <em>www.facebook.com</em> does indeed have an IPv6 address, the <em>facebook.com </em>zone&#x27;s nameservers don&#x27;t. 
So if you, or more specifically your DNS resolver, don&#x27;t have an IPv4 address, then you can&#x27;t find out what <em>www.facebook.com</em>&#x27;s IPv6 address is — therefore you can&#x27;t connect.</p><h3>Google</h3><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-google.png" class="kg-image" alt=""/></figure><p>Exactly the same situation at Google: <em>www.google.com</em> does indeed have an IPv6 address, but you can&#x27;t find it without an IPv4 address, because none of the <em>google.com</em> nameservers have IPv6 addresses.</p><h3>Cisco</h3><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2015/01/dns-cisco.png" class="kg-image" alt=""/></figure><p>At first glance, Cisco <em>appears</em> to have the same problem as Facebook and Google — lack of IPv6 addresses on the nameservers. But actually, it&#x27;s a bit more complicated.</p><p>The nameservers <em>do have</em> IPv6 addresses, as we can see from this query:</p><blockquote><p>rachel@dudley ~$ dig +short aaaa ns1.cisco.com<br/>2001:420:1101:6::a<br/>rachel@dudley ~$ dig +short aaaa ns2.cisco.com<br/>2001:420:2041:5000::a</p></blockquote><p>So what have Cisco got wrong, exactly?</p><p>The answer lies in DNS “glue” records: whenever the name of a DNS server (“ns1.cisco.com”) lies within the zone that that same server is trying to serve (“cisco.com”), you have to add the nameserver&#x27;s IP addresses — the “A” and “AAAA” records — to the parent zone. These are called “glue” records.</p><p>Here&#x27;s Microsoft, getting it right (for the domain where their nameservers live, <em>msft.net</em>):</p><blockquote><p>rachel@dudley ~$ dig @a.gtld-servers.net. a www.msft.net.</p><p>;; QUESTION SECTION:<br/>;www.msft.net. IN A</p><p>;; AUTHORITY SECTION:<br/>msft.net. 172800 IN NS ns3.msft.net.<br/>msft.net. 172800 IN NS ns1.msft.net.<br/>msft.net. 172800 IN NS ns2.msft.net.<br/>msft.net.
172800 IN NS ns4.msft.net.</p><p>;; ADDITIONAL SECTION:<br/>ns3.msft.net. 172800 IN A 193.221.113.53<br/>ns3.msft.net. 172800 IN AAAA 2620:0:34::53<br/>ns1.msft.net. 172800 IN A 208.84.0.53<br/>ns1.msft.net. 172800 IN AAAA 2620:0:30::53<br/>ns2.msft.net. 172800 IN A 208.84.2.53<br/>ns2.msft.net. 172800 IN AAAA 2620:0:32::53<br/>ns4.msft.net. 172800 IN A 208.76.45.53<br/>ns4.msft.net. 172800 IN AAAA 2620:0:37::53</p></blockquote><p>The response tells us which servers handle the zone <em>msft.net</em>, and then also tells us their IPv4 and IPv6 addresses.</p><p>By way of contrast, here&#x27;s Cisco getting it wrong:</p><blockquote><p>rachel@dudley ~$ dig @a.gtld-servers.net. a www.cisco.com.</p><p>;; QUESTION SECTION:<br/>;www.cisco.com. IN A</p><p>;; AUTHORITY SECTION:<br/>cisco.com. 172800 IN NS ns1.cisco.com.<br/>cisco.com. 172800 IN NS ns2.cisco.com.</p><p>;; ADDITIONAL SECTION:<br/>ns1.cisco.com. 172800 IN A 72.163.5.201<br/>ns2.cisco.com. 172800 IN A 64.102.255.44</p></blockquote><p>The response includes the IPv4 addresses of the nameservers, but not the IPv6 addresses. (The <em>cisco.com</em> zone <em>does</em> contain the nameservers&#x27; IPv6 addresses, but that&#x27;s no use unless you can find the nameservers in the first place).</p><h2>Why have these big companies not got it right yet?</h2><p>So with all of this — multiple dependencies between zones and nameservers, referrals, glue records — does that mean that adding IPv6 support in DNS is somehow <em>hard</em>? That is, harder than IPv4?</p><p>Not at all — when it comes to DNS, the rules for IPv6 are <em>exactly the same</em> as they are for IPv4.</p><p>So why do so many companies — even the ones who promoted World IPv6 Launch Day, back in 2012 — get it wrong? What exactly <em>was</em> the point of it all?</p><p>To answer that, let&#x27;s consider the following scenarios:</p><p>If <em>everyone</em> on the Internet has an IPv4 address already, is there any benefit to me if I add IPv6?
Well, a little: addressability without NAT, for example.</p><p>But, what if <em>not everyone</em> on the Internet has an IPv4 address — what if some <em>only</em> have an IPv6 address? Is the result just that they can&#x27;t find a lot of sites (including Google, Facebook, etc) because they can&#x27;t get the DNS lookups to work?</p><p>Maybe. But there is a mitigating factor.</p><p>Often, if your computer (PC/laptop/phone) needs to do a DNS lookup, it won&#x27;t do the full lookup itself — rather, it&#x27;ll ask another computer to do it. Internet Service Providers will often provide such a service, technically called a <em>Recursive Resolver</em> or <em>Full Resolver</em> (but often lazily, and confusingly, called just a DNS Server). As long as your computer can reach <em>that</em> one using IPv6, and as long as the Resolver <em>does</em> have an IPv4 address, then that also works. In other words: <em>your</em> computer might be able to manage without an IPv4 address, as long as it can reach a Resolver that does have one. But that&#x27;s a hack.</p><p>But what if even the DNS Resolver doesn&#x27;t have an IPv4 address? 
Well, then you&#x27;re out of luck: you&#x27;ll only be able to find those servers whose DNS zones are reachable via IPv6; and additionally, you&#x27;ll only be able to <em>see</em> those web sites where the web site itself <em>also</em> has IPv6.</p><p>If either piece of the puzzle is missing, you miss out.</p><h2>Why IPv6?</h2><p>So back to my original questions: What does “add IPv6” really mean, and why should we do it?</p><p>Adding IPv6 <em>isn&#x27;t</em> just assigning an IPv6 address, then bunging it in DNS.</p><p>It <em>is</em> ensuring that you support IPv6 at least to parity with IPv4, and ensuring that all the services that you depend on (usually provided by other companies and vendors) do the same.</p><p>Adding IPv6 in parallel with IPv4 does add some value in and of itself; but as you&#x27;ll probably have heard, one of the big selling points behind IPv6 is that IPv4 addresses have run out (or “will soon run out”, depending on your definition), and when we&#x27;re out of IPv4 addresses, newcomers to the Internet will <em>only</em> get IPv6.</p><p>The key reason therefore to add IPv6 support is for the benefit of people in the future. Increasingly, the IPv6 Internet will be <em>the</em> Internet, and as more and more of the people that you want to talk to are on IPv6, you&#x27;ll be cutting yourself out of the picture if you haven&#x27;t joined the party yet.</p><p>Some of the efforts so far to encourage IPv6 adoption have, perhaps necessarily, omitted some of the technical details. Companies have declared “We&#x27;re on IPv6,” without really telling us what they mean — perhaps because they themselves didn&#x27;t know.</p><h2>Testing, Testing, I P 6</h2><p>As you add IPv6 support to your Internet presence — your web site, email systems, and so forth — it&#x27;s highly advisable to check that it&#x27;s reachable <em>even to people without IPv4</em>.</p><p>Set yourself up a test area, on its own Internet connection, <em>only</em> with IPv6. 
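</p><p>A quick first check from that IPv6-only vantage point is to make <code>dig</code> walk the delegation chain itself, over IPv6 transport only. This is just a sketch (the hostname is a stand-in — substitute your own), and whether the trace succeeds depends entirely on the network you run it from:</p>

```shell
# Ask dig to use IPv6 transport only (-6) and to perform the full
# delegation walk itself (+trace) instead of relying on any resolver.
# If a zone along the way is served only by IPv4-reachable nameservers,
# the trace stalls at that point -- exactly the failure an IPv6-only
# user would hit. Capture output so a failure is visible, not fatal:
out="$(dig -6 +trace www.example.com AAAA 2>&1 || true)"
printf '%s\n' "$out"
```

<p>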
Make sure you <em>don&#x27;t</em> use your ISP&#x27;s (or corporate) DNS Resolver service – because if you <em>do</em>, you might accidentally be depending on IPv4 without realising it. Reboot all the devices in the test area, to clear any DNS caches.</p><p>Congratulations: you&#x27;ve weaned yourself off of IPv4. Now, can you still access your web site? Can you still communicate with your company via email?</p><p>If you can, you&#x27;re way ahead of the curve. Welcome to the future.</p>]]></content:encoded></item><item><title><![CDATA[openssl and the default cert file]]></title><description><![CDATA[openssl s_client appeared to be claiming that a certificate was expired, when it wasn't.]]></description><link>https://rachelevans.org/blog/openssl-and-the-default-cert-file/</link><guid isPermaLink="false">28774759e19c</guid><category><![CDATA[technology]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Wed, 10 Dec 2014 23:44:06 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2014/12/old-keys.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2014/12/old-keys.jpg" alt="openssl and the default cert file"><div><p>Here&#x27;s a thing I came across today that confused me. <code>openssl s_client</code> appeared to be claiming that a certificate was expired, when it wasn&#x27;t.</p><p>(For reference, this is “OpenSSL 1.0.1j 15 Oct 2014” on OSX Yosemite, as installed by Homebrew).</p><pre><code>$ openssl s_client -connect ssl.bbc.co.uk:443
…
Verify return code: 20 (unable to get local issuer certificate)</code></pre><p>So far so good. Let&#x27;s see the certificate chain:</p><pre><code>$ openssl s_client -connect ssl.bbc.co.uk:443 -showcerts
…
0 s:/C=GB/ST=London/L=London/O=British Broadcasting Corporation/CN=*.bbc.co.uk
  i:/C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA — SHA256 — G2
1 s:/C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA — SHA256 — G2
  i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA</code></pre><p>This chain leads back to the “GlobalSign Root CA” cert, which I&#x27;ve already got downloaded as it happens:</p><pre><code>$ openssl s_client -connect ssl.bbc.co.uk:443 -showcerts -CAfile ~/GlobalSignRootCA.pem
…
Verify return code: 10 (certificate has expired)</code></pre><p>Expired? Hmm. Let&#x27;s dig into that.</p><pre><code>depth=2 C = BE, O = GlobalSign nv-sa, OU = Root CA, CN = GlobalSign Root CA
verify error:num=10:certificate has expired
notAfter=Jan 28 12:00:00 2014 GMT</code></pre><p>Today is December 10th 2014, so yes that&#x27;s in the past. So which cert is it claiming is expired?</p><p>Let&#x27;s inspect each cert in turn using <code>openssl x509 -noout -subject -issuer -startdate -enddate</code> — the first two using the output of <code>s_client</code>, and the last using the cert I already have a local copy of:</p><pre><code>subject= /C=GB/ST=London/L=London/O=British Broadcasting Corporation/CN=*.bbc.co.uk
issuer= /C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA — SHA256 — G2
notBefore=Jun 2 09:56:02 2014 GMT
notAfter=Aug 19 13:50:57 2015 GMT

subject= /C=BE/O=GlobalSign nv-sa/CN=GlobalSign Organization Validation CA — SHA256 — G2
issuer= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
notBefore=Feb 20 10:00:00 2014 GMT
notAfter=Feb 20 10:00:00 2024 GMT

subject= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
issuer= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
notBefore=Sep 1 12:00:00 1998 GMT
notAfter=Jan 28 12:00:00 2028 GMT</code></pre><p>None of those are expired. None of those have an expiry date of “Jan 28 12:00:00 2014 GMT”, as claimed by <code>s_client</code>.</p><p>Several hours and some digging into the openssl source code later, I think that what&#x27;s going on is this:</p><p>In certain circumstances (but not always), openssl will try to perform certificate verification. For example, when you specify the <code>-CApath</code> and/or <code>-CAfile</code> options to <code>s_client</code>.</p><p>When you do this, openssl can <strong>also</strong> load the certificates given by <code>$SSL_CERT_FILE</code> (default: OPENSSLDIR + “/cert.pem”, which for me means “/usr/local/etc/openssl/cert.pem”); and I <em>think</em> also those given by <code>$SSL_CERT_DIR</code> (default: OPENSSLDIR + “/certs”, which for me means “/usr/local/etc/openssl/certs”).</p><p>Therefore, even though I&#x27;ve explicitly told openssl only <strong>one extra cert</strong> to use, it&#x27;s <strong>also</strong> using the certs in the default <code>cert.pem</code> file.</p><p>So what&#x27;s in that file? For me, 229 certs, that&#x27;s what. Including this one:</p><pre><code>subject= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
issuer= /C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
notBefore=Sep 1 12:00:00 1998 GMT
notAfter=Jan 28 12:00:00 2014 GMT</code></pre><p>The GlobalSign Root CA, which expired back in January. Bingo.</p><p>As a workaround, we can use the <code>SSL_CERT_FILE</code> environment variable to suppress loading of the default cert bundle:</p><pre><code>$ env SSL_CERT_FILE=/dev/null openssl s_client -connect ssl.bbc.co.uk:443 \
  -CAfile ~/GlobalSignRootCA.pem
…
Verify return code: 0 (ok)</code></pre></div>]]></content:encoded></item><item><title><![CDATA[Managing AWS CloudFormation templates using stack-fetcher]]></title><description><![CDATA[At the AWSUKUG meetup in September I talked about Video Factory, and a tool we've created for managing stack templates.]]></description><link>https://rachelevans.org/blog/managing-aws-cloudformation-templates-using-stack-fetcher/</link><guid isPermaLink="false">4d798d406fd0</guid><category><![CDATA[technology]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Tue, 28 Oct 2014 17:14:47 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2014/10/bridge-building.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2014/10/bridge-building.jpg" alt="Managing AWS CloudFormation templates using stack-fetcher"><p>Last month at the <a href="http://www.meetup.com/AWSUGUK/events/194314272/" rel="noopener ugc nofollow">AWSUKUG meetup</a> I talked about Video Factory, and there was a little section there where I spoke about <a href="http://www.slideshare.net/rvedotrc/bbc-iplayer-bigger-better-faster/53" rel="noopener ugc nofollow">the tooling that we use</a> to manage all of our components. One of the tools, “stack-fetcher”, generated quite a bit of interest from the audience, including calls to open-source it. I definitely want to do this — but we&#x27;re not quite there yet.</p><p>For now, though, I can talk about where stack-fetcher is right now, and what direction I want to take it in.</p><h2>The problem space</h2><p>“<a href="http://aws.amazon.com/cloudformation/" rel="noopener ugc nofollow">AWS CloudFormation</a> gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion,” says the documentation. 
As a developer, you do this by creating a template (JSON which defines one or more desired <a href="http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html" rel="noopener ugc nofollow">resources</a>), then submitting that template to CloudFormation — either via the API, or via something which wraps the API (e.g. the <a href="https://console.aws.amazon.com/cloudformation/home" rel="noopener ugc nofollow">web console</a>). Then CloudFormation goes and creates or updates your stack to match your template.</p><p>As a developer who loves automation and consistency, this leaves you with several problems:</p><ul><li>How do I generate the template JSON?</li><li>How do I generate the other JSON required by the stack (e.g. parameter values)?</li><li>If I was to push that JSON to CloudFormation — i.e. apply the change — how do I know what changes I&#x27;m actually pushing?</li><li>Can I push some changes but not others?</li><li>Once I know what I want to push, how do I do so?</li></ul><h2>A little BBC Media Services history</h2><p>To put all of the above into a specific story: in BBC Media Services, we found during the development of Video Factory that we were managing more and more stacks, and by the start of this year we had something like 100 stacks to manage in each of our three environments.</p><p>By January 2014, we had a system for generating the JSON, but different people ran the relevant tools in different ways, therefore sometimes yielding differing results. And once the JSON had been generated, we had no way of knowing in what way it was different from the stack&#x27;s existing template, so we didn&#x27;t know what we were actually changing. And finally, we had no consistent approach for actually updating the stacks with the new template — mostly we were using the web console, but not always in the same way. 
And even then: it&#x27;s a <em>web console</em>, so that&#x27;s just awful from a productivity and automation point of view.</p><p>Thus, stack-fetcher was created, to address all of the above problems.</p><h2>The workflow</h2><p>Once you&#x27;ve updated your source files, the workflow to update a stack consists of three steps:</p><ul><li>Run “stack-fetcher”. This generates a set of three files: <em>current</em>, <em>generated</em>, and <em>next</em>.</li><li>Use your favourite diff/merge tool to compare the <em>current</em>, <em>generated</em> and <em>next</em> files, making whatever changes you wish to <em>next</em>.</li><li>Run “stack-updater” to push <em>next</em> into CloudFormation.</li></ul><h2>The workflow in action</h2><p>Here&#x27;s a demo of a simple change, illustrating the basic workflow, and some of stack-fetcher&#x27;s strengths.</p><p>Before running stack-fetcher, we have two stacks, “resource” and “component”. The first diff has already been applied: a queue was added to the resource stack. These screenshots show the second diff being applied: to modify the IAM policy defined in the “component” stack, such that access is granted to the queue in the resource stack.</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-before.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>Before running stack-fetcher</figcaption></figure><p>We then run <em>stack-fetcher</em> (in this example, “int” is the environment in question — integration). <em>stack-fetcher</em> retrieves the existing stack, generates the desired template, and compares the two. 
The summary shows “resource: same” (all in sync), and “component: DIFFERENT (20 lines)” (there are 20 lines of differences).</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-output.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>The output of stack-fetcher</figcaption></figure><p>stack-fetcher has generated three template files per stack: <em>current</em>, <em>generated</em>, and <em>next</em>. Here we see the three files compared, using vimdiff:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-three-files-top.png" class="kg-image" alt="screenshot of a terminal session"/></figure><p>and the bottom half of the same files:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-three-files-bottom.png" class="kg-image" alt="screenshot of a terminal session"/></figure><p>You can see that “generated” (in the middle column) has some sections that “current” doesn&#x27;t — these are for the policy change we&#x27;re trying to make. But you can also see that “current” has some lines that “generated” doesn&#x27;t. This is because in this example, the stack in CloudFormation started off not in sync with our local copy (for example, maybe someone applied a change but neglected to commit the corresponding source).</p><p>So now we modify “next” (the right-hand file) to match whatever changes we want to apply. 
In this example we choose to pull in the new lines, but elect not to remove the extra, unexpected ones:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-merge.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>Merging the desired template into “next”, in the right-hand column</figcaption></figure><p>After saving these changes (remember, we didn&#x27;t modify “current” or “generated” — only “next”), we run <em>stack-updater</em>:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-apply-1.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>Running stack-updater (first time)</figcaption></figure><p><em>stack-updater</em> now warns us that it has detected a new parameter on the template (“MattressFailQueueArn” in this example): it adds this parameter, with the default value from the template, to the description file; then invites us to check this and edit the description file if we wish.</p><p>In this case the default is fine, so we just run <em>stack-updater</em> again:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-apply-2.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>Running stack-updater (second time)</figcaption></figure><p>Now <em>stack-updater</em> very clearly shows us the diffs between <em>current</em> and <em>next</em>: that is, if we elect to proceed, <em>these are the changes that we&#x27;re actually about to make</em>.</p><p>After confirming that we&#x27;re OK with this, <em>stack-updater</em> applies these changes, using the CloudFormation UpdateStack API:</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/stack-fetcher-apply-3.png" class="kg-image" alt="screenshot of a terminal session"/><figcaption>Applying the changes using 
stack-updater</figcaption></figure><p><em>stack-updater</em> polls the stack&#x27;s status, waiting for it to reach a terminal state (i.e. not “in progress”). The stack events are displayed as they occur.</p><p>In this case the stack update completes successfully, and <em>stack-updater</em>&#x27;s work is done.</p><h2>In more detail</h2><p>stack-fetcher is a name given to a collection of scripts, one of which is itself called “stack-fetcher”. The other script that is intended to be manually invoked is “stack-updater”. There are other scripts, but one of the goals of stack-fetcher is to invoke and orchestrate those other scripts so that the user doesn&#x27;t generally have to think about them.</p><h3>stack-fetcher</h3><p>stack-fetcher&#x27;s job is to generate a set of three outputs:</p><ul><li><em>current</em> is the existing stack, fetched from CloudFormation</li><li><em>generated</em> is the stack that you want, generated from your codebase</li><li><em>next</em> is what you&#x27;re going to push back to CloudFormation using “stack-updater”</li></ul><p>When stack-fetcher runs, <em>next</em> is generated simply as a copy of <em>current</em> — that is, if you don&#x27;t edit the <em>next</em> file, then you won&#x27;t push any changes.</p><p>To make <em>generated</em>, stack-fetcher runs a series of scripts. Currently, this step is rather BBC-specific: we invoke <em>./generate-templates</em> with PYTHON_LIB set to point to part of the stack-fetcher codebase; if there&#x27;s a <em>transform</em> script, then the json is then filtered through this; then there&#x27;s a <em>cosmos-cloudformation-postproc</em> script which post-processes the json in various ways — primarily, providing defaults for the stack&#x27;s parameters.</p><p>To make <em>current</em>, stack-fetcher needs to know what stack name it should work with — and again, currently calculating this stack name is fairly BBC-specific. 
Once entered, the stack name is remembered via the <em>./stack_names.json</em> file, so you don&#x27;t have to calculate or enter it again. Once the stack name is known, the existing stack template and descriptor are fetched, and saved as <em>current</em>.</p><p>After this, stack-fetcher <em>normalises</em> both <em>current</em> and <em>generated</em>. The purpose of the normalisation is partly to make the files more readable, but also to get rid of differences that are meaningless. As well as whitespace reformatting and sorting object keys, the normalisation also includes CloudFormation-specific elements, such as sorting parameters, tags and outputs; removing empty arrays, if that would mean the same thing; and even re-ordering statements within IAM Policies.</p><p><em>next </em>always starts off as a copy of <em>current</em>, so that by default no changes are pushed.</p><p>Finally, stack-fetcher compares <em>current</em> and <em>generated</em> and shows a simple summary: they&#x27;re either the “SAME” or “DIFFERENT” (or, if the stack doesn&#x27;t exist yet, “NEW”); then shows some help text describing what to do next.</p><h3>diff/merge</h3><p>The help text displayed by stack-fetcher suggests using <em>vimdiff</em> to compare and edit the files, but of course you can use whatever tools you wish. 
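</p><p>(Incidentally, if you want to experiment with the key-sorting and reformatting half of that normalisation step, a stock <code>python3</code> one-liner gets you a long way. This is only a sketch of the idea — it is not stack-fetcher&#x27;s actual code, and it does none of the CloudFormation-specific rewrites such as reordering IAM Policy statements.)</p>

```shell
# Canonicalise JSON: sorted object keys and fixed indentation, so that
# semantically identical templates also compare as identical text.
normalise() {
  python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), indent=2, sort_keys=True))'
}

# Two different spellings of the same template normalise identically:
echo '{"Outputs":{},"Parameters":{"B":1,"A":2}}' | normalise
```

<p>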
The goal of this step is to update <em>next</em> to reflect what you want pushed back into CloudFormation (whilst leaving the <em>current</em> and <em>generated</em> files unchanged).</p><p>You may wish to simply review that <em>generated</em> is exactly what you want, then copy <em>generated</em> over <em>next</em> (this is probably what you want, ideally); or, you can cherry-pick, and perform more complex merges.</p><h3>stack-updater</h3><p>Once you&#x27;ve updated <em>next</em> to be as desired, you invoke <em>stack-updater</em>, with exactly the same arguments as you did for <em>stack-fetcher</em>.</p><p>If there are any differences between the set of parameters declared in the stack template, and the set of parameters passed in the stack descriptor, then stack-updater shows those differences (e.g. “You&#x27;re passing a parameter called X but it doesn&#x27;t exist”), automatically applies corrections (e.g. removing the no-longer-existent parameter), then stops, so that you can check its changes before re-running stack-updater.</p><p>Assuming the stack already exists, then stack-updater now diffs <em>current</em> against <em>next</em> — that is, it shows you the changes you&#x27;re about to push. It also displays the differences between the stack&#x27;s parameter defaults, and the actual parameter values you&#x27;re passing, so you can check which ones you&#x27;re overriding. 
(If the stack doesn&#x27;t currently exist, then this step is skipped, and the confirmation step up next reminds you that you&#x27;re about to create the stack).</p><p>It then asks for confirmation to proceed, and if you say yes, then the change is pushed using the CloudFormation “update stack” (or “create stack”) API, and then stack-updater polls the stack status, waiting for completion.</p><p>Finally there&#x27;s another BBC-specific step, wherein the stack can be registered in Cosmos, our deployment manager.</p><h2>Dependencies</h2><p>stack-fetcher is written in ruby, and uses the aws-sdk gem.</p><h2>Benefits</h2><p>By using this tool, we have realised several benefits:</p><ul><li>speed: Using this tool is much quicker than using the other (several) tools that we used before. There are fewer commands to type, with fewer options to remember. And probably most importantly, you never have to leave your terminal.</li><li>consistency: By automating more of the process, and by normalising the output, we now achieve more consistency: by which I mean between developers, between environments, and between components.</li><li>understanding: This tool makes it very obvious what changes you&#x27;re about to apply to live (or whatever environment you&#x27;re updating) — no more blind pasting of a load of json and hoping for the best — which means fewer mistakes.</li></ul><p>All of which means: this tool has helped us to be more productive.</p><h2>Next steps</h2><p>We need to separate out the BBC-specific parts from the rest, so that we can offer this tool out to a wider audience.</p><p>I&#x27;d like to make the “generation” phase more uniform: run a series of executables (bash, ruby, whatever — the tool should not care), where the first executable receives null input, and each subsequent tool filters the output of the previous one. 
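</p><p>Sketched in shell, with trivial stubs standing in for the real generators — every name and every scrap of JSON here is made up for illustration:</p>

```shell
# Each stage reads JSON on stdin and writes transformed JSON on stdout;
# the first stage receives null input. All three filters are
# hypothetical stand-ins, not real stack-fetcher tools.
make_basic_template() {
  printf '{"Description":"base","Parameters":{}}'
}

customise_for_environment() {  # $1 = environment name, e.g. "int"
  python3 -c 'import json, sys
t = json.load(sys.stdin)
t["Description"] += " (" + sys.argv[1] + ")"
print(json.dumps(t))' "$1"
}

fill_parameter_defaults() {
  python3 -c 'import json, sys
t = json.load(sys.stdin)
t["Parameters"].setdefault("Environment", {"Type": "String", "Default": "int"})
print(json.dumps(t))'
}

make_basic_template </dev/null | customise_for_environment int | fill_parameter_defaults
```

<p>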
So for example you might have filters which do: make the basic template; customise it for this environment; fill in parameter defaults.</p><p>I don&#x27;t have any news yet of <em>when</em> this might happen, but I certainly <em>want</em> it to happen. Please drop me a line via a comment or <a href="https://twitter.com/rvedotrc" rel="noopener ugc nofollow">on twitter</a> if you have thoughts on this — I&#x27;d love to hear your feedback.</p>]]></content:encoded></item><item><title><![CDATA[Personal highlights from the AWS Enterprise Summit]]></title><description><![CDATA[Yesterday I attended the AWS Enterprise Summit in London — I've chosen my highlights, and reflected on the summit as a whole.]]></description><link>https://rachelevans.org/blog/personal-highlights-from-the-aws-enterprise-summit/</link><guid isPermaLink="false">e491a8cdc60a</guid><category><![CDATA[technology]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Wed, 22 Oct 2014 22:32:59 GMT</pubDate><content:encoded><![CDATA[<div><p>Yesterday I attended the <a href="https://aws.amazon.com/aws-summit-2014/enterprise-summit-oct/" rel="noopener ugc nofollow">AWS Enterprise Summit in London</a>. I&#x27;ve already written about how <a rel="noopener" href="https://rachelevans.org/blog/amazon-web-services-fails-at-diversity/">it was very poor, from a diversity perspective</a>. But, it wasn&#x27;t all bad: some of the content was rather good...</p><h2>All hail the snail</h2><p>The first customer presentation was given by <a href="https://twitter.com/jodbod" rel="noopener ugc nofollow">John O&#x27;Donovan</a> of the <a href="http://www.ft.com/" rel="noopener ugc nofollow">Financial Times</a>. 
He told a fascinating and engaging story of the changing world in which they found themselves: with <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/6" rel="noopener ugc nofollow">print distribution in decline</a>, they needed to refocus on the net — and on future platforms and <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/10" rel="noopener ugc nofollow">devices yet to come, whatever they are</a>. John&#x27;s presentation struck a great balance of <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/26" rel="noopener ugc nofollow">information</a>, <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/20" rel="noopener ugc nofollow">insight</a>, and <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/16" rel="noopener ugc nofollow">humour</a>.</p><p>A particular highlight for me — and by the reaction from the audience, I&#x27;m going to guess for many other engineers too — was <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/32" rel="noopener ugc nofollow">Chaos Snail</a>. “Like Chaos Monkey, but more chilled”, its job is to slow down I/O on certain instances, to test how software reacts to such degraded conditions. I asked John later if this tool has already been, or will be, open sourced — he says they&#x27;ve had a few requests for this, so yes they will. Good news!</p><p>John also talked about <a href="http://www.slideshare.net/AmazonWebServices/going-cloud-first-at-the-ft/35" rel="noopener ugc nofollow">Tagbot</a>, which locates and terminates untagged instances (“My team loves turning stuff off”, he said). Sounds like a blend between Chaos Monkey and Conformity Monkey.</p><h2>Maximum support</h2><p>After lunch we heard from <a href="https://twitter.com/brentjaye" rel="noopener ugc nofollow">Brent Jaye</a>, VP of AWS Support. 
He emphasised the value of Trusted Advisor as a way of identifying problems, and how they&#x27;re keen on building quick fix facilities into the web console. (For example: if a volume hasn&#x27;t been backed up for a long time, then highlight this as a potential problem, and show a “backup” button right there).</p><p>“We&#x27;re in the business of you spending less money with us”, he said — which has a nice ring to it.</p><p>Brent also spoke of the value of integrating AWS and the customer&#x27;s support system together; and of using Trusted Advisor and AWS Support not just via the console, but by their respective APIs. (John O&#x27;Donovan would I&#x27;m sure agree: earlier on he said “We don&#x27;t buy a product unless it has an API”. +1 on that).</p><p>Finally Brent spoke of the importance of engaging with AWS Support <em>early</em>, not just when there&#x27;s a problem.</p><h2>Auntie adapts</h2><p>Next up, <a href="https://twitter.com/rob_shield" rel="noopener ugc nofollow">Robert Shield</a> from <a href="http://www.bbc.co.uk/iplayer/" rel="noopener ugc nofollow">BBC iPlayer</a> spoke about Video Factory: how it uses AWS, the benefits realised over the previous platform, and how the BBC&#x27;s Operations function has adapted with the use of the cloud.</p><p>(I work with Robert, on the same team — I presented <a href="http://www.slideshare.net/rvedotrc/bbc-iplayer-bigger-better-faster" rel="noopener ugc nofollow">the Video Factory story</a> to the AWS UK User Group last month. 
So of course it should be assumed that I&#x27;m biased :-) )</p><p>However, it was obvious that the audience enjoyed it: Rob talked of the <a href="http://www.slideshare.net/AmazonWebServices/evolving-operations-for-bbc-i-player/4" rel="noopener ugc nofollow">benefits of smaller, simpler components</a>; of <a href="http://www.slideshare.net/AmazonWebServices/evolving-operations-for-bbc-i-player/6" rel="noopener ugc nofollow">how much data Video Factory shifts into S3 every day</a>; and on the importance of <a href="http://www.slideshare.net/AmazonWebServices/evolving-operations-for-bbc-i-player/11" rel="noopener ugc nofollow">automation</a> and <a href="http://www.slideshare.net/AmazonWebServices/evolving-operations-for-bbc-i-player/12" rel="noopener ugc nofollow">consistency</a>.</p><p>By re-architecting for smaller, simpler, more easily understandable components, he said, each part also became more reliable, and thus <a href="http://www.slideshare.net/AmazonWebServices/evolving-operations-for-bbc-i-player/19" rel="noopener ugc nofollow">people were more willing to look after the system</a>.</p><h2>News from the cloud</h2><p>The last customer presentation was from <a href="https://www.linkedin.com/pub/chris-birch/0/524/96b" rel="noopener ugc nofollow">Chris Birch</a> of <a href="http://www.news.co.uk/what-we-do/" rel="noopener ugc nofollow">News UK</a>. Like John and Robert before him, Chris told an entertaining and engaging story.</p><p>Much of News UK&#x27;s business is about Sunday publications, and combined with their “paywall” (he didn&#x27;t call it that, but that&#x27;s what the rest of us know it as), this meant that their traffic is highly spiked around Sunday mornings. And the old system could handle <a href="http://www.slideshare.net/AmazonWebServices/news-uk-our-journey-to-cloud/6" rel="noopener ugc nofollow">only 17 transactions per second</a>! 
But of course things were <em>much</em> faster on the cloud.</p><p>Part of Chris&#x27; talk was about the importance and the difficulty of assessing the Total Cost of Ownership — needed to be able to <a href="http://www.slideshare.net/AmazonWebServices/news-uk-our-journey-to-cloud/10" rel="noopener ugc nofollow">make the business case for moving to the cloud</a>. One thing I found very interesting was the idea that an application&#x27;s “App Book” (documentation on what it is, etc) should also document the app&#x27;s TCO.</p><p>There was also a nice section where Chris said that <a href="http://www.slideshare.net/AmazonWebServices/news-uk-our-journey-to-cloud/14" rel="noopener ugc nofollow">48% of their instances had no tags</a>, so it wasn&#x27;t clear what the instances were doing. However Chris also said that “It&#x27;s really boring switching stuff off”, which I have to say I <em>completely </em>disagree with!</p><h2>The two-pizza team</h2><p>Two of the speakers (sorry, I forget which ones exactly) mentioned the idea of the “two-pizza team”. Basically: a team which requires more than two pizzas will have communication problems. I like this concept — it&#x27;s a good rule of thumb that definitely matches my own experience.</p><h2>And the others…</h2><p>You may notice that I only wrote about four of the ten speakers. That&#x27;s because the other speakers very much failed to hold my attention. I enjoyed the customer talks, all of which were interesting, and engaging, and got a great reaction from the audience; but the talks from the partners, and from Amazon themselves (with the exception of Brent), seemed to be aimed very much at CxO level — at “suits”, one might say — and as such really weren&#x27;t my thing at all.</p><p>So I saw it as a summit of two opposing audiences: CxO versus techies. 
If the event were larger, then it would make more sense to split into two events, or two tracks in one event.</p><p>As it is, it seems to me that most people would have found half of the talks less than engaging — but it&#x27;s only a one-day event, so that&#x27;s not such a burden.</p><h2>Wrapping up</h2><p>Overall I really enjoyed the day — the CxO-style talks weren&#x27;t for me, and I didn&#x27;t explore the partner and sponsor stands; but the customer presentations were great, and I had a good chat or two with AWS staff, and I loved swapping stories with the other attendees.</p><p>Oh, and there was <a href="https://twitter.com/pipoe2h/status/524611579145637889" rel="noopener ugc nofollow">highly practical swag</a>!</p><p>I think I&#x27;ll be back — maybe not every time, but it was a good day, and I&#x27;d be happy to do it again sometime. See you there!</p></div>]]></content:encoded></item><item><title><![CDATA[Amazon Web Services fails at diversity]]></title><description><![CDATA[The Amazon Web Services Enterprise Summit yesterday wasn’t exactly a shining beacon of diversity.]]></description><link>https://rachelevans.org/blog/amazon-web-services-fails-at-diversity/</link><guid isPermaLink="false">e2cd0a0f4f6</guid><category><![CDATA[technology]]></category><category><![CDATA[diversity]]></category><dc:creator><![CDATA[Rachel Evans]]></dc:creator><pubDate>Wed, 22 Oct 2014 11:22:28 GMT</pubDate><media:content url="https://rachelevans.org/blog/content/images/2014/10/aws-audience.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://rachelevans.org/blog/content/images/2014/10/aws-audience.jpg" alt="Amazon Web Services fails at diversity"><p>Yesterday I went to the Amazon Web Services Enterprise Summit, in London. 
I listened to <a href="https://twitter.com/iaingavin" rel="noopener ugc nofollow">this man</a> first, then <a href="http://thenextweb.com/media/2013/10/10/ft-cto-john-odonovan/" rel="noopener ugc nofollow">this man</a>; then <a href="https://www.linkedin.com/pub/chris-wegmann/4/b12/a6" rel="noopener ugc nofollow">this guy</a>, <a href="https://www.facebook.com/oskar.brink" rel="noopener ugc nofollow">this one</a>, and then after lunch <a href="https://twitter.com/brentjaye" rel="noopener ugc nofollow">these</a> <a href="https://twitter.com/rob_shield" rel="noopener ugc nofollow">six</a> <a href="http://www.slideshare.net/AmazonWebServices/transform-it-operations-with-csc" rel="noopener ugc nofollow">men</a> <a href="https://www.linkedin.com/in/mihak" rel="noopener ugc nofollow">all</a> <a href="https://www.linkedin.com/pub/chris-birch/0/524/96b" rel="noopener ugc nofollow">spoke</a> <a href="https://www.linkedin.com/pub/todd-weatherby/0/3ab/376" rel="noopener ugc nofollow">too</a>. And throughout all of it, another man acted as compère.</p><figure class="kg-card kg-image-card"><img src="https://rachelevans.org/blog/content/images/2014/10/aws-headshots.jpg" class="kg-image" alt="Headshots of nine white males"/><figcaption>Nine of the eleven Chosen Ones</figcaption></figure><p>Eleven. White. Males. (I couldn&#x27;t find a picture of the other two, but trust me: the other two were white and male too). Not exactly a shining example of diversity, is it? And I fear that the speaker line-up at AWS re:Invent won&#x27;t exactly be <em>that</em> much better.</p><p>This is in no way a criticism of the eleven men in question: it&#x27;s a criticism of the industry in general, and Amazon Web Services in particular.</p><p>Diversity helps us all. It encourages <em>everyone</em> to participate. For example, having women on stage will help other women to feel like they&#x27;re represented, welcome, and <em>included</em>. 
And with that inclusion comes a greater array of ideas and perspectives. Surely you want the best people, whatever their gender, race, or sexuality? A lack of diversity risks excluding some of the best people, who would otherwise have felt included.</p><p>So, AWS: just how hard did you try to bring in non-white-male speakers? I find it hard to believe that you tried, but were unable, to find any speakers who were female and/or non-white. Call me cynical, but what I find much easier to believe is either that you couldn&#x27;t be bothered, or that the concept of ensuring diversity in your speakers just didn&#x27;t even occur to you (and straight white non-disabled male is the default state for humans, amirite?).</p><p>Come on, Amazon. I love your products, but on the diversity front you&#x27;re setting a bad example. A company of your size, influence and reach is in a fantastic position to lead by example: just as your services help your customers (and, some would say, lead the way), please use your influence to help the industry improve in diversity too, so we can <em>all</em> feel included.</p><p>Thank you.</p><hr/><p class="imageCredit">Headshot images from their various public profiles.</p>]]></content:encoded></item></channel></rss>