Entrepreneur, husband, Dad, and technology geek all contained within a single human being.
1597 stories

Facebook’s ‘10 Year Challenge’ — Harmless Meme, or Training for Age-Progressive Facial Recognition?

1 Comment

Kate O’Neill, writing for Wired:

But let’s play out this idea.

Imagine that you wanted to train a facial recognition algorithm on age-related characteristics and, more specifically, on age progression (e.g., how people are likely to look as they get older). Ideally, you’d want a broad and rigorous dataset with lots of people’s pictures. It would help if you knew they were taken a fixed number of years apart — say, 10 years.

Sure, you could mine Facebook for profile pictures and look at posting dates or EXIF data. But that whole set of profile pictures could end up generating a lot of useless noise. People don’t reliably upload pictures in chronological order, and it’s not uncommon for users to post pictures of something other than themselves as a profile picture. A quick glance through my Facebook friends’ profile pictures shows a friend’s dog who just died, several cartoons, word images, abstract patterns, and more.

In other words, it would help if you had a clean, simple, helpfully labeled set of then-and-now photos.

I think it’s very fair to say we should all assume the worst with Facebook all the time now. That’s why I posted my 10-year challenge to Twitter instead of Instagram.

Read the whole story
5 hours ago
Because no one is mining data on Twitter

(face palm)
Waterloo, Canada
4 hours ago
Follow the link, it's a pretty funny joke.
3 hours ago
ok facepalm for me :) that is a good joke!
Share this story

Typewriter Cartography

1 Share

This is my father’s manual typewriter, a Royal Safari II. Or maybe it’s mine — I appropriated it quite a long time ago.


I remember playing with it a bit as a child in the 1980s, but for the most part I’ve rarely used it. But I’ve kept it around anyway, because I’ve always had a nostalgia for old technologies. Maybe I liked the idea of being a person who owns a typewriter.

A couple of weeks ago, I remembered that it was in the basement, and I thought — as I do from time to time — about how nice it would be to have a reason for using it. And then it occurred to me that I should just go with my default reason: maps.

After a few hours of planning and typing, I managed to create a typewriter map and I put it out on Twitter, where it ended up being by far the most popular thing I’ve ever put on that platform. Or probably ever, anywhere.

It’s probably no surprise to anyone who’s known me for more than five minutes that I chose to start this project by mapping my homeland in the Great Lakes. I think it’s always useful to begin with somewhere familiar when trying something new, because you can use your local knowledge to confirm whether or not the technique is doing justice to the place.

Click here if you want to see a giant high-resolution scan. It’s full of smudges from the ribbon, alongside errors corrected with a generous application of Wite-Out. But I’m quite pleased with its messy, organic, analog nature. Others seemed to be, too.

I hadn’t expected such a warm reception from the internet, but even before that happened, I had considered my experiment a success. So I followed it up with a couple more maps, to get a feel for some different styles. You can click on either of them to have a look in more detail.

It was an interesting diversion from the digital precision of my normal workflow. Sometimes fun, sometimes frustrating, but in any case a chance to mess around with some new challenges.

The ideas here aren’t new. John Krygier has a post about typewriter mapping. Early computer graphics, such as ASCII art, along with early mapping software (like SYMAP), use essentially the same style as what I am doing (though mine is much more rudimentary): constructing images through individual characters.

In any case, now that you’ve seen the maps, read on to learn more about the challenges and decisions that went into their creation.

While this sort of work is cool and interesting, and I give away high-resolution versions of it for free, I can only do it when I take time away from my regular paid client work. If you derive some value from what you’ve seen here, you are welcome to make a donation to support my continued efforts.

Map 1: Rivers of Lake Michigan


Though I just called this project a “diversion” from a digital workflow, all of these maps actually started on the computer. For this particular one, I began with a grid in Adobe Illustrator. Each rectangle in the grid represented one character position on the typewriter. There are ten characters to the inch, at an aspect ratio of 0.6. The final grid was 75 × 60, which would fill a 7.5″ × 10″ space.
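That grid arithmetic is easy to sanity-check with a few lines of Python (just an illustration of the numbers above, not part of the original workflow):

```python
# Check the planning-grid arithmetic: ten characters to the inch,
# with each character cell 0.6 times as wide as it is tall.
CHARS_PER_INCH = 10
ASPECT = 0.6                      # cell width / cell height

cell_w = 1 / CHARS_PER_INCH       # 0.1 inch per character
cell_h = cell_w / ASPECT          # ~0.167 inch per line (6 lines/inch)

cols, rows = 75, 60
print(round(cols * cell_w, 2), round(rows * cell_h, 2))  # 7.5 x 10.0 inches
```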


Atop that, I dropped some data from Natural Earth. And from there, I began “tracing”: plotting out which characters I could type to represent the rivers and coastline, and where each one should go.


After a little experimentation, I decided that if I wanted to draw linear features, there were three characters that were best to use: ! / _. Together, I could create rudimentary lines that roughly connected together in a pseudo-vector style, even if the typewriter grid itself is basically a raster.

A backslash (\) would also have been great, but that was a character invented pretty much exclusively for use on computers, so it’s not found on my typewriter. As such, I had diagonal lines that sloped somewhat cleanly in one direction, while they stairstepped back down in the opposite direction. Compare the coastline on both sides of Lake Michigan, below.
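To make the constraint concrete, here’s a tiny sketch (my own illustration, not the author’s planning process; the direction convention, with positive dy meaning down the page, is an assumption):

```python
def line_char(dx, dy):
    """Pick a typewriter character for a line segment heading (dx, dy).

    Only '!', '/', and '_' are available. With no backslash on the
    machine, a down-right (or up-left) diagonal has no clean glyph
    and must stairstep with '_' and '!'.
    """
    if dx == 0:
        return "!"    # vertical stroke
    if dy == 0:
        return "_"    # horizontal stroke
    if (dx > 0) == (dy < 0):
        return "/"    # up-right or down-left diagonal
    return "_!"       # the other diagonal: stairstep

print(line_char(1, -1))  # "/"
print(line_char(1, 1))   # "_!" (stairstep)
```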


For the state boundaries, I decided to try something different. I simply filled a bunch of “pixels” in with asterisks, rather than using more “linear”-looking characters. A raster, rather than a pseudo-vector, approach. It creates a small visual distinction between the boundaries and the coastline, which might be pretty hard to do otherwise. There aren’t a lot of symbology options in a situation like this.

The biggest of those options, though, is color: my typewriter has a two-color ribbon, so I tried to make the most of it by setting the rivers off in red. This also helped with a labeling problem: I could name the rivers in red, to distinguish them from any other features. Other than color, though, the only way to vary my labels was to set some in capitals, and some in title case. I’m used to labeling most every class of feature on a map in a different style, but that’s just not possible here. My islands and my cities, for example, look the same (black, title case). The states and lakes are the same, too (black, capitals).

Once I had spent a couple of hours or so on developing a plan, it was time to start typing. I loaded some paper into the typewriter and got to work. At first, I proceeded very linearly: left-to-right, top-to-bottom. But that was tedious. There’s a lot of white space in this pattern, so sometimes I was forced to hit the space bar a few dozen times to advance to the next character on the line, and there was always a chance I might miscount and make a mistake. More importantly, though, following this workflow revealed a problem with my typewriter. Whenever I hit the carriage return lever to go to the next line, there was a chance that I’d somehow get a misalignment. Have a look at these patterns I typed:


Notice how the characters don’t all line up along the left side, but then become more aligned on the right? I’m not sure why it kept happening, but it seemed most often to appear when I would use the carriage return lever. So, instead, I shifted to a different style of typing. I would start to trace features somewhat linearly. For the top left part of the map, for example, I began by typing three asterisks, then I manually moved down one line, then typed four more, then moved down another line, and typed four more, etc., following the line of the state border.


I manually moved the paper up and down and used the backspace and spacebar keys to align myself to where I needed to be at any time. In this way, I mostly avoided misalignments, though smaller ones still kept creeping in. About three-quarters of the way down the page I got a minor leftward shift that you can see in the final product. You can also see where I typed some periods over again to check if it was just my imagination or if it really was misaligned.


Fortunately, it wasn’t enough to ruin my work, but it was a constant danger, and something I am still trying to figure out.

The final product has various interesting smudges where the paper accidentally contacted the ribbon. In particular, I noticed that typing in red always produced a faint black “shadow” a couple of lines above. When the slug hit the red part of the ribbon, a small portion of it would lightly hit the black portion of the ribbon, too. Later on, I started holding scrap paper over my map in order to prevent this, so that the black shadow would go on the scrap.


In sum: my typewriter is not a precision instrument. This makes it a somewhat uncomfortable-feeling tool for a detail-oriented designer like me. I like being able to zoom in to 64,000% in Illustrator and correct errors that are small enough that no human eye could possibly ever see them. But, there’s something attractive about the organic messiness of the typewriter.

Once I was done, I scanned it, and then turned it over to the Robinson Map Library, since I wasn’t sure what to do with it now that I was finished. So, come to Madison if you ever want to see the real thing (this goes for all three maps).

Map 2: Shadow Contours


For this one, I wanted to try and see if I could squeeze some sort of terrain representation out of the typewriter. As I mentioned, early digital graphics used printed characters to create images. And shading could be simulated by using characters of different darkness. The ASCII Art page on Wikipedia has some examples of this.

My goal was to do something like illuminated contours: lines that would get darker on the lower-right side and lighter on the upper-left, to create a depth illusion. So I needed to do something rather like what John Nelson calls “Aspect-Aware Contours.”

Setting this one up required a whole different workflow than my first map. I began with a DEM of Michigan that I always keep on my Google Drive, ready to test out a terrain technique at a moment’s notice:


First off, I cropped and shrank it down to 75 × 100 pixels. Then I further compressed the vertical dimension to 60 pixels. These two separate steps were necessary because the pixels aren’t square on my typewriter, as we saw in the grid earlier: they are taller than they are wide. I needed something that had the same aspect ratio as a 75 × 100 image, but once I had the overall image aspect ratio correct, I needed it to really only use 60 characters vertically, since each character is so tall. In the images below it looks a little squished because it’s being shown with square pixels. But in the end it stretches back out correctly.
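A minimal sketch of that two-step resize, using plain nearest-neighbor sampling on a synthetic stand-in for the DEM (the actual GIS tooling used isn’t specified):

```python
def resample(grid, new_w, new_h):
    """Nearest-neighbor resample of a 2-D list of pixel values."""
    old_h, old_w = len(grid), len(grid[0])
    return [
        [grid[r * old_h // new_h][c * old_w // new_w] for c in range(new_w)]
        for r in range(new_h)
    ]

# Stand-in for the DEM. First resize to 75x100, which fixes the
# overall aspect ratio; then squeeze vertically to 75x60, so the
# tall character cells stretch it back out when typed.
dem = [[(r + c) % 255 for c in range(600)] for r in range(800)]
step1 = resample(dem, 75, 100)
step2 = resample(step1, 75, 60)
print(len(step2), len(step2[0]))  # 60 rows x 75 columns
```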


From there, I classified it into just a few elevation levels, and smoothed them out a bit via a median filter.


And then it was time to calculate the aspect of each pixel in the raster.


Flat areas have no aspect. Pixels on the boundary between elevation classes, on the other hand, are assigned values based on which direction they are facing. So, now I could tell which areas would be in shadow (facing toward the lower right), and which would be lighter (facing upper left). No, I didn’t compensate for the vertical stretching when calculating aspect, but I should have.

The aspect calculation produced a double line of pixels, one on each side of the boundary between classes. But I really only needed a single line of pixels to represent the contours, so I first cleaned those up. And then I grouped the various aspects into three shades: light, medium, and dark, based on the particular direction they were facing.
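That grouping step can be sketched as follows (the angle breakpoints here are illustrative guesses, not the classification actually used):

```python
def shade_for_aspect(aspect_deg):
    """Group a compass aspect (0 = north, clockwise) into three shades.

    Boundaries facing the lower-right "shadow" direction (around SE,
    135 degrees) are dark; those facing the upper-left light source
    (around NW, 315) are light; everything else is medium. The
    breakpoints are illustrative.
    """
    # Angular distance from the SE shadow direction (135 degrees).
    dist = abs((aspect_deg - 135 + 180) % 360 - 180)
    if dist <= 60:
        return "dark"    # typed as '$'
    if dist >= 120:
        return "light"   # typed as '.'
    return "medium"      # typed as '+'

print(shade_for_aspect(135))  # dark
print(shade_for_aspect(315))  # light
```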


Now I had contours with some shading. All that was left was to turn them into individual typewriter characters. I converted this raster into an ASCII file, which looks like this:

[screenshot: the exported ASCII file of class numbers]

Each pixel is represented by a number, and there are four numbers: one for light, one for medium, one for dark, and one for white. From there, it was simply a matter of doing a Find & Replace in a word processor to convert them to the three shading characters I had chosen to use: . + $.
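That Find & Replace amounts to a character-for-character substitution. Sketched in Python, with the digit coding assumed (the real export’s values aren’t shown):

```python
# Assumed digit coding from the ASCII export: 0 = white, 1 = light,
# 2 = medium, 3 = dark.
SHADES = {"0": " ", "1": ".", "2": "+", "3": "$"}

def to_typewriter(ascii_rows):
    """Convert rows of space-separated class digits into typed lines."""
    return ["".join(SHADES[d] for d in row.split()) for row in ascii_rows]

for line in to_typewriter(["0 1 2 3 2 1 0", "1 2 3 3 3 2 1"]):
    print(line)
```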


And from there, it was just a matter of typing things out on the typewriter.


I tried a couple of other variations on this idea, as well. I initially hoped to do a proper set of Tanaka contours, with a medium-grey background and white highlights. But the white areas weren’t obvious enough amidst all the typed characters, so it wasn’t working.


Keeping white as the background color helped a lot, so I decided to go with contours that started with at least some darkness to them even on their light side. I also tried doing it with five different shades: . : + & $.


However, I think that was too many — the distinctions aren’t really sufficiently clear between some of them, especially when each character varies so much in darkness just based on how hard I hit the key. So, when I had to halt that particular attempt partway through due to me misreading part of my pattern, I decided to start over with a simpler set of three characters. I reclassified the aspect analysis and re-converted it to characters, and that became the basis of the final attempt described above.

Map 3: Shaded Relief


Since I had been at least modestly successful applying shading to contours, I decided finally to see if I could render a rudimentary shaded relief on the typewriter, as well. I knew it wouldn’t look particularly realistic, but I was hoping it would at least be sufficient.

This time I decided to change geographies and map Africa. As with the previous map, I took a DEM and shrank it down to 75 × 60 pixels, then I generated a shaded relief. I did it in Blender, but I turned off the various realistic shadows, as I thought they’d muddle things up. The end result was basically just a simple GIS hillshade.


I did try to compensate for the fact that the image, which had square pixels, would get stretched vertically once it made it to the typewriter. I set my lighting angle to be about 15° off from the typical upper-left light source that is used in shaded relief. However, I think I shifted it 15° in the wrong direction. But the end result seemed to come out well enough.
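The size of that compensation can be checked with a little trigonometry (my own back-of-envelope, not the author’s calculation): stretching the image vertically by 100/60 steepens a 45° light direction by about 14°, which lines up with the “about 15°” shift described above.

```python
import math

# The 75x100-proportioned image gets squeezed to 60 rows, so the typed
# page effectively stretches it vertically by 100/60. A light direction
# at 45 degrees from horizontal maps to a steeper angle on the page:
stretch = 100 / 60
angle = 45.0
stretched = math.degrees(math.atan(math.tan(math.radians(angle)) * stretch))
print(round(stretched - angle, 1))  # shift of about 14 degrees
```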

Once I had the relief, I then classified it into five levels: white, and four shades of grey. While I’d used three shades for my contour map, after having decided five was too many, this time I decided to split the difference.


And then, as before, I converted it to text characters. This time, I used . + @ $ as my set of shades.

[screenshot: the relief converted to text characters]

I brought this into Illustrator and added it to the same planning grid that I’d developed for my first map. I also brought in more Natural Earth data so that I could include a coastline and some rivers. The relief would be in shades of black, and the other features would be in red.

[screenshot: the planning grid with the relief, coastline, and rivers]

The end result would be a combination of the techniques from the first map (mostly pseudo-vector) and the second map (more raster-y).

I removed bits of the relief that crept into the ocean, and cleared out more to make space for the rivers and a few labels I decided to cram in. After about three hours, I had the whole thing planned out. I printed my pattern and typed it up. I had a few false starts where I missed or added a character here or there, but after another three or four hours, I finished the third map.


This one has fewer of those “shadows” that accompany the use of the red portion of the ribbon. I spent a lot of time with a piece of scrap paper trying to prevent those, mostly successfully. This map also only involved me making two mistakes that required Wite-Out. I’m clearly getting better, as the first map had probably closer to ten.

The shaded relief is obviously pretty coarse. I think close up it’s more of just an interesting texture, rather than anything that suggests depth. But, it was still fun to try. If you shrink the map, or step far away from it, or blur it, the relief starts to come out a little bit as the eye focuses less on the individual characters and more on the pattern. I think that’s also true of the contour map.


I may do some more maps later on, but I think now that I’ve explored some of the basic challenges of typewriter mapping, I’ve reached a good point to pause in my efforts. Maybe I’ll come back to it some other time, or maybe I’ll get diverted into another novelty use of old technology. Or maybe I’ll spend time doing all the stuff I was supposed to be doing instead of this. We’ll see.

Read the whole story
2 days ago
Waterloo, Canada
Share this story

The Colorful 80s Vibe of Blank VHS Tape Cases

1 Comment

I don’t know about you, but my house was blanketed with VHS tapes. The tapes were filled with episodes of Star Trek and movies meticulously taped from network TV without commercials — you had to be a real Johnny-on-the-spot with the pause button or you’d miss a few post-commercial seconds of Chevy Chase’s antics in the G-rated version of National Lampoon’s Vacation. This video is a quick two-minute ode to the colorfully designed cases those tapes were sold in. Total memory bomb seeing these again.

Tags: design   video
Read the whole story
2 days ago
Wow, so well done! I remember about 80% of these
Waterloo, Canada
1 day ago
Same! This was a good refresher.
Share this story

Some delightful developer experiences in 2019.


I once worked at a company that built most of their functionality on top of Facebook's advertising APIs. GraphQL was not publicly a thing at that point, but the API design was more or less equivalent to GraphQL. Properties would appear and disappear without warning, and reacting to changes required frequent fire drills.

One conclusion might be that they didn't care much about the experience of integrating with their APIs, but I'm pretty sure another guess is closer to the truth: it's extremely difficult to support integrations into complex and evolving systems, and Facebook Ads very much met those criteria.

This past year has in many ways been the debut party for GraphQL. While many of its ideas represent a generational improvement from HATEOAS, they're not particularly new.

So what is new?

I wanted to know which developer experiences had been the most rewarding for folks over the past year, so I tweeted asking for their most interesting developer experiences. From those responses, plus some browsing through Product Hunt and my own memory, here are some reflections on delightful developer experiences in 2019.


An increasingly common paradigm is allowing folks to write code that runs directly on the platform. This provides a powerful, dynamic interface, and surprisingly reduces complexity in many cases.

Platforms can abstract away the complexity of deployment, managing operating systems, etc, and let their users focus exclusively on the control logic. All particulars, no glue. This is especially prominent for scenarios where decision-making is relatively stateless, although stateful examples are becoming less rare.

Some good examples of code as interface are:

  • "Serverless computing" continues to be a dominant theme, with a feature race emerging among the major cloud providers across Alibaba Cloud Function Compute, AWS Lambda, Microsoft Functions, and Google Cloud Functions. Language support continues to expand, with Java, Python, and Node.js being commonplace, and differentiation in the long tail. For example, Alibaba supports PHP and AWS supports Go.

    What's maybe more interesting is seeing more niche products enter the serverless ecosystem, either trying to fill in usability gaps or specializing to support narrower use cases:

    Zeit Now promises selective build and deployment of only modified code paths (similar to what you might self-roll with Bazel).

    Firebase has specialized in supporting mobile development, providing a complete platform for mobile applications (mostly by specializing or extending existing Google Cloud functionality).

    Twilio has traditionally relied on their HTTP API, but in 2017 introduced Twilio Functions, which allow users to react to Twilio's events running code on Twilio's servers rather than their own.

  • WebAssembly on Cloudflare Workers allows folks to write WebAssembly, sometimes abbreviated as Wasm, and execute it on Cloudflare's edge compute infrastructure.

    Wasm is a mediocre product but an exceptional platform layer: I think long-term folks won't want to write Wasm directly, and will instead write more familiar languages and compile down to Wasm. That approach is being explored in Fastly's Terrarium, which I write about a bit more later on.

  • AWS Lambda@Edge is in a similar vein to Cloudflare Workers or Fastly's Terrarium, but they've reached it from a different direction. For Fastly, the move into Wasm is increased flexibility compared to their existing VCL-based configuration. However, Lambda@Edge only supports Node.js, which is much less featureful than AWS Lambda, which supports Java, Node.js, C# and Python.

    In some ways it's unfair to compare AWS Lambda with Lambda@Edge, as it's pretty clear the implementations are distinct and the similarity is mostly a marketing concern, but either way it's interesting to see how physical and efficiency constraints create product limitations and specialization in "code as configuration" offerings.

  • Chrome Extensions are an interesting, different slant on integration through code, and another aspect of what "edge computing" might mean in the future. Here you write JavaScript applications which users of the Chrome browser can install from the Chrome web store, running on a cloud of internet browsers.

Writing code is such a powerful platform interface because it is a fairly unique combination of extreme expressivity while retaining extreme control at the platform tier. However, it's worth pointing out that these offerings are largely Infrastructure-as-a-Service, which can assume a significant degree of technical expertise on behalf of their users. Will we see "code as integration" mechanisms expand further beyond IaaS?
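To ground "code as interface" in something concrete: an AWS Lambda Python function is nothing but control logic, with the platform owning deployment, scaling, and the operating system. This sketch uses Lambda's documented handler(event, context) signature; the event shape is a made-up example:

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: the platform invokes this with an
    event payload and handles everything around it."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can call it exactly as the platform would:
print(handler({"name": "reader"}, None))
```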


Containers such as Docker are an interesting choice of interface. They allow extraordinary customization and configuration, while often choosing to restrict or eliminate the statefulness offered by running a complete virtual machine. However, container security remains a pressing concern in a multi-tenant environment, and containers often provide an inferior interface between platform and user, forcing users to handle operating system upgrades and platforms to wrangle multi-gigabyte images.

That said, I found one particularly interesting example of using containers as an integration point that I think is worth digging into, which is Azure IoT Edge.

Azure IoT Edge allows you to create and specify Docker containers that process data on your physical internet-of-things devices, offloading some processing from the cloud to your cloud of physical devices. Containers represent an even higher abstraction than code, as you can control an entire virtual machine underneath. Containers represent some potential security risk in a shared computation environment, but that doesn't apply when they run on your own IoT devices, so this use case is a clever increase in flexibility without many downsides.

I also generally think this is an interesting product because it introduces the idea that "edge" is a broad concept, with both physical devices and CDNs representing different facets of edge computation.

All of that said, I do believe containers represent an effective internal interface within companies, as they provide an interesting composable interface in terms of the complexity and flexibility tradeoff, though they perhaps don't work as well externally.

Domain Specific Languages

Using domain specific languages to specify integration behavior is a bit of a cross between configuration-driven integrations and code-driven integrations. Some of these examples use DSLs as an artifact of their initial implementation (similar to AWS Redshift using the Postgres protocol because the early versions were implemented using Postgres), and others use them to improve correctness.

  • Github Actions allow you to write commands in a simple DSL to configure your build, test and deploy workflows:

      action "Deploy to Production" {
        needs = "Provision Database"
        uses = "actions/gcloud"
        runs = "gcloud deploy"
      }

    This is a nice approach, because it describes how the actions relate in their DSL, but they don't try to describe the actions themselves. Instead the actions are described as scripts or shell commands. I could see Github Actions replacing large swathes of continuous integration tooling, and it's done so in a very flexible way that will allow for users to innovate on top of their platform, not trying to define all the ways that someone might want to integrate.

  • Fastly's use of Varnish Configuration Language, sometimes abbreviated as VCL, is another interesting example of a domain specific language. It's a pretty fascinating choice, because VCL is an extremely powerful language that is intentionally constrained to what is possible in a high-performance web server, but it also has a rather dense syntax.

      sub pipe_if_local {
        if (client.ip ~ local) {
          return (pipe);
        }
      }

    In this specific case, I think VCL is probably too complex a DSL for widespread adoption, and I suspect that Fastly agrees based on their experimentation with Terrarium, which is compiling Rust, C and TypeScript to WebAssembly and running it at the edge.

  • Terraform is pretty fascinating as a DSL written by HashiCorp to represent integrations with third party cloud providers like Alibaba, AWS, Azure or GCP.

      resource "aws_elb" "frontend" {
        name = "frontend-load-balancer"
        listener {
          instance_port     = 8000
          instance_protocol = "http"
          lb_port           = 80
          lb_protocol       = "http"
        }
        instances = ["${aws_instance.app.*.id}"]
      }

    The entire idea of your product being a DSL on top of other products is pretty fascinating, and a powerful testament to how important effective interfaces can be. In this case, I think Terraform's biggest value propositions are in abstraction (sort of theoretically decoupling from vendor-specific configuration and making it easy to support multi-cloud, although in practice this is a bit tenuous) and verifiability (much better tools to validate correctness than in e.g. YAML or JSON).

IDEs and development environments

Several examples were of powerful IDEs and development environments. Some of these are very focused specific tools, others are development platforms, and others fall somewhere in between.

  • Glitch will run your entire frontend and backend JavaScript application, providing an entire online IDE, development, and deployment environment. Some of their existing examples use Airtable's API, Slack's API and Google Sheets' API. This is a powerful showcase of just how good JavaScript sandboxing has gotten from security and performance isolation perspectives.

  • VSCode is a free, open source IDE from Microsoft that has been getting widespread adoption as a light, configurable and powerful IDE. Beyond being free, it does a lot of interesting things well: debuggers, smart completion, Git integration, extensions, etc.

    Potentially the thing it does best, though, is the Language Server Protocol, which abstracts language support from IDE particulars, allowing one LSP integration to support a wide range of IDEs. This is a very powerful approach to encourage adoption of VSCode, but it will also lower the entry barriers for future IDEs. These sorts of platforms allow tools to compete on quality, and make it feasible to support hyper-specialized tools for particular workflows. Very excited to see more products that are viral vectors for platforms that can be used by the rest of us.

  • Google Cloud Shell is a pretty powerful idea, allowing folks to interact with and control their Google Cloud environment with a dedicated VM for each individual. This is pretty crafty, as it allows Google to upgrade the clients automatically on their images (reducing backward-compatibility overhead), and also provides better security and auditability primitives.

    From a user perspective, I also love that it prevents the proliferation of the much-dreaded "shared management server" anti-pattern, where folks do a bunch of critical work off a single, shared server that is forever running out of disk space, getting its CPU pegged on a bad script, or causing an outage when it goes down.

  • Hyper is an HTML/CSS-based terminal application that allows heavy customization while using familiar web technologies. This opens the door to some neat ideas around embedding terminals into tools for powerful IDEs, without having to (directly) rely on native technologies to do so.

  • Merlin is an editor service for the OCaml programming language that supports autocomplete and such. By default it supports Emacs and Vim, but it has also been integrated with other editors such as... VSCode. I think this example is interesting in three different ways. First, it's a reminder that we can draw inspiration from many places. Second, it's a testament to how well designed Emacs and Vim are, to remain heavily used and actively extended so many years after their first development. An inspiration! Finally, it's a testament to what a great idea the Language Server Protocol is, because it'll enable tools like this to integrate cleanly with any editor.

  • Pharo is a Smalltalk implementation which combines the operating system and IDE into a single bundle. Pharo expects far more from us than most tools (to give up our entire operating system), and I think that's part of what makes it so interesting. Many of the barriers we create for tools are social constructs; we have the technology to do much more than we typically do in our tools.
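Since the Language Server Protocol comes up twice above, it's worth noting how simple its transport is: JSON-RPC messages framed with a Content-Length header, which is a big part of why one integration can serve many editors. A minimal framed request might look like this (the file URI and position are invented):

```python
import json

def frame(message: dict) -> bytes:
    """Frame a JSON-RPC message the way LSP transports it:
    a Content-Length header, a blank line, then the JSON body."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n%b" % (len(body), body)

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "textDocument/completion",
    "params": {
        "textDocument": {"uri": "file:///project/main.ml"},
        "position": {"line": 10, "character": 4},
    },
}
print(frame(request))
```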

Progressive configuration

In practice, very few developer-centric tools provide only a single vehicle for integration, but instead provide a range of integration capabilities from simple to rather complex.

[Picture of Google Cloud Build's configuration interface]

  • Clubhouse is another standard, modern example. Default configuration done through their UI, and more powerful customization done with a REST API and webhooks.
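The webhook half of that pattern is straightforward to sketch with nothing but the standard library: the service POSTs a JSON event to a URL you host, and your code reacts. Everything here (the port, the event fields, the handler logic) is hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def react_to_event(event: dict) -> str:
    """Your integration logic: here, just describe the event."""
    return f"{event.get('action', 'unknown')} on story {event.get('story_id')}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload the service POSTed to us.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        print(react_to_event(event))
        self.send_response(204)  # acknowledge receipt, no body
        self.end_headers()

# To actually listen (not run here):
# HTTPServer(("", 8080), WebhookHandler).serve_forever()
```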

Programming without code

I won't belabor this too much, but products like Airtable and If This Then That still hold a very special place in my heart. They are these spiritual successors to HyperCard, providing fairly simple tools that "non-programmers" can compose into deeply powerful programs.

These products slot into a void between standard GUI-driven integration and programmatic integration. The difference between these and GUI-driven integration is that they aren't particularly opinionated. They aren't designed to solve one specific thing; rather, they are tools that can solve many things by providing composable, open-ended building blocks.

Libraries and tools

This is kind of a grab-bag category of tools that folks called out as offering particularly good developer experiences, which maybe didn't fit perfectly into other groups.

  • Flutter is a tool for building native apps on iOS and Android from a single codebase. Similar in some aspects to React Native, but seemingly with better results.
  • pytest is a Python test runner that simplifies test running and in particular test writing, relative to the standard library's Unittest.
  • Laravel and Laravel Spark are powerful PHP frameworks for developing web applications.
  • Gatsbyjs bundle together the full modern frontend stack into something with all the power and a more graduated learning curve.
  • elm is a language for writing reliable webapps, with powerful type inference in the vein of Haskell.
  • React Hooks are a simpler way to deal with state changes in React.
  • Sourcegraph is a powerful code search tool, that makes it easy to search across large code repositories.
  • OASGraph is a tool to transpate OpenAPI Specifications into GraphQL APIs, which lowers the barrier to start running an GraphQL API.
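To make the pytest point concrete: a pytest test is just a function whose name starts with `test_`, using bare `assert` statements instead of unittest's `self.assertEqual` family. A minimal sketch (the `slugify` function is a made-up example):

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# pytest discovers this function automatically: no TestCase subclass,
# no setUp boilerplate, and a failing assert produces a detailed diff.
def test_slugify():
    assert slugify("Hello Brave  World") == "hello-brave-world"
```

That drop in ceremony, plus the rich failure output pytest synthesizes from plain asserts, is most of what people mean when they praise its developer experience.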

Things that weren't mentioned

Given the endless hype, I thought it was interesting to briefly mention a few things that no one brought up:

  • Blockchain in general, or Ethereum in particular, as examples of great developer experiences.

    Depending on the use case, it seems like either the wallets are still difficult to use securely, or the transaction costs remain prohibitively high or volatile. Either way, it seems like the developer experience for blockchain is still lacking.

  • Chat bots didn't come up either, despite getting a lot of excitement. I think this might be in large part due to the US and Europe focus of the folks I know on Twitter, as it seems like chat bots are getting a great deal of use in other markets.

  • React wasn't brought up as much as I expected, although React Hooks were mentioned. Perhaps React is so commonplace at this point that folks don't even think to mention it, much as no one mentioned Python, Ruby or Node.js.

Altogether, this was quite an interesting way to spend some time! I was unfamiliar with many of these tools before today, and it was a good survey to help refresh my thinking about where developer tooling and great developer experiences are headed.

I'm very hopeful to hear folks' suggestions for other impressive and inspiring developer experiences they've had recently!

Starship test flight rocket just finished assembly at the @SpaceX Texas launch site. This is an actual picture, not a rendering. pic.twitter.com/k1HkueoXaz

Posted by elonmusk on Friday, January 11th, 2019 3:31am

14326 likes, 3198 retweets

SpaceX’s Starship reaches new heights as Elon Musk teases Q1 2019 hop tests

In a burst of activity that should probably be expected at this point but still feels like a complete surprise, SpaceX technicians took a major step towards completing the first Starship hopper prototype by combining the last two remaining sections (aft and nose) scarcely six weeks after assembly began.

SpaceX CEO Elon Musk also took to Twitter late last week to offer additional details and post what appears to be the first official render of Starship’s hopper prototype, which is now closer than ever before to looking like the real deal thanks to the incredible drive of the company’s southernmost employees. With the massive rocket’s rough aeroshell and structure now more or less finalized, Musk’s targeted February/March hop test debut remains ambitious to the extreme but is now arguably far from impossible.

Where there was literally just a tent and some construction equipment barely eight weeks ago, SpaceX’s Boca Chica facilities now sport one of the most bizarre developments in recent aerospace history — a vast, ~30 ft (9m) diameter rocket being built en plein air out of tubes and sheets of common steel. At the current pace of work, 24 hours is often enough for wholly unexpected developments to appear, and this Starship hopper (Starhopper) is beginning to look more and more like its concept art as each day passes.

Aside from a few well-earned slow days last weekend, SpaceX technicians, engineers, and contractors have spent the last week or so shaping Starhopper into a form more reminiscent of the conceptual render (clearly hand-painted) Musk posted on Saturday. This primarily involved stacking a tall conical nose section atop a separate cylindrical body section, followed by gradually cladding both the aft section’s legs and barrel in sheets of stainless steel, presumably intended to improve both its aesthetic and aerodynamic characteristics.

Notably, technicians have installed two out of three (?) aerodynamic shrouds at the top of each steel tube leg, bringing Starhopper’s appearance even closer to the smooth and polished aesthetic of its conceptual sibling.

Starhopper’s hopped-up hop test ETA

Musk later replied to a question related to Starhopper’s near-term schedule and stated that the nominal target for its first flight test was – almost unfathomably – four weeks away, although he admitted in the same response that that would probably translate into eight weeks due to “unforeseen issues”, placing the actual launch target sometime between February and March 2019. Just to reiterate, the site Starhopper is currently located on was quite literally empty – aside from the temporary tent – in late November 2018, barely more than six weeks ago.

To plan to go from a blank slate to actual integrated flight tests of a rocket – no matter how low-fidelity – that is 9m (~30 ft) in diameter, at least 40m (~130 ft) tall, could weigh as much as 500 tons (1.1M lbs), and may produce ~600 tons (~1.35M lbf) of thrust at liftoff is extraordinarily ambitious even for SpaceX. At the end of the day, significant delays to Musk’s truly wild timeline are very likely, but it seems entirely possible at this point that Starhopper really could begin its first hop tests in the first half of 2019, kicking off a test program currently aiming for flights as high as 5 km (3.1 mi) and as long as 6 minutes.
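Those mass and thrust figures hang together under a rough unit conversion; a quick sanity check, assuming the article means metric tonnes (which is my assumption, not something it states):

```python
LB_PER_TONNE = 2204.62  # pounds per metric tonne

mass_tonnes = 500     # projected liftoff mass
thrust_tonnes = 600   # projected liftoff thrust

mass_lb = mass_tonnes * LB_PER_TONNE        # ~1.10 million lb
thrust_lbf = thrust_tonnes * LB_PER_TONNE   # ~1.32 million lbf, near the quoted ~1.35M
twr = thrust_tonnes / mass_tonnes           # thrust-to-weight ratio ~1.2
```

A thrust-to-weight ratio comfortably above 1 is what makes a vertical hop possible at all, so these numbers at least pass the smell test.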

A whole range of things will have to go perfectly right for a timeline as ambitious as this to be realized, including but not limited to successfully acceptance-testing three brand new and recently-redesigned Raptor engines, the completion of Starhopper’s unfamiliar structures, propellant tankage, plumbing, and avionics, and the completion of a rough launch and landing pad and integration facilities, if needed. Aside from those big ticket items, many dozens of other smaller but no less critical tasks will have to be completed with minimal to no unforeseen hurdles if hop tests are to begin just a few months from now.

Regardless, SpaceX has pulled off miraculous tasks much like this in its past, and the possibility that the company’s brilliant, dedicated, and admittedly overworked employees will do so again should not be discounted.

The post SpaceX’s Starship reaches new heights as Elon Musk teases Q1 2019 hop tests appeared first on TESLARATI.com.

Oooh! shiny rocket!