
Poets of the mundane

Paul has always been my favorite Beatle.

I was in junior high school when I got hooked—back then, if you found me listening to music on my iPod nano, there’s a good chance it was The Beatles.

Perhaps it was Paul’s youthfulness and humor that made him approachable to me at that age in a way that John wasn’t. Paul was someone you might have known in real life, but John and George seemed otherworldly.

That otherworldliness is a part of why John in particular is regarded as the driving creative genius of the group.

But this has never sat right with my love for Paul, so I was delighted to discover Ian Leslie’s 64 Reasons To Celebrate Paul McCartney which makes a very strong case for the boyish Beatle.

It’s a long list full of excellent reasons to rethink your choice of favorite Beatle. There was one aspect in particular that stood out to me.

For McCartney, the domestic isn’t opposed to the world of the imagination; it is a portal to it. He is a poet of the mundane; a writer who will start off writing about his dog, or fixing a hole, and see where it takes him.

I think this is the one that sums up the whole thing, and is a large part of what makes Paul appealing. His work offers me reassurance that inspiration can and does come from the most unassuming of places.

The need to find or manufacture deeper meaning in our work by tracing its inspirations can be paralyzing—Paul is a good reminder for me to not ignore ideas sparked from humble circumstances.

It’s clear that being a “poet of the mundane” extends beyond creative work and into the way we choose to live our lives:

His unashamed “normality” was an act of inverted rebellion, as transgressive, in its way, as Lennon and Yoko posing naked. But neither fans nor critics saw it that way, and to this day it is Lennon who best fits our Romantic idea of a great man; tortured, difficult and deep. Long before it became commonplace for male public figures to hymn the joys of parenting, Paul McCartney was showing us a different way to be a man, and we have never quite forgiven him for it.

This echoes the timeless advice from Stephen King: “Life isn’t a support-system for art. It’s the other way around.”

Apps and the Age of AI

For how much software has evolved and matured, I find it strange how many tasks remain unaddressed by niche, purpose-built software. I’m always excited to see how novel ideas can come about from focusing in on a narrow domain.

Ink and Switch’s latest research prototype Embark focuses on one specific task: making travel plans.

Embark uses a humble text-based document as its interface, which it then enriches with additional data and views as needed. There is something thrilling to me in the notion that, of all the options explored, plain text won (and often wins) the day.

As a designer I find it both deeply distressing and blissfully serene that improvements upon plain text as a way of viewing, creating, and manipulating data are so rare.

Perhaps more important than the medium, Embark brings the features of multiple apps into a single workspace:

Although apps give us access to all kinds of information, they provide only limited mechanisms for bringing it together in useful ways. Whenever a complex task requires multiple apps, we are forced to juggle information across apps, resulting in tedious and error-prone coordination work.

Embark: Dynamic documents for making plans

Ink and Switch’s research identified 3 core problems with the typical model where tasks are completed using a series of individual apps:

  1. Context is not shared across apps
  2. Views are siloed
  3. Apps produce ephemeral output

One of the most exciting aspects of AI for me is its potential to address these challenges. AI actors operating on our behalf, paired with the right platform primitives and protocols, have the potential to form a new model of computing which reduces the coordination cost of managing many apps and interfaces.

I feel optimistic about a future with technology shaped by catalysts like LLMs, federated social protocols, and now Embark. Each offers new avenues for addressing the challenges of a computing model centered around siloed apps.

If the past decade of human-computer interaction has been centered around apps, perhaps the next decade will knock those walls down and put users back in control of their data and the ways it is manipulated.

Wherever I can, I prefer to work in the browser rather than in tools like Figma. As the web platform grows (we seem to be in a sort of golden age at the moment) it becomes easier and easier to do my job with only the raw materials of the web.

Recently I’ve been working on a project at the day job that requires the use of something akin to layout grids in Figma. I was curious how difficult it would be to recreate this on the web.

It took longer than I’d like to admit to figure out the math, but with a single repeating-linear-gradient we can overlay a representation of our grid onto the page. I whipped up a class for this with support for specifying your own number of columns and gutter width.

In the spirit of blogging the things I want to remember:

.grid-overlay {
  --n: var(--columns, 12);
  --g: var(--gap, 16px);
  --c: var(--overlay-color, rgb(255 0 0 / 0.08));
  /* n columns separated by (n - 1) gutters: each column is (100% + gap) / n - gap wide */
  --column-width: calc(((100% + var(--g)) / var(--n)) - var(--g));

  position: relative;

  &:before {
    content: '';
    position: absolute;
    inset: 0;
    z-index: 1;
    pointer-events: none;
    /* paint one column of color followed by one transparent gutter, repeated across the width */
    background: repeating-linear-gradient(
      to right,
      var(--c),
      var(--c) var(--column-width),
      transparent var(--column-width),
      transparent calc(var(--column-width) + var(--g))
    );
  }
}

Slap that class onto your grid container and presto, you’ve got some rails in place to keep everything lined up nice and neat.
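If a particular container needs different settings, the custom properties can be overridden wherever the class is applied. A minimal sketch, assuming a hypothetical .article-layout element that also carries the grid-overlay class:

/* hypothetical: a 6-column overlay with wider gutters and a blue tint */
.article-layout {
  --columns: 6;
  --gap: 24px;
  --overlay-color: rgb(0 0 255 / 0.08);
}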

A screenshot of a webpage showing a 12 column grid.

repeating-linear-gradient invokes strange powers

Once again I’m left marvelling at the humble power of CSS, and feeling grateful that we live during times when such an expressive yet simple visual language is spoken so ubiquitously.

Check out the demo on Codepen.

Conjurings of the harvest moon

Autumn is my favorite time of the year. The trees outside my apartment here in Oak Park are shining gold, and the air is starting to feel crisp and cold. Sweater weather, if you will.

A towering castle set against a blue sky and surrounded by trees with golden leaves.

Hashimoto Okiie, Autumn at Himeji Castle, 1949

Besides the weather and foliage, the best part of the season may just be its rituals. One of my favorites is an annual rewatching of Over the Garden Wall, which is the coziest, most endearing television series I’ve ever seen.

If the show is to teach us anything, it’s that things are not always what they seem.


I’d be remiss if I didn’t link to this wonderful exploration of vintage postcards that share a vibe with OtGW from the blog Weird Christmas (which is written by, get this, Craig Kringle). The entire site is dedicated to vintage Victorian Christmas cards, which Craig collects and shares online.

A vintage postcard showing figures with pumpkins instead of heads sitting around in a field. A sign in the center of the frame reads "Heaven and how to get there".

Weird Christmas

A few of the show’s scenes appear to take almost direct inspiration from some of the postcards in Craig’s collection. There is truly nothing new under the sun.


I recently discovered that a new animated series on HBO Max, Scavenger’s Reign, is based upon an animated short by Joe Bennett and Charles Huettner which appeared online 4 years ago.

The short caught my attention when it was originally released for being a beautiful and wholly original bit of science fiction. Sometimes the internet, like the world, is full of serendipity, and you might rediscover something familiar just like you might stumble into an old friend in a crowded place.

The first six episodes have premiered, and I’m already hooked. The style is something like a cross between Fantastic Planet, Sable, and Nausicaä of the Valley of the Wind.

A still image from the HBO Max series Scavenger's Reign depicting a vibrant and verdant alien planet with a metallic robot standing in the center of the frame.

I am such a sucker for any media that takes world building seriously, and the world of Scavenger’s Reign is overflowing with details—flora, fauna, and environments as alien as you’ve ever seen. The ecology of Vesta Minor, the planet on which the show takes place, is just as much a character as the humans stranded there.


Since there seems to be an animation theme going here, I’ll briefly mention how excited I am for the upcoming animated Scott Pilgrim series on Netflix: Scott Pilgrim Takes Off. I have a love for both the graphic novels and the 2010 film, whose cast will be returning to voice the characters in the new series.


Robin Sloan recommended the book Ghosts and Demons of India in his latest newsletter, and I picked up a copy of my own just in time for Halloween.

I love books that can be imbibed in small sips like a hot cup of coffee. Ghosts reads like an encyclopedia of creatures and spirits from the Indian subcontinent, and each entry conjures up the most vivid images. I had no idea the pantheon of ghost stories in India was so vast!


Allow me to recommend one of my favorite blogs of late, which excites me every time it appears in my RSS reader. It is the wonderful Going Medieval by Dr. Eleanor Janega, who specializes in “late medieval sexuality, apocalyptic thought, propaganda, and the urban experience in general.” How cool is that??

Dr. Janega uses their expertise to compare modern internet culture with that of medieval societies, and to critique both. One of their latest posts was sparked by the recent national test of the Integrated Public Alert and Warning System (IPAWS) on October 4 in the United States, which caused everyone’s smartphones to scream in unison. If you’re like me, you found it surprising and terrifying despite the numerous warnings online in the weeks leading up to the test.

Some folks found it more than surprising, though, and used it as the basis for conspiracy theories related to 5G, vaccines, viruses, and the like.

We often like to think of medieval European societies as unenlightened, unintelligent, and superstitious, but Dr. Janega reminds us that we’re not much better ourselves. Many bogus explanations were offered for the Black Death, and the parallels with how people respond to public health emergencies today are eerie.

Definitely go read the whole piece; it’s full of gems like this:

I have repeatedly heard people now refer to the fact that “medieval streets were full of shit” to explain the spread of the Black Death. This is interesting because it is 1) not true – most medieval cities tightly regulated the disposal of human waste very strenuously and 2) would be irrelevant anyway even if it were true (it’s not) because that’s not how yersinia pestis travels.

On sickness and conspiracy

While I was writing this up, Dr. Janega posted another banger just in time for Halloween: You are not, in fact, the granddaughter of the witches they couldn’t burn.


Robin Rendle recently shared how he views the browser as a printing press:

So I’ve never seen myself as a designer or engineer or writer, but as a third thing. It’s sort of pompous and silly to call myself this word though, so I avoid it, but deep down it’s what I’m always thinking whenever someone asks what I do. But here, in this secret society of the newsletter, I will admit to you:

  1. I’ve always seen the browser as a printing press.
  2. Because of that, I’ve always seen myself as a publisher first and then everything else second.

Robin Rendle, The Browser is a Printing Press

I couldn’t agree more. The power of the browser is not that it gave us the ability to write or create art or build programs, but that it allowed anyone to publish those things to the entire world.

The applications of the web—design, engineering, writing—are all interests of mine, but for me they’re inevitably second to the printing press itself.


Speaking of Robin, be sure to check out his latest newsletter, The Cascade which focuses on the past, present, and future of CSS. Robin has been exploring new color features in CSS in the latest issues and it’s been a delight to follow along with.


In the spirit of publishing, I’ve been working on a little side project that is an ode to the written word.

While I appreciate the convenience of ebooks and audio books, I have always preferred to own and read physical copies. As a result, I’ve accumulated quite a few books that are becoming increasingly difficult to store and move around. I know at some point I’ll need to slim down my collection, but I wanted to preserve it in its current form.

To that end, I decided the place to start would be creating a database of all the books in my physical collection. I spent a few weeks inputting titles, authors, dates, ISBN numbers, page counts, and more metadata about the titles on my shelves. I still have a few boxes of books to go through and log, but most of my collection is now captured digitally.

I decided to use Airtable for this job, partly because it has such an easy-to-use API. I wanted to display my collection in a way that was more pleasant to browse than a spreadsheet, so I built my own little frontend for the database.

It’s still very much a WIP, and not as performant as I’d like it to be just yet, but you can take a peek at books.chasem.co

A screenshot of a website displaying Chase McCoy's collection of physical books in a grid layout.

A website can be a bookshelf

Now I feel much more comfortable donating some of my books knowing that I’ll always be able to look back over my collection.


I hope this season finds you well. With reverence to the Great Pumpkin,

Chase

In a week that has contained the revelation that aliens are real and have visited our planet, the most exciting news may actually be that 3 researchers in Seoul, South Korea claimed to have synthesized a new material that is a superconductor at room temperature and ambient atmospheric pressure.

I’ll leave it to others to discuss the implications of this, but if true it could turn out to be the greatest physics discovery of my lifetime.

It could also, of course, turn out to be false, and plenty of doubts are already being cast about the results and the researchers. But at this moment there are people all around the globe trying to replicate the findings, and we may start hearing about the results in a matter of hours. Some of them are even live-tweeting the effort. There’s an exciting energy of discovery and optimism stemming from this finding.

Just another reminder that every day is science fiction.

We’re seeing a surge of platforms self-sabotaging and choosing to suddenly restrict access to their content. These are all blatant attempts to trap users by digging a moat around the communities that they’ve created.

I can think of 3 obvious reasons why this might all be happening now as opposed to any time over the past decade:

  1. The content on social media platforms is valuable for AI training, and platforms want to capitalize on or keep that value for themselves.
  2. The recent high-interest-rate environment has companies cutting costs in ways they might not have before, and subsidizing API access for third-party developers is no longer a bill they’re willing to foot.
  3. The bad behavior of platforms is creating a more competitive environment as new challengers spring up (Bluesky, Posts, Mastodon, and Threads all come to mind).

Those seem obvious, but are they really the cause? Is it one more than the other? Or something else entirely?

I wonder how much of this trend is really just a domino effect of CEOs realizing that they can get away with screwing over their users because they saw Elon Musk (or some other robber baron) get away with it.

Humane (the mysterious company founded by ex-Apple executives) has finally revealed the name of the product they’re hoping to ship this year: the Humane Ai Pin.

I’m as skeptical as the next person about AI and wearables and really anything with as much hypebeast marketing as this product has received. But if I put my skepticism aside for a moment I’m able to appreciate this for what it is—a group of people trying to create a new kind of computer and computing paradigm.

There’s a bit of footage out there of the device in action, but regardless of the specifics I think it’s essential that we never stop asking ourselves what a computer could or should be.

In the new book Make Something Wonderful: Steve Jobs in His Own Words, Steve talks about his love for books and also their shortcomings:

The problem was, you can’t ask Aristotle a question. And I think, as we look towards the next fifty to one hundred years, if we really can come up with these machines that can capture an underlying spirit, or an underlying set of principles, or an underlying way of looking at the world, then, when the next Aristotle comes around, maybe if he carries around one of these machines with him his whole life—his or her whole life—and types in all this stuff, then maybe someday, after this person’s dead and gone, we can ask this machine, “Hey, what would Aristotle have said? What about this?” And maybe we won’t get the right answer, but maybe we will. And that’s really exciting to me. And that’s one of the reasons I’m doing what I’m doing.

Steve Jobs’ speech at the International Design Conference in Aspen, Colorado on June 15, 1983

For all the work we’ve put into creating ways to capture our lives digitally, it doesn’t feel like the ritual of passing that information down to future generations is considered much.

I wonder if this might be a common use case for conversational AIs in the future. You can imagine a ChatGPT trained on the works of Aristotle, waiting to answer new and novel questions. Like Steve says, we won’t always get the right answer, but maybe we will.

The digital book is lovely and full of wisdom—definitely a recommended read.

The eyes and ears of AI

It’s hard to keep up with the progress of AI. It seems as though every week there’s a new breakthrough or advancement that changes the game. Each step forward brings both a sense of wonder and a feeling of dread.

This past week, OpenAI introduced ChatGPT plugins which “help ChatGPT access up-to-date information, run computations, or use third-party services.”

Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data.

OpenAI

OpenAI themselves have published two plugins:

  • A web browser plugin which allows the AI to gather information from the internet that was not originally part of its training corpus by searching the web, clicking on links, and reading the contents of webpages.
  • A code interpreter plugin which gives ChatGPT access to a sandboxed Python environment that can execute code as well as handle file uploads and downloads.

Both of these plugins are pretty astonishing in their own right, and unlock even more potential for AI to be a helpful tool (or a dangerous actor).

But what caught my eye the most from OpenAI’s announcement is the ability for developers to create their own ChatGPT plugins which interact with your own APIs, and more specifically the way in which they’re created.

Here’s how you create a third party plugin:

  • You create a JSON manifest on your website at /.well-known/ai-plugin.json which includes some basic information about your plugin, such as a natural language description of how it works. As an example, here’s the manifest for the Wolfram Alpha plugin.
  • You host an OpenAPI specification for your API and point to it in your plugin manifest.

That’s it! ChatGPT uses your natural language description and the OpenAPI spec to understand how to use your API to perform tasks and answer questions on behalf of a user. The AI figures out how to handle auth, chain subsequent calls, process the resulting data, and format it for display in a human-friendly way.
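To make that a bit more concrete, here’s a rough sketch of the shape of an ai-plugin.json manifest, modeled on the examples in OpenAI’s documentation at the time (the values below are hypothetical, and the exact fields may have changed since):

{
  "schema_version": "v1",
  "name_for_human": "Example Books",
  "name_for_model": "example_books",
  "description_for_human": "Search a personal library of books.",
  "description_for_model": "Plugin for searching a personal library of books by title, author, or ISBN.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "hello@example.com",
  "legal_info_url": "https://example.com/legal"
}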

And just like that, APIs are accessible to anyone with access to an AI.

Importantly, that AI is not only regurgitating information based on a static set of training data, but is an actor in and of itself. It’s browsing the web, executing code, and making API requests on behalf of users (hopefully).

The implications of this are hard to fathom, and much will be discussed, prototyped, and explored in the coming months as people get early access to the plugin feature. But what excites me the most about this model is how easily it will allow for digital bricoleurs to plug artificial intelligence into their homemade tools for personal use.

Have a simple API? You now have the ability to engage with it conversationally. The hardest part is generating an OpenAPI spec (which is not very hard to do; it’s just a .yaml file describing your API), and you can even get ChatGPT to generate that bit for you. Here’s an example of someone successfully generating a spec for the Twilio API using ChatGPT.
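And a minimal spec really is tiny. Here’s a hypothetical sketch for an imaginary one-endpoint API (the URL and names are made up for illustration):

openapi: 3.0.1
info:
  title: Example Books API
  version: "1.0"
servers:
  - url: https://books.example.com
paths:
  /books:
    get:
      operationId: listBooks
      summary: Returns the list of books in the library
      responses:
        "200":
          description: A JSON array of book objects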

It seems to me that this will greatly incentivize companies and products to create interfaces and APIs that are AI-friendly. Consumers will grow to expect AI tools to be able to interface with the other digital products and services they use in the same way that early iPhone users expected their favorite websites to have apps in the App Store.

There are certainly many negative and hard-to-predict consequences of opening up APIs to AI actors, but I am excited about the positives that might come from it, such as software products becoming more malleable via end-user programming and automation.

Don’t want to futz around with complex video editing software? Just ask your AI to extract the first 5 seconds of an MP4 and download the result with a single click. This type of abstraction of code, software, and interface will become ubiquitous.

Of course, I don’t think graphical interfaces are in trouble just yet. Geoffrey Litt points out that trimming video is actually much more intuitive via direct manipulation than via chat.

But when you consider that ChatGPT can write code to build GUIs and can even interact with them programmatically on a user’s behalf, the implications become clear. Everyone will benefit in some way from their own personal interface assistant.

I also wonder how many future products will be API-only, with the expectation that AIs are how users will interact with them.

Simon Willison wrote a great blog post demonstrating this. He wired up a ChatGPT plugin to query data via SQL, and the results, though technically returned as JSON, get displayed in a rich format much more friendly for human consumption.

I wonder if future “social networks” might operate simply as a backend with a set of exposed APIs. Instead of checking an app you might simply ask your AI “what’s up with my friend Leslie?” Or you could instruct your AI to put together a GUI for a social app that’s exactly to your specification.

This will certainly lead to entirely new ways of relating to one another online.

It would be interesting to try this today with good old RSS, which could be easily wired up as a ChatGPT plugin via a JSON feed. Alas, I don’t yet have access to the plugins feature, but I’ve joined the waitlist.

I’m both excited and nervous to see what happens when we combine AI with a medium like the web.

I’m finally getting around to playing Ghost of Tsushima, which is impressive all around. But the thing that has impressed me most is… wind??

The game rejects the normal interface of a minimap to guide players, and instead uses the wind and the environment to show the way forward.

When The Guiding Wind blows in Tsushima, the entire game world responds. The trees bend over, pointing you onward. The pampas grass ripples like the surface of water. Leaves and petals swirl around the scene. The controller emits the sound of gusting wind, and the player can swipe the touch pad to blow the winds and set the environment in motion.

Such a simple mechanic is so unexpected and beautiful and calming in a world of cutting edge graphics and 4K 60FPS VR madness. Video games (and everything else) today are so over the top, but in the end it’s something simple like the wind that gets you.

🍃 Let the guiding wind blow 🍃