The problem was, you can’t ask Aristotle a question. And I think, as we look towards the next fifty to one hundred years, if we really can come up with these machines that can capture an underlying spirit, or an underlying set of principles, or an underlying way of looking at the world, then, when the next Aristotle comes around, maybe if he carries around one of these machines with him his whole life—his or her whole life—and types in all this stuff, then maybe someday, after this person’s dead and gone, we can ask this machine, “Hey, what would Aristotle have said? What about this?” And maybe we won’t get the right answer, but maybe we will. And that’s really exciting to me. And that’s one of the reasons I’m doing what I’m doing.
For all the work we’ve put into creating ways to capture our lives digitally, we seem to have given little thought to the ritual of passing that information down to future generations.
I wonder if this might be a common use case for conversational AIs in the future. You can imagine a ChatGPT trained on the works of Aristotle, waiting to answer new and novel questions. Like Steve says, we won’t always get the right answer, but maybe we will.
The digital book is lovely and full of wisdom—definitely a recommended read.
It’s hard to keep up with the progress of AI. It seems as though every week there’s a new breakthrough or advancement that changes the game. Each step forward brings both a sense of wonder and a feeling of dread.
This past week, OpenAI introduced ChatGPT plugins which “help ChatGPT access up-to-date information, run computations, or use third-party services.”
Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data.
A web browser plugin which allows the AI to gather information from the internet that was not part of its original training corpus by searching the web, clicking on links, and reading the contents of webpages.
A code interpreter plugin which gives ChatGPT access to a sandboxed Python environment that can execute code as well as handle file uploads and downloads.
Both of these plugins are pretty astonishing in their own right, and unlock even more potential for AI to be a helpful tool (or a dangerous actor).
But what caught my eye the most from OpenAI’s announcement is the ability for developers to create their own ChatGPT plugins which interact with your own APIs, and more specifically the way in which they’re created.
Here’s how you create a third-party plugin:
You create a JSON manifest on your website at /.well-known/ai-plugin.json which includes some basic information about your plugin including a natural language description of how it works. As an example, here’s the manifest for the Wolfram Alpha plugin.
You host an OpenAPI specification for your API and point to it in your plugin manifest.
That’s it! ChatGPT uses your natural language description and the OpenAPI spec to understand how to use your API to perform tasks and answer questions on behalf of a user. The AI figures out how to handle auth, chain subsequent calls, process the resulting data, and format it for display in a human-friendly way.
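To make that concrete, here’s a sketch of what such a manifest might look like. The field names follow the format OpenAI published; the TODO service, descriptions, and example.com URLs are placeholders of my own, not a real plugin:

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Manager",
  "name_for_model": "todo",
  "description_for_human": "Manage your personal TODO list.",
  "description_for_model": "Plugin for managing a TODO list. Use it to add, remove, and list the user's tasks.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "hello@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Note that the `description_for_model` field is plain English aimed at the AI itself, which is a strange and delightful kind of documentation to be writing.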
And just like that, APIs are accessible to anyone with access to an AI.
Importantly, that AI is not only regurgitating information based on a static set of training data, but is an actor in and of itself. It’s browsing the web, executing code, and making API requests on behalf of users (hopefully).
The implications of this are hard to fathom, and much will be discussed, prototyped, and explored in the coming months as people get early access to the plugin feature. But what excites me the most about this model is how easily it will allow for digital bricoleurs to plug artificial intelligence into their homemade tools for personal use.
Have a simple API? You now have the ability to engage with it conversationally. The hardest part is generating an OpenAPI spec (which is not very hard to do, it’s just a .yaml file describing your API), and you can even get ChatGPT to generate that bit for you. Here’s an example of someone successfully generating a spec for the Twilio API using ChatGPT.
It seems to me that this will greatly incentivize companies and products to create interfaces and APIs that are AI-friendly. Consumers will grow to expect AI tools to be able to interface with the other digital products and services they use in the same way that early iPhone users expected their favorite websites to have apps in the App Store.
There are certainly many negative and hard-to-predict consequences of opening up APIs to AI actors, but I am excited about the positives that might come from it, such as software products becoming more malleable via end-user programming and automation.
Don’t want to futz around with complex video editing software? Just ask your AI to extract the first 5 seconds of an MP4 and download the result with a single click. This type of abstraction of code, software, and interface will become ubiquitous.
But when you consider that ChatGPT can write code to build GUIs and can even interact with them programmatically on a user’s behalf, the implications become clear. Everyone will benefit in some way from their own personal interface assistant.
I wonder also how many future products will be APIs only with the expectation that AIs are how users will interact with them?
Simon Willison wrote a great blog post demonstrating this. He wired up a ChatGPT plugin to query data via SQL, and the results, though technically returned as JSON, get displayed in a rich format much more friendly for human consumption.
I wonder if future “social networks” might operate simply as a backend with a set of exposed APIs. Instead of checking an app you might simply ask your AI “what’s up with my friend Leslie?” Or you could instruct your AI to put together a GUI for a social app that’s exactly to your specification.
It would be interesting to try this today with good old RSS, which could be easily wired up as a ChatGPT plugin via a JSON feed. Alas, I don’t yet have access to the plugins feature, but I’ve joined the waitlist.
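The shim involved would be tiny. Here’s a sketch, using only Python’s standard library, of the kind of RSS-to-JSON-Feed conversion a hypothetical RSS plugin could serve to ChatGPT (the sample feed below stands in for a real network fetch):

```python
# Sketch: converting an RSS 2.0 feed into JSON Feed items -- the kind of
# shim a hypothetical RSS plugin could expose to an AI. Standard library only.
import json
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item>
    <title>Hello, world</title>
    <link>https://example.com/hello</link>
    <description>A first post.</description>
  </item>
</channel></rss>"""

def rss_to_json_feed(rss_xml: str) -> dict:
    """Convert an RSS 2.0 document into a minimal JSON Feed dict."""
    channel = ET.fromstring(rss_xml).find("channel")
    return {
        "version": "https://jsonfeed.org/version/1.1",
        "title": channel.findtext("title", default=""),
        "items": [
            {
                "id": item.findtext("link", default=""),
                "title": item.findtext("title", default=""),
                "url": item.findtext("link", default=""),
                "content_text": item.findtext("description", default=""),
            }
            for item in channel.findall("item")
        ],
    }

print(json.dumps(rss_to_json_feed(SAMPLE_RSS), indent=2))
```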
I’m both excited and nervous to see what happens when we combine AI with a medium like the web.
I’m finally getting around to playing Ghost of Tsushima which is impressive all around. But the thing that has impressed me most is… wind??
The game rejects the normal interface of a minimap to guide players, and instead uses the wind and the environment to show the way forward.
When The Guiding Wind blows in Tsushima, the entire game world responds. The trees bend over, pointing you onward. The pampas grass ripples like the surface of water. Leaves and petals swirl around the scene. The controller emits the sound of gusting wind, and the player can swipe the touch pad to blow the winds and set the environment in motion.
Such a simple mechanic is so unexpected and beautiful and calming in a world of cutting edge graphics and 4K 60FPS VR madness. Video games (and everything else) today are so over the top, but in the end it’s something simple like the wind that gets you.
For a while now I’ve been saying that science fiction works by a kind of double action, like the glasses people wear when watching 3D movies. One lens of science fiction’s aesthetic machinery portrays some future that might actually come to pass; it’s a kind of proleptic realism. The other lens presents a metaphorical vision of our current moment, like a symbol in a poem. Together the two views combine and pop into a vision of History, extending magically into the future.
I read that and then, a day later, stumbled upon a thought experiment published on the wonderfully quirky website of Ville-Matias Heikkilä.
The thought experiment, titled “Inverted computer culture”, asks the reader to imagine a world where computing is seen “as practice of an ancient and unchanging tradition.”
It is considered essential to be in a properly alert and rested state of mind when using a computer. Even to seasoned users, every session is special, and the purpose of the session must be clear in mind before sitting down. The outer world is often hurried and flashy, but computers provide a “sacred space” for relaxing, slowing down and concentrating on a specific idea without distractions.
What a dream. I encourage you to read the piece which is quite short. It struck me as being exemplary of the aforementioned double action of science fiction—both a vision of the future and a metaphor for the current moment. You can imagine how a fictional immune response to our current culture might drive us toward a world of computing and technology like the one imagined here.
The story’s alright, but the last paragraph is something else. It captures so many of the feelings I have about computing and the web:
As she sat there, lost in her work, she knew that she would never leave this place, this sacred space where the computers whispered secrets to those who knew how to listen. She would be here always, she thought, a part of this ancient tradition, a keeper of the flame of knowledge. And in that moment, she knew that she had found her true home.
I have a lot of nostalgia for the era of blogging that I grew up with during the first decade or so of the 2000s.
Of course there was a ton of great content about technology and internet culture, but more importantly to me it was a time of great commentary and experimentation on the form of blogging and publishing.
As social media and smartphones were weaving their ways into our lives, there was a group of bloggers constructing their own worlds. Before Twitter apps and podcast clients became the UI playgrounds of most designers, it was personal sites and weblogs that were pioneering the medium.
Looking back, this is probably where my meta-fascination with the web came from. For me the most interesting part has always been the part analyzing and discussing itself.
Robin Sloan puts it well (as he is wont to do):
Back in the 2000s, a lot of blogs were about blogs, about blogging. If that sounds exhaustingly meta, well, yes — but it was also SUPER generative. When the thing can describe itself, when it becomes the natural place to discuss and debate itself, I am telling you: some flywheel gets spinning, and powerful things start to happen.
Design, programming, and writing started for me on the web. I can recall the progression from a plain text editor to the Tumblr theme editor to learning self-hosted WordPress.
All of that was driven by the desire to tinker and experiment with the web’s form. How many ways could you design a simple weblog? What different formats were possible that no one had imagined before?
Earlier this week I listened to Jason Kottke’s recent appearance on John Gruber’s podcast and was delighted to hear them discuss this very topic. Jason is one of the original innovators of the blog form, and I’ve been following his blog, kottke.org, since I was old enough to care about random shit on the internet.
Jason and John have an interesting conversation during the podcast (starting around 25 minutes in) about how the first few generations of bloggers on the web defined its shape. Moving from print to digital mediums afforded a labyrinth of new avenues to explore.
It’s always important to remind ourselves that many of the things we take for granted today on the web and in digital design had to be invented by someone.
Early weblogs did not immediately arrive at the conclusion of chronological streams—some broke content up into “issues”, some simply changed the content of their homepages entirely.
It wasn’t until later that the reverse-chronological, paginated-or-endless scrolling list of entries was introduced and eventually became the de facto presentation of content on the web. That standard lives on today in the design of Twitter, Instagram, etc., and it’s fascinating to see that tradition fading away as more sites embrace algorithmic feeds.
By the way, I’d be remiss here if I didn’t mention Amy Hoy’s amazing piece How the blog broke the web. Comparing the title of her piece with the title of this one, it’s clear that not everyone sees this shift in form as a positive one, but she does a great job in outlining the history and the role that blogs played in shaping the form of the web. Her particular focus on early content management systems like Movable Type is fascinating.
Another great example that Jason and John discuss on the podcast is the idea of titling blog posts.
They point out that many early sites didn’t use titles for blog posts, a pattern which resembles the future form of Tweets, Facebook posts, text messages, and more. But the rise of RSS readers, many of which assumed that entries have titles and designed their UIs around that assumption, forced many bloggers to add titles to their posts to work well in the environment so popular with their readers.
Jason mentions that this was one of the driving factors for kottke.org to start adding titles to posts!
This is an incredible example of the medium shaping the message, where the UI design of RSS readers heavily influenced the form of content being published. When optimizing for the web, those early bloggers and the social networks of today both arrived at the same conclusion—titles are unnecessary and add an undue burden to publishing content.
This difference is the very reason why sending an email feels heavier than sending a tweet. Bloggers not using titles on their blog posts figured out tweeting long before Twitter did.
When referring to the early bloggers at suck.com, Jason said something that I think describes this entire revolution pretty well.
[…] there was information to be gotten from not only what they linked to, but how they linked to it, which word they decided to make the hyperlink.
It’s not often that you have an entirely new stylistic primitive added to your writing toolbox. For decades you could bold, italicize, underline, uppercase, footnote, etc. and all of a sudden something entirely new—the hyperlink.
With linking out to other sites being such a core part of blogging, it’s no surprise that the interaction design of linking was widely discussed and experimented with. Here’s a post from Shawn Blanc discussing all the ways that various blogs of the time handled posts primarily geared toward linking to and commenting on other sites.
Another similar example is URL slugs—the short string of text at the end of a web address identifying a single post. For many of my favorite bloggers, the URL slug is a small but subtle way to convey a message that may or may not be the same as the message of the post itself. One other stylistic primitive unique to the web.
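The mechanics of a slug are simple enough that most blog engines generate one automatically, which is exactly why a hand-chosen slug reads as a deliberate gesture. A minimal sketch of the usual automatic approach (the example titles are my own):

```python
# A minimal slug generator of the kind most blog engines use:
# lowercase the title, collapse punctuation and spaces into hyphens.
import re

def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # non-alphanumerics -> hyphens
    return slug.strip("-")

print(slugify("The Wind Shall Guide You"))  # -> the-wind-shall-guide-you
```

Overriding that default with a slug of your own choosing is the stylistic move: the address bar becomes one more line of the post.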
The different ways in which bloggers designed their sites or linked to words became part of their unique style, and it gave each of them an entirely new way to express themselves.
It’s hard to communicate how grateful I feel for this era of experimentation on the web, and specifically for Jason Kottke’s influence on me as a designer. The past 25 years have been a special time to experience the internet.
There was a time when I thought my career might be curved towards blogging full-time and running my own version of something like kottke.org. Through exploring that I found my way to what I really loved—design and software. My work continues to benefit from what I learned studying bloggers and publishers online.
Whether you care much about writing or not, I encourage you to have a blog. Write about what interests you, take great care of how you present it to the world, and you might be surprised where it takes you. There are new forms around every corner.
The recent fad of the metaverse is all about digitizing the physical world and moving our shared experiences (even more so) onto the internet.
I wonder what an opposite approach might look like—one where, instead of making the physical digital, we instead attempt to bring the online world into our physical spaces (and no, I don’t remotely mean AR or VR).
The first thing that comes to mind for me is Berg’s now-defunct Little Printer project from back in 2012 or so. Little Printer was a web-connected thermal printer that lived in your home and allowed you to receive print-outs of digital publications, your daily agenda, messages from friends, etc.
Little Printer was an attempt at bridging the physical and digital, essentially creating a social network manifested as a physical object in the home and consumed via paper and ink.
Personal websites are the digital homesteads for many. Those sites live somewhere on a web server, quietly humming away in a warehouse meant to keep them online and secure. For each of us those servers represent empty rooms waiting to be decorated with our thoughts, feelings, interests, and personalities. We then invite strangers from all over the world to step inside and have a look.
Like the Little Printer, I wish that my web server could exist in my home as a physical object that could be touched, observed, and interacted with.
Hosting a web server yourself is surprisingly difficult today given the advances we’ve made in consumer technology over the last few decades. Hosting content on someone else’s server has become as simple as dragging and dropping a folder onto your web browser. There are countless businesses that will happily rent out online space for very cheap (or even free, with the hope that eventually you’ll upgrade and give them money).
We’re all tenants of a digital shopping mall, sharing space controlled by corporate entities who may not share our values or interests.
When someone visits my website, I wish it could feel more like inviting them into my home. What if my website lived in my home with me?
Imagine if having a web server in the home was as common as any other appliance such as a refrigerator. You might look over and see your friend (or a welcome stranger!) browsing your website. You could see what they’re browsing—look at photos with them, listen to a song together, whatever—and start a conversation about any of it.
Ever since we’ve decided that servers are something heavy, enigmatic, gigantic black boxes belonging to corporations - not individuals - we have slowly lost agency towards our own small space on the Internet. But actually, servers are just computers. Just as your favorite cassette player or portable game console, they are something that you can possess and understand and enjoy.
It is boundary-violating, to have a website in the corner of your bedroom. Websites are meant to be in the cloud. Eternal, somehow, transcendent, like the voice of code floating down from the sky. But no, there it is. It is real! I can kick it! Argumentum ad lapidem.
Those fixated on the idea of the metaverse are interested in bringing real-world objects into the cloud. I wonder instead how we might try to bring objects from the cloud into the real world and into our homes. How would we design webpages differently if our materials included the servers that they’re hosted on?
I remember the first time I saw a Mac in person. I was in middle school, but on the campus of the nearby college because my dad had a gig as a stand-in drummer for a local band.
While hanging out backstage—something I often had the privilege of doing from a young age as the son of a drummer—I saw a girl, sitting on the ground, typing away on a brand new MacBook Air.
The Air had just been introduced to the world, and I remember rewatching the announcement video online. Steve Jobs talked about the computer at Macworld only to reveal that it had been on stage with him the entire time inside a manila envelope. He opened it and pulled out the thinnest computer in the world. I had no idea a computer could even look like that.
After my dad’s show I immediately pointed out the girl and her computer, and I remember him sharing my excitement so much that he asked the girl if we could look at it a bit closer. She was kind and happy to show it off and even let me hold it. From then on, I was hooked. I knew that’s the computer I’d own one day, and sure enough I’d get my first Mac, a MacBook Air, a few years later in high school.
And now Apple has introduced a MacBook Air thinner than the original iPhone. I wonder what middle school me, who coveted but did not own an iPhone at the time, would think about that.
I received the new M2 MacBook Air (in Midnight) a few months ago and I’ve been smitten with it. It is a cool, dark slab of silent compute, and it feels dense and book-ish in the most satisfying way.
The battery life deserves its own mention, and feels like a leap ahead for personal computers in its own right.
In all honesty I thought the time had come when a computer could no longer really excite me in the way the original MacBook Air did. But this new one takes me right back there. It reminds me how lucky we all are to carry around devices that can conjure up all sorts of magic. And it takes me back to my beginnings in software when people wrote about the design of new iOS and Mac apps like they were art critics.
My life and friends and relationships and career are all in there, wound up with the electrons.
In setting up and using this new computer for the first time, however, I’ve realized how much devices today are like shells. The real computers, the ones that store our data and perform tasks on our behalf, are behemoths sitting in data centers. Setting up a new computer today is mostly a task of signing into various web applications to access your data, not transferring data onto the machine itself.
Our computers have become internet computers. And that might mean that the physical devices we own will trend towards nothingness—their goal is no longer to impress or inspire, but to be so small and light as to fall away entirely.
There’s something about that which makes me feel a bit melancholy. It feels like the days of computing devices being objects with personality and conviviality are fading. The computer is no longer a centerpiece, it’s an accessory, a thin client for some other machine or machines which are hidden away from us.
Since I was a kid the space program has been an object of my fascination, and even as an adult I’ve been captured by the heroics of NASA and other organizations launching probes and telescopes into the far reaches of space.
But something has never sat quite right with me about the recently renewed interest in human space travel, especially from CEOs of private companies like Musk and Bezos.
I think it’s always been a combination of two things:
There are so many problems here on Earth, many of which could be solved with the resources being invested into sending humans to another world.
Wherever you stand on the matter, whether you’re a Musk fanboy, an unaligned Mars obsessive, or just biplanetary/curious, I invite you to come imagine with me what it would take, and what it would really mean, for people to go put their footprints in the Martian sand.
Look at that! The webpage on the right is the canvas, and the code on the left is the medium. They’ve even built in visual editing tools such as a color picker.
Webpages have always been destinations, but this invites them to be starting points—blank canvases, even. Your browser now invites you to extend and reimagine the web in whatever way you see fit.
It has always been possible to run user scripts and styles via browser extensions, but the developer experience of creating an extension has never been particularly beginner friendly. I’ve personally never seen extension development integrated so seamlessly and directly into the browser. Arc even implies that working with the web might be just as important as browsing it—in Arc’s interface, the Boost editor sits at the same level as the page you’re browsing, not in some nested panel that feels secondary to the experience.
Under the hood, Boosts are just a folder of HTML, JS, and CSS files. You can zip them up and send them to a friend, if you’d like. One can imagine a way to easily share boosts on the web in some sort of marketplace built right into Arc. There could be entire forums dedicated to sharing boosts around like Winamp skins for websites.
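The CSS half of a boost might be nothing more exotic than a user stylesheet. This is purely illustrative—the actual file layout is Arc’s, and the selectors below are generic examples of mine—but it conveys how small the unit of sharing could be:

```css
/* style.css -- an illustrative "boost": a few lines of CSS that
   restyle whatever site they are applied to. Selectors are generic. */
body {
  background: #111;
  color: #eee;
  font-family: Georgia, serif;
}
a {
  color: #7fdbff;
}
```

A zip file containing little more than this is something anyone could make, understand, and trade with friends.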
Boosts empower even non-technical users to create with the web rather than simply consume it. The possibilities are exciting, and it’s refreshing that they’re being explored by a startup in 2022 at all.
I’m interested in software that works for us, our creativity, and our attention. The web is a powerful building material with a variety of textures, and it’s time we had tools that let all of us take advantage of such an incredible resource.
From The Browser Company’s email to Arc beta members about Boosts:
So, what happens to the internet when changing the internet is this easy?
If you read any of my writing you know that Robin Sloan is one of my favorite internet thinkers. He’s just published a spec for a new web protocol he’s designed called Spring ‘83.
I’d encourage you to read the spec yourself and check out the demo client. Just look at it:
It’s just so wonderfully web-ish.
Robin frames this work around imagining alternative “ways of relating” across the internet. So many of our current ways of relating online (social media, RSS, email) do a great job of syndicating and delivering content, but they sacrifice the presentation of that content and in doing so lose a key piece of what makes the web so great: creativity, expression, and freedom.
I hadn’t thought about this much before, but subconsciously I’ve always been aware of it on some level because when I read RSS I much prefer to jump out to a person’s website rather than reading in the client. It feels more personal and imparts the voice of the author. Reading on the source website feels like stepping into someone’s home for a chat, whereas reading in an RSS client feels more… clinical.
Whostyles calls on website owners to make a chunk of CSS available at a well-known URL that others can embed into their sites to match the style of the original. If everyone implemented a whostyle for their site, an RSS client could use it to personalize the display of feed entries.
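A sketch of how an RSS client might apply a whostyle when rendering an entry. The `/.well-known/whostyle.css` path and the scoping class are assumptions of mine for illustration, not part of a published spec:

```html
<!-- Fetch the author's stylesheet from a well-known URL on their
     domain, then scope each feed entry to it. Path and class names
     here are illustrative assumptions. -->
<link rel="stylesheet" href="https://example.com/.well-known/whostyle.css">

<article class="whostyle-example-com">
  <h1>A post title</h1>
  <p>Entry content, rendered in the author's own visual voice.</p>
</article>
```

The reader keeps the convenience of a feed, and the author keeps their voice.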
Spring ‘83 feels like it’s getting at these ideas, and it’s doing so by embracing HTML/CSS as a creative tool and a medium unto themselves.