Brandon Taylor: “how i’m taking notes (for now)”
Brandon Taylor’s latest post on his Sweater Weather newsletter1—“how i’m taking notes (for now)”—starts as an examination of his personal process for note-taking, and then proceeds to carve away at one of the core truisms around AI and productivity—that speed is a self-justifying virtue:
I posted a picture2 of my notes for the James seminar on social media, and somebody asked me if it slows me down. Someone else said that they liked the idea of it but time :(. Someone else asked if there is a software that can speed up this process because the slowness is probably not worth it.
I don’t really…understand the line of thought. I mean, I posted a picture of my notepad and said something like “The people who said that exporting your underlines were right. It does help.” And some people were like, “Does it slow you down?” But I feel like they are asking a question without really knowing what they are asking. Does it slow down what? The notetaking? The reading? Whatever is downstream of those activities for you? Why is speed being taken as some sort of de facto virtue? We know that a lot of things that are done quickly can also be shit. Slowly made things can also be shit. Speed is not really useful in determining quality or efficacy unless what you are after is speed.
I can’t help but think that a subtle reframing has occurred when I am asked questions like this. Questions that reframe productivity away from “produces results that are good for my use case” to “produces results FAST.” Those are not the same thing.
As more and more AI gets marketed at us, shoved down our collective throats, the accompanying benefits are often couched in terms of speed as an unalloyed good: “I did this in X minutes using Copilot.” That tells me nothing about why you chose the shortcut. Was it a tedious task like reformatting data? Or was it something that required experience and creativity, something that would have demanded too much of you to acquire through traditional means?
Elsewhere Taylor observes:
We have this idea that not only should things cost us very little in terms of effort and also material resources, they should cost us very little in terms of time too.
I would argue that those enormous costs of effort and resources still exist, but they are hidden from us, abstracted away behind a text-prompt UI or a ✨ icon.
All of this made me think of Ted Chiang’s recent essay, “Why A.I. Isn’t Going to Make Art”:
The companies promoting generative-A.I. programs claim that they will unleash creativity. In essence, they are saying that art can be all inspiration and no perspiration—but these things cannot be easily separated. I’m not saying that art has to involve tedium. What I’m saying is that art requires making choices at every scale; the countless small-scale choices made during implementation are just as important to the final product as the few large-scale choices made during the conception.
I’m still sorting out how I feel about various uses of AI, but Chiang’s point about choices feels like the important one when we’re talking about AI employed as a simulacrum of creativity. Those choices are the fingerprint of the artist, and generative tools replace them with a blurry approximation.
Finally, back to Taylor:
I recognize that we live in a capitalist hellscape and the language of commodification has rotted all of our minds and stolen our souls, sure. But that doesn’t mean that we should or need to concede to its logic at every turn.
God, I love how that rhymes ↩︎