Recent Notes
TIL: Hyparquet
I came across a nice dependency-free library today, hyparquet, for parsing Parquet files in JavaScript.
It supports many compression formats including Snappy and Zstd, and reads Parquet files in chunks using range requests rather than loading the entire file. You can read just the metadata, or particular rows and columns.
It’s a good companion to Jeff Heer’s flechette, a lightweight library for reading and writing Arrow files. (API reference)
See also: fzstd, a small library for decompressing zstd files.
Advanced Essay
In the 1970s, decades before computers began to compete at the highest levels of chess, a new variant of the game was invented, called Advanced Chess. The idea was that instead of a human player competing against a computer, the human and machine would play together on the same side.
Thinking about this made me wonder about other forms of human–AI collaboration. In particular, I became curious about how the existence of AI models on the reader’s side might fundamentally change the essay form.
First, what is an essay?
Whenever you start thinking about how to cleave apart knowledge into its constituent components and dynamically reassemble them, there’s a risk of blurring the lines between the various forms of media entirely. So, for the purposes of this discussion, I’m going to say that an essay consists of a sequence of words and other elements intentionally arranged by its author in a particular order.1 This includes computational essays, in which the essay itself is formed from a computational medium in which text and images can exist alongside dynamic illustrations, simulations, and other forms of interactive media.2
Constraints on essays
Traditionally, essays have been written for humans to read, which imposes a constraint on the essay form (and on other art forms): human attention is finite and valuable. Every second, you are gifted with exactly one second of the reader’s attention, and you had better make good use of it.3
I’ve sometimes wished for the “extended” version of an essay, or for a briefer version. But today, essays are typically provided in a one-size-fits-all fashion that does not adapt to the reader, and that the reader cannot easily adapt to themselves.
How does this change in the presence of reader-side AI models?
Compared to human attention, AI attention is potentially much, much cheaper. This lowers the cost of adding extra information to an essay so long as it does not interfere with the “main track” designed for people to read.
Imagine if the essay came bundled with a whole bunch of extra material:
- Some of it can be text, but it can also be images, video, code, and links.
- Some of it can be organized, and some of it can be messy. The LLM can sift through and pull out the relevant pieces.
- Some of it can be physically situated at particular parts of the essay, pointing out that a piece of extra material goes well with a particular section.
- Some of it can be written by the primary author, but it can also include works by others that “pair well” with the primary piece.
- Some of the context can even come purely from the reader’s side.
The idea of curating a context for a particular piece of writing seems like a very powerful idea.
So, what can the AI do with all of this extra information?
It can take advantage of not only the material provided by the author, but also the context it understands about the reader. The writer knows things the reader doesn’t, and the reader knows things the writer cannot.4
For example, the AI can…
- Use its knowledge of the reader’s interests, experience level, and goals to select and dynamically annotate the essay with relevant extra material provided by the author.
- For example, you can imagine Ed Tufte-style sidenotes synthesized on the reader side to summarize interesting pieces of extra content that are particularly relevant for this specific reader.
- Allow the reader to establish different lenses for reading a work.
- For example, maybe you’re interested in the author’s personal background and how the piece came to be (think ‘behind-the-scenes’).
- Or maybe what you want are lots of concrete examples of the abstract points discussed in the main text.
- Maybe the reader really likes bridges.
- The AI could also intervene on the main text in various ways, though this makes me uneasy given the current state of the technology. (I’ve been seeing incorrect summaries of emails in my Mail app ever since I updated to the latest version of iOS.)
- You could almost think of the AI as performing an editing pass, but applied on the reader side rather than the writer side. Perhaps Strunk & White will gain a new life?
The User Interface
How would this look from the reader’s perspective? A few quick points here. I’d love to experiment with this, but I don’t currently have the time!
- Highlighting is undervalued. Drawing attention to various parts of the text is an amazing way to encourage a reader to contemplate a text differently.
- Dynamic sidenotes seem like a pretty good idea for contextually inserting information without interfering with the main flow of the text.
- The AI can edit or augment the main text while making clear that it was edited. Here you can imagine the AI changing sentences, removing them, eliding entire sections, rearranging the material, all to draw out a particular set of connections or remove material irrelevant to the reader’s current frame. (Related: An AI-Resilient Text Rendering Technique for Reading and Skimming Documents)
Existing examples
There’s a lot of prior art, though of course none of it has been designed with the intention of LLM use, since LLMs only appeared on the scene after these works were published.
- Michael Pollan, aside from being an excellent example of nominative determinism, has written many books about food, with later books compressing and distilling down the material from his previous books to a more essential core.
- Can’t Hurt Me is a combination book–podcast by David Goggins. When the audiobook was being recorded, David and his co-author sat together in a recording booth and effectively recorded a short podcast episode following the reading of each chapter. I really like this interleaving of formal and informal: each discussion illuminates the ideas in the chapter it follows.
- The Annotated Alice, which I remember reading as a kid, belongs to a whole genre of annotated works where often one page contains the original work and the companion page is filled with contextual annotations.
- Just spotted: A summary blog post + 1.5 hour roundtable discussion on a new AI alignment result.
- This seems like a very silly and incomplete list of examples. If you have suggestions, I’d very much welcome them! Please email me or send me a message.
Foot-sidenotes
So, that’s it for now. If I had had more time, I would have written a shorter essay. But in fact, there may be an advantage to this style of writing, and a future AI can condense the ideas context-specifically for individual readers based on their interests. 🙃
In service of that future, here are some extended side-footnotes (or foot-sidenotes) that I (or my AI) might return to if I decide to tinker with this concept later.
- In light of this essay, it’s funny to think about how students in the Harry Potter universe were unhappily stuck with their textbooks.
- It’s interesting to think about the sliding scale of how strongly an essay reconfigures itself towards the reader, or, put another way, how much control is ceded by the writer.
- In the case of chess, the human and machine collaborate on an external goal, namely winning the game against an outside opponent. Eventually, the human may become unnecessary in achieving that goal (eg. see Advanced Chess Obituary). Advanced Essays are concerned with human understanding, though, so the smarter the AI, the more it should be able to help.
- Related: Hofstadter’s Variations on a Theme as the Crux of Creativity. Hofstadter introduces the idea of an implicosphere, or implicit counterfactual sphere: the sphere of implications surrounding a concept.
- I like the idea of introducing a background context for reading because it curates a space for new connections to be made between existing ideas.
- I visualize this as two clouds of ideas that are moving ever closer to each other, until they begin to overlap, allowing new connections to be made between the interiors of these idea clouds, sparking like lightning. To me, this is one of the underpinnings of creativity.
- Managing these clouds might be something that AI will eventually be well-suited to do, creating a context conducive to creative work.
- Another kind of context that an author can provide is a direction for associations, pointing the AI towards a particular area. There’s a very large set of possible contexts in which a piece can be read, but perhaps the author has some ideas about some contexts that pair well.
- I particularly like the way that reader-side LLMs encourage writers to put much more of themselves alongside their work. Particularly in the case of writing text, extra material is effectively free to store due to its small size. And since LLMs can do reader-side synthesis, information can be much cheaper for the writer to produce since they don’t have to expend much effort on organization.
- All this reminds me of a funny suggestion I once made to a friend, which is that they should make an initial commit of their code with all of the vowels left out, then a second commit with a message to the effect of “oops, forgot the vowels”. Or maybe even one letter at a time.
- It also reminds me of a story from Raymond Smullyan about how one should learn to play piano by going one (physical) piano key at a time, spending a week practicing middle C. Not all arrangements of the same material are equally good! (See also: Tidying Up Art)
- Stephen Wolfram’s book A New Kind of Science might make for an interesting test case, because the second half of that book is a collection of historical footnotes that could probably be usefully attached to various points of the text.
- On the writer’s side there could be elicitation tools that read the essay and then pepper the author with questions to acquire contextual information for every part of the essay. Techniques from journalistic/police/courtroom interviews would be useful here.
- I often approach writing iteratively, saying the same thing again and again from many different angles, and traversing the same territory from multiple directions. That set of “essay samples” would make for an interesting background context to whichever one I finally decide is authoritative.
This is probably only roughly right, but the key idea I’m thinking about here is how we can augment a traditional essay, given the new possibilities and constraints of AI assistance on the reader side.
To date, the best instantiations of the computational notebook idea are found in Mathematica, which introduced the format, and Observable notebooks, which are an innovative browser-based take on some of the same ideas.
I spend a lot of my time thinking about data visualizations, which are not entirely unlike the written word. In that context, one consequence of the finiteness of human attention is that all of the visual elements on the screen are competing for the finite, precious resource of the reader’s attention, which makes it necessary to be very intentional about choosing what to show, emphasize, or hide. But there is no best visualization – it’s a function of not only the data, but also the audience and their interests and goals. Presenting an expert with a visualization design for a beginner will often cause them to be dissatisfied, as the simplifications that were necessary for basic comprehension precluded some of the more advanced insights, or omitted some important controls. Fortunately, there’s more of a practice of providing alternate views of the same data so that you can meet the reader (or user) where they are.
This feels a bit like late binding in dynamic programming languages, and like Julia’s just-in-time-ahead-of-time compilation strategy. The idea is that the author of the library has written the logic, but does not know what concrete types their function will be called with. The user of the library has the values in their hand at the time they go to call the function, so the compiler just-in-time specializes the code and compiles an optimized version of the function for the exact types of values the user wants to call the function with. This is analogous to a writer who knows some things but lacks the full context of the reader’s experience level and interests, where the AI model can dynamically adapt the work to the interests and capabilities of the reader.
TIL: Rust-analyzer can expand macros
If you use an editor with Rust’s LSP integration, you can put your cursor on a particular derive macro, such as Clone in #[derive(Clone)], or on the name of a macro invocation, such as matches! in matches!(foo, bar), and select “Expand Macros Recursively”, which will open a side buffer showing the expanded code from that macro.
This is extraordinarily useful when trying to understand what code is generated from a derive, or when debugging an issue with your own macros.
I discovered this through a comment made by David Barsky in the Rust Zulip.
Formatting Code with a Git Hook
Here’s a git pre-commit hook to auto-format code on commit. It’s useful if you’re working with code whose formatting guidelines differ from the ones you’ve configured in your code editor.
# .git/hooks/pre-commit

# Store list of staged files that match your target pattern.
# The `git diff` command returns a list of staged files.
# (This sketch assumes Rust files formatted with rustfmt; swap in
# whatever file pattern and formatter your project uses.)
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.rs$')

if [ -n "$files" ]; then
    rustfmt $files
    # Add all of the previously staged files back to the staging area
    # to check in the formatted files.
    git add $files
fi

# Exit with an error when there are no longer any staged changes,
# since otherwise git will create an empty commit.
if git diff --cached --quiet; then
    echo "No staged changes left after formatting; aborting commit."
    exit 1
fi
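One detail that’s easy to forget (a general git requirement, not something specific to this hook): git silently skips hook files that aren’t executable, so after creating the hook you need to set its executable bit.

```shell
# Git only runs hooks that have the executable bit set.
# Create the hook file if it doesn't exist yet, then mark it executable.
mkdir -p .git/hooks
touch .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit
```

If the hook seems to be ignored on commit, a missing executable bit is the first thing to check.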
Quicker Netlify Deploys
I use Netlify to host small static sites like this one. It does its job well and is very convenient to use.
But I noticed that since the standard way of setting things up is to link Netlify directly with your Git repo, deploys could take a while. When you push your code, Netlify builds the site on their own infrastructure before deploying it.
I recently came across a very neat alternative where you can build your site locally and deploy it to Netlify directly with a single HTTP request. This is really convenient when you don’t need the overhead that comes with more careful deploy management.
The link above does a good job of describing the process, but the upshot is that the following command is all you need to have a deploy running live just a few seconds later.
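Pulled out of my justfile below, the core of the deploy looks something like this sketch (the site name and the .netlify-token file are the values this site uses; substitute your own):

```shell
# Zip the build output and POST it directly to Netlify's deploy endpoint.
zip -q -r dist.zip dist
curl -H "Content-Type: application/zip" \
     -H "Authorization: Bearer $(cat .netlify-token)" \
     --data-binary "@dist.zip" \
     https://api.netlify.com/api/v1/sites/what.yuri.is/deploys
```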
Right now I use it like this (these build commands go in a justfile) to ensure that there’s a corresponding commit for every deployed version:
pub: no-uncommitted-changes build
@rm -f dist.zip
@zip -q -r dist.zip dist;
@curl -H "Content-Type: application/zip" \
-H "Authorization: Bearer $(cat .netlify-token)" \
--data-binary "@dist.zip" \
https://api.netlify.com/api/v1/sites/what.yuri.is/deploys | jq
@rm -f dist.zip
@echo ""
@echo "Deployed!"
# Halts with an error if the repository contains uncommitted changes
no-uncommitted-changes:
@git diff --exit-code > /dev/null || (echo "Please commit changes to the following files before proceeding:" && git status --short && exit 1)
build:
zola build