LLMs Make Context More Valuable

Reader-side LLMs can filter and synthesize information on the reader’s behalf.

This increases the value of curated contextual information, which can now be sifted efficiently and integrated automatically with the main text.

LLMs have issues with provenance and reliability, though, which is why I’m interested in author-provided context: it serves as a trust signal and tames hallucinations, making catastrophic failure less likely.

Existing media formats have lightweight mechanisms, such as footnotes and annotations, that augment a text with additional information. However, these are usually limited to a small, pre-determined set, and the author likely had further ideas they decided weren’t worth incorporating into the final work.

Similarly, commented code captures only a fragment of its relevant context and rarely explains every design decision. Some decisions can be inferred from the surrounding code itself, but other information cannot be, and has to come from knowledge of the broader system and its environment.
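As a minimal sketch of this distinction (the scenario and names are hypothetical, invented purely for illustration): the code below shows *what* is done, which a reader or an LLM could infer on its own, while the comment carries environmental knowledge that no amount of reading the code would recover.

```python
# Inferable from the code itself: this is an exponential backoff schedule.
# NOT inferable: suppose an upstream API rate-limits bursts at roughly one
# request per second, so the base delay was chosen to stay under that limit.
# That constraint lives in the broader system, not in this file.
BASE_DELAY_SECONDS = 1.2  # hypothetical value tied to an external rate limit


def backoff_delays(attempts: int) -> list[float]:
    """Return the delay (in seconds) before each retry: base * 2**n."""
    return [BASE_DELAY_SECONDS * (2 ** n) for n in range(attempts)]


print(backoff_delays(3))  # [1.2, 2.4, 4.8]
```

An author-provided annotation, in the sense discussed above, would capture exactly the second kind of comment: the tradeoff and the environmental constraint behind the constant, not the mechanics of the loop.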

I wonder what might be a good testing ground for these ideas. One thought is that large programming projects often have public design discussions about the broader tradeoffs involved in particular architectural or feature decisions. And as far as other kinds of writing, the Bible is probably one of the most analyzed and annotated pieces of text in human history, though that one’s a bit out of my area of expertise…

See also: Advanced Essay