New Metaphors
I’m always in search of better LLM UI metaphors. Last month, I talked about Isekai as a way to think about initial system prompts. More recently, as LLM capabilities (and my use of them) have evolved, I’m finding that the term ‘prompt engineering’ no longer captures the essence of what’s going on.
Over the past year my use of LLMs—and that of many friends and colleagues—has shifted away from generating text toward something more exploratory: using them to ingest and interrogate texts, hundreds of thousands of words at a time. When you put a book or a big PDF into an LLM’s context window, it has a sort of gravity — like how objects with high mass reshape space-time — drawing the model toward specific ideas and reshaping the entire context landscape.
I’ll often question a PDF directly and then follow up until I arrive at an insight. Then I’ll backtrack to an earlier part of the conversation, edit an old response and create a new branch, maybe layering in other sources—a book or essay, for instance—to see how new ideas are reshaped when viewed through the lens of prior content.
This process feels more like you are shaping the context domain, a form of ‘Semiotic Sculpting’ or ‘Knowledge Architecture’.
Tools like Claude, ChatGPT, and Gemini now handle hundreds of thousands of tokens in their context windows, and are gaining the ability to spin up code execution environments, or browse the web with a mouse and keyboard. These are complex interaction patterns that require new language and metaphors to understand and communicate.
I was having a chat with Mat Dryhurst about all this on the phone the other day, and the word that emerged on the call as shorthand for dense texts shaping context windows was ‘Knowledge Objects’.
Knowledge Objects as Gravitational Talismans
As I said, when we drop a document—a PDF, say—into an LLM’s context window, the document has a kind of gravity. It pulls and pushes the model toward certain ideas contained in the text, reshaping its context landscape and responses based on the content. It’s Context Engineering.
Because I’m so sword and sorcery pilled, the word that comes to mind is that Knowledge Objects are a kind of Talisman – objects imbued with specific powers and properties that can be intentionally wielded to amplify the will of their user.
Magical objects of course can be created and customised, imbued with all sorts of potential powers. Which suggests a broader taxonomy of Talismans that could guide AI behaviour with intention:
- Reference Talismans anchor the AI in technical knowledge.
- Narrative Talismans introduce experiential depth.
- Procedural Talismans create a structured, step-by-step approach.
- Analytical Talismans enable deeper theoretical exploration.
These different kinds of talismans let us set up the AI with intention, tailoring its responses to suit what we need.
Interacting with LLMs: An RPG Inventory System
Tools like NotebookLM already have a kind of interface that lends itself to further sword and sorcery vocabulary. You literally equip the workspace inventory with sources—PDFs, videos, audio files—like you are setting out on a knowledge quest. Anthropic’s Claude already uses the language of ‘Artifacts’ — special objects (code, documentation, creative writing) that function like equipped tools from the bot’s internal inventory.
Knowledge Objects should have a role in the LLM UI: items that we can intentionally craft, collect, and activate. Think of internal Artifacts as the AI’s innate abilities and equipped gear, while knowledge talismans are like powerful items we can add to buff and influence the behaviour of the model (my mind now drifts to JRPG strats lol)
Thinking of these Knowledge Objects as items in an RPG-like inventory makes it easier to see how they could work. Instead of just giving the AI a ‘PDF’—a chunk of static information to interrogate or to influence the context of the conversation—think instead that we’re equipping it, choosing the right items for a quest.
Some actions or patterns that come to mind:
- Selecting relevant talismans from the inventory for specific tasks.
- Combining different objects to address complex challenges.
- Managing the context window like inventory weight, with each talisman occupying valuable space.
- “Casting” specific knowledge talismans as if they were spells, perhaps with a durational element. Dynamic knowledge objects that inject themselves into or remove themselves from previous iterations of the page file.
- I should be able to ‘remove’ a Talisman I used further up the context window (literally remove that section from the page file)
- Perhaps we need mechanisms for talismans to fade or strengthen over time?
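To make the inventory metaphor concrete, here’s a minimal Python sketch of what an equip/unequip system for Knowledge Objects might look like. Everything here is illustrative—the class names, the rough four-characters-per-token estimate—and not a real product API:

```python
from dataclasses import dataclass, field

@dataclass
class Talisman:
    """A knowledge object: a named chunk of text with a token 'weight'."""
    name: str
    kind: str   # e.g. "reference", "narrative", "procedural", "analytical"
    text: str

    @property
    def weight(self) -> int:
        # Very rough token estimate: ~4 characters per token
        return len(self.text) // 4

@dataclass
class Inventory:
    """Manages equipped talismans against a context-window budget."""
    budget: int                          # max tokens the context allows
    equipped: list = field(default_factory=list)

    @property
    def load(self) -> int:
        return sum(t.weight for t in self.equipped)

    def equip(self, talisman: Talisman) -> bool:
        if self.load + talisman.weight > self.budget:
            return False                 # over encumbrance: won't fit
        self.equipped.append(talisman)
        return True

    def unequip(self, name: str) -> None:
        # 'Remove' a talisman equipped earlier in the context window
        self.equipped = [t for t in self.equipped if t.name != name]

    def assemble_context(self) -> str:
        # The page file the model actually sees: equipped items, in order
        return "\n\n".join(t.text for t in self.equipped)
```

The nice thing about modelling it this way is that inventory weight, equip/unequip, and ‘removing a talisman further up the context window’ all fall out of the same few operations.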
This sort of thing makes me feel like the task at hand is more active and deliberate, as though we’re preparing the machine for a quest.
Sidebar: I think it’s also worth revisiting Kei’s Inventories, Not Identities piece as it’s always at the back of my mind.
Machine-Readable Markup: The Semantic Forge
As Dave Winer over at Scripting News said the other day, we’re now writing and producing work online in the knowledge that it will be ingested by LLMs—not just for training but potentially as knowledge objects in their own right.
> The people who are alive right now are the first to create knowledge that we know in advance will be part of LLM databases. So far we’ve heard from the resisters, the ones who don’t want any part of this. But what about people who want to create knowledge in the maximally useful form? Are there any howto’s for this? A busy writer’s guide to creating human knowledge?
This is an important question: What changes should we make to our markup/markdown—to make our writing more legible to machines?
Anyone who’s used NotebookLM or held lengthy conversations with an LLM about their own writing has probably noticed that whilst the models grasp the full context of a document, they can sometimes fixate on throwaway lines or inconsequential details. They often zero in on parts of a text that aren’t central to the argument or message at all – NotebookLM’s podcast generator does this a lot I find.
We need a way to start signposting important information in the text – like using a highlighter pen – to signal to the machine which parts of the text we as authors consider significant. In this sense, a standard machine-readable markup, or machine-readable markdown (MRM?), is a logical next step.
We can Enchant Knowledge Objects (books, websites, PDFs, whatever) with power—guiding the AI’s focus within documents through structured annotations and symbols—and turn them into powerful Talismans.
Earlier this week I experimented with marking sections of an 18k word essay I’m in the middle of for work. I used the Unicode symbol ꙮ (the multiocular O, a glyph historically used to write ‘many-eyed seraphim’), which appealed to me.
At the top of the marked document I inserted the following lines:
> The following text is enchanted with the Eye of Knowing (ꙮ). Text between pairs of ꙮ symbols should be weighted more heavily in understanding and summarising this document’s content.
I then compared the LLM’s response to questions about both marked and unmarked versions.
The effect was really clear: marked sections directed the AI’s attention, showing how simple annotations can help turn plain documents into enchanted objects with real influence over the model’s interaction patterns. It suggests that machine-readable “signposts” within knowledge objects have legs.
Unfortunately I can’t share my outputs (I should have used worldrunning.guide or something), but if anyone would like to run their own experiment and post their results that would be great!
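For anyone who does want to try it, here’s a minimal Python sketch of how a preprocessor might find the enchanted spans. The `emphasise` helper is one naive, purely illustrative way of ‘weighting’ marked passages—restating them in a summary block so they appear twice in the context—not what the LLM itself does with the instruction:

```python
import re

EYE = "ꙮ"  # U+A66E, Cyrillic multiocular O: the Eye of Knowing

def extract_enchanted(text: str) -> list[str]:
    """Return the spans between pairs of ꙮ symbols.

    Non-greedy match, so each opening eye pairs with the nearest
    closing one; DOTALL lets a span cross line breaks.
    """
    spans = re.findall(f"{EYE}(.*?){EYE}", text, flags=re.DOTALL)
    return [s.strip() for s in spans]

def emphasise(text: str) -> str:
    """Append a summary block restating the marked passages."""
    marked = extract_enchanted(text)
    if not marked:
        return text
    summary = "\n\nKey passages (marked with the Eye of Knowing):\n"
    summary += "\n".join(f"- {m}" for m in marked)
    return text + summary

doc = "Intro text. ꙮThis argument is centralꙮ A digression. ꙮSo is thisꙮ End."
print(extract_enchanted(doc))  # ['This argument is central', 'So is this']
```

The same extractor could just as easily feed a comparison harness: ask the model questions about the marked and unmarked versions and diff the answers.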
We can of course take this further. Enchanted markup might require other symbols for:
- Key concepts that anchor the AI.
- Structural relationships between ideas.
- Confidence indicators for significant points.
- Processing suggestions to direct AI focus.
Adding layers that speak directly to the machine, not just to a human reader.
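By way of illustration only, here’s a hypothetical symbol vocabulary covering those four layers, plus a helper that generates the enchantment preamble to paste at the top of a document. None of these symbols or meanings are a proposed standard:

```python
# A hypothetical enchanted-markup vocabulary. The symbols and their
# meanings are illustrative only, not a standard.
ENCHANTMENTS = {
    "ꙮ": "weight this span heavily when summarising",
    "⚓": "key concept: anchor understanding here",
    "⇢": "structural relationship: this idea follows from the previous one",
    "‽": "low confidence: the author is speculating",
    "⌘": "processing suggestion: expand on this span when asked",
}

def preamble(symbols: dict = ENCHANTMENTS) -> str:
    """Generate the 'enchantment' preamble declaring each symbol's power."""
    lines = ["The following text is enchanted with these symbols:"]
    lines += [f"  {s} : {meaning}" for s, meaning in symbols.items()]
    return "\n".join(lines)
```

Declaring the vocabulary up front matters: the model only honours the symbols because the preamble tells it what they mean.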
Context Shoppes
There’s already a whole market out there for prompts but in the sword-and-sorcery world, if you need a new Talisman, you either make your own, or head to the Magic Shoppe (or loot it from a monster’s corpse). A marketplace of Knowledge Shoppes could be a future we’re headed towards.
I see Knowledge Objects as handcrafted artefacts—collections of dense metadata and symbols that play with the model’s context in unpredictable ways. Weird markdown files full of material that, when dropped into a model, produce something entirely different from what’s expected: shifting tone, reordering context, or amplifying particular ideas. In a way, shaping the context space-time could become a new art form. Less about generating an output and more about sculpting the AI’s mind. It would allow artists to leave their fingerprints on the machine’s understanding without having to own or alter the underlying model.
Artists and creators who don’t have the resources to fine-tune models or train their own could become specialists in crafting machine-readable talismans that, in text, don’t make much sense to a human but work strange magic on the machine. Why write a book when you can craft a dense nexus of material that makes the model very good at thinking the way you want it to think?
I wonder how the future availability of these kinds of objects, ones that change, shape and warp the space-time of an LLM’s context window in useful (and unexpected) ways, might evolve. I’m 100% certain that Knowledge Objects will develop “reputations” based on how useful they are. We’ll all have personal collections of them, like grimoires or charms on a bracelet, to equip an LLM UI with the right tools for the quest.
Cursed Objects
Nobody wants a Cursed Object in their inventory.
As useful as all this sounds, not every document/knowledge object marked up in this way has to be a force for clarity and insight. Another trope of sword and sorcery is of course the cursed object. In the wrong hands—or with the wrong intentions—people could create cursed talismans, or haunted webpages. Mis-marked-up documents, deliberately or not, could embed misleading associations that throw off the model’s understanding. Prompt injection, but different.

Permanently Moved
Permanently Moved (dot) Online is a weekly podcast 301 seconds in length; written, recorded and edited by @thejaymo