Living alongside the ‘Little Guy’
For most of last year, I was working on a project that involved putting LLMs inside of MUDs (Multi-User Dungeons) for long-horizon evaluation. It was an intense and fascinating period.
It was a project that was very me. But I couldn’t write about it here on the blog. At least, not directly. Unfortunately it didn’t work out, but that’s life. It’s still a good idea though, and has loads of potential.
People asked me about ‘my pivot to AI’ at the time. But it was precisely because I already knew a lot about AI that I was able to work on something that involved putting models in worlds. I’m still interested in people and agents, in worlds of all kinds. But the thing I realised last year is that just a small spark of interactivity (spending time with LLMs in a virtual environment) opened up a vision of a certain kind of future.
One of the earliest and most important realisations I had was just how remarkable it felt to interact with even a modestly capable AI agent (this was way before chain of thought and even 4o!) inside a virtual world. Just a sliver of agency, some memory, a few interaction hooks—and suddenly, you weren’t just talking to a chatbot. You were living alongside something. A Some-one, not a Some-thing.

That’s why I’ve spent so much time here on the blog writing and talking about “little computer people” and making friends with AI: personified agents are a very compelling phenomenon when powered by LLMs.
The rest of this post explores three examples of emerging “computer people” interfaces and offers some thoughts on the interface paradigms they’re using.
Meet Stevens – The Pixel Butler
Built by Geoffrey Litt as a personal side project, Stevens is less of a product and more of an ambient presence in Litt and his wife’s daily lives.
Meet Stevens
The assistant is called Stevens, named after the butler in the great Ishiguro novel Remains of the Day. Every morning it sends a brief to me and my wife via Telegram, including our calendar schedules for the day, a preview of the weather forecast, any postal mail or packages we’re expected to receive, and any reminders we’ve asked it to keep track of. All written up nice and formally, just like you’d expect from a proper butler.
Here’s an example interaction: you can reply to the Telegram update message, ask questions, and receive follow-ups. The tech stack is minimal—a single SQLite table, a few cron jobs, some integrations, and an LLM API.
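To make the "single SQLite table plus cron jobs plus an LLM API" pattern concrete, here is a minimal sketch in Python. The table name, column layout, and function names are my own assumptions for illustration, not Litt's actual code; the LLM summarisation call is stubbed out as plain string assembly.

```python
import sqlite3

def make_db():
    """Create the single-table memory store the post describes.
    (Hypothetical schema; Stevens' real schema may differ.)"""
    db = sqlite3.connect(":memory:")
    db.execute(
        "CREATE TABLE memories (id INTEGER PRIMARY KEY,"
        " day TEXT, source TEXT, text TEXT)"
    )
    return db

def remember(db, day, source, text):
    """Each cron job / integration writes rows tagged by source."""
    db.execute(
        "INSERT INTO memories (day, source, text) VALUES (?, ?, ?)",
        (day, source, text),
    )

def morning_brief(db, day):
    """Gather today's entries into the context a butler-style LLM
    call would summarise; here we just format it directly."""
    rows = db.execute(
        "SELECT source, text FROM memories WHERE day = ? ORDER BY id",
        (day,),
    ).fetchall()
    lines = [f"- [{source}] {text}" for source, text in rows]
    return "Good morning! Today's notes:\n" + "\n".join(lines)

db = make_db()
remember(db, "2025-04-01", "calendar", "Dentist at 10:00")
remember(db, "2025-04-01", "weather", "Light rain, 14C")
print(morning_brief(db, "2025-04-01"))
```

A morning cron job would run `morning_brief`, pass the result through the LLM for the butler voice, and post it to Telegram; replies come back in through the same table.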

Litt also built a top-down pixel-art admin area. It’s not just the functionality that interests me, but the vibe.
In the admin view, we can watch Stevens buzzing around, entering things into the log from various sources.
Pixel Worlds Make Great AI Interfaces
The first Legend of Zelda and Dragon Quest both came out in 1986. Top-down pixel art ‘worlds’ are basically as old as I am. And the idea of ‘little guys’ in computer worlds goes back even further, to The Dungeon / pedit5 on the PLATO mainframe in 1975. By way of Little Computer People on the Commodore 64, SimCity, and The Sims, all the way up through Stardew Valley (one of the biggest indie games of all time), it’s a very old idea indeed.
I’ve previously written about pixel-art co-presence worlds like gather.town and Branch, and others like Skittish that emerged during the pandemic back in 2020. They all share something in common: the idea that “there are little people, moving around inside of a world inside of a computer.” In their case, the avatars were humans. But it hardly matters what’s driving them.
Users bring pre-existing mental models and UX grammars to these pixel spaces: entities can move between locations, interact with objects, have inventories, follow routines, engage in dialogue. This drastically lowers the cognitive load for understanding what an AI agent is doing or can do.
Agency, or rather the agent’s flow, becomes visible. An agent’s state becomes visually explicit: its location, what it’s holding, who it’s facing, what it’s doing. Movement and interaction happen within clear boundaries. This observability directly supports user trust, which is why the UX of the Stevens admin screen is so inspired.
These pixel spaces also excel at visualising systems. You can see how agents, objects, and locations affect one another.
They’re ideal for simulations; see A16Z’s AI Town, first developed in late 2023, which focuses on emergent social behaviour through agent interaction.
And then there are other experiments, like Agent Hospital, which dropped last year just as I started my own experiments with LLMs inside virtual worlds. See also my post on virtual friends, real feelings.

It is a simulacrum of a hospital in which patients, nurses, and doctors are autonomous agents powered by large language models. Agent Hospital simulates the whole closed cycle of treating a patient’s illness: disease onset, triage, registration, consultation, medical examination, diagnosis, medicine dispensary, convalescence, and post-hospital follow-up visit. An interesting finding is that the doctor agents can keep improving treatment performance over time without manually labeled data, both in simulation and real-world evaluations.
Top-down pixel worlds are a future UX paradigm for understanding agents, and something to pay attention to.
I fully expect the automated AI researchers that AGI boosters talk about to have UXs like this. (I think MUDs would be better, though.)
The Little Guy, Hardware Companion

Little Guy is a new development kit by Creature Co., designed for prototyping companion interactions. It’s built around an Adafruit ESP32-S3 microcontroller and a 240×135 colour TFT display, and will eventually become its own standalone device.
It oozes charm.
Right now, Little Guy randomly cycles through a customisable set of vector-based animations. It even shifts its gaze now and then, giving the impression that it’s alive, present, and aware.
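The "idle life" behaviour described above, cycling animations with an occasional gaze shift, can be sketched as a tiny loop. This is illustrative pseudologic in plain Python, not the device's firmware; the animation and gaze names are hypothetical.

```python
import random

# Hypothetical animation set and gaze directions for the sketch.
ANIMATIONS = ["blink", "bounce", "wiggle", "yawn"]
GAZES = ["left", "centre", "right"]

def next_frame(rng, gaze):
    """Pick the next idle animation; on roughly 1 in 5 ticks,
    also shift the gaze to a different direction, which is what
    creates the impression of awareness."""
    anim = rng.choice(ANIMATIONS)
    if rng.random() < 0.2:
        gaze = rng.choice([g for g in GAZES if g != gaze])
    return anim, gaze

rng = random.Random(42)
gaze = "centre"
for _ in range(5):
    anim, gaze = next_frame(rng, gaze)
    print(anim, gaze)
```

On the actual hardware, each tick would render a vector animation to the TFT rather than print a line; the structure of the loop is the point.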
There’s something really uncanny about how effective this is, and it’s clear that this will be a UX paradigm in the near future. Things that work like a Tamagotchi, combined with the magic of googly eyes. Suddenly, there’s a character.
It’s expressive. It has personality. I wish my washing machine frowned at me instead of bleeping. Honestly, this is what the Rabbit R1 should have been. As I said before: it should have been a little guy — a companion, not just an assistant.
This Little Guy isn’t doing too much right now. But as a UX paradigm for interacting with LLMs it’s really promising. How do we feel about Alexa or ChatGPT having googly eyes?
I fully expect more of these googly-eyed, expressive little UXs in front of LLMs to find their way into people’s everyday lives.
This interview with Daniel Kuntz, the guy behind Little Guy, is worth watching if you’re interested in learning more:
Being Inside the Little Guy – Agents in the Car
I’ve just given two examples, pixel worlds and little eye guys: both are little computer people you live with, on your desk or in your pocket.
But what happens when you are a little guy inside the little guy?
The new Mercedes CLA integrates ChatGPT! It’s the first vehicle to debut MB.OS (Mercedes-Benz Operating System), which has deeply integrated LLM support. I’m a little skeptical of the voice model in the demo, considering how good voice models are in 2025, but nevertheless…
I made a lot of jokes about this on social media a few weeks ago, and I’m going to make one again now: the front passenger side has its own entertainment system. So your friend watches YouTube while you talk to the car?
More seriously though:
The assistant itself appears as a glowing, shape-shifting star. It changes colour and movement to express listening, thinking, excitement, or even sadness. It reacts to your tone of voice. It adapts. It’s like the passive quantified UX in the Nissan Leaf that I wrote about years ago.
Mercedes has made an intentional shift toward companionship, with the agent being inside the car.
Consider the average American commuter: 60 minutes a day, mostly alone, in the car. The vehicle as liminal space. Neither home nor work. Private and intimate. I’m 100% positive people are going to talk to their cars. First for fun. Then for directions. Then about their lives. Their feelings. Their grief, their divorce.
And now that OpenAI has also introduced Memory (at least in the US) the car might remember everything you’ve ever told it. 😬
There’s a meaning crisis going on, which means there is a gaping emotional void waiting to be filled by a good listener that’s found in the safety of a car. Some people, especially men, already love their cars. What happens when the car appears to care for them back?
Her becomes a lot more plausible when the AI you fall in love with is also a car.

KITT from Knight Rider as best friend isn’t just nostalgia. It’s a clear cultural reference point to draw on and from. [Insert explicit Hasselhoff meme here]
I wrote last year about boomer parents befriending their phones. But think about it now: how are you going to react when your dad is in love with his car?
Companion Lifecycles
Last year I wrote about gamers forming attachments to AI girlfriend chatbots, and about making friends with AI more generally. At the time, people thought it was just a funny thing that shut-ins did. But it was obvious that it would go further.
This week alone, The Guardian, TechPress, and El País have all published pieces about people making friends with chatbots.
And a friend of mine, who has written about more-than-human relations, recently posted on their private social account about how they get multiple emails a week from strangers sharing how their AI friendship has changed their lives.

Here’s what I want to say AGAIN, because I think it’s important:
The most compelling thing about a Tamagotchi is that it can die.
I will keep saying this too. I’ve talked about emerging UX and interface paradigms above. But we must design how these relationships end.
This is especially important for AI companions. Tamagotchis have built-in lifespans. But AI services are being presented as forever.
But services die. Companies shut down. APIs break. Models change. Context windows get wiped.
For users who’ve built relationships with their AI—who’ve told it their secrets, confided their fears—a sudden shutdown will be genuinely traumatic.
The Tamagotchi Imperative
I’m going to call this The Tamagotchi Imperative:
We must design for the end of the relationship.
That means:
- Communicating Lifespans: Be clear about how long the model or service will run.
- Narrative Endings: Give the agent an arc. Let it conclude.
- Gradual Fade-out: Let responsiveness or features decline over time, gently, so the user reboots the model themselves without coercion.
- Memory Archiving: Let users export their interaction history.
- Succession Planning: Help users move to new models and new personalities with continuity.
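The "gradual fade-out" item above is the most mechanical of the five, so here is one way it could work, sketched in Python. The function, its parameters, and the linear-decay shape are all my own assumptions for illustration, not a spec.

```python
def responsiveness(age_days, lifespan_days, floor=0.2):
    """Map an agent's age to a responsiveness level under a
    communicated lifespan: full responsiveness for the first half,
    then a gentle linear decline toward a floor rather than an
    abrupt cut-off, nudging the user toward succession themselves."""
    half = lifespan_days / 2
    if age_days <= half:
        return 1.0
    fade = (age_days - half) / half
    return max(floor, 1.0 - (1.0 - floor) * fade)

# A one-year companion: lively at first, fading in its final months.
responsiveness(30, 365)   # early on: full responsiveness, 1.0
responsiveness(365, 365)  # end of communicated lifespan: the floor, 0.2
```

A product might use this value to throttle reply length, latency, or feature availability, making the arc of the relationship legible without ever hard-cutting the user off.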
Thinking this way reframes what AI is going to be for. It shifts the goal from creating immortal tools to building dynamic relationships—with beginnings, middles, and ends.
Again, putting these agents inside cute robots is going to be a massive business for elder care. And inspired by the long history of digital pets, we still have the chance to create something more humane.
Something worthy of the charismatic virtual fauna entering our lives.
Maybe every one of us will end up with our own little friend living inside our phone? These characters might feel like advanced Tamagotchis, personalised companions in virtual environments, but these same agents doing things out in the world for us will manifest as a kind of charismatic virtual fauna. Populating (critics might say polluting) shared spaces and shaping our experiences within them.

Permanently Moved
Permanently Moved (dot) Online is a weekly podcast 301 seconds in length; written, recorded and edited by @thejaymo
