AI Agent Metaphors | 2404


Like the Metaverse, history is repeating itself with zero contact with the last time a lot of smart people had conversations about AI Agents.

Full Show Notes: https://www.thejaymo.net/2024/04/13/2404-ai-agent-metaphors/


Permanently Moved is a personal podcast, 301 seconds in length, written and recorded by @thejaymo


AI Agent Metaphors

I’m currently reading Dan Bricklin’s 2009 book ‘On Technology‘.

It’s a collection of blog posts written during the 2000s, and a fascinating window into the kinds of topics that were being discussed in that era.

There’s a great post reproduced in it, dated April 4, 2004, called ‘Metaphors, Not Conversations’. Reading it the other night gave me a kind of temporal whiplash. It’s about embedded software agents and artificial intelligence in computing UX.

It’s exactly the same conversation that is happening right now across social media about the future of AI. Like the Metaverse, history is repeating itself with absolutely zero contact with the last time a lot of smart people had these exact same conversations.

The beginning of the essay links to a July 2000 New York Times article titled ‘Microsoft Sees Software ‘Agent’ as Way to Avoid Distractions’.

There are some fantastic zingers in it, including a quip from Jakob Nielsen, a ‘Silicon Valley expert on software usability’, who said: “Most Internet entrepreneurs treat the users’ attention as a Third World country to be strip-mined”. Which is a line that can generate enough hot takes to fill a whole episode by itself … anyways.

The main thrust of the article is its focus on Microsoft executive Eric Horvitz and his work on human-computer interface design. He said back then, in 2000, that if he were to write a book, it would be called ‘My Battle With Attention’ (same). What he was proposing as a fix was a context-aware Attentional User Interface:

Using statistical probability and decision-theory techniques that draw inferences from a user’s behavior, the team is developing software meant to shield people from information overload while they are working.

This is the same team that invented Clippy. But the article makes it clear that, despite Clippy’s failure, there was still a strong will at Microsoft to create an agent or co-pilot to help a user navigate a computing environment.
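To make the “statistical probability and decision-theory techniques” bit a little more concrete, here is a minimal sketch of that kind of expected-cost reasoning. Everything in it (the signals, the probabilities, the costs) is invented for illustration; it is nothing like Microsoft’s actual system.

```python
# Toy decision-theoretic notification filter, in the spirit of an
# Attentional User Interface. All numbers here are made up.

# P(user is busy | observed signal), as might be estimated from usage logs
P_BUSY_GIVEN_SIGNAL = {
    "typing_rapidly": 0.9,
    "in_calendar_meeting": 0.8,
    "mouse_idle_5_min": 0.2,
}

COST_OF_INTERRUPTION = 5.0  # assumed cost of interrupting a busy user
COST_OF_DELAY = 2.0         # assumed cost of holding a message back


def should_deliver_now(signal: str) -> bool:
    """Deliver only if the expected cost of interrupting is lower than
    the cost of delaying the message."""
    p_busy = P_BUSY_GIVEN_SIGNAL.get(signal, 0.5)  # fall back to ignorance
    return p_busy * COST_OF_INTERRUPTION < COST_OF_DELAY


for signal in ("typing_rapidly", "mouse_idle_5_min"):
    print(signal, "->", "deliver" if should_deliver_now(signal) else "hold")
```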

Bill Gates too has always been enthusiastic about software agents. In his 1995 book The Road Ahead (which is brilliant by the way) he describes an agent as “a filter that has taken on a personality and seems to show initiative. An agent’s job is to assist you. In the Information Age, that means the agent is there to help you find information.” And again, in an interview with Entertainment Weekly in 2000, he talked about personalised TV guides:

The “TV guide” will almost be like a search portal where you’ll customize and say, “I’m never interested in this, but I am particularly interested in that.”

It is no surprise then that Microsoft has gone all in with its OpenAI investment and launched so many AI products. LLMs speak to ambitions embedded deep in the DNA of Microsoft, present since the very beginning.

I have at this point read a lot of 90s material on computing visions, which has left me frustrated with contemporary conversations around the same topics. For example: the recent discourse sparked by NVIDIA’s AI-powered NPC announcement would have benefited greatly from an understanding of Leonard N. Foner’s early 90s work on software agent interactions in MUDs.

Anyways I digress again.

It seems to me that 1990s ideas around software agents fell into two camps. They are: one, a sort of context-aware passive filter; or two, an active participant working alongside a user.

Responding to the NYT article, Bricklin talks about metaphors and their importance in software design. A good metaphor, he says, “aids in developing trust between the program and the user”. And what he takes issue with is that “the metaphor proposed for many of the agents and assistants (…) is of a “magic” program that says, “I know, trust me, I’ll tell you. (…) An easy metaphor to invent, but one with very little transparency”. Bricklin, by the way, is the inventor of VisiCalc – the first electronic spreadsheet.

He goes on to talk about why “this is the answer, trust me” is a very poor interface metaphor. Which I can illustrate with a recent example – the uproar over ChatGPT’s confident assertions of hallucinated facts.

We did, however, actually get the passive agent-based future imagined in the 1990s. It’s the Netflix algorithm, or Amazon’s recommendation engine. And they were all built with the hot computer science Microsoft were jazzed about later on in that 2000 NYT piece – Bayesian inference.
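As a rough illustration of what “recommendations via Bayesian inference” can mean at its simplest, here is a toy naive Bayes scorer over a made-up viewing history. It is nothing like the production systems at Netflix or Amazon; it’s just the basic shape of the idea.

```python
# Toy naive Bayes "will this user like it?" scorer.
# The viewing history and genres are invented for illustration.

# (genres of a title, did the user finish watching it?)
history = [
    ({"sci-fi", "thriller"}, True),
    ({"sci-fi", "drama"}, True),
    ({"romance", "drama"}, False),
    ({"romance", "comedy"}, False),
]


def like_probability(genres):
    """Estimate P(like | genres) with naive Bayes and add-one smoothing."""
    liked = [g for g, finished in history if finished]
    disliked = [g for g, finished in history if not finished]
    p_like = len(liked) / len(history)

    def likelihood(examples, genre):
        hits = sum(1 for ex in examples if genre in ex)
        return (hits + 1) / (len(examples) + 2)  # Laplace smoothing

    score_like, score_dislike = p_like, 1 - p_like
    for genre in genres:
        score_like *= likelihood(liked, genre)
        score_dislike *= likelihood(disliked, genre)
    return score_like / (score_like + score_dislike)


print(round(like_probability({"sci-fi", "thriller"}), 2))  # leans towards "like"
print(round(like_probability({"romance", "comedy"}), 2))   # leans towards "pass"
```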

What’s happened more recently is that recommendation agents have become active agents in passive clothing.

The algorithmic feed. TikTok, Twitter, Instagram, Facebook etc. all utilise content filters that lack transparency and lean heavily on ‘trust me’ as an interaction metaphor. They aren’t helpful agents at all; they restrict human agency inside their systems.

The gap between the Microsoft article on Bayesian filters and the first Netflix recommendation algorithm was 10 years. We learnt a lot about how humans react to AI agents in techno-social systems in the 90s, and computer science has been dreaming about software agents since the very beginning.

In 1995 Gates thought that in the future an agent would be there to help you find information and make sense of things. But it turns out they’ve been weaponised against us in order to boost dwell time and increase ad revenues. Social media is a mess and Google has become practically useless.

So right now I am deeply sceptical about the current rush towards the development of active software agents built on top of LLMs. We’ve been through all this before, and it could take 10 years.

And as Bricklin says – these are easy metaphors with little transparency.
