It's Time To Build AI | UX
Bridging the Capability Overhang from Generative AI to Generative UI
In early April, a conversation between swyx and Maggie (and later Geoffrey and Linus) snowballed into an SF meetup centered around our joint interests as UX people crossing over into AI. The original plan was for 15-30 people; then Ivan Zhao graciously offered Notion HQ to host 80 people, and interest bloomed into almost 400 signups, with unfortunately more people on the waitlist than we could accommodate. We put out a Call For Demos, 68 people submitted, and 16 presented live.
But this isn’t just a meetup recap blogpost.
It is a call to arms.
We are no strangers to manifesting meetups, but even by SF standards the event was a smashing success, with many great UX people all in one room. We aren't surprised; we think the potential in, and demand for, novel AI UX interaction is enormous:
OpenAI staff often comment that ChatGPT is primarily a “UX innovation” that made existing capabilities accessible; nobody else had simply “jumped on the opportunity”
Linus has ranted about how Most knowledge work isn't a text-generation task, the sorry state of PromptOps Tools and prompt engineering, and that text is the lowest denominator, not endgame.
Maggie, co-organizer of Future of Coding London, and champion of End User Programming by developers for truly personal computing (e.g. programming portals), also warns about Generative AI and the Expanding Dark Forest.
Geoffrey, ever a champion of Malleable Software, and in particular spreadsheets over text, has reminded people that Chat will never feel like driving a car.
swyx likes smol ideas that stick in big brains. And making people go “ooh!”
We think what Nat Friedman calls the AI capabilities overhang is in part due to people not exploring the “latent space” of AI UX (including Generative UIs, many of which were explored in the demos featured below).
As Linus said in his opening comments:
“I spent a lot of time exploring different kinds of interfaces for highlighting and structuring and helping people navigate text, and one of my key takeaways from that year was that we have to go beyond just staring at walls of text and prompting.”
We fundamentally believe that the ultimate potential of LLMs is not merely to build “ChatGPT for your docs” (though that is great and needed too!). To get there, we must not only break out of the textbox, but also create spaces to share new UX paradigms and concepts with each other, both online and IRL, in a lightweight fashion that encourages collaboration, inspiration, and friendly competition rather than fundraising.
If you, too, believe that there is a better world waiting for us on the other side of the border box, ping us to help.
xoxo,
Your fellow AI | UX enjoyers
PS: If you are in 🗽 NYC - the first AI | UX NYC meetup will be hosted by Paul Butler on May 17 - register/share it here!
To start events in your city, come coordinate on our Discord.
Full Meetup Video
Timestamps and individual submitted demos
6:15 Alex Brinsmead - MindPilot: Mindmapping Canvas, with Chat
9:34 Alvin Ghouas - ProductStudio: 3D editor with Stable Diffusion Img2Depth
12:40 Amelia Wattenberger - PenPal: Beautiful writing app with affordances for feeling, inspirations, suggestions, summaries, and praise!
I'm really curious to see how we can move beyond generating text. Prompts are basically pieces of context - what if we could prompt text like we prompt MidJourney? … Once you have primitives, let users customize it!
19:00 Apoorva Srinivasan - Personalized Generative Learning Graph: Zero in on what you need to know, with resources suggested
22:00 Geoffrey Litt - AI in Potluck - a Computational Medium: Editing a spreadsheet in Potluck with natural language, falling back to direct manipulation
29:25 Gray Crawford - Mapping Image Generation to Movement: GANBreeder/ArtBreeder mapped with a Leap Motion Controller
(long dinner break)
1:11:42 Jeremy Nixon - Omni: for Level 4 Syntopical Reading, Proactive Search, Omnidirectional Linking
1:19:40 Kabir Goel - 3 demos: Projectional Editor for Text, Tone Switching Palette for Text (inspired by FigJam), Zoom Previews for Text
1:24:40 Kasra Kyanzadeh - Feedpaper: a Calmer Twitter Client (open source)
1:29:00 Marie and Michael Fester - MarkPrompt: AI-native CMS: Ingest any Markdown and expose as a Chat component
1:35:20 Mary Rose Cook - LLM Augmented Animation: Animate by choosing from options each time
1:40:24 Max Krieger - Seemixer: Point at website elements and decide what you want them to look like. "It'd be cool to program by pointing at things instead of typing them."
1:45:00 Miguel Acevedo - OpenCode: Programming Tutor that prompts you to think critically as you solve problems
1:50:20 Paul Shen - LLMs as Function Blocks: Programming like Factorio!
1:57:50 Rob Haisfield - AI Zettelkasten: With prompts on your notes taken
2:02:19 Sean Grove - Generative UI ChatGPT Plugin: Making ephemeral UI with filterable/sortable/tables, ffmpeg, and video controls
With special thanks to Emily at Notion HQ for going out of her way to be an incredible venue host, helping us with food, A/V, and even post-meetup recordings (with timestamps!). Notion seems not to mind at all that I once “reverse prompt engineered Notion AI”.